AI Policy Challenges and Recommendations

Artificial intelligence (AI) holds great economic, social, medical, security, and environmental promise. AI systems can help people acquire new skills and training, democratize services, shorten production times and iteration cycles, reduce energy usage, provide real-time environmental monitoring of pollution and air quality, enhance cybersecurity defenses, boost national output, reduce healthcare inefficiencies, create new kinds of enjoyable experiences and interactions for people, and improve real-time translation services to connect people around the world. In the long term, we can imagine AI enabling breakthroughs in medicine, basic and applied science, the management of complex systems, and the creation of currently unimagined products and services. For all of these reasons and many more, researchers are excited about the potential of AI systems to help address some of the world’s hardest problems and improve countless lives.

But in order to realize this potential, the challenges associated with AI development must be addressed. The following 14 topics represent particular areas of concern for the safe and beneficial development of AI, in both the near and far term. Each topic is described in brief and followed by examples of existing policy principles and recommendations (listed in alphabetical order), as well as links to additional resources. Addressing these topics should be a priority for policymakers seeking to harness the benefits of AI while preparing for and mitigating potential threats. These topics are not sector-specific (e.g. transportation or healthcare); rather, they address overarching AI policy concerns that cut across multiple industries and use cases.

This is intended to be an educational resource. The Future of Life Institute led the development of the Asilomar AI Principles, but does not necessarily endorse the other policy principles or recommendations listed below. You can find more information about national and international AI strategies and policies here, and summaries of AI policy resources here.

This page is a work in progress and will continue to be updated. If you have feedback or information about developments or recommendations that should be included here, please send a short description to jessica@futureoflife.org.

  1. Enabling Beneficial AI Research and Development
  2. Global Governance, Race Conditions, and International Cooperation
  3. Economic Impacts, Labor Shifts, Inequality, and Technological Unemployment
  4. Accountability, Transparency, and Explainability
  5. Surveillance, Privacy, and Civil Liberties
  6. Fairness, Ethics, and Human Rights
  7. Political Manipulation and Computational Propaganda
  8. Human Dignity, Autonomy, and Psychological Impact
  9. Human Health, Augmentation, and Brain-Computer Interfaces
  10. AI Safety
  11. Security and Cybersecurity
  12. Autonomous Weapons
  13. Catastrophic and Existential Risk
  14. Artificial General Intelligence (AGI) and Superintelligence

Enabling Beneficial AI Research and Development

There are many opportunities for beneficial AI research that go beyond what is necessary for effectiveness, many of which are highlighted in this FLI AI safety research landscape. There are also numerous challenges to enabling flourishing research and development programs for beneficial AI. One is access to high-quality, standardized datasets; another is finding and hiring people with the right combination of skills to build reliable, high-quality products. Limits on immigration and work visas can further exacerbate the shortage of AI researchers. Updating educational programs to include training in how to build safe and beneficial systems is another important part of addressing this problem. Additionally, it is important to create the right conditions for research and researchers to flourish, including government support and safe, inclusive work environments.

Principles and Recommendations

Asilomar AI Principles

  • Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
  • Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies.
  • Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

Charlevoix Common Vision for the Future of Artificial Intelligence, G7 2018

  • Endeavour to promote human-centric AI and commercial adoption of AI, and continue to advance appropriate technical, ethical and technologically neutral approaches by: safeguarding privacy including through the development of appropriate legal regimes; investing in cybersecurity, the appropriate enforcement of applicable privacy legislation and communication of enforcement decisions; informing individuals about existing national bodies of law, including in relation to how their personal data may be used by AI systems; promoting research and development by industry in safety, assurance, data quality, and data security; and exploring the use of other transformative technologies to protect personal privacy and transparency.
  • Promote investment in research and development in AI that generates public trust in new technologies, and encourage industry to invest in developing and deploying AI that supports economic growth and women’s economic empowerment while addressing issues related to accountability, assurance, liability, security, safety, gender and other biases and potential misuse.
  • Support an open and fair market environment including the free flow of information, while respecting applicable frameworks for privacy and data protection for AI innovation by addressing discriminatory trade practices, such as forced technology transfer, unjustified data localization requirements and source code disclosure, and recognizing the need for effective protection and enforcement of intellectual property rights.

DeepMind Ethics & Society Principles

  • Social benefit. We believe AI should be developed in ways that serve the global social and environmental good, helping to build fairer and more equal societies. Our research will focus directly on ways in which AI can be used to improve people’s lives, placing their rights and well-being at its very heart.

Ethically Aligned Design, IEEE

  • Governments should create research pools that incentivize research on A/IS that benefits the public, but which may not be commercially viable.
  • Enable a cross-disciplinary research environment that encourages research on the fairness, security, transparency, understandability, privacy, and societal impacts of A/IS and that incorporates independent means to properly vet, audit, and assign accountability to the A/IS applications.
  • Expertise can be furthered by setting up technical fellowships, or rotation schemes, where technologists spend an extended time in political offices, or policy makers work with organizations that operate at the intersection of tech-policy, technical engineering, and advocacy (like the American Civil Liberties Union, Article 19, the Center for Democracy and Technology, or Privacy International). This will enhance the technical knowledge of policy makers and strengthen ties between political and technical communities, needed to make good A/IS policy.

Google AI Principles

  • Be socially beneficial. The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides. AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.
  • Uphold high standards of scientific excellence. Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development. We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

The Information Technology Industry (ITI) AI Policy Principles

  • Investment in AI Research and Development: We encourage robust support for research and development (R&D) to foster innovation through incentives and funding. As the primary source of funding for long-term, high-risk research initiatives, we support governments’ investment in research fields specific or highly relevant to AI, including: cyber-defense, data analytics, detection of fraudulent transactions or messages, robotics, human augmentation, natural language processing, interfaces, and visualizations.
  • Global Standards and Best Practices: We promote the development of global voluntary, industry-led, consensus-based standards and best practices. We encourage international collaboration in such activities to help accelerate adoption, promote competition, and enable the cost-effective introduction of AI technologies.
  • Science, Technology, Engineering and Math (STEM) Education: Current and future workers need to be prepared with the necessary education and training to help them succeed. We recognize that delivering training is critical and will require significant investment, not only in STEM education, but also in understanding human behavior via the humanities and social sciences. To ensure employability of the workforce of the future, the public and private sectors should work together to design and deliver work-based learning and training systems, and advance approaches that provide students with real work experiences and concrete skills. In conjunction, prioritizing diversity and inclusion in STEM fields, and in the AI community specifically, will be a key part in ensuring AI develops in the most robust way possible.
  • Public Private Partnership: PPPs will make AI deployments an attractive investment for both government and private industry, and promote innovation, scalability, and sustainability. By leveraging PPPs – especially between industry partners, academic institutions, and governments – we can expedite AI R&D and prepare our workforce for the jobs of the future.

Research and Resources

  1. “AI Index 2019 Annual Report,” Raymond Perrault et al., Human-Centered AI Institute, Stanford University, December 2019
  2. “Visa Laws, Policies, and Practices: Recommendations for Accelerating the Mobility of Global AI/ML Talent,” Partnership on AI, September 2019
  3. “Promotion of Beneficial AI,” Seth D. Baum, Global Catastrophic Risk Institute, July 29, 2016
  4. “Global Economic Impacts of Artificial Intelligence,” Analysis Group, February 25, 2016

Global Governance, Race Conditions, and International Cooperation

Power dynamics will likely shift and be tested as AI is more widely adopted and stronger AI systems are developed. Discussion of an “AI race” between the great powers has become commonplace, while weaponized AI could also empower new and non-state actors and lower the threshold for entering into war. Numerous countries have outlined national AI strategies, including plans to become or remain competitive with other nations, but important examples of international cooperation are also emerging. Global governance and international cooperation will be increasingly important for guiding the safe and beneficial development of AI while reducing race conditions and national and global security threats.

Principles and Recommendations

Asilomar AI Principles

  • Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
  • Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
  • Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Ethically Aligned Design, IEEE

  • To ensure consistent and appropriate policies and regulations across governments, policymakers should seek informed input from a range of expert stakeholders, including academic, industry, and government officials, to consider questions related to the governance and safe employment of A/IS.
  • To foster a safe international community of A/IS users, policymakers should take similar work being carried out around the world into consideration. Due to the transnational nature of A/IS, globally synchronized policies can have a greater impact on public safety and technological innovation.
  • Begin an international multi-stakeholder dialogue to determine the best practices for using and developing A/IS, and codify this dialogue into international norms and standards. Many industries, in particular system industries (automotive, air and space, defense, energy, medical systems, manufacturing) are going to be significantly changed by the surge of A/IS. A/IS algorithms and applications must be considered as products owned by companies, and therefore the companies must be responsible for the A/IS products not being a threat to humanity.

Research and Resources

  1. “OECD Principles on Artificial Intelligence,” OECD, May 2019
  2. “Killer Apps: The Real Dangers of an AI Arms Race,” Paul Scharre, Foreign Affairs, May 2019
  3. “AI Governance: A Research Agenda,” Allan Dafoe, Governance of AI Program, Future of Humanity Institute, University of Oxford, August 2018
  4. “Ethically Aligned Design, Version 2: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems,” IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, December 2017
  5. “China embraces AI: A Close Look and A Long View,” Ian Bremmer, Eurasia Group, December 2017
  6. “Report from the AI Race Avoidance Workshop,” GoodAI and AI Roadmap Institute, Tokyo, October 2017
  7. “Destination unknown: Exploring the impact of Artificial Intelligence on Government,” Centre for Public Impact, September 2017
  8. “Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process,” Anthony M. Barrett and Seth D. Baum, The Technological Singularity, pp. 127-140, May 2017
  9. “Existential Risk: Diplomacy and Governance,” Global Priorities Project, 2017
  10. “Smart Policies for Artificial Intelligence,” Miles Brundage and Joanna Bryson, arXiv, August 2016
  11. “Preparing for the Future of Artificial Intelligence,” U.S. White House, 2016
  12. “International Cooperation vs. AI Arms Race,” Brian Tomasik, Foundational Research Institute, December 2013
  13. “Racing to the precipice: a model of artificial intelligence development,” Stuart Armstrong et al., Future of Humanity Institute, Oxford University, 2013

Economic Impacts, Labor Shifts, Inequality, and Technological Unemployment

AI is enabling greater workforce automation, which is having a dramatic impact on many industries and could worsen economic disparities by concentrating wealth among a smaller number of people than previous technological revolutions did. Automation may result in significant job losses, but it will also augment the workflow of many jobs. There will be a need for improved retraining programs as well as updated social safety nets. Popular proposals include redistributive economic policies such as universal basic income and a “robot tax” to offset some of the likely increases in inequality and the resulting social and political tensions.

Principles and Recommendations

Asilomar AI Principles

  • Shared Benefit: AI technologies should benefit and empower as many people as possible.
  • Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
  • Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

Charlevoix Common Vision for the Future of Artificial Intelligence, G7 2018

  • Support lifelong learning, education, training and reskilling, and exchange information on workforce development for AI skills, including apprenticeships, computer science and STEM (science, technology, engineering and mathematics) education, especially for women, girls and those at risk of being left behind.
  • Promote active labour market policies, workforce development and reskilling programs to develop the skills needed for new jobs and for those at risk of being left out, including policies specifically targeting the needs of women and underrepresented populations in order to increase labour participation rates for those groups.
  • Encourage investment in AI technology and innovation to create new opportunities for all people, especially to give greater support and options for unpaid caregivers, the majority of whom today are women.

Ethically Aligned Design, IEEE

  • Establish policies that foster the development of economies able to absorb A/IS, while providing broad job opportunities to those who might otherwise be alienated or unemployed. In addition, the continued development of A/IS talent should be fostered through international collaboration.
  • Continue research into the viability of universal basic income. Such a non-conditional and government-provided addition to people’s income might lighten the economic burden that comes from automation and economic displacement caused by A/IS.

The Information Technology Industry (ITI) AI Policy Principles

  • Workforce: There is concern that AI will result in job change, job loss, and/or worker displacement. While these concerns are understandable, it should be noted that most emerging AI technologies are designed to perform a specific task and assist rather than replace human employees. This type of augmented intelligence means that a portion, but most likely not all, of an employee’s job could be replaced or made easier by AI. While the full impact of AI on jobs is not yet fully known, in terms of both jobs created and displaced, an ability to adapt to rapid technological change is critical. We should leverage traditional human-centered resources as well as new career educational models and newly developed AI technologies to assist both the existing workforce and future workforce in successfully navigating career development and job transitions. Additionally, we must have PPPs [Public Private Partnerships] that significantly improve the delivery and effectiveness of lifelong career education and learning, inclusive of workforce adjustment programs. We must also prioritize the availability of job-driven training to meet the scale of need, targeting resources to programs that produce strong results.
  • Democratizing Access and Creating Equality of Opportunity: While AI systems are creating new ways to generate economic value, if the value favors only certain incumbent entities, there is a risk of exacerbating existing wage, income, and wealth gaps. We support diversification and broadening of access to the resources necessary for AI development and use, such as computing resources, education, and training, including opportunities to participate in the development of these technologies.

Research and Resources

  1. “AI Index 2019 Annual Report,” Raymond Perrault et al., Human-Centered AI Institute, Stanford University, December 2019
  2. “AI and the Economy,” Jason Furman and Robert Seamans, SSRN, May 29, 2018
  3. “Artificial Intelligence and Its Implications for Income Distribution and Unemployment,” Anton Korinek and Joseph E. Stiglitz, The National Bureau of Economic Research, December 2017
  4. “The Risks of Artificial Intelligence to Security and the Future of Work,” Osonde A. Osoba and William Welser IV, RAND, December 2017
  5. “Modeling the Macroeconomic Effects of a Universal Basic Income,” Roosevelt Institute, August 2017
  6. “Robots and Jobs: Evidence from US Labor Markets,” Daron Acemoglu and Pascual Restrepo, The National Bureau of Economic Research, March 2017
  7. “A Future that Works: Automation, Employment, and Productivity,” McKinsey Global Institute, January 2017
  8. “Artificial Intelligence, Automation, and the Economy,” U.S. White House, 2016
  9. “The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis,” OECD, May 2016
  10. “A Roadmap for US Robotics: From Internet to Robotics, 2016 Edition,” organized by many universities and the NSF, November 2016
  11. “Public predictions for the future of workforce automation,” Aaron Smith, Pew Research Center, 2016
  12. “The Future of Jobs: Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution,” World Economic Forum, January 2016
  13. “The future of employment: How susceptible are jobs to computerisation?” Carl Benedikt Frey and Michael A. Osborne, The Oxford Martin Programme on Technology and Employment, September 2013

Accountability, Transparency, and Explainability

Holding an AI system or its designers accountable poses several challenges. The lack of transparency and explainability, associated with machine learning in particular, means it can be hard or impossible to know why an algorithm made a particular decision. There is also the question of who has access to key algorithms and how understandable they are, a problem exacerbated by the use of proprietary algorithms. As decision-making is ceded to AI systems, there are no clear guidelines about who should be held accountable for undesirable effects. The EU General Data Protection Regulation (GDPR) is one effort to help remedy that situation, for example through its provision of a “right to explanation” of how an automated decision was made.
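
To make the explainability challenge more concrete, the sketch below (with hypothetical feature names, weights, and threshold, not drawn from any deployed system) shows why simple linear scoring models are considered interpretable: each feature’s contribution to a decision can be read off directly, roughly the kind of account a “right to explanation” envisions. Most modern machine learning models, such as deep networks and large ensembles, do not decompose this cleanly, which is where the transparency problem arises.

```python
# Minimal, hypothetical sketch: a linear credit-scoring model whose decision
# can be explained by listing each feature's signed contribution to the score.
# Feature names, weights, and threshold are illustrative only.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS = -0.1
THRESHOLD = 0.0

def explain_decision(applicant: dict) -> dict:
    """Return the decision together with per-feature contributions."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        "contributions": {f: round(c, 3) for f, c in contributions.items()},
    }

if __name__ == "__main__":
    print(explain_decision({"income": 0.5, "debt_ratio": 0.8, "years_employed": 1.0}))
```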

Principles and Recommendations

AI Now 2017 Report

  • Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains) should no longer use “black box” AI and algorithmic systems. This includes the unreviewed or unvalidated use of pre-trained models, AI systems licensed from third party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns, and at a minimum they should be available for public auditing, testing, and review, and subject to accountability standards.
  • Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed. Creating such standards will require the perspectives of diverse disciplines and coalitions. The process by which such standards are developed should be publicly accountable, academically rigorous and subject to periodic review and revision.

Asilomar AI Principles

  • Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
  • Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
  • Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

British Standard guide to the ethical design and application of robots and robotic systems

  • The roles, responsibilities and legal liabilities should be clearly identified for all stages of the robot’s life cycle. It should always be possible to easily discover the person(s) legally responsible for the robot and its behaviour during all stages of the life cycle.
  • There should be defined levels of responsibilities agreed with the individual user, the organization deploying the robot and the organization manufacturing the robot.

DeepMind Ethics & Society Principles

  • Transparent and open. We will always be open about who we work with and what projects we fund. All of our research grants will be unrestricted and we will never attempt to influence or pre-determine the outcome of studies we commission. When we collaborate or co-publish with external researchers, we will disclose whether they have received funding from us. Any published academic papers produced by the Ethics & Society team will be made available through open access schemes.

Ethically Aligned Design, IEEE

  • Accountability: As duty bearers, states should be obliged to behave responsibly, seek to represent the greater public interest, and be open to public scrutiny of their A/IS policy.

Google AI Principles

  • Be accountable to people. We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

The Information Technology Industry (ITI) AI Policy Principles

  • Interpretability: We are committed to partnering with others across government, private industry, academia, and civil society to find ways to mitigate bias, inequity, and other potential harms in automated decision-making systems. Our approach to finding such solutions should be tailored to the unique risks presented by the specific context in which a particular system operates. In many contexts, we believe tools to enable greater interpretability will play an important role.
  • Liability of AI Systems Due to Autonomy: The use of AI to make autonomous consequential decisions about people, informed by – but often replacing decisions made by – human-driven bureaucratic processes, has led to concerns about liability. Acknowledging existing legal and regulatory frameworks, we are committed to partnering with relevant stakeholders to inform a reasonable accountability framework for all entities in the context of autonomous systems.

Research and Resources

  1. “AI Now 2019 Report,” AI Now, 2019
  2. “Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System,” Partnership on AI, 2019
  3. “Algorithmic Accountability Policy Toolkit,” AI Now, October 2018
  4. “Accountability of AI Under the Law: The Role of Explanation,” Finale Doshi-Velez and Mason Kortz, Berkman Klein Center, November 2017
  5. “Ethically Aligned Design, Version 2: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems,” IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, December 2017
  6. “AI Now 2017 Report,” AI Now, 2017
  7. “Artificial Intelligence and Law Enforcement,” SANS Institute, August 2017
  8. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability,” Mike Ananny and Kate Crawford, SAGE Journals, December 2016
  9. “Designing AI Systems that Obey Our Laws and Values,” Amitai Etzioni and Oren Etzioni, Communications of the ACM, September 2016
  10. “European Union regulations on algorithmic decision-making and a ‘right to explanation’,” Bryce Goodman and Seth Flaxman, August 2016
  11. “Accountable Algorithms,” Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu, University of Pennsylvania Law Review, April 2016
  12. “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Matthew U. Scherer, Harvard Journal of Law & Technology, May 2015

Surveillance, Privacy, and Civil Liberties

AI expands surveillance possibilities because it enables real-time monitoring and analysis of video and other data streams, including features such as live facial recognition. These uses raise questions about privacy, justice, and civil liberties, particularly in the policing and law enforcement context. Police forces in the US are already experimenting with AI for enhanced predictive policing. There is also increasing pressure on AI companies and institutions to be more transparent about their data and privacy policies. The EU General Data Protection Regulation (GDPR) is one prominent example of a recent data privacy regulation with profound implications for AI development, given its requirements for data collection and management as well as the “right to explanation”. The California Consumer Privacy Act of 2018 is another important privacy regulation; it took effect on January 1, 2020, and gives consumers more rights over their personal information.

Principles and Recommendations

Asilomar AI Principles

  • Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
  • Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
  • Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  • Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

British Standard guide to the ethical design and application of robots and robotic systems

  • The information storage issues to be resolved include the type of information the robot is allowed to record, who should have access to the information, who is intended to be using the information, how long to keep the data stored and whether it is necessary to obtain informed consent from the user. Robots should follow the principle of “privacy by design”.

Charlevoix Common Vision for the Future of Artificial Intelligence, G7 2018

  • Ensure AI design and implementation respect and promote applicable frameworks for privacy and personal data protection.
  • Endeavour to promote human-centric AI and commercial adoption of AI, and continue to advance appropriate technical, ethical and technologically neutral approaches by: safeguarding privacy including through the development of appropriate legal regimes; investing in cybersecurity, the appropriate enforcement of applicable privacy legislation and communication of enforcement decisions; informing individuals about existing national bodies of law, including in relation to how their personal data may be used by AI systems; promoting research and development by industry in safety, assurance, data quality, and data security; and exploring the use of other transformative technologies to protect personal privacy and transparency.

Google AI Principles

  • Incorporate privacy design principles. We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
  • We will not design or deploy AI in the following application area: Technologies that gather or use information for surveillance violating internationally accepted norms.

The Information Technology Industry (ITI) AI Policy Principles

  • Cybersecurity and Privacy: Just like technologies that have come before it, AI depends on strong cybersecurity and privacy provisions. We encourage governments to use strong, globally-accepted and deployed cryptography and other security standards that enable trust and interoperability. We also promote voluntary information-sharing on cyberattacks or hacks to better enable consumer protection. The tech sector incorporates strong security features into our products and services to advance trust, including using published algorithms as our default cryptography approach as they have the greatest trust among global stakeholders, and limiting access to encryption keys. Data and cybersecurity are integral to the success of AI. We believe for AI to flourish, users must trust that their personal and sensitive data is protected and handled appropriately. AI systems should use tools, including anonymized data, de-identification, or aggregation to protect personally identifiable information whenever possible.

Research and Resources

  1. “AI Now 2019 Report,” AI Now, 2019
  2. “Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System,” Partnership on AI, 2019
  3. “AI Now 2017 Report,” AI Now, 2017
  4. “Artificial Intelligence and Law Enforcement,” SANS Institute, August 2017
  5. “Machine Bias,” ProPublica, May 2016
  6. “Limitless Worker Surveillance,” Ajunwa et al., California Law Review, March 2016
  7. “Policing by Numbers: Big Data and the Fourth Amendment,” Elizabeth E. Joh, Washington Law Review, March 2014

Fairness, Ethics, and Human Rights

The field of AI ethics is growing rapidly, with discrimination, fairness, algorithmic bias, and human rights among the primary areas of concern. Challenges relate to access and inclusion, as well as the perpetuation of inequity through sociotechnical design. Computer science and AI can be relatively homogeneous fields, lacking in gender, racial, and other kinds of diversity. This can lead to skewed product design, blind spots, and false assumptions. Algorithms can also reproduce and magnify social biases and discrimination when trained on data that mirror existing bias in society or that offer a skewed representation of it. Programmers may also unintentionally introduce their own assumptions into their software. Algorithmic bias can result in both harms of allocation and harms of representation. AI ethics also encompasses the value systems and goals encoded into machines, design ethics, and the systemic impacts of AI on social, political, and economic structures. Some have also called to more explicitly include justice as a goal of fair, accountable, and transparent (“FAT”) AI development. AI could have profound social justice implications if it enables divergent access, disparate systemic impacts, or the exacerbation of discrimination and inequities.
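
As one concrete illustration of how such outcomes can be audited, the sketch below (using made-up data and group labels) computes approval rates by group and a disparate impact ratio, a simple check sometimes applied alongside the “four-fifths rule” heuristic. It is one narrow technical test among many and does not capture the structural dimensions of bias discussed above.

```python
# Minimal sketch of a disparate impact check on model decisions.
# Data and group labels are hypothetical; the 0.8 threshold is the common
# "four-fifths rule" heuristic, not a complete definition of fairness.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group approval rate to the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = ([("A", True)] * 80 + [("A", False)] * 20 +
              [("B", True)] * 50 + [("B", False)] * 50)
    ratio = disparate_impact_ratio(sample)
    print(f"Disparate impact ratio: {ratio:.2f}")
    print("Flag for review" if ratio < 0.8 else "Within four-fifths heuristic")
```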

Principles and Recommendations

AI Now 2017 Report

  • Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design. As this is a rapidly changing field, the methods and assumptions by which such testing is conducted, along with the results, should be openly documented and publicly available, with clear versioning to accommodate updates and new findings.
  • After releasing an AI system, companies should continue to monitor its use across different contexts and communities. The methods and outcomes of monitoring should be defined through open, academically rigorous processes, and should be accountable to the public. Particularly in high stakes decision-making contexts, the views and experiences of traditionally marginalized communities should be prioritized.
  • Expand AI bias research and mitigation strategies beyond a narrowly technical approach. Bias issues are long term and structural, and contending with them necessitates deep interdisciplinary research. Technical approaches that look for a one-time “fix” for fairness risk oversimplifying the complexity of social systems. Within each domain – such as education, healthcare or criminal justice – legacies of bias and movements toward equality have their own histories and practices. Legacies of bias cannot be “solved” without drawing on domain expertise. Addressing fairness meaningfully will require interdisciplinary collaboration and methods of listening across different disciplines.
  • Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development. Many now recognize that the current lack of diversity in AI is a serious issue, yet there is insufficiently granular data on the scope of the problem, which is needed to measure progress. Beyond this, we need a deeper assessment of workplace cultures in the technology industry, which requires going beyond simply hiring more women and minorities, toward building more genuinely inclusive workplaces.
  • Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms. More work is needed on how to substantively connect high-level ethical principles and guidelines for best practices to everyday development processes, promotion and product release cycles.

Asilomar AI Principles

  • Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  • Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  • Shared Benefit: AI technologies should benefit and empower as many people as possible.
  • Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

British Standard guide to the ethical design and application of robots and robotic systems

  • Robots should be designed, built and operated in such a way that they do not violate human dignity and human rights (for example, there could be a facility for the user to turn off the robot temporarily so that it is not “witness” to very private activities – where the need for privacy is determined by the user). Robots should promote human dignity (for example, through self-sufficiency).
  • Robot applications should take account of different cultural norms, including respect for language, religion, age and gender by formal interaction with representatives of these groups.
  • Potential users should not be discriminated against or forced to acquire and use a robot.

Charlevoix Common Vision for the Future of Artificial Intelligence, G7 2018

  • Support and involve women, underrepresented populations and marginalized individuals as creators, stakeholders, leaders and decision-makers at all stages of the development and implementation of AI applications.
  • Support efforts to promote trust in the development and adoption of AI systems with particular attention to countering harmful stereotypes and fostering gender equality. Foster initiatives that promote safety and transparency, and provide guidance on human intervention in AI decision-making processes.

DeepMind Ethics & Society Principles

  • Social benefit. We believe AI should be developed in ways that serve the global social and environmental good, helping to build fairer and more equal societies. Our research will focus directly on ways in which AI can be used to improve people’s lives, placing their rights and well-being at its very heart.
  • Diverse and interdisciplinary. We will strive to involve the broadest possible range of voices in our work, bringing different disciplines together so as to include diverse viewpoints. We recognize that questions raised by AI extend well beyond the technical domain, and can only be answered if we make deliberate efforts to involve different sources of expertise and knowledge.
  • Collaborative and inclusive. We believe a technology that has the potential to impact all of society must be shaped by and accountable to all of society. We are therefore committed to supporting a range of public and academic dialogues about AI. By establishing ongoing collaboration between our researchers and the people affected by these new technologies, we seek to ensure that AI works for the benefit of all.

Ethically Aligned Design, IEEE

  • Non-discrimination: Principles of nondiscrimination, equality, and inclusiveness should underlie the practice of A/IS. The rights-based approach should also ensure that particular focus is given to vulnerable groups, to be determined locally, such as minorities, indigenous peoples, or persons with disabilities.
  • Corporate responsibility: Companies must ensure that when they are developing their technologies based on the values of a certain community, they do so only to the extent that such norms or values fully comply with the rights-based approach. Companies must also not willingly provide A/IS technologies to actors that will use them in ways that lead to human rights violations.
  • Encourage A/IS development to serve the pressing needs of humanity by promoting dialogue and continued debate over the social and ethical implications of A/IS. To better understand the societal implications of A/IS, we recommend that funding be increased for interdisciplinary research on topics ranging from basic research into intelligence to principles on ethics, safety, privacy, fairness, liability, and trustworthiness of A/IS technology. Societal aspects should be addressed not only at an academic level but also through the engagement of business, public authorities, and policy makers. While technical innovation is a goal, it should not be prioritized over the protection of individuals.
  • Responsibility: The rights-based approach shall identify the right holders and the duty bearers, and ensure that duty bearers have an obligation to realize all human rights; this should guide the policy development and implementation of A/IS.

Google AI Principles

  • Avoid creating or reinforcing unfair bias. AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
  • We will not design or deploy AI in the following application area: Technologies whose purpose contravenes widely accepted principles of international law and human rights.

The Information Technology Industry (ITI) AI Policy Principles

  • Democratizing Access and Creating Equality of Opportunity: While AI systems are creating new ways to generate economic value, if the value favors only certain incumbent entities, there is a risk of exacerbating existing wage, income, and wealth gaps. We support diversification and broadening of access to the resources necessary for AI development and use, such as computing resources, education, and training, including opportunities to participate in the development of these technologies.
  • Robust and Representative Data: To promote the responsible use of data and ensure its integrity at every stage, industry has a responsibility to understand the parameters and characteristics of the data, to demonstrate the recognition of potentially harmful bias, and to test for potential bias before and throughout the deployment of AI systems. AI systems need to leverage large datasets, and the availability of robust and representative data for building and improving AI and machine learning systems is of utmost importance.
  • Responsible Design and Deployment: We recognize our responsibility to integrate principles into the design of AI technologies, beyond compliance with existing laws. While the potential benefits to people and society are amazing, AI researchers, subject matter experts, and stakeholders should and do spend a great deal of time working to ensure the responsible design and deployment of AI systems. Highly autonomous AI systems must be designed consistent with international conventions that preserve human dignity, rights, and freedoms. As an industry, it is our responsibility to recognize potentials for use and misuse, the implications of such actions, and the responsibility and opportunity to take steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design.

The Toronto Declaration

  • The public and the private sector have obligations and responsibilities under human rights law to proactively prevent discrimination. When prevention is not sufficient or satisfactory, discrimination should be mitigated.

Research and Resources

  1. “National Artificial Intelligence Strategies and Human Rights: A Review,” Global Partners Digital and Stanford’s Global Digital Policy Incubator, April 2020
  2. “AI Now 2019 Report,” AI Now, 2019
  3. “Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System,” Partnership on AI, 2019
  4. “Governing Artificial Intelligence: Upholding Human Rights & Dignity,” Mark Latonero, Data & Society, October 2018
  5. “Artificial Intelligence & Human Rights: Opportunities & Risks,” Filippo A. Raso et al., Berkman Klein Center, September 2018
  6. “Promotion and protection of the right to freedom of opinion and expression,” Note by the Secretary-General, General Assembly, United Nations, August 2018
  7. “Human Rights and Artificial Intelligence: An Urgently Needed Agenda,” Mathias Risse, Carr Center for Human Rights Policy, Harvard Kennedy School, May 2018
  8. “Ethically Aligned Design, Version 2: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems,” IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, December 2017
  9. “AI Now 2017 Report,” AI Now, 2017
  10. “Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI,” Wallach et al., SSRN, October 2016
  11. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O’Neil, Crown, September 2016
  12. “Machine Bias,” Angwin et al., ProPublica, May 2016
  13. Heartificial Intelligence: Embracing Our Humanity to Maximize Machines, John Havens, TarcherPerigee, February 2016
  14. “The Ethics of Artificial Intelligence,” Nick Bostrom and Eliezer Yudkowsky, Cambridge Handbook of Artificial Intelligence, 2014
  15. “The Scored Society: Due Process for Automated Predictions,” Danielle Keats Citron and Frank A. Pasquale III, Washington Law Review, January 2014
  16. “Discrimination in Online Ad Delivery,” Latanya Sweeney, Communications of the ACM, 2013

Political Manipulation and Computational Propaganda

AI amplifies the power of information warfare, enabling the rise of highly personalized and targeted computational propaganda. Recent events have highlighted the proliferation of fake news and social media bots that tailor messages for political ends, for example by inciting fear, anger, and social discord. Improvements in the creation of fake videos (“deepfakes”) will make this challenge even greater. Many worry that key tenets of democracy could be undermined by these uses of AI, for example through manipulation of the information people see and of their ability to make informed decisions.

Principles and Recommendations

Asilomar AI Principles

  • Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
  • Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
  • Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

“The MADCOM Future,” Matt Chessen, The Atlantic Council

  • The US Congress should authorize the Department of Homeland Security to protect the US public from foreign online propaganda, manipulation, and disinformation. Congress should direct the executive branch to develop a comprehensive strategy for protecting the American public from malign influence by foreign actors online. Congress should establish an independent agency responsible for coordinating US government efforts to counter foreign information warfare, and should create an independent National Commission on Data Privacy, Information Security, and Disinformation to recommend legislative changes needed to protect Americans. Congress must also remove the shackles from government agencies, by amending the Privacy Act and allowing them to effectively analyze malicious foreign behavior online.
  • The Department of Homeland Security should expand its cybersecurity mission to include protection of the US public from foreign computational propaganda. (Note: this does not, and should not, include counter-messaging against the US public.) Homeland Security should look to cybersecurity threat tracking, information sharing, and incident-response capabilities for models of how to combat computational propaganda. It should fund research on how people and groups are influenced online. And, it should work with the private sector on measures to help the American people become savvier consumers of information.
  • The Department of State should develop a computational engagement strategy for defending against online foreign propaganda, and for effectively using attributed computational engagement tools for public diplomacy overseas. The State Department should also develop a toolkit of options— including diplomatic pressure, sanctions for malign actors, export controls, and international laws and norms—designed to reduce the risk to the US public from foreign computational propaganda.
  • The Department of Defense and the intelligence community (IC) should elevate the importance of information operations to reflect their real-world utility in the twenty-first-century information environment. The Defense Department and the IC should develop AI-enhanced, machine-driven communications tools for use during armed conflicts, and as a deterrent against adversaries during peacetime.
  • Federal, state, and local governments should develop tools for identifying adversary computational propaganda campaigns, and for countering them with measures other than counter-messaging against the US public. Governments should also recognize the significant positive impacts of artificial intelligence technologies, and should not let potential malign uses undermine the proliferation of beneficial technologies.
  • The technology sector should play a key role in developing tools for identifying, countering, and disincentivizing computational propaganda. Technology companies should make these tools ubiquitous, easy to use, and the default for platforms. They should also align business models and industry norms with societal values, and develop industry organizations for self-regulation.

Research and Resources

  1. “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation,” Samantha Bradshaw and Philip N. Howard, Project on Computational Propaganda, 2019
  2. “The Human Consequences of Computational Propaganda: Eight Case Studies from the 2018 US Midterm Elections,” Katie Joseff and Samuel Woolley, Institute for the Future, 2019
  3. “The MADCOM Future: How Artificial Intelligence will enhance computational propaganda, reprogram human culture, and threaten democracy… And what can be done about it,” Matt Chessen, Atlantic Council, September 2017
  4. “Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election,” Faris et al., Berkman Klein Center, August 2017
  5. “Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation,” Samantha Bradshaw and Philip N. Howard, Computational Propaganda Research Project, University of Oxford, July 2017
  6. “Computational Propaganda Worldwide: Executive Summary,” Samuel C. Woolley and Philip N. Howard, Computational Propaganda Research Project, University of Oxford, July 2017
  7. “Computational Propaganda in the United States of America: Manufacturing Consensus Online,” Samuel C. Woolley and Douglas R. Guilbeault, Computational Propaganda Research Project, University of Oxford, 2017
  8. “Ten simple rules for responsible big data research,” Zook et al., PLOS, March 2017

Human Dignity, Autonomy, and Psychological Impact

AI can enable and scale micro-targeting practices that are particularly persuasive and can manipulate behavior and emotions. People could experience a loss of autonomy if AI systems are used to nudge their behavior, or even their perception of the world. As we cede control to machines in various areas of our lives, there is also a concern that people will lose some of the meaning in their lives. Finally, it is not clear what kinds of relationships people will form with AI systems as those systems become more capable of natural conversation, or how this will affect human relationships.

Principles and Recommendations

Asilomar AI Principles

  • Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  • Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
  • Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  • Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

British Standard guide to the ethical design and application of robots and robotic systems

  • Robots and robotic systems should be designed to avoid inappropriate control over human choice, for example forcing the speed of repetitive tasks on an assembly line. The ultimate authority should stay with the human.
  • The degree of anthropomorphization and humanization, particularly with children and other vulnerable people, should be taken into account. Where there is a degree of anthropomorphization/humanization, a written document explaining how to introduce a robot to children and other vulnerable individuals should be provided with robot products.
  • The economic, psychological and social consequences of the introduction of robots on employment should be assessed and concerns addressed.

Research and Resources

  1. “Governing Artificial Intelligence: Upholding Human Rights & Dignity,” Mark Latonero, Data & Society, October 2018
  2. “Resisting Reduction: Designing our Complex Future with Machines,” Joichi Ito, MIT Press, November 2017
  3. “Work gives our lives meaning. What will we do when robots have taken our jobs?” Jillian Richardson, Quartz, May 3, 2017

Human Health, Augmentation, and Brain-Computer Interfaces

AI is being used to make sense of massive amounts of biomedical data and to help with drug development, diagnosis, and treatment. This may lead to positive advances in precision medicine, though it also raises challenges related to access to care, control of data, and cultural and political understandings of bodies. Some people want to use AI to augment human abilities through “smart drugs,” nanobots, and implantable devices, or by directly linking our brains to computers through brain-computer interfaces. Such uses can be exciting, but they also raise safety and ethical challenges, including the possibility of massively exacerbating inequalities between people.

Principles and Recommendations

Asilomar AI Principles

  • Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
  • Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
  • Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  • Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

The Morningside Group

  • Privacy and consent. We believe that citizens should have the ability — and right — to keep their neural data private… We propose the following steps to ensure this. For all neural data, the ability to opt out of sharing should be the default choice, and assiduously protected… Even with this approach, neural data from many willing sharers, combined with massive amounts of non-neural data — from Internet searches, fitness monitors and so on — could be used to draw ‘good enough’ conclusions about individuals who choose not to share. To limit this problem, we propose that the sale, commercial transfer and use of neural data be strictly regulated… Another safeguard is to restrict the centralized processing of neural data. We advocate that computational techniques, such as differential privacy or ‘federated learning’, be deployed to protect user privacy. The use of other technologies specifically designed to protect people’s data would help, too. Blockchain-based techniques, for instance, allow data to be tracked and audited, and ‘smart contracts’ can give transparent control over how data are used, without the need for a centralized authority. Lastly, open-data formats and open-source code would allow for greater transparency about what stays private and what is transmitted.
  • Agency and identity. As neurotechnologies develop and corporations, governments and others start striving to endow people with new capabilities, individual identity (our bodily and mental integrity) and agency (our ability to choose our actions) must be protected as basic human rights… We recommend adding clauses protecting such rights (‘neurorights’) to international treaties, such as the 1948 Universal Declaration of Human Rights. However, this might not be enough — international declarations and laws are just agreements between states, and even the Universal Declaration is not legally binding. Thus, we advocate the creation of an international convention to define prohibited actions related to neurotechnology and machine intelligence, similar to the prohibitions listed in the 2010 International Convention for the Protection of All Persons from Enforced Disappearance. An associated United Nations working group could review the compliance of signatory states, and recommend sanctions when needed.
  • Augmentation. Any lines drawn will inevitably be blurry, given how hard it is to predict which technologies will have negative impacts on human life. But we urge that guidelines are established at both international and national levels to set limits on the augmenting neurotechnologies that can be implemented, and to define the contexts in which they can be used… In particular, we recommend that the use of neural technology for military purposes be stringently regulated. For obvious reasons, any moratorium should be global and sponsored by a UN-led commission.
  • Bias. We advocate that countermeasures to combat bias become the norm for machine learning. We also recommend that probable user groups (especially those who are already marginalized) have input into the design of algorithms and devices as another way to ensure that biases are addressed from the first stages of technology development.
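
The first recommendation above points to computational techniques such as differential privacy and federated learning as safeguards against the centralized processing of neural data. As a purely illustrative sketch, not drawn from the Morningside Group or any resource listed here, the Python snippet below shows the Laplace mechanism, a basic building block of differential privacy; the query, sensitivity, and epsilon values are hypothetical placeholders.

    import numpy as np

    def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
        """Release a noisy statistic that satisfies epsilon-differential privacy."""
        scale = sensitivity / epsilon  # smaller epsilon (stronger privacy) means more noise
        return true_value + np.random.laplace(loc=0.0, scale=scale)

    # Hypothetical example: releasing how many users opted in to sharing neural data.
    # Adding or removing any one person changes this count by at most 1, so sensitivity = 1.
    noisy_count = laplace_mechanism(true_value=1024, sensitivity=1.0, epsilon=0.5)

In a federated setting, the same idea can be applied to model updates rather than raw records, so that identifiable neural data never needs to leave a person's device.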

Research and Resources

  1. “Four ethical priorities for neurotechnologies and AI,” Yuste et al., Nature, November 2017
  2. “How to Optimize Human Biology: Where Genome Editing and Artificial Intelligence Collide,” Eleonore Pauwels, Wilson Center, October 2017
  3. “Artificial Intelligence in Precision Cardiovascular Medicine,” Krittanawong et al., Journal of the American College of Cardiology, May 2017
  4. “Brain-Computer Interface,” Muhammad Adeel Javaid, SSRN, January 2014

AI Safety

AI safety is a multifaceted field. It can refer to efforts to prioritize research on robust and beneficial AI, and to the technical design of AI systems, particularly the effort by some researchers to build core safety mechanisms that address the “control problem” alongside concerns such as predictability, robustness in new environments, and the avoidance of accidents and unwanted side effects. It is important to consider whether each AI system has been tested long enough to surface safety problems and unintended consequences. AI safety also concerns the complex question of value alignment between humans and AI systems. Finally, social and political conditions contribute to the safety of AI because they can stoke or dampen dangerous race conditions.

Principles and Recommendations

Asilomar AI Principles

  • Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
  • Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
  • Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
  • Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

British Standard guide to the ethical design and application of robots and robotic systems

  • Robots as products should be designed to be safe, secure and fit for purpose, as other products.
  • The Precautionary Principle (as described by the European Commission.)

Google AI Principles

  • Be built and tested for safety. We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

The Information Technology Industry (ITI) AI Policy Principles

  • Safety and Controllability: Technologists have a responsibility to ensure the safe design of AI systems. Autonomous AI agents must treat the safety of users and third parties as a paramount concern, and AI technologies should strive to reduce risks to humans. Furthermore, the development of autonomous AI systems must have safeguards to ensure controllability of the AI system by humans, tailored to the specific context in which a particular system operates.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

  • Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  • Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
  • Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
  • Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

Research and Resources

  1. “AI safety resources,” Victoria Krakovna
  2. “The case for taking AI seriously as a threat to humanity,” Kelsey Piper, Vox, May 2019
  3. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” written by 26 authors from 14 institutions, February 20, 2018
  4. “AI Risk Communications Strategies,” Ben Garfinkel, Allan Dafoe, Owen Cotton-Barratt, August 8, 2016
  5. “Concrete Problems in AI Safety,” Amodei et al., arXiv, July 2016

Security and Cybersecurity

AI impacts the landscape of national and global security in numerous ways, from generating new modes of information warfare to expanding the threat landscape and contributing to destabilization and weaponization. Moreover, AI will increasingly be used as a tool to help carry out cyberattacks. This will both amplify existing threats and pose novel ones, since it enables attacks at greater scale and with greater complexity and sophistication, potentially even from unsophisticated actors. AI systems also have vulnerabilities of their own: AI software can be hacked, and the data it relies upon can be tweaked or manipulated. Adversarial machine learning refers to crafting inputs that confuse an AI system and cause it to make a mistake; the same techniques are also used defensively to test the robustness of one's own systems (a brief sketch follows below). As AI is increasingly featured in the bots and interfaces we form connections with, there will also be novel security risks relating to the abuse of human trust and reliance.
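
To make adversarial machine learning concrete, the sketch below shows one widely studied technique, the Fast Gradient Sign Method (FGSM). It is a minimal, hypothetical PyTorch illustration rather than a prescription from any of the resources listed here; the model, inputs, labels, and epsilon value are placeholders.

    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Return a copy of input batch x perturbed to increase the model's loss (FGSM)."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)  # the loss an attacker wants to increase
        loss.backward()
        # Step each input feature in the direction that most increases the loss, bounded by epsilon.
        perturbed = x_adv + epsilon * x_adv.grad.sign()
        return perturbed.clamp(0.0, 1.0).detach()  # keep values in a valid input range

    # Defensive use: compare a classifier's accuracy on clean inputs with its accuracy on
    # fgsm_perturb(model, x, y) to gauge robustness before deployment.

The same routine an attacker might use to induce mistakes can thus be run by developers as a robustness test, which is the defensive use described above.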

Principles and Recommendations

Asilomar AI Principles

  • Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
  • Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
  • Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
  • Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

Charlevoix Common Vision for the Future of Artificial Intelligence, G7 2018

  • Endeavour to promote human-centric AI and commercial adoption of AI, and continue to advance appropriate technical, ethical and technologically neutral approaches by: safeguarding privacy including through the development of appropriate legal regimes; investing in cybersecurity, the appropriate enforcement of applicable privacy legislation and communication of enforcement decisions; informing individuals about existing national bodies of law, including in relation to how their personal data may be used by AI systems; promoting research and development by industry in safety, assurance, data quality, and data security; and exploring the use of other transformative technologies to protect personal privacy and transparency.
  • Promote investment in research and development in AI that generates public trust in new technologies, and encourage industry to invest in developing and deploying AI that supports economic growth and women’s economic empowerment while addressing issues related to accountability, assurance, liability, security, safety, gender and other biases and potential misuse.
  • Encourage initiatives, including those led by industry, to improve digital security in AI and developing technologies, such as the Internet of Things and cloud services, as well as through the development of voluntary codes of conduct, standards or guidelines and the sharing of best practices.

Google AI Principles

  • Be built and tested for safety. We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

The Information Technology Industry (ITI) AI Policy Principles

  • Cybersecurity and Privacy: Just like technologies that have come before it, AI depends on strong cybersecurity and privacy provisions. We encourage governments to use strong, globally-accepted and deployed cryptography and other security standards that enable trust and interoperability. We also promote voluntary information-sharing on cyberattacks or hacks to better enable consumer protection. The tech sector incorporates strong security features into our products and services to advance trust, including using published algorithms as our default cryptography approach as they have the greatest trust among global stakeholders, and limiting access to encryption keys. Data and cybersecurity are integral to the success of AI. We believe for AI to flourish, users must trust that their personal and sensitive data is protected and handled appropriately. AI systems should use tools, including anonymized data, de-identification, or aggregation to protect personally identifiable information whenever possible.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

  • Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  • Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
  • Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
  • Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

Research and Resources

  1. “Toward AI Security: Global Aspirations for a More Resilient Future,” Jessica Cussins Newman, Center for Long-Term Cybersecurity, UC Berkeley, February 2019
  2. “Emerging Disruptive Technologies and Their Potential Threat to Strategic Stability and National Security,” Christopher A. Bidwell and Bruce W. MacDonald, Federation of American Scientists, September 2018
  3. “Syllabus: Artificial Intelligence and International Security,” Remco Zwetsloot, Future of Humanity Institute, July 2018
  4. “Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority,” Richard Danzig, Center for a New American Security, May 2018
  5. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” written by 26 authors from 14 institutions, February 20, 2018
  6. “The Risks of Artificial Intelligence to Security and the Future of Work,” Osonde A. Osoba, William Welser IV, RAND, December 2017
  7. “Ethically Aligned Design, Version 2: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems,” IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, December 2017
  8. “Artificial Intelligence and National Security,” Greg Allen and Taniel Chan, The Belfer Center, July 2017
  9. “Cyber Threat Intelligence and Information Sharing,” Johnson et al., NIST, May 2017
  10. “A Roadmap for US Robotics: From Internet to Robotics, 2016 Edition,” organized by many universities and the NSF, November 2016
  11. “Research Priorities for Robust and Beneficial Artificial Intelligence,” Stuart Russell, Daniel Dewey, Max Tegmark, arXiv, February 2016
  12. “Applications of Artificial Intelligence Techniques to Combating Cyber Crimes: A Review,” Selma Dilek, Hüseyin Çakır, Mustafa Aydın, arXiv, February 2015
  13. “Cybersecurity and Artificial Intelligence: From Fixing the Plumbing to Smart Water,” Carl E. Landwehr, IEEE Security & Privacy, 2008

Autonomous Weapons

Weapons and drones are already being automated to varying degrees. How much autonomy is acceptable in weapon systems remains an ongoing international debate, with many civil society organizations calling for international and national bans on lethal autonomous weapon systems (LAWS). FLI has additionally launched a pledge through which 160 organizations, including Google DeepMind, and more than 2,400 individuals, including Elon Musk and Stuart Russell, have committed to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” Arguments against LAWS include that such weapons may violate international humanitarian law by “removing a human from the loop,” that it is morally wrong to ask a machine to determine whom to kill, and that an AI arms race must be avoided because it could lower the threshold for war or alter the speed, scale, and scope of its effects. AI is already a valued tool in the US military for surveillance and intelligence purposes, as in Project Maven, though the role of industry in this work has become contentious. Employees at Google protested the company's involvement in the project, leading Google to declare that it will not work on autonomous weapons.

Principles and Recommendations

Asilomar AI Principles

  • AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
  • Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
  • Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

British Standard guide to the ethical design and application of robots and robotic systems

  • Robots should not be designed solely or primarily to kill or harm humans.
  • The use of robots in military applications should not remove responsibility and accountability from a human. The deployment of robots should be in accordance with international humanitarian law and laws governing armed conflict.

Campaign to Stop Killer Robots

  • Giving machines the power to decide who lives and dies on the battlefield is an unacceptable application of technology. Human control of any combat robot is essential to ensuring both humanitarian protection and effective legal control. The campaign seeks to prohibit taking a human out of the loop with respect to targeting and attack decisions. A comprehensive, pre-emptive prohibition on the development, production, and use of fully autonomous weapons (weapons that operate on their own without human intervention) is urgently needed. This could be achieved through an international treaty, as well as through national laws and other measures. The Campaign to Stop Killer Robots urges all countries to consider and publicly elaborate their policy on fully autonomous weapons, particularly with respect to the ethical, legal, policy, technical, and other concerns that have been raised. We support any action to urgently address fully autonomous weapons in any forum, including the Convention on Conventional Weapons (CCW), which held three meetings in 2014-2016 to discuss questions relating to lethal autonomous weapons systems.

Google AI Principles

  • We will not design or deploy AI in the following application area: Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

Human Rights Watch: “Losing Humanity”

  • Robots with complete autonomy would be incapable of meeting international humanitarian law standards.
  • To All States:
    • Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.
    • Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.
    • Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases.
  • To Roboticists and Others Involved in the Development of Robotic Weapons:
    • Establish a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.

Research and Resources

  1. Open letters supporting an international ban of autonomous weapons from AI and robotics researchers around the world, from Canada, from Australia, from Belgium, and from AI companies
  2. “Lethal Artificial Intelligence and Change: The Future of International Peace and Security,” Denise Garcia, International Studies Review, May 2018
  3. “Mapping the Development of Autonomy in Weapon Systems,” Dr Vincent Boulanin and Maaike Verbruggen, SIPRI, November 2017
  4. “Ensuring Lethal Autonomous Weapon Systems Comply with International Humanitarian Law,” Maziar Homayounnejad, SSRN, November 2017
  5. “Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban,” Human Rights Watch and the Harvard Law School International Human Rights Clinic, December 2016
  6. “Autonomous Weapons and Operational Risk,” Paul Scharre, Center for a New American Security, March 2016
  7. “Robotics: Ethics of artificial intelligence,” Stuart Russell, Nature, May 2015
  8. “Killing by Machine,” Article 36, April 2015
  9. “Shaking the Foundations: The Human Rights Implications of Killer Robots,” Human Rights Watch and the Harvard Law School International Human Rights Clinic, May 2014
  10. “The Case Against Killer Robots: Why the United States Should Ban Them,” Denise Garcia, Foreign Affairs, May 2014

Catastrophic and Existential Risk

Key strategists, AI researchers, and business leaders believe that advanced AI poses one of the greatest threats to human survival, including catastrophic and existential risks to humanity in the long-term (see the Artificial General Intelligence (AGI) and Superintelligence section for more on this point). The combination of AI with cyber, nuclear, robotic/drone, or biological weapons could additionally be devastating for enormous numbers of people. There are very few robust protections in place today if weapons become “cognified” in this way.

Principles and Recommendations

Asilomar AI Principles

  • Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
  • Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
  • Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

Google AI Principles

  • Be built and tested for safety. We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

  • Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  • Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
  • Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
  • Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

Research and Resources

Artificial General Intelligence (AGI) and Superintelligence

The idea that machine intelligence could equal human intelligence in most or all domains is called strong AI or artificial general intelligence (AGI). The idea of such a machine then greatly surpassing human intelligence, possibly through recursive self-improvement, is referred to as superintelligence, or the intelligence explosion. Many AI experts agree that artificial general intelligence is possible, though they disagree about timelines and specifics. AGI would face all of the challenges of narrow AI, but would additionally pose its own risks, such as the containment problem.

Principles and Recommendations

Asilomar AI Principles

  • Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
  • Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
  • Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
  • Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  • Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
  • Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Research and Resources

  1. “The case for taking AI seriously as a threat to humanity,” Kelsey Piper, Vox, May 2019
  2. “Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority,” Richard Danzig, Center for a New American Security, May 2018
  3. “Existential Risk: Diplomacy and Governance,” Global Priorities Project, 2017
  4. “A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy,” Seth D. Baum, Global Catastrophic Risk Institute, November 2017
  5. “Artificial General Intelligence: Timeframes & Policy White Paper,” Allison Duettmann, Foresight Institute, November 2017
  6. “Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process,” Anthony M. Barrett and Seth D. Baum, The Technological Singularity, pp. 127-140, May 2017
  7. “Policy Desiderata in the Development of Machine Superintelligence,” Nick Bostrom, Allan Dafoe, and Carrick Flynn, 2017
  8. Superintelligence, Nick Bostrom, 2014
  9. “International Cooperation vs. AI Arms Race,” Brian Tomasik, Foundational Research Institute, December 2013

 
