Discussion About the Asilomar AI Principles

Click on any of the articles below to join the discussions.

AI Should Provide a Shared Benefit for as Many People as Possible

Shared Benefit Principle: AI technologies should benefit and empower as many people as possible. Today, the combined wealth of the eight richest people in the world is greater than that of the poorest half of the global population. That is, 8 people have more than the combined wealth of 3,600,000,000 others. This is already an […]

Research for Beneficial Artificial Intelligence

Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence. It’s no coincidence that the first Asilomar Principle is about research. On the face of it, the Research Goal Principle may not seem as glamorous or exciting as some […]

When Should Machines Make Decisions?

Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives. When is it okay to let a machine make a decision instead of a person? Most of us allow Google Maps to choose the best route to a […]

Artificial Intelligence: The Challenge to Keep It Safe

Safety Principle: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible. When a new car is introduced to the world, it must pass various safety tests to satisfy not just government regulations, but also public expectations. In fact, safety has become a top selling point among […]

Can AI Remain Safe as Companies Race to Develop It?

Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. Artificial intelligence could bestow incredible benefits on society, from faster, more accurate medical diagnoses to more sustainable management of energy resources, and so much more. But in today’s economy, […]

Safe Artificial Intelligence May Start with Collaboration

Research Culture Principle: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI. Competition and secrecy are just part of doing business. Even in academia, researchers often keep ideas and impending discoveries to themselves until grants or publications are finalized. […]

Can We Properly Prepare for the Risks of Superintelligent AI?

Risks Principle: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact. We don’t know what the future of artificial intelligence will look like. Though some may make educated guesses, the future is unclear. AI could keep developing like all other technologies, […]

Artificial Intelligence and Income Inequality

Shared Prosperity Principle: The economic prosperity created by AI should be shared broadly, to benefit all of humanity. Income inequality is a well-recognized problem. The gap between the rich and poor has grown over the last few decades, but it became increasingly pronounced after the […]

Is an AI Arms Race Inevitable?

AI Arms Race Principle: An arms race in lethal autonomous weapons should be avoided.* Perhaps the scariest aspect of the Cold War was the nuclear arms race. At its peak, the US and Russia held over 70,000 nuclear weapons, only a fraction of which could […]

Preparing for the Biggest Change in Human History

Importance Principle: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. In the history of human progress, a few events have stood out as especially revolutionary: the intentional […]

How Smart Can AI Get?

Capability Caution Principle: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities. A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have […]

Can We Ensure Privacy in the Era of Big Data?

Personal Privacy Principle: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data. A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and […]

How Do We Align Artificial Intelligence with Human Values?

A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best. What will trigger this change? Artificial intelligence. Recently, […]

A Principled AI Discussion in Asilomar

We, the organizers, found it extraordinarily inspiring to be a part of the BAI 2017 conference, the Future of Life Institute’s second conference on the future of artificial intelligence. Along with being a gathering of endlessly accomplished and interesting people, it gave a palpable sense of shared mission: a major change is coming, over unknown […]

Rules for Discussion

We want to encourage discussion of all principles, but please abide by the following rules to keep the discussion focused and friendly:

Users should adhere to a code of conduct in which the discussion remains civil, helpful, and constructive. Users should only post comments or questions that they would also make in person, in a professional public setting, such as a classroom or seminar hall.

Posts that do not contribute to constructive discussion will not be approved or will be removed. In particular, posts must be appropriate for an academic workplace; that is, comments should not contain language or content that is:

  • Rude or disrespectful;
  • Combative or overly aggressive;
  • Unpleasant or offensive by common workplace standards;
  • Targeted at individuals, rather than at arguments or ideas.

To enable useful discussions, posts should also not contain language or content that is:

  • Outside of the scope of the forum topic;
  • Overly repetitive, including comments repeated in multiple forums;
  • Incomprehensible or extremely lengthy;
  • Commercial in nature.

We will not accept comments that contain promotions or links to personal blogs, or that do not address the topic of the post in a scientific and rational manner. We value a variety of opinions, so please feel free to disagree; just do so politely.

Full Interviews with AI Researchers
