FLI September 2018 Newsletter

Stanislav Petrov’s daughter Elena holds the 2018 Future of Life Award, flanked by her husband Victor. From left: Ariel Conn (FLI), Lucas Perry (FLI), Hannah Fry, Victor, Elena, Steven Mao (exec. producer of the Petrov film The Man Who Saved the World), Max Tegmark (FLI)

To celebrate that September 26, 2018 is not the 35th anniversary of World War III, Stanislav Petrov, the man who helped avert an all-out nuclear exchange between Russia and the U.S. on September 26, 1983, was honored with the $50,000 Future of Life Award at a ceremony at the Museum of Mathematics in New York.

Former United Nations Secretary General Ban Ki-Moon said: “It is hard to imagine anything more devastating for humanity than all-out nuclear war between Russia and the United States. Yet this might have occurred by accident on September 26, 1983, were it not for the wise decisions of Stanislav Yevgrafovich Petrov. For this, he deserves humanity’s profound gratitude. Let us resolve to work together to realize a world free from fear of nuclear weapons, remembering the courageous judgement of Stanislav Petrov.”

Until his death, Petrov maintained a humble outlook on the event that made him famous. “I was just doing my job,” he’d say. Read the full story here.

Also in attendance at the ceremony was Steven Mao, who helped produce the movie about Petrov, “The Man Who Saved the World,” which has just been released on Amazon, iTunes, and Google Play.

The risk that we might over-trust algorithms with nuclear weapons could grow as more AI features are added to nuclear systems, as was discussed in this month’s FLI podcast. Petrov’s FLI award was also covered by Vox, the Daily Mail, Engineering 360, and The Daily Star.

European Parliament Passes Resolution Supporting a Ban on Killer Robots
By Ariel Conn

The European Parliament passed a resolution on September 12, 2018 calling for an international ban on lethal autonomous weapons systems (LAWS). The resolution was adopted with 82% of the members voting in favor of it.

Among other things, the resolution calls on EU Member States and the European Council “to develop and adopt, as a matter of urgency … a common position on lethal autonomous weapon systems that ensures meaningful human control over the critical functions of weapon systems, including during deployment.”

Also mentioned in the resolution were the many open letters signed by AI researchers and scientists from around the world, who are calling on the UN to negotiate a ban on LAWS.

Listen: Podcast with Will MacAskill

How are we to make progress on AI alignment given moral uncertainty? What are the ideal ways of resolving conflicting value systems and views of morality among persons? How ought we to go about AI alignment given that we are unsure about our normative and metaethical theories? How should preferences be aggregated and persons idealized in the context of our uncertainty?

In this podcast, Lucas spoke with William MacAskill. Will is a professor of philosophy at the University of Oxford and a co-founder of the Centre for Effective Altruism, Giving What We Can, and 80,000 Hours. Will helped to create the effective altruism movement, and his writing focuses mainly on issues of normative and decision-theoretic uncertainty, as well as general issues in ethics.

Topics discussed in this episode include:

  • Will’s current normative and metaethical credences
  • How we ought to practice AI alignment given moral uncertainty
  • Moral uncertainty in preference aggregation
  • Idealizing persons and their preferences
  • The most neglected portion of AI alignment
To listen to the podcast, click here, or find us on SoundCloud, iTunes, Google Play, and Stitcher.

On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official and the author of Army of None: Autonomous Weapons in the Future of War. Mike is a professor of political science at the University of Pennsylvania and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.

Topics discussed in this episode include:

  • The sophisticated military robots developed by the Soviets during the Cold War
  • How technology shapes human decision-making in war
  • “Automation bias” and why having a “human in the loop” is trickier than it sounds
  • The United States’ stance on automation with nuclear weapons
  • Why weaker countries might have more incentive to build AI into warfare
  • “Deep fakes” and other ways AI could sow instability and provoke crisis
  • The multipolar nuclear world of the US, Russia, China, India, Pakistan, and North Korea
You can listen to the podcast here, and check us out on SoundCloud, iTunes, Google Play, and Stitcher.

AI Safety Research Highlights

Making AI Safe in an Unpredictable World: An Interview with Thomas G. Dietterich
By Jolene Creighton

Thomas G. Dietterich, Emeritus Professor of Computer Science at Oregon State University, explains that making AI systems safe in an unpredictable world begins with ensuring that they aren’t too confident: they need to recognize when they encounter an unfamiliar object rather than misidentify it as something they are already acquainted with.
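As a rough, illustrative sketch of that idea (not Dietterich’s actual method), a classifier can be made to flag unfamiliar inputs by abstaining whenever its top predicted probability falls below a cutoff; the function name and threshold value below are hypothetical:

    import numpy as np

    def predict_or_flag_unknown(class_probabilities, threshold=0.90):
        """Return the most likely class index, or None if the model is too unsure.

        class_probabilities: 1-D array of probabilities from any classifier.
        threshold: hypothetical minimum confidence required to commit to a label.
        """
        best = int(np.argmax(class_probabilities))
        if class_probabilities[best] < threshold:
            return None  # too uncertain: treat the input as unfamiliar
        return best

    # A prediction spread thinly across classes is flagged rather than forced:
    print(predict_or_flag_unknown(np.array([0.40, 0.35, 0.25])))  # -> None
    print(predict_or_flag_unknown(np.array([0.97, 0.02, 0.01])))  # -> 0

This is only a toy illustration of the underlying point that a safe system needs a way to say “I don’t know”; the full interview discusses the problem in much more depth.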

What We’ve Been Up to This Month

Max Tegmark, Lucas Perry, and Ariel Conn all helped present the Future of Life Award to Stanislav Petrov’s children.

Max Tegmark spent much of the past month giving various talks about AI safety in South Korea, Japan, and China.

Ariel Conn attended the Cannes Corporate Media and TV Awards where Slaughterbots was awarded the Gold prize (see picture). She also participated in a panel discussion about AI, jobs, and ethics for an event in Denver hosted by the Littler Law Firm.

FLI in the News

If you’re interested in job openings, research positions, and volunteer opportunities at FLI and our partner organizations, please visit our Get Involved page.

Highlighted opportunity: 
The Centre for the Study of Existential Risk (CSER) invites applications for an Academic Programme Manager.

Subscribe To Our Newsletter

Stay up to date with our grant announcements, new podcast episodes and more.

You can unsubscribe at any time.