FLI Special Newsletter: Future of Life Award 2021

This picture shows Joe Farman (left), Susan Solomon (centre) and Stephen O. Andersen (right), recipients of the 2021 Future of Life Award.

Three Heroes Who Helped Save the Ozone Layer

What happens when the technologies that we can’t live without become technologies that we can’t live with?

The Future of Life Institute is thrilled to present Joe Farman, Susan Solomon, and Stephen O. Andersen with the 2021 Future of Life Award for their important contributions to the passage and success of the Montreal Protocol. The Protocol banned the production and use of ozone-depleting chlorofluorocarbon gases (CFCs) and as a result the Antarctic ozone hole is now closing.

Had the world not acted, the global ozone layer would have collapsed by 2050. By 2070, the UV index would have reached 30 – anything over 11 is extreme – causing roughly 2.8 million excess skin cancer deaths and 45 million cataracts. It's estimated that the world would also have been 4.5 degrees Fahrenheit warmer – a level most climatologists agree is disastrously high – prompting the collapse of entire ecosystems and agriculture.

In 1985, geophysicist Joe Farman and his team from the British Antarctic Survey discovered the ozone hole above Antarctica. Their measurements indicated an alarming rate of ozone depletion and effectively shocked the scientific community, as well as governments and wider society, into action.

Atmospheric chemist Susan Solomon determined the cause of the hole: stratospheric clouds that form only above Antarctica were catalyzing additional ozone-depleting reactions when lit by the sun during spring. Her research drove momentum on the road to regulation, and she herself acted as an important bridge between the scientific community and policymakers, showing the importance of interdisciplinary communication in creating a united front against global challenges.

From medical to military uses, 240 industrial sectors would need to be reorganized to prevent global catastrophe. Stephen O. Andersen, Deputy Director for Stratospheric Ozone Protection at the US Environmental Protection Agency during the Reagan administration, took on the challenge of transforming this industrial juggernaut. He founded and, from 1988 to 2012, co-chaired the Technology and Economic Assessment Panel, working with industry to develop hundreds of innovative solutions for phasing out CFCs. His work was critical to the Protocol's success.

The Future of Life Award is given annually to individuals who, without having received much recognition at the time, have helped make today dramatically better than it may otherwise have been. You can find out more about the Award here, and please explore the educational materials we have produced about this year’s winners below!

MinuteEarth’s “How to Solve Every Crisis” Video

To celebrate the fifth anniversary of the Future of Life Award, FLI collaborated with popular YouTube channel MinuteEarth to produce a video drawing together lessons from the stories of the Montreal Protocol, the focus of this year’s award, and the eradication of smallpox, the focus of last year’s award, for managing global catastrophic threats — from ecological devastation to the spread of global pandemics and beyond.

Watch the video here.

Special Podcast Episodes

Photo of Susan Solomon and Stephen O. Andersen, podcast guests and winners of the 2021 Future of Life Award.

FLI Podcast Special: Susan Solomon and Stephen O. Andersen on Saving the Ozone Layer

In this special episode of the FLI Podcast, Lucas Perry speaks with our 2021 Future of Life Award winners about what Stephen Andersen describes as “science and politics at its best” — the scientific research that revealed ozone depletion and the work that went into the Montreal Protocol, which steered humanity away from the chemical compounds that caused it.

Among other topics, Susan Solomon discusses the inquiries and discoveries that led her to study the atmosphere above the Antarctic, and Stephen describes how together science and public pressure moved industry faster than the speed of politics. To wrap up, the two apply lessons learnt to today’s looming global threats, including climate change.

Cosmic Queries in the O-Zone: Saving the World with Susan Solomon & Stephen Andersen


What happens to our planet without ozone? How did entire industries move to new, safer chemicals? How does the public’s interest in environmental issues create the possibility for meaningful action — and could we do it all again in today’s divided world?

Astrophysicist Neil deGrasse Tyson and comedian Chuck Nice speak with Susan and Stephen on the popular podcast StarTalk about what they did to save the planet — and what's left to be done.

Further (Non-FLI) Resources

The Hole: A Short Film on the Montreal Protocol, narrated by Sir David Attenborough

A short film produced by the United Nations Ozone Secretariat explaining the scientific and policy demands that drove the Montreal Protocol, a global, cooperative ban on CFCs to save the ozone layer. The Protocol was the first treaty in United Nations history to achieve universal ratification; NASA estimates that, had the world not acted, the ozone hole could have been ten times worse.

FLI is a 501c(3) non-profit organisation, meaning donations are tax exempt in the United States.
If you need our organisation number (EIN) for your tax return, it’s 47-1052538.

FLI is registered in the EU Transparency Register. Our ID number is 787064543128-10.

FLI Summer 2021 Newsletter

FLI’s Take on the EU AI Act

How you understand risk may differ from how your neighbors understand it. But when threats appear, it's critical for everyone to agree — and act. That's what's driving our work on the European Union's AI Act, described as "one of the first major policy initiatives worldwide focused on protecting people from harmful AI" in a recent article in Wired magazine.

The article references our work and priorities in the EU: With the very definition of “High Risk” under negotiation, we’re making the case that the threshold for what counts as “subliminal manipulation” should be lowered — and should include addictive adtech, which contributes to misinformation, extremism and, arguably, poor mental health.

The European Commission is the first major regulator in the world to propose a law on AI and will ultimately set policy for the EU’s 27 member states. FLI has submitted its feedback on this landmark act, which you can read here. Our top recommendations include:

  • Ban any and all AI manipulation that adversely impacts fundamental rights or seriously distorts human decision-making.
  • Ensure AI providers consider the social impact of their systems — because applications that do not violate individual rights may nonetheless have broader societal consequences.
  • Require a complete risk assessment of AI systems, rather than classifying entire systems by a single use. The current proposal, for example, would regulate an AI that assesses students’ performance, but would have nothing to say when that same AI offers biased recommendations in educational materials.


Taken together, our ten recommendations build on FLI's foundational Asilomar AI Principles for AI governance.

Policy & Outreach Efforts

How do you prove that you've been harmed by an AI when you can't access the data or algorithm that caused the harm? If a self-learning AI causes harm 11 years after the product was put on the market, should its producer be allowed to disavow liability? And can a car manufacturer shift liability for an autonomous vehicle simply by burying a legal clause in lengthy terms and conditions?

FLI explored these and other questions in our response to the EU’s new consultation on AI liability. We argued that new rules are necessary to protect the rights of consumers and to encourage AI developers to make their products safer. You can download our full response here.

“Lethal Autonomous Weapons Exist; They Must Be Banned”

Following a recent UN report stating that autonomous weapons were deployed to kill Libyan National Army forces in 2020, Stuart Russell and FLI’s Max Tegmark, Emilia Javorsky and Anthony Aguirre co-authored an article in IEEE Spectrum calling for an immediate moratorium on the development, deployment, and use of lethal autonomous weapons.

Future of Life Institute set to launch $25 million grant program for Existential Risk Reduction


FLI intends to launch its $25M grants program in the coming weeks! This program will focus on reducing existential risks, events that could cause the permanent collapse of civilization or even human extinction.

Watch this space!

New Podcast Episodes

Michael Klare on the Pentagon’s view of Climate Change and the Risk of State Collapse

The US military views climate change as a leading threat to national security, says Michael Klare. On this episode of the FLI Podcast, Klare, the Five College Professor of Peace & World Security Studies, discusses the Pentagon's strategy for adapting to this emergent threat.

In the interview, Klare notes that climate change has already done “tremendous damage” to US military bases across the Gulf of Mexico. Later, he discusses how global warming is driving new humanitarian crises that the military must respond to. Also of interest: the military’s view of climate change as a “threat multiplier,” a complicating factor in the complex web of social, economic, and diplomatic tensions that could heighten the probability of armed conflict.

Avi Loeb on Oumuamua, Aliens, Space Archeology, Great Filters and Superstructures

Oumuamua, an object with seemingly unnatural properties, appeared from beyond our solar system in 2017. Its appearance raised questions – and controversial theories – about where it came from. In this episode of the FLI Podcast, Avi Loeb, Professor of Science at Harvard University, shared theories of Oumuamua’s origins — and why science sometimes struggles to explore extraordinary events.

Loeb describes the common properties of space debris – "bricks left over from the construction project of the solar system" – and what he finds so unique about Oumuamua among these celestial objects. He shares why many mainstream theories don't satisfy him, and recounts the history of scientists investigating challenging questions, dating back to the days of Copernicus.


A new preliminary report from the US Office of the Director of National Intelligence reported 144 cases of what it called “unidentified aerial phenomena” — a new phrase for UFOs. In this bonus episode, Lucas continues his conversation with Avi Loeb to discuss the importance of this report and what it means for science and the search for extraterrestrial intelligence.

News & Reading

The Centre for the Study of Existential Risk is an interdisciplinary research centre at the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilizational collapse.

They are seeking a Senior Research Associate / Academic Programme Manager to play a central role in the operation and delivery of research programmes, including the management of major research projects, line management of postdoctoral researchers, strategic planning, and fundraising.

For consideration, apply by 20 September.

How the U.S. Military can Fight the ‘Existential Threat’ of Climate Change

After the US Secretary of Defense called climate change "a profoundly destabilizing force for our world," our recent podcast guest Michael Klare penned an op-ed in the LA Times. Klare, the Five College Professor of Peace & World Security Studies, calls on the Pentagon to outline specific actions that would lead to "far greater reductions in fossil fuel use and greenhouse gas emissions," including allocating research funds to green technologies.

Rain Observed at the Summit of Greenland Ice Sheet for the First Time

Rain was reported in an area that has seen temperatures above freezing only three times in recorded history. Rain falling on the ice sheet, which sits 10,551 feet above sea level, is warmer than the ice, creating conditions for meltwater to run off or refreeze.

A recent UN report has suggested that sustained global warming beyond 2 degrees Celsius would lead to the total collapse of the ice sheet. The presence of rain could accelerate a melt-off already underway, eventually raising sea levels by as much as 23 feet.


FLI June 2021 Newsletter

The Future of Life Institute is delighted to announce a $25M multi-year grant program aimed at reducing existential risk. Existential risks are events that could cause human extinction or permanently and drastically curtail humanity’s potential, and currently efforts to mitigate these risks receive remarkably little funding and attention relative to their importance. This program is made possible by the generosity of cryptocurrency pioneer Vitalik Buterin and the Shiba Inu community.

Specifically, the program will support interventions designed to directly reduce existential risk; prevent politically destabilising events that compromise international cooperation; actively improve international cooperation; and develop positive visions for the long-term future that incentivise both international cooperation and the development of beneficial technologies. The emphasis on collaboration stems from our conviction that technology is not a zero-sum game, and that in all likelihood it will cause humanity either to flourish or to flounder.

Shiba Inu Grants will support projects, particularly research. Vitalik Buterin Fellowships will bolster the pipeline through which talent flows towards our areas of focus; this may include funding for high school summer programs, college summer internships, graduate fellowships and postdoctoral fellowships.

To read more about the program, click here.

New Podcast Episodes

Nicolas Berggruen on the Dynamics of Power, Wisdom and Ideas in the Age of AI

In this episode of the Future of Life Institute Podcast, Lucas is joined by investor and philanthropist Nicolas Berggruen to discuss the nature of wisdom, why it lags behind technological growth and the power that comes with technology, and the role ideas play in the value alignment of technology.

Later in the episode, the conversation turns to the increasing concentration of power and wealth in society, universal basic income and a proposal for universal basic capital.

To listen, click here.

Reading & Resources

The Centre for the Study of Existential Risk is hiring for a Deputy Director!

The Centre for the Study of Existential Risk, University of Cambridge, is looking for a new Deputy Director. This role will involve taking full operational responsibility for the day-to-day activities of the Centre, including people and financial management, and contributing to strategic planning for the centre.

CSER is looking for someone with strong experience in operations and strategy, with the interest and intellectual versatility to engage with and communicate the Centre’s research.

The deadline for applications is Sunday 4 July. More details on both the role and person profile are available in the further particulars, here.

The Leverhulme Centre for the Future of Intelligence (CFI) and CSER are also hiring for a Centre Administrator to lead the Department’s professional services support team. Further details can be found here.

The Global Catastrophic Risk Institute is looking for collaborators and advisees!

The Global Catastrophic Risk Institute (GCRI) is currently welcoming inquiries from people who are interested in seeking their advice and/or collaborating with them. These inquiries can concern any aspect of global catastrophic risk, but GCRI is particularly keen to hear from those interested in its ongoing projects. These projects include AI policy, expert judgement on long-term AI, forecasting global catastrophic risks and improving China-West relations.

Participation can consist of anything from a short email exchange to more extensive project work. In some cases, people may be able to get involved by contributing to ongoing dialogue, collaborating on research and outreach activities, and co-authoring publications. Inquiries are welcome from people at any career point, including students, any academic or professional background, and any place in the world. People from underrepresented groups are especially encouraged to reach out.

Find more details here!

FLI May 2021 Newsletter

The outreach team is now recruiting Spanish and Portuguese speakers for translation work!

The goal is to make our social media content accessible to our rapidly growing audience in Central America, South America, and Mexico. The translator would be sent between one and five posts a week for translation. In general, these snippets of text would only be as long as a single tweet.

We ask translators to set aside up to two hours per week, though we do not expect the work to exceed one hour per week. The hourly compensation is $15. Depending on outcomes for this project, the role may be short-term.

For more details and to apply, please fill out this form. We are also registering other languages for future opportunities so those with fluency in other languages may fill out this form as well.

New Podcast Episodes

Bart Selman on the Promises and Perils of Artificial Intelligence

In this new podcast episode, Lucas is joined by Bart Selman, Professor of Computer Science at Cornell University, to discuss all things artificial intelligence.

Highlights of the interview include Bart talking about what superintelligence could consist in, whether superintelligent systems might solve problems like income inequality and whether they could teach us anything about moral philosophy. He also discusses the possibility of AI consciousness, the grave threat of lethal autonomous weapons and whether the global race to advanced artificial intelligence may negatively affect our chances of successfully solving the alignment problem. Enjoy!

Reading & Resources

The Centre for the Study of Existential Risk is hiring for a Deputy Director!

The Centre for the Study of Existential Risk, University of Cambridge, is looking for a new Deputy Director. This role will involve taking full operational responsibility for the day-to-day activities of the Centre, including people and financial management, and contributing to strategic planning for the centre.

CSER is looking for someone with strong experience in operations and strategy, with the interest and intellectual versatility to engage with and communicate the Centre’s research.

The deadline for applications is Sunday 4 July. More details on both the role and person profile are available in the further particulars, here.

The Leverhulme Centre for the Future of Intelligence (CFI) and CSER are also hiring for a Centre Administrator to lead the Department’s professional services support team. Further details can be found here.

The Global Catastrophic Risk Institute is looking for collaborators and advisees!

The Global Catastrophic Risk Institute (GCRI) is currently welcoming inquiries from people who are interested in seeking their advice and/or collaborating with them. These inquiries can concern any aspect of global catastrophic risk, but GCRI is particularly keen to hear from those interested in its ongoing projects. These projects include AI policy, expert judgement on long-term AI, forecasting global catastrophic risks and improving China-West relations.

Participation can consist of anything from a short email exchange to more extensive project work. In some cases, people may be able to get involved by contributing to ongoing dialogue, collaborating on research and outreach activities, and co-authoring publications. Inquiries are welcome from people at any career point, including students, any academic or professional background, and any place in the world. People from underrepresented groups are especially encouraged to reach out.

Find more details here!

This article in the New York Times details how scientific breakthroughs together with advocacy efforts caused the average lifespan to double between 1920 and 2020. We were particularly pleased to see last year’s Future of Life Award winner Bill Foege mentioned for his crucial role in the eradication of smallpox.

“The story of our extra life span almost never appears on the front page of our actual daily newspapers, because the drama and heroism that have given us those additional years are far more evident in hindsight than they are in the moment. That is, the story of our extra life is a story of progress in its usual form: brilliant ideas and collaborations unfolding far from the spotlight of public attention, setting in motion incremental improvements that take decades to display their true magnitude.”

The International Committee of the Red Cross (ICRC) recently released its official position on autonomous weapons: "Unpredictable autonomous weapon systems should be expressly ruled out…This would best be achieved with a prohibition on autonomous weapon systems that are designed or used in a manner such that their effects cannot be sufficiently understood, predicted and explained."

FLI April 2021 Newsletter

Exciting Updates to the FLI Website!

Thanks to the tireless efforts of Anna Yelizarova and Meia Chita-Tegmark, there have been some exciting updates to our website! We have a new and improved homepage as well as new landing pages for each of our four areas of focus: AI, biotechnology, nuclear weapons and climate change. Our hope is that these changes will make the site easier to navigate and the educational resources easier to access for both returning and new visitors.

European Commission releases its proposal for a comprehensive regulation of AI systems

The European Commission has published its long-awaited proposal for a comprehensive regulation of AI systems. It recommends that systems considered a clear threat to the safety, livelihoods and rights of EU citizens be banned, including systems or applications that manipulate human behaviour, and that other “high risk” systems be subject to strict safety requirements. If adopted by the European Parliament, these regulations would apply across all the member states of the European Union.

Having actively participated in the Commission’s debate about future AI governance, our policy team is looking forward to reviewing and providing feedback on the proposal at the earliest opportunity.

FLI’s Ongoing Policy Efforts in the U.S.

The U.S. Congress has introduced a number of bills that would dramatically reform U.S. government funding for research and development. We continue to support policymakers as they evaluate how to advance innovation in emerging technologies while being attuned to safety and ethical concerns. This builds on the work FLI did to support the National AI Initiative Act that passed last December.

New Podcast Episodes

Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century

In this episode of the Future of Life Institute Podcast, Lucas Perry is joined by Jaan Tallinn, an investor, philanthropist, founding engineer of Skype and co-founder of the Future of Life Institute and the Centre for the Study of Existential Risk.

“AI is the only meta-technology such that if you get AI right, you can fix the other technologies.”

Jaan explains why he believes we should prioritise the mitigation of risks from artificial intelligence and synthetic biology ahead of those from climate change and nuclear weapons, why it’s productive to think about AI adoption as a delegation process and why, despite his concern about the possibility of unaligned artificial general intelligence, he continues to invest heavily in AI research. He also discusses generational forgetfulness and his current strategies for maximising philanthropic impact, including funding the development of promising software.

Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures

Joscha Bach, cognitive scientist and AI researcher, and Anthony Aguirre, UCSC Professor of Physics and FLI co-founder, come together to explore the world through the lens of computation and discuss the difficulties we face on the way to beneficial futures.

In this mind-blowing episode, Joscha and Anthony discuss digital physics, the idea that all quantities in nature are finite and discrete, making all physical processes intrinsically computational, and the nature of knowledge and human consciousness. In addition, they consider bottlenecks to beneficial futures, the role mortality plays in preventing poorly aligned incentives within institutions and whether competition between multiple AGIs could produce positive outcomes.

Reading & Resources

Malaria vaccine hailed as potential breakthrough

The Jenner Institute at the University of Oxford has announced that a newly developed malaria vaccine proved to be 77% effective when trialled in 450 children in Burkina Faso.

If these findings hold up in larger trials, this will be the first malaria vaccine to reach the World Health Organisation’s goal of at least 75% efficacy, with the most effective malaria vaccine to date having only shown 55% efficacy.

FLI March 2021 Newsletter

The Future of Life Institute is hiring for a Director of European Policy, Policy Advocate, and Policy Researcher.

The Director of European Policy will be responsible for leading and managing FLI’s European-based policy and advocacy efforts on both lethal autonomous weapon systems and on artificial intelligence.

The Policy Advocate will be responsible for supporting FLI’s ongoing policy work and advocacy in the U.S. government, especially (but not exclusively) at a federal level. They will be focused primarily on influencing near-term policymaking on artificial intelligence to maximise the societal benefits of increasingly powerful AI systems. Additional policy areas of interest may include synthetic biology, nuclear weapons policy, and the general management of global catastrophic and existential risk.

The Policy Researcher will be responsible for supporting FLI's ongoing policy work in a wide array of governance fora through the production of thoughtful, practical policy research. This position will focus primarily on researching near-term policymaking on artificial intelligence to maximise the societal benefits of increasingly powerful AI systems. Additional policy areas of interest may include lethal autonomous weapon systems, synthetic biology, nuclear weapons policy, and the general management of global catastrophic and existential risk.

The positions are remote, though from varying locations, and pay is negotiable, competitive, and commensurate with experience.

Applications will be accepted on a rolling basis until the positions are filled.

For further information about the roles and how to apply, click here.

FLI Relaunches autonomousweapons.org

We are pleased to announce that, thanks to the brilliant efforts of Emilia Javorsky and Anna Yelizarova, we have now relaunched autonomousweapons.org. This site is intended as a comprehensive educational resource where anyone can go to learn about lethal autonomous weapon systems: weapons that can identify, select and target individuals without human intervention.

Lethal autonomous weapons are not the stuff of science fiction, nor do they look anything like the Terminator; they are already here in the form of unmanned aerial vehicles, vessels, and tanks. As the United States, United Kingdom, Russia, China, Israel and South Korea all race to develop and deploy them en masse, the need for international regulation to maintain meaningful human control over the use of lethal force has become ever more pressing.

Using autonomousweapons.org, you can read up on the global debate surrounding these emerging systems, the risks – from the potential for violations of international humanitarian law and algorithmic bias in facial recognition technologies to their being the ideal weapon for terror and assassination – the policy options and how you can get involved.

Nominate an Unsung Hero for the 2021 Future of Life Award!

We’re excited to share that we’re accepting nominations for the 2021 Future of Life Award!

The Future of Life Award is given to an individual who, without having received much recognition at the time, has helped make today dramatically better than it may otherwise have been.

The first two recipients, Vasili Arkhipov and Stanislav Petrov, made judgements that likely prevented a full-scale nuclear war between the U.S. and U.S.S.R. In 1962, amid the Cuban Missile Crisis, Arkhipov, stationed aboard a Soviet submarine headed for Cuba, refused to give his consent for the launch of a nuclear torpedo when the captain became convinced that war had broken out. In 1983, Petrov decided not to act on an early-warning detection system that had erroneously indicated five incoming US nuclear missiles. We know today that a global nuclear war would cause a nuclear winter, possibly bringing about the permanent collapse of civilisation, if not human extinction.

The third recipient, Matthew Meselson, was the driving force behind the 1972 Biological Weapons Convention. Having been ratified by 183 countries, the treaty is credited with preventing biological weapons from ever entering into mainstream use.

The 2020 winners, William Foege and Viktor Zhdanov, made critical contributions towards the eradication of smallpox. Foege pioneered the public health strategy of 'ring vaccination' and surveillance while Zhdanov, the Deputy Minister of Health for the Soviet Union at the time, convinced the WHO to launch and fund a global eradication programme. Smallpox is thought to have killed 500 million people in its last century and its eradication in 1980 is estimated to have saved 200 million lives so far.

The Award is intended not only to celebrate humanity’s unsung heroes, but to foster a dialogue about the existential risks we face. We also hope that by raising the profile of individuals worth emulating, the Award will contribute to the development of desirable behavioural norms.

If you know of someone who has performed an incredible act of service to humanity but been overlooked by history, nominate them for the 2021 Award. This person may have made a critical contribution to a piece of groundbreaking research, set an important legal precedent, or perhaps alerted the world to a looming crisis; we’re open to suggestions! If your nominee wins, you’ll receive $3,000 from FLI as a token of our gratitude.

 

New Podcast Episodes

Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

In this episode of the AI Alignment Podcast, Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.

Among other topics, Roman discusses the need for impossibility results within computer science, the halting problem, and his research findings on AI explainability, comprehensibility, and controllability, as well as how these facets relate to each other and to AI alignment.

Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Lethal Autonomous Weapons

In this episode of the Future of Life Podcast, we are joined by Stuart Russell, Professor of Computer Science at the University of California, Berkeley, and Zachary Kallenborn, self-described “analyst in horrible ways people kill each other” and drone swarms expert, to discuss the highest risk aspects of lethal autonomous weapons.

Stuart and Zachary cover a wide range of topics, including the potential for drone swarms to become weapons of mass destruction, as well as how they could be used to deploy biological, chemical and radiological weapons, the risks of rapid escalation of conflict, unpredictability and proliferation, and how the regulation of lethal autonomous weapons could set a precedent for future AI governance.

To learn more about lethal autonomous weapons, visit autonomousweapons.org.

Reading & Resources

Max Tegmark on the INTO THE IMPOSSIBLE Podcast

Max Tegmark joined Dr. Brian Keating on the INTO THE IMPOSSIBLE podcast to discuss questions such as whether we can grow our prosperity through automation without leaving people lacking income or purpose, how we can make future artificial intelligence systems more robust such that they do what we want without crashing, malfunctioning or getting hacked, and whether we should fear an arms race in lethal autonomous weapons.

How easy would it be to snuff out humanity?

“If you play Russian roulette with one or two bullets in the cylinder, you are more likely to survive than not, but the stakes would need to be astonishingly high – or the value you place on your life inordinately low – for this to be a wise gamble.”

Read this fantastic overview of the existential and global catastrophic risks humanity currently faces by Lord Martin Rees, Astronomer Royal and Co-founder of the Centre for the Study of Existential Risk, University of Cambridge.

Disease outbreaks more likely in deforestation areas, study finds

“Diseases are filtered and blocked by a range of predators and habitats in a healthy, biodiverse forest. When this is replaced by a palm oil plantation, soy fields or blocks of eucalyptus, the specialist species die off, leaving generalists such as rats and mosquitoes to thrive and spread pathogens across human and non-human habitats.”

A new study suggests that epidemics are likely to increase as a result of environmental destruction, in particular, deforestation and monoculture plantations.

 

Boris Johnson is playing a dangerous nuclear game

“By deciding to increase the cap, the UK – the world’s third country to develop its own nuclear capability – is sending the wrong signal: rearm. Instead, the world should be headed to the negotiating table to breathe new life into the arms control talks…The UK could play an important role in stopping the new nuclear arms race, instead of restarting it.”

A useful analysis by Professor of History at Harvard University Serhii Plokhy on how Prime Minister Boris Johnson may fuel a nuclear arms race by increasing the United Kingdom’s nuclear stockpile by 40%.

FLI January 2021 Newsletter

Reflections on 2020 from FLI’s President

2020 reminded us that our civilization is vulnerable. Will we humans wisely use our ever more powerful technology to end disease and poverty and create a truly inspiring future, or will we sloppily use it to drive ever more species extinct, including our own? We're rapidly approaching this fork in the road: the past year saw the power of our technology grow rapidly, exemplified by GPT-3, MuZero, AlphaFold 2 and dancing robots, while the wisdom with which we manage our technology remained far from spectacular: the Open Skies Treaty collapsed, the only remaining US-Russia nuclear treaty (New START) is due to expire next month, meaningful regulation of harmful AI remains absent, AI-fuelled filter bubbles polarize the world, and an arms race in lethal autonomous weapons is ramping up.

It's been a great honor for me to get to work with such a talented and idealistic team at our institute to make tomorrow's technology help humanity flourish rather than flounder. With Bill Gates, Tony Fauci & Jennifer Doudna, we honored the heroes who helped save 200 million lives by eradicating smallpox. As other examples of tech-for-good, FLI members researched how machine learning can help with the UN Sustainable Development Goals and developed free online tools for making better predictions and helping people break out of their filter bubbles.

On the AI policy front, FLI was the civil society co-champion for the UN Secretary General's Roadmap for Digital Cooperation: Recommendation 3C on Artificial Intelligence, alongside Finland, France and two UN organizations, whose final recommendations included that "life and death decisions should not be delegated to machines". FLI also produced formal and informal advice on AI risk management for the U.S. Government, the European Union, and other policymaking fora, resulting in a series of high-value successes. On the nuclear disarmament front, we previously organized an open letter signed by 30 Nobel Laureates and thousands of other scientists from 100 countries in support of the UN Treaty on the Prohibition of Nuclear Weapons. This treaty has now gathered enough ratifications to enter into force on January 22, 2021, which will help stigmatize the new nuclear arms race and pressure the nations driving it to reduce their arsenals towards the minimum levels needed for deterrence.

On the outreach front, our FLI podcast grew 23% in 2020, with about 300,000 listens to fascinating conversations about existential risk and related topics, with guests including Yuval Noah Harari, Sam Harris, Steven Pinker, Stuart Russell, George Church and thinkers from OpenAI, DeepMind and MIRI. They reminded us that even seemingly insurmountable challenges can be overcome with creativity, willpower and sustained effort. Technology is giving life the potential to flourish like never before, so let’s seize this opportunity together!

Max Tegmark

Policy & Advocacy Efforts

The 2020 Future of Life Award

On 9th December 2020, the Future of Life Award was bestowed upon William Foege and Viktor Zhdanov for their critical contributions towards the eradication of smallpox. The $100,000 Future of Life Award was presented to Dr. William Foege and Dr. Viktor Zhdanov by FLI's co-founder Max Tegmark in an online ceremony attended by Bill Gates, Dr. Anthony Fauci, freshly minted Nobel Laureate Dr. Jennifer Doudna and Dr. Matthew Meselson, winner of the 2019 Future of Life Award. Because Dr. Viktor Zhdanov passed away in 1987, his sons Viktor and Michael accepted the award on his behalf.

Viktor Zhdanov has been called 'the best person who ever lived' by Oxford Professor Will MacAskill for successfully persuading the World Health Assembly to initiate a global smallpox eradication programme and encouraging collaboration between the United States and the Soviet Union amid the Cold War. While working with the US Centers for Disease Control and Prevention on smallpox eradication in Nigeria, Foege pioneered the highly successful surveillance and 'ring vaccination' strategy, greatly reducing the number of vaccines needed to achieve herd immunity. This strategy was ultimately used to eradicate smallpox around the world.

The lessons learned from overcoming smallpox remain highly relevant to public health and, in particular, the COVID-19 pandemic. "In selecting Bill Foege and Viktor Zhdanov as recipients of its prestigious 2020 award, the Future of Life Institute reminds us that seemingly impossible problems can be solved when science is respected, international collaboration is fostered, and goals are boldly defined. As we celebrate this achievement quarantined in our homes and masked outdoors, what message could be more obvious or more audacious?", says Dr. Rachel Bronson, President and CEO of the Bulletin of the Atomic Scientists.

The National Artificial Intelligence Initiative Act Passes Into US Law

Throughout 2020, FLI actively supported and advocated for the National AI Initiative Act (NAIIA) in the U.S. The Act authorises nearly $6.5 billion across the next five years for AI research and development.

The new law is likely to result in significant improvements to federal support for safety-related AI research, an outcome FLI will continue to advocate for in the coming year. It authorises $390 million for the National Institute of Standards and Technology to support the development of a risk-mitigation framework for AI systems as well as guidelines to promote trustworthy AI systems, for instance.

FLI’s President Max Tegmark Spoke at a Symposium on Responsible AI

On 16th December 2020, Max Tegmark spoke at a European symposium on "Responsible Artificial Intelligence" hosted by NORSUS, the Norwegian Institute for Sustainability Research, and INSCICO. Accompanied by a distinguished panel of key actors in the fields of ethics and AI, Max presented on "Getting AI to work for democracy, not against it."

New Podcast Episodes

The Future of Life Award 2020 Podcast: Saving 200,000,000 Lives by Eradicating Smallpox

In its last century, smallpox killed around 500 million people. Its eradication in 1980, due in large part to the efforts of William Foege and Viktor Zhdanov, is estimated to have saved 200 million lives – so far. In the Future of Life Award 2020 podcast, we are joined by William Foege and by Viktor Zhdanov's sons, Viktor and Michael, to discuss Foege's and Zhdanov's personal backgrounds, their contributions to the global efforts to eradicate smallpox, the history of smallpox itself and, more generally, issues of biology in the 21st century, including COVID-19, bioterrorism and synthetic pandemics.

Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress

In this episode of the Future of Life Podcast, theoretical physicist Sean Carroll joins us to discuss the intellectual movements that have at various points changed the course of human progress, including the Age of Enlightenment and Scientific Revolution, the metaphysical theses that have arisen from these movements, including spacetime substantivalism and physicalism, and how these theses bear on our understanding of free will and consciousness. The conversation also touches on the roles of intuition and data in our moral and epistemological frameworks, the Many-Worlds interpretation of quantum mechanics and the importance of epistemic charity in conversational settings.

For more on Sean Carroll, visit his website or check out his podcast Mindscape.

News & Opportunities

Are you a Superforecaster? Register for Metaculus’ AI Prediction Tournament

Metaculus, a community dedicated to generating accurate predictions about real-world future events by collecting and aggregating the collective wisdom and intelligence of its participants, has launched a large-scale, comprehensive forecasting tournament dedicated to predicting advances in artificial intelligence.

Sponsored by Open Philanthropy, Metaculus' aim is to build an accurate map of the future of AI by collecting a massive dataset of AI forecasts over a range of time-frames and then training models to aggregate those forecasts. In addition to contributing to the development of insights about the development of AI, participants put themselves in the running for cash prizes totalling $50,000.

Round One is now open. Register here.

With GPT-3, OpenAI showed that a single deep-learning model could be trained to use language – to produce sonnets or code – by providing it with masses of text. It went on to show that by substituting text with pixels, the same approach could be trained to complete partial images.

Now, OpenAI has announced DALL·E, a 12-billion parameter version of GPT-3 trained to generate images from text descriptions (e.g. "an armchair in the shape of an avocado"), using a dataset of text–image pairs. DALL·E does not appear simply to regurgitate images from its training data, as some had worried: its responses to unusual prompts ("a snail made of a harp") are as impressive as its responses to rather ordinary ones. Indeed, its ability to manage bizarre prompts is suggestive of an ability to perform zero-shot visual reasoning.

DeepMind’s AlphaFold Solves Biology’s Protein Folding Problem

The function of a protein is closely linked with its unique 3D shape. The protein folding problem in biology is the challenge of predicting a protein's 3D shape from its amino acid sequence. If biologists were able to predict proteins' shapes, this would pave the way for a multitude of breakthroughs, including the development of treatments for diseases.

And now, the Critical Assessment of Protein Structure Prediction, a biennial blind assessment intended to identify the state of the art in protein structure prediction, has recognised DeepMind's AI system AlphaFold as the solution to the protein folding problem. It has been called a 'once in a generation advance' by Calico's Founder and CEO, Arthur Levinson, and is indicative of 'how computational methods are poised to transform research in biology.'