
Not Cool: A Climate Podcast

FLI is excited to announce the latest in our podcast line-up: Not Cool: A Climate Podcast! In this new series, hosted by Ariel Conn, we’ll hear directly from climate experts from around the world, as they answer every question we can think of about the climate crisis. And we’ve launched it just in time for the United Nations Climate Action Summit, which begins on September 23.

You can listen to the short trailer above that highlights what we’ll be covering in the coming months, or read the transcript below. And of course you can jump right in to the first episode — all podcasts for this series can be found at futureoflife.org/notcool. You can also always listen to all FLI podcasts on any of your favorite podcast platforms just by searching for “Future of Life Institute.” The Not Cool podcasts are all there, and we’ll be releasing new episodes every Tuesday and Thursday for at least the next couple of months. We hope these interviews will help you better understand the science and policies behind the climate crisis and what we can all do to prevent the worst effects of climate change.

We want to make sure we get your questions answered too! If you haven’t had a chance to fill out our survey about what you want to learn about climate change, please consider doing so now, and let us know what you’d like to learn.

Transcript

This is really the issue of our times, and our children and grandchildren will not forgive us if we don’t contain this problem.

~Jessica Troni, Senior Programme Officer, UN Environment-Global Environment Facility Climate Change Adaptation portfolio.

Climate change, to state the obvious, is a huge and complicated problem. The crisis is a problem so big it’s being studied by people with PhDs in meteorology, geology, physics, chemistry, psychology, economics, political science, and more. It’s a problem that needs to be tackled at every level, from individual action to international cooperation. It’s a problem that seems daunting, to say the least. Yet it’s a problem that must be solved. And that’s where hope lies. You see, as far as existential threats to humanity go, climate change stands out as being particularly solvable. Challenging? Yes. But not impossible.

The trends are bad. I will quote René Dubos who said, however, “Trends are not destiny.” So the trends are bad, but we can change the trends.

~Suzanne Jones, Mayor, Boulder CO // Executive Director, Eco-Cycle

Unlike with the threats posed by artificial intelligence, biotechnology, or nuclear weapons, you don’t need an advanced science degree or a high-ranking government position to start having a meaningful impact on your own carbon footprint. Each of us can begin making lifestyle changes today that will help. The people you vote into office at every level of government, from local to national, can influence and create better climate policies. This is a problem for which every action each of us takes truly does help.

When you have a fractal, complicated, humongous, super wicked problem like this, it means there’s some facet of it that every person on the planet can do something about it. Artist, communicator, teacher, engineer, entrepreneur. There’s something in it for everybody.

~Andrew Revkin, Head of Initiative on Communication and Sustainability, Columbia University // Science & Environmental Journalist

I’m Ariel Conn, and I’m the host of Not Cool, a climate podcast that dives deep into understanding both the climate crisis and the solutions. I started this podcast because the news about climate change seems to get worse with each new article and report, but the solutions, at least as reported, remain vague and elusive. I wanted to hear from the scientists and experts themselves to learn what’s really going on and how we can all come together to solve this crisis. And so I’ll be talking with climate experts from around the world, including scientists, journalists, policy experts and more, to learn the problems climate change poses, what we know and what’s still uncertain about our future climate, and what we can all do to help put the brakes on this threat.

We’ll look at some of the basic science behind climate change and global warming, like the history of climate modeling, what the carbon cycle is, what tipping points are and whether we’ve already passed some, what extreme weather events are and why they’re getting worse. We’ll look at the challenges facing us, from political inertia to technical roadblocks. We’ll talk about the impacts on human health and lifestyles from the spread of deadly diseases to national security threats to problems with zoning laws. We’ll learn about geoengineering, ocean acidification, deforestation, and how local communities can take action, regardless of what’s happening at the federal level.

I think the most important thing that every single person can do is talk more about climate change.  Social momentum is the key to political momentum and getting real action.

~John Cook, Founder, SkepticalScience.com // Research Assistant Professor, Center for Climate Change Communication, George Mason University

Let’s start talking. Let’s build momentum. And let’s take real action. Because climate change is so not cool.

Visit futureoflife.org/notcool for a complete list of episodes, which we will be updating every Tuesday and Thursday for at least the next couple of months. And we hope you’ll also join the discussion. You can find us on Twitter using #NotCool and #ChangeForClimate.

Dr. Matthew Meselson Wins 2019 Future of Life Award

On April 9th, Dr. Matthew Meselson received the $50,000 Future of Life Award at a ceremony at the University of Colorado Boulder’s Conference on World Affairs. Dr. Meselson was a driving force behind the 1972 Biological Weapons Convention, an international ban that has prevented one of the most inhumane forms of warfare known to humanity. April 9th marked the eve of the Convention’s 47th anniversary.

Meselson’s long career is studded with highlights: proving Watson and Crick’s hypothesis on DNA structure, solving the Sverdlovsk Anthrax mystery, ending the use of Agent Orange in Vietnam. But it is above all his work on biological weapons that makes him an international hero.

“Through his work in the US and internationally, Matt Meselson was one of the key forefathers of the 1972 Biological Weapons Convention,” said Daniel Feakes, Chief of the Biological Weapons Convention Implementation Support Unit. “The treaty bans biological weapons and today has 182 member states. He has continued to be a guardian of the BWC ever since. His seminal warning in 2000 about the potential for the hostile exploitation of biology foreshadowed many of the technological advances we are now witnessing in the life sciences and responses which have been adopted since.”

Meselson became interested in biological weapons during the 60s, while employed with the U.S. Arms Control and Disarmament Agency. It was on a tour of Fort Detrick, where the U.S. was then manufacturing anthrax, that he learned the motivation for developing biological weapons: they were cheaper than nuclear weapons. Meselson was struck, he says, by the illogic of this — it would be an obvious national security risk to decrease the production cost of WMDs.

Do you know someone deserving of the Future of Life Award? If so, please consider submitting their name to our Unsung Hero Search page. If we decide to give the award to your nominee, you will receive a $3,000 prize from FLI for your contribution.

The use of biological weapons was already prohibited by the 1925 Geneva Protocol, an international treaty that the U.S. had never ratified. So Meselson wrote a paper, “The United States and the Geneva Protocol,” outlining why it should do so. Meselson knew Henry Kissinger, who passed his paper along to President Nixon, and by the end of 1969 Nixon renounced biological weapons.

Next came the question of toxins — poisons derived from living organisms. Some of Nixon’s advisors believed that the U.S. should renounce the use of naturally derived toxins, but retain the right to use artificial versions of the same substances. It was another of Meselson’s papers, “What Policy for Toxins,” that led Nixon to reject this arbitrary distinction and to renounce the use of all toxin weapons.

On Meselson’s advice, Nixon resubmitted the Geneva Protocol to the Senate for approval. But he also went beyond the terms of the Protocol — which bans only the use of biological weapons — to renounce offensive biological research itself. Stockpiles of offensive biological agents, like the anthrax Meselson had seen at Fort Detrick, were destroyed.

Once the U.S. adopted this more stringent policy, Meselson turned his attention to the global stage. He and his peers wanted an international agreement stronger than the Geneva Protocol, one that would ban stockpiling and offensive research in addition to use and would provide for a verification system. From their efforts came the Biological Weapons Convention, which was signed in 1972 and is still in effect today.

“Thanks in significant part to Professor Matthew Meselson’s tireless work, the world came together and banned biological weapons, ensuring that the ever more powerful science of biology helps rather than harms humankind. For this, he deserves humanity’s profound gratitude,” said former UN Secretary-General Ban Ki-Moon.

Meselson has said that biological warfare “could erase the distinction between war and peace.” Other forms of war have a beginning and an end — it’s clear what is warfare and what is not. Biological warfare would be different: “You don’t know what’s happening, or you know it’s happening but it’s always happening.”

And the consequences of biological warfare can be even greater than mass destruction: attacks on DNA could fundamentally alter humankind. FLI honors Matthew Meselson for his efforts to protect not only human life but also the very definition of humanity.

Said Astronomer Royal Lord Martin Rees, “Matt Meselson is a great scientist — and one of very few who have been deeply committed to making the world safe from biological threats. This will become a challenge as important as the control of nuclear weapons — and much more challenging and intractable. His sustained and dedicated efforts fully deserve wider acclaim.”

“Today biotech is a force for good in the world, associated with saving rather than taking lives, because Matthew Meselson helped draw a clear red line between acceptable and unacceptable uses of biology,” added MIT Professor and FLI President Max Tegmark. “This is an inspiration for those who want to draw a similar red line between acceptable and unacceptable uses of artificial intelligence and ban lethal autonomous weapons.”

To learn more about Matthew Meselson, listen to FLI’s two-part podcast featuring him in conversation with Ariel Conn and Max Tegmark. In Part One, Meselson describes how he helped prove Watson and Crick’s hypothesis of DNA structure and recounts the efforts he undertook to get biological weapons banned. Part Two focuses on three major incidents in the history of biological weapons and the role played by Meselson in resolving them.


The Future of Life Award is a prize awarded by the Future of Life Institute for a heroic act that has greatly benefited humankind, done despite personal risk and without being rewarded at the time. This prize was established to help set the precedent that actions benefiting future generations will be rewarded by those generations. The inaugural Future of Life Award was given to the family of Vasili Arkhipov in 2017 for single-handedly preventing a Soviet nuclear attack against the US in 1962, and the second Future of Life Award was given to the family of Stanislav Petrov for preventing a false-alarm nuclear war in 1983.

Women for the Future

This Women’s History Month, FLI has been celebrating with Women for the Future, a campaign to honor the women who’ve made it their job to create a better world for us all. The field of existential risk mitigation is largely male-dominated, so we wanted to emphasize the value – and necessity – of female voices in our industry. We profiled 34 women we admire, and got their takes on what they love (and don’t love) about their jobs, what advice they’d give women starting out in their fields, and what makes them hopeful for the future.

These women do all sorts of things. They are researchers, analysts, professors, directors, founders, students. One is a state senator; one is a professional poker player; two are recipients of the Nobel Peace Prize. They work on AI, climate change, robotics, disarmament, human rights, and more. What ultimately brings them together is a shared commitment to the future of humanity.

Women in the US remain substantially underrepresented in academia, government, STEM, and other industries. They make up an estimated 12% of machine learning researchers, they comprise roughly 30% of the authors on the latest IPCC report, and they’ve won about 16% of Nobel Peace Prizes awarded to individuals.

Nevertheless, the women that we profiled had overwhelmingly positive things to say about their experiences in this industry.

They are, without exception, deeply passionate about what they do. As Jade Leung, Head of Research and Partnerships at the University of Oxford’s Center for the Governance of Artificial Intelligence, put it: “It is a rare, sometimes overwhelming, always humbling privilege to be in a position to work directly on a challenge which I believe is one of the most important facing us this century.”

And they all want to see more women join their fields. “I’ve found the [existential risk] community extremely welcoming and respectful,” said Liv Boeree, professional poker player and co-founder of Raising for Effective Giving, “so I’d recommend it highly to any woman who is interested in pursuing work in this area.”

Bing Song, Vice President of the Berggruen Institute, agreed. “Women should embrace and dive into this new area of thinking about the future of humanity,” she said, adding, “Male dominance in past millennia in shaping the world and in how we approach the universe, humanity, and life needs to be questioned.”

“Our talents and skills are needed,” concluded Sonia Cassidy, Director Of Operations at Alliance to Feed the Earth in Disasters, “and so are you!”

Find a list of all 34 women on the Women for the Future homepage, or scroll through the slideshow below. Click on a name or photo to learn more. 



Rasha Abdul Rahim

Deputy Director of Amnesty Tech, Amnesty International

“[W]hen people around the world and civil society can think of a potent idea that’s worth fighting for, and stick at the concept however long it may take, and develop the proposal to get traction from political leaders, we really can make a difference.”


Elizabeth Barnes

Safety Team Member, OpenAI

“You can probably learn things much faster than you expect. It’s easy to think that learning some new skill will be impossibly hard. I’ve been surprised a lot of times how quickly things go from being totally overwhelming and incomprehensible to pretty alright.”


Rebecca Boehm

Economist, Union of Concerned Scientists

“The recent elevation of conversations about the importance of racial equity and inclusion makes me very hopeful for our future. I believe solving the big food and agricultural issues we are facing will require not only the voices, but the leadership of a diverse set of people.”


Liv Boeree

Co-founder, Raising for Effective Giving (REG) | Ambassador, www.effectivegiving.org

“I’ve found the [existential risk] community extremely welcoming and respectful, so I’d recommend it highly to any woman who is interested in pursuing work in this area.”


Astrid Caldas

Senior Climate Scientist, Union of Concerned Scientists

“Learn as much as you can not only from academic institutions or NGOs, but from people on the frontlines and those who are being the most impacted by climate change. Attend events, visit places if you can, to see first hand how people are dealing with the issues, and find out how you can help them become more resilient. Sometimes it is as simple as showing them a website they didn’t know about, or telling them about grants and other resources to protect their homes from floods.”


Rosie Campbell

Assistant Director, Center for Human-Compatible AI (CHAI) at UC Berkeley

“I’m a big advocate for diversity. We’re trying to solve big, important problems, and it’s worrying to think we could be missing out on important perspectives. I’d love to see more women in AI safety!”


Sonia Cassidy

Director Of Operations, Alliance to Feed the Earth in Disasters (ALLFED)

“Do not ever underestimate yourself and what women bring into the world, this field or any other. Our talents and skills are needed, and so are you!”


Carla Zoe Cremer

Research Affiliate, Centre for the Study of Existential Risk | Researcher, Leverhulme Centre for the Future of Intelligence

“My ideas are taken seriously and my work is appreciated. The problems in existential risk are hard, unsolved and numerous — which means that everyone welcomes your initiative and contributions and will not hold you back if you try something new.”


Kristina Dahl

Senior Climate Scientist, Union of Concerned Scientists

“Climate change is at this incredible nexus of science, culture, policy, and the environment. To do this job well, one has to bring to the table a love of the environment, a willingness to identify and fight for the policies needed to protect it, a sensitivity to the diverse range of decisions people make in their daily lives, and a fascination with the nitty-gritty bits of the science.”


Jeanne Dietsch

State Senator, NH | Founder and CEO of multiple tech startups

“Entrepreneurs: make sure that your company is competitive, that you have innovative processes and/or products. And I will paraphrase Michael Bloomberg: ‘Hire honest people who are smarter than yourself.'”


Anca Dragan

Assistant Professor, UC Berkeley

“I’m hopeful that progress in intelligence and AI tools can lead to freeing up more people to spend more time on education and creative pursuits — I think that would make for a wonderful future for us.”

Photo: Human-Machine Interaction / Anca Dragan / Photos Copyright Noah Berger / 2016


Beatrice Fihn

Executive Director, ICAN

“Don’t be too intimidated or impressed by senior people and ‘important’ people. Most of them don’t actually know as much as they come across as knowing.”


Danit Gal

Project Assistant Professor, Cyber Civilization Research Center, Keio University

“If you find something that moves you — be it further developments in an established field, a way to combine existing fields to create new ones, or something that’s entirely off the beaten path — pursue it. The act of pursuing the things that fascinate you is the real experience you need. If you can combine this with something that’s useful and beneficial to this world, you’ve won the game.”


Paula Garcia

Energy analyst, Union of Concerned Scientists

“Read, talk to others that work in the renewable energy industry, identify where in the value chain you want to contribute, and go for it!”


Rose Hadshar

Project Manager, Research Scholars Programme, Future of Humanity Institute

“[S]o many extremely able people are trying to make [the future] good.”


Emilia Javorsky

Director, Scientists Against Inhumane Weapons

“I’m a pretty optimistic person at baseline, but particularly so after getting to know the incredible people that compose the x-risk community. They care so deeply about engineering a positive future for humanity — I feel tremendously grateful to have the opportunity to work with them!”


Natalie Jones

PhD Student, University of Cambridge | Research Affiliate, CSER

“The other people working in this field are so fiercely intelligent and capable. It’s hard not to have a conversation which leaves you with a perspective or idea you hadn’t thought of before. This, and the knowledge that one is doing useful and important work, combine to make it very rewarding.”


Jade Leung

Head of Research and Partnerships, Center for the Governance of Artificial Intelligence, University of Oxford

“It is a rare, sometimes overwhelming, always humbling privilege to be in a position to work directly on a challenge which I believe is one of the most important facing us this century.”


Cassidy Nelson

Research Scholar, Future of Humanity Institute, University of Oxford

“I feel I’m surrounded by people who care deeply about life and addressing large and complex risks. I feel this field’s focus, while grim on its own, is also intrinsically coupled with the desire and hope that the future can go well. I remain hopeful that if we can navigate the next century safely, a better existence awaits us and our descendants. I am inspired by what could be possible for conscious life and I hope that my career can help ensure no catastrophic event occurs before our future is secured.”


Charlie Oliver

Founder/CEO, TECH 2025 (Served Fresh Media)

“Don’t allow other people to define your dreams and don’t allow them to place limits on what you can do. And just as important, if not more so, don’t limit your own potential with soul-crushing self-doubt. A little self-doubt is okay and quite normal. But when it begins to keep you from taking big risks necessary to discover your strengths and path, you have to fix that right away or that type of thinking will fester.”


Marie-Therese Png

PhD Student, Oxford Internet Institute

“Underrepresented perspectives — women, people of colour, and other intersectional identities — are highly valuable at this point in uncovering blindspots. Your concerns may not currently be represented in the research community, but it doesn’t mean they shouldn’t be. There is low replaceability because if you weren’t there it wouldn’t be any single person’s main focus. When you’re a minority in the room it’s even more important to overcome audience inhibition and speak up or a blindspot may persist.”


Carina Prunkl

Senior Research Scholar, Future of Humanity Institute, University of Oxford

“AI is a really exciting field to work in and there is a real need for people with diverse academic backgrounds – you don’t need to be a coder to make substantial contributions. Make use of existing women networks or write directly to women researchers if you would like to know what it is like to work at a particular organisation or with a particular team. Most of us are more than happy to help and share our experiences.”


Francesca Rossi

AI Ethics Global Leader and Distinguished Research Staff Member, IBM Research

“My advice to women is to believe in what they are and what they are passionate about, to behave according to their values and attitudes without trying to mimic anybody else, and to be fully aware that their contribution is essential for advancing AI in the most inclusive, fair, and responsible way.”


Susi Snyder

Managing Director, Don’t Bank on the Bomb, PAX & ICAN

“Find your passion, produce the research that supports your policy recommendation and demand the space to say your piece. I always think to the first US woman that ran on a major party for President- Shirley Chisholm, she said “if they don’t give you a seat, bring a folding chair”.  I think about the fact that there are (some) more seats now, and that’s amazing. There is still a long, long way to go before equity, but there are some serious efforts to move closer to that day.”


Bing Song

Vice President, Berggruen Institute | Director of the Institute’s China Center

“Women should embrace and dive into this new area of thinking about the future of humanity. Male dominance in past millennia in shaping the world and in how we approach the universe, humanity, and life needs to be questioned. More broad based, inclusive, non-confrontational and equanimous thinking, which is more typically associated with the female approach to things, is sorely needed in this world.”


Shuchi Talati

Geoengineering Research, Governance and Public Engagement Fellow, Union of Concerned Scientists

“Domestic and international dedication to addressing climate change is continuously growing. Though we are far from where we need to be, I remain optimistic that we’re on a promising path.”


Mary Wareham

Coordinator of the Campaign to Stop Killer Robots | Advocacy Director of Human Rights Watch arms division

“Study what you are passionate about and not what you think will get you a job.”


Jody Williams

Chairwoman, Nobel Women’s Initiative | Nobel Laureate

“If I have advice, it would be to be clear about who you want to be in your life and what you stand for — and then go for it.”


Bonnie Wintle

Research Fellow, School of Biosciences, University of Melbourne | Research Affiliate, Centre for the Study of Existential Risk (CSER), University of Cambridge

“Seeing the huge turnout of school kids and young people at climate change demonstrations gives me hope for the future. The next generation of leaders and decision makers seem to be proactive and genuinely interested in addressing these problems.”


Baobao Zhang

PhD Candidate, Political Science, Yale University | Research Affiliate, Center for the Governance of AI, University of Oxford

“AI policy is a nascent but rapidly growing field. I think this is a good time for women to enter the field. Sometimes women are hesitant to enter a new discipline because they don’t feel they have adequate knowledge or experience. My work has taught me that you can quickly learn on the job and that you can apply the skills and knowledge you already have to your new job.”


Meia Chita-Tegmark

Co-founder, Future of Life Institute | Postdoctoral Scholar, Tufts University

“Be brave. This is our world too, we can’t let it be shaped by men alone.”


Ariel Conn

Director of Communications/Outreach and Weapons Policy Advisor, Future of Life Institute

“Success in this job comes with much greater satisfaction than success in any other job I’ve had.”


Jessica Cussins Newman

AI Policy Specialist, Future of Life Institute | Research Fellow, UC Berkeley Center for Long-Term Cybersecurity

“Don’t discount yourself just because you think you don’t have the right background — the field is actively looking for ways to learn from other disciplines.”


Victoria Krakovna

Cofounder, Future of Life Institute | Research Scientist, DeepMind

“It’s great to see more and more talented and motivated people entering the field to work on these interesting and difficult problems.”


$50,000 Award to Stanislav Petrov for helping avert WWIII – but US denies visa


To celebrate that today is not the 35th anniversary of World War III, Stanislav Petrov, the man who helped avert an all-out nuclear exchange between Russia and the U.S. on September 26, 1983, was honored with the $50,000 Future of Life Award at a ceremony at the Museum of Mathematics in New York.

Former United Nations Secretary General Ban Ki-Moon said: “It is hard to imagine anything more devastating for humanity than all-out nuclear war between Russia and the United States. Yet this might have occurred by accident on September 26 1983, were it not for the wise decisions of Stanislav Yevgrafovich Petrov. For this, he deserves humanity’s profound gratitude. Let us resolve to work together to realize a world free from fear of nuclear weapons, remembering the courageous judgement of Stanislav Petrov.”

Stanislav Petrov’s daughter Elena holds the 2018 Future of Life Award flanked by her husband Victor. From left: Ariel Conn (FLI), Lucas Perry (FLI), Hannah Fry, Victor, Elena, Steven Mao (exec. producer of the Petrov film “The Man Who Saved the World”), Max Tegmark (FLI)

Although the U.N. General Assembly, just blocks away, heard politicians highlight the nuclear threat from North Korea’s small nuclear arsenal, none mentioned the greater threat from the many thousands of nuclear weapons in the United States and Russian arsenals, which have nearly been unleashed by mistake dozens of times in a seemingly never-ending series of mishaps and misunderstandings.

One of the closest calls occurred thirty-five years ago, on September 26, 1983, when Stanislav Petrov chose to ignore the Soviet early-warning detection system that had erroneously indicated five incoming American nuclear missiles. With his decision to ignore algorithms and instead follow his gut instinct, Petrov helped prevent an all-out US-Russian nuclear war, as detailed in the documentary film “The Man Who Saved the World”, which will be released digitally next week. Since Petrov passed away last year, the award was collected by his daughter Elena. Meanwhile, Petrov’s son Dmitry missed his flight to New York because the U.S. embassy delayed his visa. “That a guy can’t get a visa to visit the city his dad saved from nuclear annihilation is emblematic of how frosty US-Russian relations have gotten, which increases the risk of accidental nuclear war”, said MIT Professor Max Tegmark when presenting the award. Arguably the only recent reduction in the risk of accidental nuclear war came when Donald Trump held a summit with Vladimir Putin in Helsinki earlier this year, which was, ironically, met with widespread criticism.

In Russia, soldiers often didn’t discuss their wartime actions for fear of displeasing their government, so Elena first heard about her father’s heroic actions in 1998 – 15 years after the event. And even then, she and her brother only learned what their father had done when a German journalist reached out to the family for an article he was working on. It’s unclear whether Petrov’s wife, who died in 1997, ever knew of her husband’s heroism. Until his death, Petrov maintained a humble outlook on the event that made him famous. “I was just doing my job,” he’d say.

But most would agree that he went above and beyond his job duties that September day in 1983. The alert of five incoming nuclear missiles came at a time of high tension between the superpowers, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan’s anti-Soviet rhetoric. Earlier in the month the Soviet Union shot down a Korean Airlines passenger plane that strayed into its airspace, killing almost 300 people, and Petrov had to consider this context when he received the missile notifications. He had only minutes to decide whether or not the satellite data were a false alarm. Since the satellite was found to be operating properly, following procedures would have led him to report an incoming attack. Going partly on gut instinct and believing the United States was unlikely to fire only five missiles, he told his commanders that it was a false alarm before he knew that to be true. Later investigations revealed that reflections of the Sun off of cloud tops had fooled the satellite into thinking it was detecting missile launches.

Last year’s Nobel Peace Prize laureate Beatrice Fihn, who helped establish the recent United Nations treaty banning nuclear weapons, said, “Stanislav Petrov was faced with a choice that no person should have to make, and at that moment he chose the human race — to save all of us. No one person and no one country should have that type of control over all our lives, and all future lives to come. 35 years from that day when Stanislav Petrov chose us over nuclear weapons, nine states still hold the world hostage with 15,000 nuclear weapons. We cannot continue relying on luck and heroes to safeguard humanity. The Treaty on the Prohibition of Nuclear Weapons provides an opportunity for all of us and our leaders to choose the human race over nuclear weapons by banning them and eliminating them once and for all. The choice is the end of us or the end of nuclear weapons. We honor Stanislav Petrov by choosing the latter.”

University College London Mathematics Professor Hannah Fry, author of the new book “Hello World: Being Human in the Age of Algorithms”, participated in the ceremony and pointed out that as ever more human decisions get replaced by automated algorithms, it is sometimes crucial to keep a human in the loop – as in Petrov’s case.

The Future of Life Award seeks to recognize and reward those who take exceptional measures to safeguard the collective future of humanity. It is given by the Future of Life Institute (FLI), a non-profit also known for supporting AI safety research with Elon Musk and others. “Although most people never learn about Petrov in school, they might not have been alive were it not for him”, said FLI co-founder Anthony Aguirre. Last year’s award was given to the family of Vasili Arkhipov, who single-handedly prevented a nuclear attack on the US during the Cuban Missile Crisis. FLI is currently accepting nominations for next year’s award.

Stanislav Petrov around the time he helped avert WWIII

$2 Million Donated to Keep Artificial General Intelligence Beneficial and Robust

$2 million has been allocated to fund research that anticipates artificial general intelligence (AGI) and how it can be designed beneficially. The money was donated by Elon Musk to cover grants through the Future of Life Institute (FLI). Ten grants have been selected for funding.

Said FLI President Max Tegmark, “I’m optimistic that we can create an inspiring high-tech future with AI as long as we win the race between the growing power of AI and the wisdom with which we manage it. This research is to help develop that wisdom and increase the likelihood that AGI will be the best rather than the worst thing to happen to humanity.”

Today’s artificial intelligence (AI) is still quite narrow. That is, it can only accomplish narrow sets of tasks, such as playing chess or Go, driving a car, performing an Internet search, or translating languages. While the AI systems that master each of these tasks can perform them at superhuman levels, they can’t learn a new, unrelated skill set (e.g. an AI system that can search the Internet can’t learn to play Go with only its search algorithms).

These AI systems lack that “general” ability that humans have to make connections between disparate activities and experiences and to apply knowledge to a variety of fields. However, a significant number of AI researchers agree that AI could achieve a more “general” intelligence in the coming decades. No one knows how AI that’s as smart or smarter than humans might impact our lives, whether it will prove to be beneficial or harmful, how we can design it safely, or even how to prepare society for advanced AI. And many researchers worry that the transition could occur quickly.

Anthony Aguirre, co-founder of FLI and physics professor at UC Santa Cruz, explains, “The breakthroughs necessary to have machine intelligences as flexible and powerful as our own may take 50 years. But with the major intellectual and financial resources now being directed at the problem it may take much less. If or when there is a breakthrough, what will that look like? Can we prepare? Can we design safety features now, and incorporate them into AI development, to ensure that powerful AI will continue to benefit society? Things may move very quickly and we need research in place to make sure they go well.”

Grant topics include: training multiple AIs to work together and learn from humans about how to coexist, training AI to understand individual human preferences, understanding what “general” actually means, incentivizing research groups to avoid a potentially dangerous AI race, and many more. As the request for proposals stated, “The focus of this RFP is on technical research or other projects enabling development of AI that is beneficial to society and robust in the sense that the benefits have some guarantees: our AI systems must do what we want them to do.”

FLI hopes that this round of grants will help ensure that AI remains beneficial as it becomes increasingly intelligent. The full list of FLI recipients and project titles includes:

Primary Investigator | Project Title | Amount Recommended | Email
Allan Dafoe, Yale University | Governance of AI Programme | $276,000 | allan.dafoe@yale.edu
Stefano Ermon, Stanford University | Value Alignment and Multi-agent Inverse Reinforcement Learning | $100,000 | ermon@cs.stanford.edu
Owain Evans, Oxford University | Factored Cognition: Amplifying Human Cognition for Safely Scalable AGI | $225,000 | owain.evans@philosophy.ox.ac.uk
The Anh Han, Teesside University | Incentives for Safety Agreement Compliance in AI Race | $224,747 | t.han@tees.ac.uk
Jose Hernandez-Orallo, University of Cambridge | Paradigms of Artificial General Intelligence and Their Associated Risks | $220,000 | jorallo@dsic.upv.es
Marcus Hutter, Australian National University | The Control Problem for Universal AI: A Formal Investigation | $276,000 | marcus.hutter@anu.edu.au
James Miller, Smith College | Utility Functions: A Guide for Artificial General Intelligence Theorists | $78,289 | jdmiller@smith.edu
Dorsa Sadigh, Stanford University | Safe Learning and Verification of Human-AI Systems | $250,000 | dorsa@cs.stanford.edu
Peter Stone, University of Texas | Ad hoc Teamwork and Moral Feedback as a Framework for Safe Robot Behavior | $200,000 | pstone@cs.utexas.edu
Josh Tenenbaum, MIT | Reverse Engineering Fair Cooperation | $150,000 | jbt@mit.edu

 

Some of the grant recipients offered statements about why they’re excited about their new projects:

“The team here at the Governance of AI Program are excited to pursue this research with the support of FLI. We’ve identified a set of questions that we think are among the most important to tackle for securing robust governance of advanced AI, and strongly believe that with focused research and collaboration with others in this space, we can make productive headway on them.” -Allan Dafoe

“We are excited about this project because it provides a first unique and original opportunity to explicitly study the dynamics of safety-compliant behaviours within the ongoing AI research and development race, and hence potentially leading to model-based advice on how to timely regulate the present wave of developments and provide recommendations to policy makers and involved participants. It also provides an important opportunity to validate our prior results on the importance of commitments and other mechanisms of trust in inducing global pro-social behavior, thereby further promoting AI for the common good.” -The Anh Han

“We are excited about the potentials of this project. Our goal is to learn models of humans’ preferences, which can help us build algorithms for AGIs that can safely and reliably interact and collaborate with people.” -Dorsa Sadigh

This is FLI’s second grant round. The first launched in 2015, and a comprehensive list of papers, articles, and information from that grant round can be found here. Both grant rounds are part of the original $10 million that Elon Musk pledged to AI safety research.

FLI cofounder Victoria Krakovna also added: “Our previous grant round promoted research on a diverse set of topics in AI safety and supported over 40 papers. The next grant round is more narrowly focused on research in AGI safety and strategy, and I am looking forward to great work in this area from our new grantees.”

Learn more about these projects here.

AI Companies, Researchers, Engineers, Scientists, Entrepreneurs, and Others Sign Pledge Promising Not to Develop Lethal Autonomous Weapons

Leading AI companies and researchers take concrete action against killer robots, vowing never to develop them.

Stockholm, Sweden (July 18, 2018) After years of voicing concerns, AI leaders have, for the first time, taken concrete action against lethal autonomous weapons, signing a pledge to neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.

The pledge has been signed to date by over 160 AI-related companies and organizations from 36 countries, and 2,400 individuals from 90 countries. Signatories of the pledge include Google DeepMind, University College London, the XPRIZE Foundation, ClearPath Robotics/OTTO Motors, the European Association for AI (EurAI), the Swedish AI Society (SAIS), Demis Hassabis, British MP Alex Sobel, Elon Musk, Stuart Russell, Yoshua Bengio, Anca Dragan, and Toby Walsh.

Max Tegmark, president of the Future of Life Institute (FLI) which organized the effort, announced the pledge on July 18 in Stockholm, Sweden during the annual International Joint Conference on Artificial Intelligence (IJCAI), which draws over 5,000 of the world’s leading AI researchers. SAIS and EurAI were also organizers of this year’s IJCAI.

Said Tegmark, “I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect. AI has huge potential to help the world – if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

Lethal autonomous weapons systems (LAWS) are weapons that can identify, target, and kill a person, without a human “in-the-loop.” That is, no person makes the final decision to authorize lethal force: the decision and authorization about whether or not someone will die is left to the autonomous weapons system. (This does not include today’s drones, which are under human control. It also does not include autonomous systems that merely defend against other weapons, since “lethal” implies killing a human.)

The pledge begins with the statement:

“Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”

Another key organizer of the pledge, Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, points out the thorny ethical issues surrounding LAWS. He states:

“We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”

Ryan Gariepy, Founder and CTO of both Clearpath Robotics and OTTO Motors, has long been a strong opponent of lethal autonomous weapons. He says:

“Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world. No nation will be safe, no matter how powerful. Clearpath’s concerns are shared by a wide variety of other key autonomous systems companies and developers, and we hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive instead of systems whose sole use is the deployment of lethal force.”

In addition to the ethical questions, many advocates of an international ban on LAWS are concerned that these weapons will be difficult to control – easier to hack, more likely to end up on the black market, and easier for bad actors to obtain – which could be destabilizing for all countries, as illustrated in the FLI-released video “Slaughterbots”.

In December 2016, the Review Conference of the Convention on Conventional Weapons (CCW) began formal discussion regarding LAWS at the UN. By the most recent meeting in April, twenty-six countries had announced support for some type of ban, including China. And such a ban is not without precedent. Biological weapons, chemical weapons, and space weapons were also banned not only for ethical and humanitarian reasons, but also for the destabilizing threat they posed.

The next UN meeting on LAWS will be held in August, and signatories of the pledge hope this commitment will encourage lawmakers to develop a commitment at the level of an international agreement between countries. As the pledge states:

“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. … We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.”

 


Stephen Hawking in Memoriam

As we mourn the loss of Stephen Hawking, we should remember that his legacy goes far beyond science. Yes, of course he was one of the greatest scientists of the past century, discovering that black holes evaporate and helping found the modern quest for quantum gravity. But he also had a remarkable legacy as a social activist, who looked far beyond the next election cycle and used his powerful voice to bring out the best in us all. As a founding member of FLI’s Scientific Advisory board, he tirelessly helped us highlight the importance of long-term thinking and ensuring that we use technology to help humanity flourish rather than flounder. I marveled at how he could sometimes answer my emails faster than my grad students. His activism revealed the same visionary fearlessness as his scientific and personal life: he saw further ahead than most of those around him and wasn’t afraid of controversially sounding the alarm about humanity’s sloppy handling of powerful technology, from nuclear weapons to AI.

On a personal note, I’m saddened to have lost not only a long-time collaborator but, above all, a great inspiration, always reminding me of how seemingly insurmountable challenges can be overcome with creativity, willpower and positive attitude. Thanks Stephen for inspiring us all!

2018 International AI Safety Grants Competition

For many years, artificial intelligence (AI) research has been appropriately focused on the challenge of making AI effective, with significant recent success, and great future promise. This recent success has raised an important question: how can we ensure that the growing power of AI is matched by the growing wisdom with which we manage it? […]

AI Researchers Create Video to Call for Autonomous Weapons Ban at UN

In response to growing concerns about autonomous weapons, a coalition of AI researchers and advocacy organizations released a video on Monday that depicts a fictional but disturbing future in which lethal autonomous weapons have become cheap and ubiquitous.

The video was launched in Geneva, where AI researcher Stuart Russell presented it at an event at the United Nations Convention on Conventional Weapons hosted by the Campaign to Stop Killer Robots.

Russell, in an appearance at the end of the video, warns that the technology described in the film already exists and that the window to act is closing fast.

Support for a ban has been mounting. Just this past week, over 200 Canadian scientists and over 100 Australian scientists in academia and industry penned open letters to Canadian Prime Minister Justin Trudeau and Australian Prime Minister Malcolm Turnbull, urging them to support the ban. Earlier this summer, over 130 leaders of AI companies signed a letter in support of this week’s discussions. These letters follow a 2015 open letter released by the Future of Life Institute and signed by more than 20,000 AI/Robotics researchers and others, including Elon Musk and Stephen Hawking.

These letters indicate both grave concern and a sense that the opportunity to curtail lethal autonomous weapons is running out.

Noel Sharkey of the International Committee for Robot Arms Control explains, “The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics and it does not wish to ban autonomous systems in the civilian or military world. Rather we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation and to ensure meaningful human control for every attack.”

Drone technology today is very close to having fully autonomous capabilities. And many of the world’s leading AI researchers worry that if these autonomous weapons are ever developed, they could dramatically lower the threshold for armed conflict, ease and cheapen the taking of human life, empower terrorists, and create global instability. The US and other nations have used drones and semi-automated systems to carry out attacks for several years now, but fully removing a human from the loop is at odds with international humanitarian and human rights law.

A ban can exert great power on the trajectory of technological development without needing to stop every instance of misuse. Max Tegmark, MIT Professor and co-founder of the Future of Life Institute, points out, “People’s knee-jerk reaction that bans can’t help isn’t historically accurate: the bioweapon ban created such a powerful stigma that, despite treaty cheating, we have almost no bioterror attacks today and almost all biotech funding is civilian.”

As Toby Walsh, an AI professor at the University of New South Wales, argues: “The academic community has sent a clear and consistent message. Autonomous weapons will be weapons of terror, the perfect tool for those who have no qualms about the terrible uses to which they are put. We need to act now before this future arrives.”

More than 70 countries are participating in the meeting taking place November 13–17, convened by the Group of Governmental Experts on lethal autonomous weapons that the UN’s 2016 Fifth Review Conference established. The meeting is chaired by Ambassador Amandeep Singh Gill of India, and the countries will continue negotiations toward what could become a historic international treaty.

For more information about autonomous weapons, see the following resources:

An Open Letter to the United Nations Convention on Certain Conventional Weapons

An Open Letter to the United Nations Convention on Certain Conventional Weapons

As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies. We regret that the GGE’s first meeting, which was due to start today (August 21, 2017), has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

Translations: Chinese, German, Japanese, Russian

FULL LIST OF SIGNATORIES TO THE OPEN LETTER

To add your company, please contact Toby Walsh at tw@cse.unsw.edu.au.

Tiberio Caetano, founder & Chief Scientist at Ambiata, Australia.
Mark Chatterton and Leo Gui, founders, MD & of Ingenious AI, Australia.
Charles Gretton, founder of Hivery, Australia.
Brad Lorge, founder & CEO of Premonition.io, Australia
Brenton O’Brien, founder & CEO of Microbric, Australia.
Samir Sinha, founder & CEO of Robonomics AI, Australia.
Ivan Storr, founder & CEO, Blue Ocean Robotics, Australia.
Peter Turner, founder & MD of Tribotix, Australia.
Yoshua Bengio, founder of Element AI & Montreal Institute for Learning Algorithms, Canada.
Ryan Gariepy, founder & CTO of Clearpath Robotics, founder & CTO of OTTO Motors, Canada.
Geoffrey Hinton, founder of DNNResearch Inc, Canada.
James Chow, founder & CEO of UBTECH Robotics, China.
Robert Li, founder & CEO of Sankobot, China.
Marek Rosa, founder & CEO of GoodAI, Czech Republic.
Søren Tranberg Hansen, founder & CEO of Brainbotics, Denmark.
Markus Järve, founder & CEO of Krakul, Estonia.
Harri Valpola, founder & CTO of ZenRobotics, founder & CEO of Curious AI Company, Finland.
Esben Østergaard, founder & CTO of Universal Robotics, Denmark.
Raul Bravo, founder & CEO of DIBOTICS, France.
Ivan Burdun, founder & President of AIXTREE, France.
Raphael Cherrier, founder & CEO of Qucit, France.
Alain Garnier, founder & CEO of ARISEM (acquired by Thales), founder & CEO of Jamespot, France.
Jerome Monceaux, founder & CEO of Spoon.ai, founder & CCO of Aldebaran Robotics, France.
Charles Ollion, founder & Head of Research at Heuritech, France.
Anis Sahbani, founder & CEO of Enova Robotics, France.
Alexandre Vallette, founder of SNIPS & Ants Open Innovation Labs, France.
Marcus Frei, founder & CEO of NEXT.robotics, Germany.
Kristinn Thorisson, founder & Director of Icelandic Institute for Intelligent Machines, Iceland.
Fahad Azad, founder of Robosoft Systems, India.
Debashis Das, Ashish Tupate & Jerwin Prabu, founders (incl. CEO) of Bharati Robotics, India.
Pulkit Gaur, founder & CTO of Gridbots Technologies, India.
Pranay Kishore, founder & CEO of Phi Robotics Research, India.
Shahid Memom, founder & CTO of Vanora Robots, India.
Krishnan Nambiar & Shahid Memon, founders, CEO & CTO of Vanora Robotics, India.
Achu Wilson, founder & CTO of Sastra Robotics, India.
Neill Gernon, founder & MD of Atrovate, founder of Dublin.AI, Ireland.
Parsa Ghaffari, founder & CEO of Aylien, Ireland.
Alan Holland, founder & CEO of Keelvar Systems, Ireland.
Alessandro Prest, founder & CTO of LogoGrab, Ireland.
Frank Reeves, founder & CEO of Avvio, Ireland.
Alessio Bonfietti, founder & CEO of MindIT, Italy.
Angelo Sudano, founder & CTO of ICan Robotics, Italy.
Domenico Talia, founder and R&D Director of DtoK Labs, Italy.
Shigeo Hirose, Michele Guarnieri, Paulo Debenest, & Nah Kitano, founders, CEO & Directors of HiBot Corporation, Japan.
Andrejs Vasiljevs, founder and CEO of Tilde, Latvia.
Luis Samahí García González, founder & CEO of QOLbotics, Mexico.
Koen Hindriks & Joachim de Greeff, founders, CEO & COO at Interactive Robotics, the Netherlands.
Maja Rudinac, founder and CEO of Robot Care Systems, the Netherlands.
Jaap van Leeuwen, founder and CEO Blue Ocean Robotics Benelux, the Netherlands.
Rob Brouwer, founder and Director of Operations, Aeronavics, New Zealand.
Philip Solaris, founder and CEO of X-Craft Enterprises, New Zealand.
Dyrkoren Erik, Martin Ludvigsen & Christine Spiten, founders, CEO, CTO & Head of Marketing at BlueEye Robotics, Norway.
Sergii Kornieiev, founder & CEO of BaltRobotics, Poland.
Igor Kuznetsov, founder & CEO of NaviRobot, Russian Federation.
Aleksey Yuzhakov & Oleg Kivokurtsev, founders, CEO & COO of Promobot, Russian Federation.
Junyang Woon, founder & CEO, Infinium Robotics, former Branch Head & Naval Warfare Operations Officer, Singapore.
Jasper Horrell, founder of DeepData, South Africa.
Onno Huyser and Mark van Wyk, founders of FlyH2 Aerospace, South Africa.
Toni Ferrate, founder & CEO of RO-BOTICS, Spain.
José Manuel del Río, founder & CEO of Aisoy Robotics, Spain.
Victor Martin, founder & CEO of Macco Robotics, Spain.
Angel Lis Montesinos, founder & CTO of Neuronalbite, Spain.
Timothy Llewellynn, founder & CEO of nViso, Switzerland.
Francesco Mondada, founder of K-Team, Switzerland.
Jurgen Schmidhuber, Faustino Gomez, Jan Koutník, Jonathan Masci & Bas Steunebrink, founders, President & CEO of Nnaisense, Switzerland.
Satish Ramachandran, founder of AROBOT, United Arab Emirates.
Silas Adekunle, founder & CEO of Reach Robotics, UK.
Steve Allpress, founder & CTO of FiveAI, UK.
John Bishop, founder and Director of Tungsten Centre for Intelligent Data Analytics, UK.
Joel Gibbard and Samantha Payne, founders, CEO & COO of Open Bionics, UK.
Richard Greenhill & Rich Walker, founders & MD of Shadow Robot Company, UK.
Nic Greenway, founder of React AI Ltd (Aiseedo), UK.
Daniel Hulme, founder & CEO of Satalia, UK.
Bradley Kieser, founder & Director of SMS Speedway, UK.
Charlie Muirhead & Tabitha Goldstaub, founders & CEO of CognitionX, UK.
Geoff Pegman, founder & MD of R U Robots, UK.
Demis Hassabis & Mustafa Suleyman, founders, CEO & Head of Applied AI, DeepMind, UK.
Donald Szeto, Thomas Stone & Kenneth Chan, founders, CTO, COO & Head of Engineering of PredictionIO, UK.
Antoine Blondeau, founder & CEO of Sentient Technologies, USA.
Steve Cousins, founder & CEO of Savioke, USA.
Brian Gerkey, founder & CEO of Open Source Robotics, USA.
Ryan Hickman & Soohyun Bae, founders, CEO & CTO of TickTock.AI, USA.
John Hobart, founder & CEO of Coria, USA.
Henry Hu, founder & CEO of Cafe X Technologies, USA.
Zaib Husain, founder and CEO of Makerarm, Inc.
Alfonso Íñiguez, founder & CEO of Swarm Technology, USA.
Kris Kitchen, founder & Chief Data Scientist at Qieon Research, USA.
Justin Lane, founder of Prospecture Simulation, USA.
Gary Marcus, founder & CEO of Geometric Intelligence (acquired by Uber), USA.
Brian Mingus, founder & CTO of Latently, USA.
Mohammad Musa, founder & CEO at Deepen AI, USA.
Elon Musk, founder, CEO & CTO of SpaceX, co-founder & CEO of Tesla Motor, USA.
Rosanna Myers & Dan Corkum, founders, CEO & CTO of Carbon Robotics, USA.
Erik Nieves, founder & CEO of PlusOne Robotics, USA.
Steve Omohundro, founder & President of Possibility Research, USA.
Jeff Orkin, founder & CEO, Giant Otter Technologies, USA.
Greg Phillips, founder & CEO, ThinkIt Data Solutions, USA.
Dan Reuter, founder & CEO of Electric Movement, USA.
Alberto Rizzoli & Simon Edwardsson, founders & CEO of AIPoly, USA.
Dan Rubins, founder & CEO of Legal Robot, USA.
Stuart Russell, founder & VP of Bayesian Logic Inc., USA.
Andrew Schroeder, founder of WeRobotics, USA.
Stanislav Shalunov, founder & CEO of Clostra, USA
Gabe Sibley & Alex Flint, founders, CEO & CPO of Zippy.ai, USA.
Martin Spencer, founder & CEO of GeckoSystems, USA.
Peter Stone, Mark Ring & Satinder Singh, founders, President/COO, CEO & CTO of Cogitai, USA.
Michael Stuart, founder & CEO of Lucid Holdings, USA.
Madhuri Trivedi, founder & CEO of OrangeHC, USA.
Massimiliano Versace, founder, CEO & President, Neurala Inc, USA.
Reza Zadeh, founder & CEO of Matroid, USA.

Superintelligence survey

Click here to see this page in other languages: Chinese, French, German, Japanese, Russian

The Future of AI – What Do You Think?

Max Tegmark’s new book on artificial intelligence, Life 3.0: Being Human in the Age of Artificial Intelligence, explores how AI will impact life as it grows increasingly advanced, perhaps even achieving superintelligence far beyond human level in all areas. For the book, Max surveys experts’ forecasts, and explores a broad spectrum of views on what will/should happen. But it’s time to expand the conversation. If we’re going to create a future that benefits as many people as possible, we need to include as many voices as possible. And that includes yours! Below are the answers from the first 14,866 people who have taken the survey that goes along with Max’s book. To join the conversation yourself, please take the survey here.


How soon, and should we welcome or fear it?

The first big controversy, dividing even leading AI researchers, involves forecasting what will happen. When, if ever, will AI outperform humans at all intellectual tasks, and will it be a good thing?

Do you want superintelligence?

Everything we love about civilization is arguably the product of intelligence, so we can potentially do even better by amplifying human intelligence with machine intelligence. But some worry that superintelligent machines would end up controlling us and wonder whether their goals would be aligned with ours. Do you want there to be superintelligent AI, i.e., general intelligence far beyond human level?

What Should the Future Look Like?

In his book, Tegmark argues that we shouldn’t passively ask “what will happen?” as if the future is predetermined, but instead ask what we want to happen and then try to create that future.  What sort of future do you want?

If superintelligence arrives, who should be in control?
If you one day get an AI helper, do you want it to be conscious, i.e., to have subjective experience (as opposed to being like a zombie which can at best pretend to be conscious)?
What should a future civilization strive for?
Do you want life spreading into the cosmos?

The Ideal Society?

In Life 3.0, Max explores 12 possible future scenarios, describing what might happen in the coming millennia if superintelligence is/isn’t developed. You can find a cheatsheet that quickly describes each here, but for a more detailed look at the positives and negatives of each possibility, check out chapter 5 of the book. Here’s a breakdown so far of the options people prefer:

You can learn a lot more about these possible future scenarios — along with fun explanations about what AI is, how it works, how it’s impacting us today, and what else the future might bring — when you order Max’s new book.

The results above will be updated regularly. Please add your voice by taking the survey here, and share your comments below!

United Nations Adopts Ban on Nuclear Weapons

Today, 72 years after their invention, states at the United Nations formally adopted a treaty which categorically prohibits nuclear weapons.

With 122 votes in favor, one vote against, and one country abstaining, the “Treaty on the Prohibition of Nuclear Weapons” was adopted Friday morning and will open for signature by states at the United Nations in New York on September 20, 2017. Civil society organizations and more than 140 states have participated throughout negotiations.

On adoption of the treaty, ICAN Executive Director Beatrice Fihn said:

“We hope that today marks the beginning of the end of the nuclear age. It is beyond question that nuclear weapons violate the laws of war and pose a clear danger to global security. No one believes that indiscriminately killing millions of civilians is acceptable – no matter the circumstance – yet that is what nuclear weapons are designed to do.”

In a public statement, Former Secretary of Defense William Perry said:

“The new UN Treaty on the Prohibition of Nuclear Weapons is an important step towards delegitimizing nuclear war as an acceptable risk of modern civilization. Though the treaty will not have the power to eliminate existing nuclear weapons, it provides a vision of a safer world, one that will require great purpose, persistence, and patience to make a reality. Nuclear catastrophe is one of the greatest existential threats facing society today, and we must dream in equal measure in order to imagine a world without these terrible weapons.”

Until now, nuclear weapons were the only weapons of mass destruction without a prohibition treaty, despite the widespread and catastrophic humanitarian consequences of their intentional or accidental detonation. Biological weapons were banned in 1972 and chemical weapons in 1992.

This treaty is a clear indication that the majority of the world no longer accepts nuclear weapons and does not consider them legitimate tools of war. The repeated objection and boycott of the negotiations by many nuclear-weapon states demonstrates that this treaty has the potential to significantly impact their behavior and stature. As has been true with previous weapon prohibition treaties, changing international norms leads to concrete changes in policies and behaviors, even in states not party to the treaty.

“This is a triumph for global democracy, where the pro-nuclear coalition of Putin, Trump and Kim Jong-Un were outvoted by the majority of Earth’s countries and citizens,” said MIT Professor and FLI President Max Tegmark.

“The strenuous and repeated objections of nuclear armed states are an admission that this treaty will have a real and lasting impact,” Fihn said.

The treaty also creates obligations to support the victims of nuclear weapons use (Hibakusha) and testing and to remediate the environmental damage caused by nuclear weapons.

From the beginning, the effort to ban nuclear weapons has benefited from the broad support of international humanitarian, environmental, nonproliferation, and disarmament organizations in more than 100 states. Significant political and grassroots organizing has taken place around the world, and many thousands have signed petitions, joined protests, contacted representatives, and pressured governments.

“The UN treaty places a strong moral imperative against possessing nuclear weapons and gives a voice to some 130 non-nuclear weapons states who are equally affected by the existential risk of nuclear weapons. … My hope is that this treaty will mark a sea change towards global support for the abolition of nuclear weapons. This global threat requires unified global action,” said Perry.

Fihn added, “Today the international community rejected nuclear weapons and made it clear they are unacceptable. It is time for leaders around the world to match their values and words with action by signing and ratifying this treaty as a first step towards eliminating nuclear weapons.”

 

Images courtesy of ICAN.

 

Highlights of the treaty:

Comprehensively bans nuclear weapons and related activity. It will be illegal for parties to undertake any activities related to nuclear weapons. It bans the use, development, testing, production, manufacture, acquisition, possession, stockpiling, transfer, receipt, threat of use, stationing, installation, and deployment of nuclear weapons. [Article 1]

Bans any assistance with prohibited acts. The treaty bans assistance with prohibited acts, and should be interpreted as prohibiting states from engaging in military preparations and planning to use nuclear weapons, financing their development and manufacture, or permitting the transit of them through territorial waters or airspace. [Article 1]

Creates a path for nuclear states that join to eliminate weapons, stockpiles, and programs. It requires states with nuclear weapons that join the treaty to remove them from operational status and destroy them and their programs, all according to plans they would submit for approval. It also requires states that host other countries’ weapons on their territory to have them removed. [Article 4]

Verifies and safeguards that states meet their obligations. The treaty requires a verifiable, time-bound, transparent, and irreversible destruction of nuclear weapons and programs and requires the maintenance and/or implementation of international safeguards agreements. The treaty permits safeguards to become stronger over time and prohibits weakening of the safeguard regime. [Articles 3 and 4]

Requires victim and international assistance and environmental remediation. The treaty requires states to assist victims of nuclear weapons use and testing, and requires environmental remediation of contaminated areas. The treaty also obliges states to provide international assistance to support its implementation. The text further requires states to encourage others to join the treaty, and to meet regularly to review progress. [Articles 6, 7, and 8]

NEXT STEPS

Opening for signature. The treaty will be open for signature on 20 September at the United Nations in New York. [Article 13]

Entry into force. Fifty states are required to ratify the treaty for it to enter into force. At a national level, the process of ratification varies, but it usually requires parliamentary approval and the development of legislation to turn the treaty’s prohibitions into national law. This process is also an opportunity to elaborate additional measures, such as prohibiting the financing of nuclear weapons. [Article 15]

First meeting of States Parties. The first Meeting of States Parties will take place within a year after the entry into force of the Convention. [Article 8]

SIGNIFICANCE AND IMPACT OF THE TREATY

Delegitimizes nuclear weapons. This treaty is a clear indication that the majority of the world no longer accepts nuclear weapons and does not consider them legitimate weapons, creating the foundation of a new norm of international behaviour.

Changes party and non-party behaviour. As has been true with previous weapon prohibition treaties, changing international norms leads to concrete changes in policies and behaviours, even in states not party to the treaty. This is true for treaties ranging from those banning cluster munitions and land mines to the Convention on the Law of the Sea. The prohibition on assistance will play a significant role in changing behaviour, given the impact it may have on financing and on military planning and preparation for the use of nuclear weapons.

Completes the prohibitions on weapons of mass destruction. The treaty completes work begun in the 1970s, when biological weapons were banned, and the 1990s, when chemical weapons were banned.

Strengthens International Humanitarian Law (“Laws of War”). Nuclear weapons are intended to kill millions of civilians – non-combatants – a gross violation of International Humanitarian Law. Few would argue that the mass slaughter of civilians is acceptable, and there is no way to use a nuclear weapon in line with international law. The treaty strengthens these bodies of law and norms.

Removes the prestige associated with proliferation. Countries often seek nuclear weapons for the prestige of being seen as part of an important club. By making nuclear weapons more clearly an object of scorn rather than achievement, the treaty can help deter their spread.

FLI sought to increase support for the negotiations from the scientific community this year. We organized an open letter signed by over 3700 scientists in 100 countries, including 30 Nobel Laureates. You can see the letter here and the video we presented recently at the UN here.

This post is a modified version of the press release provided by the International Campaign to Abolish Nuclear Weapons (ICAN).

Hawking, Higgs and Over 3,000 Other Scientists Support UN Nuclear Ban Negotiations

Click here to see this page in other languages: Chinese, German

Delegates from most UN member states are gathering in New York to negotiate a nuclear weapons ban, where they will also receive a letter of support signed by thousands of scientists from over 80 countries – including 28 Nobel Laureates and a former US Secretary of Defense. “Scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them and discovered that their effects are even more horrific than first thought”, the letter explains.

The letter was delivered at a ceremony at 1pm on Monday March 27 in the UN General Assembly Hall to Her Excellency Ms. Elayne Whyte Gómez from Costa Rica, who is presiding over the negotiations.

Despite all the attention to nuclear terrorism and nuclear rogue states, one of the greatest threats from nuclear weapons has always been mishaps and accidents among the established nuclear nations. With political tensions and instability increasing, this threat is growing to alarming levels: “The probability of a nuclear calamity is higher today, I believe, than it was during the cold war,” according to former U.S. Secretary of Defense William J. Perry, who signed the letter.

“Nuclear weapons represent one of the biggest threats to our civilization. With the unpredictability of the current world situation, it is more important than ever to get negotiations about a ban on nuclear weapons on track, and to make these negotiations a truly global effort,” says neuroscience professor Edvard Moser from Norway, 2014 Nobel Laureate in Physiology/Medicine.

Professor Wolfgang Ketterle from MIT, 2001 Nobel Laureate in Physics, agrees: “I see nuclear weapons as a real threat to the human race and we need an international consensus to reduce this threat.”

Currently, the US and Russia have about 14,000 nuclear weapons combined, many on hair-trigger alert and ready to be launched on minutes’ notice, even though a Pentagon report argued that a few hundred would suffice for rock-solid deterrence. Yet rather than trim their excess arsenals, the superpowers plan massive investments to replace their nuclear weapons with new, destabilizing ones that are more lethal for a first-strike attack.

“Unlike many of the world’s leaders I care deeply about the future of my grandchildren. Even the remote possibility of a nuclear war presents an unconscionable threat to their welfare. We must find a way to eliminate nuclear weapons,” says Sir Richard J. Roberts, 1993 Nobel Laureate in Physiology or Medicine.

“Most governments are frustrated that a small group of countries with a small fraction of the world’s population insist on retaining the right to ruin life on Earth for everyone else with nuclear weapons, ignoring their disarmament promises in the non-proliferation treaty”, says physics professor Max Tegmark from MIT, who helped organize the letter. “In South Africa, the minority in control of the unethical Apartheid system didn’t give it up spontaneously on their own initiative, but because they were pressured into doing so by the majority. Similarly, the minority in control of unethical nuclear weapons won’t give them up spontaneously on their own initiative, but only if they’re pressured into doing so by the majority of the world’s nations and citizens.”

The idea behind the proposed ban is to provide such pressure by stigmatizing nuclear weapons.

Beatrice Fihn, who helped launch the ban movement as Executive Director of the International Campaign to Abolish Nuclear Weapons, explains that such stigmatization made the landmine and cluster munitions bans succeed and can succeed again: “The market for landmines is pretty much extinct—nobody wants to produce them anymore because countries have banned and stigmatized them. Just a few years ago, the United States—who never signed the landmines treaty—announced that it’s basically complying with the treaty. If the world comes together in support of a nuclear ban, then nuclear weapons countries will likely follow suit, even if it doesn’t happen right away.”

Susi Snyder from the Dutch “Don’t Bank on the Bomb” project explains:

“If you prohibit the production, possession, and use of these weapons and the assistance with doing those things, we’re setting a stage to also prohibit the financing of the weapons. And that’s one way that I believe the ban treaty is going to have a direct and concrete impact on the ongoing upgrades of existing nuclear arsenals, which are largely being carried out by private contractors.”

“Nuclear arms are the only weapons of mass destruction not yet prohibited by an international convention, even though they are the most destructive and indiscriminate weapons ever created”, the letter states, motivating a ban.

“The horror that happened at Hiroshima and Nagasaki should never be repeated.  Nuclear weapons should be banned,” says Columbia University professor Martin Chalfie, 2008 Nobel Laureate in Chemistry.

Norwegian neuroscience professor May-Britt Moser, a 2014 Nobel Laureate in Physiology/Medicine, says, “In a world with increased aggression and decreasing diplomacy – the availability of nuclear weapons is more dangerous than ever. Politicians are urged to ban nuclear weapons. The world today and future generations depend on that decision.”

The open letter: https://futureoflife.org/nuclear-open-letter/

A Principled AI Discussion in Asilomar

We, the organizers, found it extraordinarily inspiring to be a part of the BAI 2017 conference, the Future of Life Institute’s second conference on the future of artificial intelligence. Along with being a gathering of endlessly accomplished and interesting people, it gave a palpable sense of shared mission: a major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best.

This sense among the attendees echoes a wider societal engagement with AI that has heated up dramatically over the past few years. Due to this rising awareness of AI, dozens of major reports have emerged from academia (e.g. the Stanford 100 year report), government (e.g. two major reports from the White House), industry (e.g. materials from the Partnership on AI), and the nonprofit sector (e.g. a major IEEE report).

In planning the Asilomar meeting, we hoped both to create meaningful discussion among the attendees, and also to see what, if anything, this rather heterogeneous community actually agreed on. We gathered all the reports we could and compiled a list of scores of opinions about what society should do to best manage AI in coming decades. From this list, we looked for overlaps and simplifications, attempting to distill as much as we could into a core set of principles that expressed some level of consensus. But this “condensed” list still included ambiguity, contradiction, and plenty of room for interpretation and worthwhile discussion.

Leading up to the meeting, we extensively surveyed meeting participants about the list, gathering feedback, evaluation, and suggestions for improved or novel principles. The responses were folded into a significantly revised version for use at the meeting. In Asilomar, we gathered more feedback in two stages. First, small breakout groups discussed subsets of the principles, giving detailed refinements and commentary on them. This process generated improved versions (in some cases multiple new competing versions) and a few new principles. Finally, we surveyed the full set of attendees to determine the level of support for each version of each principle.

After such detailed, thorny and sometimes contentious discussions and a wide range of feedback, we were frankly astonished at the high level of consensus that emerged around many of the statements during that final survey. This consensus allowed us to set a high bar for inclusion in the final list: we only retained principles if at least 90% of the attendees agreed on them.

What remained was a list of 23 principles ranging from research strategies to data rights to future issues including potential super-intelligence, which was signed by those wishing to associate their name with the list. This collection of principles is by no means comprehensive and it’s certainly open to differing interpretations, but it also highlights how the current “default” behavior around many relevant issues could violate principles that most participants agreed are important to uphold.

We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone’s lives in coming years.

To start the discussion, here are some of the things other AI researchers who signed the Principles had to say about them.

Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
“Value alignment is a big one. Robots aren’t going to try to revolt against humanity, but they’ll just try to optimize whatever we tell them to do. So we need to make sure to tell them to optimize for the world we actually want.”

-Anca Dragan, Assistant Professor in the EECS Department at UC Berkeley, and co-PI for the Center for Human Compatible AI
Read her complete interview here.

Shared Prosperity
“I consider that one of the greatest dangers is that people either deal with AI in an irresponsible way or maliciously — I mean for their personal gain. And by having a more egalitarian society, throughout the world, I think we can reduce those dangers. In a society where there’s a lot of violence, a lot of inequality, the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.”

-Yoshua Bengio, Professor of Computer Science and Operations Research at the University of Montreal, and head of the Montreal Institute for Learning Algorithms (MILA)
Read his complete interview here.

Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
“I believe that AI will create profound change even before it is ‘advanced’ and thus we need to plan and manage growth of the technology. As humans we are not good at long-term planning because our civil systems don’t encourage it, however, this is an area in which we must develop our abilities to ensure a responsible and beneficial partnership between man and machine.”

-Kay Firth-Butterfield, Executive Director of AI-Austin.org, and an adjunct Professor of Law at the University of Texas at Austin
Read her complete interview here.

Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
“It’s absolutely crucial that individuals should have the right to manage access to the data they generate… AI does open new insight to individuals and institutions. It creates a persona for the individual or institution – personality traits, emotional make-up, lots of the things we learn when we meet each other. AI will do that too and it’s very personal. I want to control how [my] persona is created. A persona is a fundamental right.”

-Guruduth Banavar, VP, IBM Research, Chief Science Officer, Cognitive Computing
Read his complete interview here.

Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
“The one closest to my heart. … AI systems should behave in a way that is aligned with human values. But actually, I would be even more general than what you’ve written in this principle. Because this principle has to do not only with autonomous AI systems, but I think this is very important and essential also for systems that work tightly with humans in the loop, and also where the human is the final decision maker. Because when you have human and machine tightly working together, you want this to be a real team. So you want the human to be really sure that the AI system works with values aligned to that person. It takes a lot of discussion to understand those values.”

-Francesca Rossi, Research scientist at the IBM T.J. Watson Research Centre, and a professor of computer science at the University of Padova, Italy, currently on leave
Read her complete interview here.

AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“One reason that I got involved in these discussions is that there are some topics I think are very relevant today, and one of them is the arms race that’s happening amongst militaries around the world already, today. This is going to be very destabilizing. It’s going to upset the current world order when people get their hands on these sorts of technologies. It’s actually stupid AI that they’re going to be fielding in this arms race to begin with and that’s actually quite worrying – that it’s technologies that aren’t going to be able to distinguish between combatants and civilians, and aren’t able to act in accordance with international humanitarian law, and will be used by despots and terrorists and hacked to behave in ways that are completely undesirable. And that’s something that’s happening today. You have to see the recent segment on 60 Minutes to see the terrifying swarms of robot UAVs that the American military is now experimenting with.”

-Toby Walsh, Guest Professor at Technical University of Berlin, Professor of Artificial Intelligence at the University of New South Wales, and leads the Algorithmic Decision Theory group at Data61, Australia’s Centre of Excellence for ICT Research
Read his complete interview here.

AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“I’m not a fan of wars, and I think it could be extremely dangerous. Obviously I think that the technology has a huge potential, and even just with the capabilities we have today it’s not hard to imagine how it could be used in very harmful ways. I don’t want my contributions to the field and any kind of techniques that we’re all developing to do harm to other humans or to develop weapons or to start wars or to be even more deadly than what we already have.”

-Stefano Ermon, Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory
Read his complete interview here.

Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
“I agree! As a scientist, I’m against making strong or unjustified assumptions about anything, so of course I agree. Yet this principle bothers me … because it seems to be implicitly saying that there is an immediate danger that AI is going to become superhumanly, generally intelligent very soon, and we need to worry about this issue. This assertion … concerns me because I think it’s a distraction from what are likely to be much bigger, more important, more near term, potentially devastating problems. I’m much more worried about job loss and the need for some kind of guaranteed health-care, education and basic income than I am about Skynet. And I’m much more worried about some terrorist taking an AI system and trying to program it to kill all Americans than I am about an AI system suddenly waking up and deciding that it should do that on its own.”

-Dan Weld, Professor of Computer Science & Engineering and Entrepreneurial Faculty Fellow at the University of Washington
Read his complete interview here.

Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
“In many areas of computer science, such as complexity or cryptography, the default assumption is that we deal with the worst case scenario. Similarly, in AI Safety, we should assume that AI will become maximally capable and prepare accordingly. If we are wrong, we will still be in great shape.”

-Roman Yampolskiy, Associate Professor of CECS at the University of Louisville, and founding director of the Cyber Security Lab
Read his complete interview here.

Obama’s Nuclear Legacy

The following article and infographic were originally posted on Futurism.

The most destructive device that humanity ever created is the nuclear bomb. It’s a technology that is capable of unparalleled devastation; it’s a technology that The United Nations classifies as “the most dangerous weapon on Earth.”

One bomb can destroy a whole city in seconds, and in so doing, end the lives of millions of people (depending on where it is dropped). If that’s not enough, it can throw the natural environment into chaos. We know this because we’ve used them before.

The first device of this kind was unleashed at approximately 8:15 am on August 6th, 1945. At this time, a US B-29 bomber dropped an atomic bomb on the Japanese city of Hiroshima. It killed around 80,000 people instantly. Over the coming years, many more would succumb to radiation sickness. All-in-all, it is estimated that over 200,000 people died as a result of the nuclear blasts in Japan.

How far have we come since then? How many bombs do we have at our disposal? Here’s a look at our legacy.

EA Global X Boston Conference

The first EA Global X conference, EAGxBoston, is being held at MIT on April 30th, 12:30-6:30pm. Boston EAs have created an incredible lineup bringing together a who’s who of researchers, EAs, EA orgs, and up-and-coming orgs including:
Dean Karlan (Yale, Innovations for Poverty Action)
Joshua Greene (Harvard, Moral Cognition Lab)
Rachel Glennerster (MIT, Poverty Action Lab)
Piali Mukhopadhyay (GiveDirectly)
Bruce Friedrich (The Good Food Institute)
Julia Wise (The Centre for Effective Altruism)
Ian Ross (Hampton Creek, Facebook)
Allison Smith (Animal Charity Evaluators)
Elizabeth Pearce (Boston University, Iodine Global Network)
Cher-Wen DeWitt (One Acre Fund)
Rhonda Zapatka (Trickle Up)
Elijah Goldberg (ImpactMatters)
Jason Ketola (MaxMind)
Lucia Sanchez (Innovations for Poverty Action)
Sharon Nunez Gough (Animal Equality)
Bruce Friedrich (The Good Food Institute, New Crop Capital)
Jon Camp (The Humane League)
Victoria Krakovna (Harvard, Future of Life Institute)
Eric Gastfriend (Harvard Business School EA, FLI, and formerly 80,000 Hours)
Dillon Bowen (Tufts EA, formerly 80,000 Hours and Giving What We Can)
Jason Trigg (earning-to-give at a startup and formerly as a hedge fund quant)
and more

The day will be filled with talks, panels, and networking opportunities. The program will address the major effective altruist cause areas of global health, poverty and development, animal agriculture, and global catastrophic risk, as well as movement concerns like conducting research, building community, and choosing a career direction. We will also be introducing some up-and-coming organizations.

FLI’s Victoria Krakovna, Richard Mallah, and Lucas Perry will participate in a panel about Global Catastrophic Risks.

More information and registration can be found on the conference website:
http://eagxboston.com

All proceeds after our minimum costs will be donated to EA charities. If you need a tax-receipt, please contact Randy Carlton <[masked]>. Please note that the early bird special ends on April 19th.

We have a limited amount of space, so if you’d like to join, please register today and share this invitation with interested friends via our Facebook group:
https://www.facebook.com/EAGxBoston/

Let’s get together, and learn what we can do even better together!

EAGxBoston Team from MIT Sloan EA, MIT EA, Tufts EA, Harvard EA, HBS EA, Animal Charity Evaluators and The Commonwealth Market
http://eagxboston.com

Hawking Says ‘Don’t Bank on the Bomb’ and Cambridge Votes to Divest $1 Billion From Nuclear Weapons

1,000 nuclear weapons are more than enough to deter any nation from nuking the US, but we’re hoarding over 7,000, and a long string of near misses has highlighted the continuing risk of an accidental nuclear war, which could trigger a nuclear winter that might kill most people on Earth. Yet rather than trimming our excess nukes, we’re planning to spend $4 million per hour for the next 30 years making them more lethal.
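As a rough back-of-envelope check (my arithmetic, not a figure from the original post), $4 million per hour sustained for 30 years comes to roughly a trillion dollars:

\[
\$4\ \text{million/hour} \times 24 \times 365 \times 30 \approx \$1.05 \times 10^{12} \approx \$1\ \text{trillion},
\]

which is consistent with the “roughly one trillion dollars” figure cited in the model policy order below.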

Although I’m used to politicians wasting my tax dollars, I was shocked to realize that I was voluntarily using my money for this nuclear boondoggle by investing in the very companies that are lobbying for and building new nukes: some of the money in my bank account gets loaned to them and my S&P500 mutual fund invests in them. “If you want to slow the nuclear arms race, then put your money where your mouth is and don’t bank on the bomb!”, my physics colleague Stephen Hawking told me. To make it easier for others to follow his sage advice, I made an app for that together with my friends at the Future of Life Institute, and launched this “Brief History of Nukes” that’s 3.14 long in honor of Hawking’s fascination with pi.

Our campaign got off to an amazing start this weekend at an MIT conference, where our Mayor Denise Simmons announced that the Cambridge City Council has unanimously decided to divest its billion-dollar city pension fund from nuclear weapons production. “Not in our name!”, she said, and drew a standing ovation. “It’s my hope that this will inspire other municipalities, companies and individuals to look at their investments and make similar moves”.

“In Europe, over 50 large institutions have already limited their nuclear weapon investments, but this is our first big success in America”, said Susi Snyder, who leads the global nuclear divestment campaign dontbankonthebomb.com. Boston College philosophy major Lucas Perry, who led the effort to persuade Cambridge to divest, hopes that the online analysis tool will create a domino effect: “I want to empower other students opposing the nuclear arms race to persuade their own towns and universities to follow suit.”

Many financial institutions now offer mutual funds that cater to the growing interest in socially responsible investing, including Ariel, Calvert, Domini, Neuberger, Parnassus, Pax World and TIAA-CREF. “We appreciate and share Cambridge’s desire to exclude nuclear weapons production from its pension fund. Pension funds are meant to serve the long-term needs of retirees, a service that nuclear weapons do not offer”, said Julie Fox Gorte, Senior Vice President for Sustainable Investing at Pax World.

“Divestment is a powerful way to stigmatize the nuclear arms race through grassroots campaigning, without having to wait for politicians who aren’t listening”, said conference co-organizer Cole Harrison, Executive Director of Massachusetts Peace Action, the nation’s largest grassroots peace organization. “If you’re against spending more money making us less safe, then make sure it’s not your money.”

You’ll find our divestment app here. If you’d like to persuade your own municipality to follow Cambridge’s lead, using their policy order as a model, here it is:

WHEREAS: Nations across the globe still maintain over 15,000 nuclear weapons, some of which are hundreds of times more powerful than those that obliterated Hiroshima and Nagasaki, and detonation of even a small fraction of these weapons could create a decade-long nuclear winter that could destroy most of the Earth’s population; and
WHEREAS: The United States has plans to invest roughly one trillion dollars over the coming decades to upgrade its nuclear arsenal, which many experts believe actually increases the risk of nuclear proliferation, nuclear terrorism, and accidental nuclear war; and
WHEREAS: In a period where federal funds are desperately needed in communities like Cambridge in order to build affordable housing, improve public transit, and develop sustainable energy sources, our tax dollars are being diverted to and wasted on nuclear weapons upgrades that would make us less safe; and
WHEREAS: Investing in companies producing nuclear weapons implicitly supports this misdirection of our tax dollars; and
WHEREAS: Socially responsible mutual funds and other investment vehicles are available that accurately match the current asset mix of the City of Cambridge Retirement Fund while excluding nuclear weapons producers; and
WHEREAS: The City of Cambridge is already on record in supporting the abolition of nuclear weapons, opposing the development of new nuclear weapons, and calling on President Obama to lead the nuclear disarmament effort; now therefore be it
ORDERED: That the City Council go on record opposing investing funds from the Cambridge Retirement System in any entities that are involved in or support the production or upgrading of nuclear weapons systems; and be it further
ORDERED: That the City Manager be and hereby is requested to work with the Cambridge Peace Commissioner and other appropriate City staff to organize an informational forum on possibilities for Cambridge individuals and institutions to divest their pension funds from investments in nuclear weapons contractors; and be it further
ORDERED: That the City Manager be and hereby is requested to work with the Board of the Cambridge Retirement System and other appropriate City staff to ensure divestment from all companies involved in production of nuclear weapons systems, and in entities investing in such companies, and the City Manager is requested to report back to the City Council about the implementation of said divestment in a timely manner.

AAAI Safety Workshop Highlights: Debate, Discussion, and Future Research

The 30th annual Association for the Advancement of Artificial Intelligence (AAAI) conference kicked off on February 12 with two days of workshops, followed by the main conference, which is taking place this week. FLI is honored to have been a part of the AI, Ethics, and Safety Workshop that took place on Saturday, February 13.


Phoenix Convention Center where AAAI 2016 is taking place.

The workshop featured many fascinating talks and discussions, but perhaps the most contested and controversial was Toby Walsh’s talk, “Why the Technological Singularity May Never Happen.”

Walsh explained that, though general knowledge has increased, human capacity for learning has remained relatively consistent for a very long time. “Learning a new language is still just as hard as it’s always been,” he offered as an example. If we can’t teach ourselves how to learn faster, he argued, there is no reason to believe that machines will be any more successful at the task.

He also argued that even if we assume we can improve intelligence, there is no reason to assume it will increase exponentially and trigger an intelligence explosion. He believes it is just as plausible that each generation of machines improves by only half as much as the one before: intelligence would keep increasing, but the gains would shrink rather than compound, and the total improvement would be bounded.
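To make that point concrete (this framing is my illustration, not Walsh’s own formalism): if the first generation’s gain is \(\Delta\) and each subsequent generation improves by half as much as the previous one, the total improvement is a convergent geometric series,

\[
\Delta \sum_{k=0}^{\infty} \left(\tfrac{1}{2}\right)^{k} = 2\Delta,
\]

so capability keeps rising but approaches a finite ceiling of twice the initial gain rather than exploding.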

Walsh does anticipate superintelligent systems, but he’s just not convinced they will be the kind that can lead to an intelligence explosion. In fact, as one of the primary authors of the Autonomous Weapons Open Letter, Walsh is certainly concerned about aspects of advanced AI, and he ended his talk with concerns about both weapons and job loss.

Both during and after his talk, members of the audience vocally disagreed, providing various arguments about why an intelligence explosion could be likely. Max Tegmark drew laughter from the crowd when he pointed out that while Walsh was arguing that a singularity might not happen, the audience was arguing that it might happen, and these “are two perfectly consistent viewpoints.”

Tegmark added, “As long as one is not sure if it will happen or it won’t, it’s wise to simply do research and plan ahead and try to make sure that things go well.”

As Victoria Krakovna has also explained in a previous post, there are other risks associated with AI that can occur without an intelligence explosion.

The afternoon talks were all dedicated to technical research by current FLI grant winners, including Vincent Conitzer, Fuxin Li, Francesca Rossi, Bas Steunebrink, Manuela Veloso, Brian Ziebart, Jacob Steinhardt, Nate Soares, Paul Christiano, Stefano Ermon, and Benjamin Rubinstein. Topics ranged from ensuring value alignment between humans and AI to safety constraints and security evaluation, and much more.

While much of the research presented will apply to future AI designs and applications, Li and Rubinstein presented examples of research related to image recognition software that could potentially be used more immediately.

Li explained the risks associated with visual recognition software, including how someone could intentionally modify an image in a human-imperceptible way so that the software misidentifies it. Current methods rely on machines referencing huge quantities of images to learn what any given image depicts. However, even a tiny perturbation of the input can lead to large errors. Li’s own research looks at alternative ways for machines to recognize an image, thus limiting the errors.
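To make this kind of attack concrete, here is a minimal, hypothetical sketch using the standard fast-gradient-sign idea. It is an illustration only, not the method Li presented, and the tiny model and random image are placeholders for a real trained classifier and photograph.

import torch
import torch.nn as nn
import torch.nn.functional as F

# A tiny stand-in classifier (hypothetical); any differentiable image model would do.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "photo"
label = torch.tensor([0])                             # its correct class

# Gradient of the classification loss with respect to the pixels themselves.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel slightly in the direction that most increases the loss.
epsilon = 0.01  # far smaller than anything a human eye would notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

# With a real trained network, the two images look identical to a person,
# yet the predicted labels can differ.
print("original prediction: ", model(image).argmax(dim=1).item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())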

Rubinstein’s focus is geared more toward security. The research he presented at the workshop is similar to facial recognition, but goes a step farther, to understand how small changes made to one face can lead systems to confuse the image with that of someone else.

Fuxin Li


Ben Rubinstein

 

 


Future of beneficial AI research panel: Francesca Rossi, Nate Soares, Tom Dietterich, Roman Yampolskiy, Stefano Ermon, Vincent Conitzer, and Benjamin Rubinstein.

The day ended with a panel discussion on the next steps for AI safety research that also drew much debate between panelists and the audience. The panel included AAAI president, Tom Dietterich, as well as Rossi, Soares, Conitzer, Ermon, Rubinstein, and Roman Yampolskiy, who also spoke earlier in the day.

Among the prevailing themes were concerns about ensuring that AI is used ethically by its designers, as well as ensuring that a good AI can’t be hacked to do something bad. There were suggestions to build on the idea that AI can help a human be a better person, but again, concerns about abuse arose. For example, an AI could be designed to help voters determine which candidate would best serve their needs, but then how can we ensure that the AI isn’t secretly designed to promote a specific candidate?

Judy Goldsmith, sitting in the audience, encouraged the panel to consider whether or not an AI should be able to feel pain, which led to extensive discussion about the pros and cons of creating an entity that can suffer, as well as questions about whether such a thing could be created.


Francesca Rossi and Nate Soares


Tom Dietterich and Roman Yampolskiy

After an hour of discussion many suggestions for new research ideas had come up, giving researchers plenty of fodder for the next round of beneficial-AI grants.

We’d also like to congratulate Stuart Russell and Peter Norvig who were awarded the 2016 AAAI/EAAI Outstanding Educator Award for their seminal text “Artificial Intelligence: A Modern Approach.” As was mentioned during the ceremony, their work “inspired a new generation of scientists and engineers throughout the world.”


Congratulations to Peter Norvig and Stuart Russell!