Podcast: Choosing a Career to Tackle the World’s Biggest Problems with Rob Wiblin and Brenton Mayer

If you want to improve the world as much as possible, what should you do with your career? Should you become a doctor, an engineer or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group, 80,000 Hours, tries to answer.

To learn more, I spoke with Rob Wiblin and Brenton Mayer of 80,000 Hours. The following are highlights of the interview, but you can listen to the full podcast above or read the transcript here.

Can you give us some background about 80,000 Hours?

Rob: 80,000 Hours has been around for about six years and started when Benjamin Todd and Will MacAskill wanted to figure out how they could do as much good as possible. They started looking into questions like the odds of becoming an MP in the UK, or how many lives you would save if you became a doctor. Pretty quickly, they were learning things that no one else had investigated.

They decided to start 80,000 Hours, which would conduct this research in a more systematic way and share it with people who wanted to do more good with their career.

80,000 hours is roughly the number of hours that you’d work in a full-time professional career. That’s a lot of time, so it pays off to spend quite a while thinking about what you’re going to do with that time.

On the other hand, 80,000 hours is not that long relative to the scale of the problems that the world faces. You can’t tackle everything. You’ve only got one career, so you should be judicious about what problems you try to solve and how you go about solving them.

How do you help people have more of an impact with their careers?

Brenton: The main thing is a career guide. We’ll talk about how to have satisfying careers, how to work on one of the world’s most important problems, and how to set yourself up early so that later on you can have a really large impact.

The second thing we do is career coaching, where we try to apply that advice to individuals.

What is earning to give?

Rob: Earning to give is the career approach where you try to make a lot of money and give it to organizations that can use it to have a really large positive impact. I know people who can make millions of dollars a year doing the thing they love and donate most of that to effective nonprofits, supporting 5, 10, 15, possibly even 20 people to do direct work in their place.

Can you talk about research you’ve been doing regarding the world’s most pressing problems?

Rob: One of the first things we realized is that if you’re trying to help people alive today, your money can go further in the developing world. We just need to scale up solutions to basic health problems and economic issues that have been resolved elsewhere.

Moving beyond that, what other groups in the world are extremely neglected? Factory farmed animals really stand out. There’s very little funding focused on improving farm animal welfare.

The next big idea was, of all the people that we could help, what fraction are alive today? We think that it’s only a small fraction. There’s every reason to think humanity could live for another 100 generations on Earth and possibly even have our descendants alive on other planets.

We worry a lot about existential risks and ways that civilization can go off track and never recover. Thinking about the long-term future of humanity is where a lot of our attention goes and where I think people can have the largest impact with their career.

Regarding artificial intelligence safety, nuclear weapons, biotechnology, and climate change, can you discuss the different ways people could pursue either direct careers or “earning to give” options in these fields?

Rob: One would be to specialize in machine learning or other technical work and use those skills to figure out how we can make artificial intelligence aligned with human interests. How do we make the AI do what we want and not things that we don’t intend?

Then there’s the policy and strategy side, trying to answer questions like: how do we prevent an AI arms race? Do we want artificial intelligence running military robots? Do we want the government to be more involved in regulating artificial intelligence or less involved? You can also approach this if you have a good understanding of politics, policy, and economics, and you can potentially work in government, the military, or think tanks.

Things like communications, marketing, organization, project management, and fundraising operations — those kinds of things can be quite hard to find skilled, reliable people for. And it can be surprisingly hard to find people who can handle media or do art and design. If you have those skills, you should seriously consider applying to whatever organizations you admire.

[For nuclear weapons] I’m interested in anything that can promote peace between the United States and Russia and China. A war between those groups or an accidental nuclear incident seems like the most likely thing to throw us back to the stone age or even pre-stone age.

I would focus on ensuring that they don’t get false alarms, on increasing trust between the countries in general, and on improving communication lines so that if there are false alarms, they can quickly defuse the situation.

The best opportunities [in biotech] are in early surveillance of new diseases. If there’s a new disease coming out, a new flu for example, it takes a long time to figure out what’s happened.

And when it comes to controlling new diseases, time is really of the essence. If you can pick it up within a few days or weeks, then you have a reasonable shot at quarantining the people, following up with everyone they’ve met, and containing it. Any technologies we can invent or any policies that allow us to identify new diseases before they’ve spread to too many people are going to help with natural pandemics as well as with synthetic biology risks or accidental releases of diseases by biological researchers.

Brenton: A Wagner and Weitzman paper suggests that there’s about a 10% chance of warming larger than 4.8 degrees Celsius, or a 3% chance of more than 6 degrees Celsius. These are really disastrous outcomes. If you’re interested in climate change, we’re pretty excited about you working on these very bad scenarios. Sensible things to do would be improving our ability to forecast; thinking about the positive feedback loops that might be inherent in Earth’s climate; thinking about how to enhance international cooperation.

Rob: It does seem like solar power and storage of energy from solar power is going to have the biggest impact on emissions over at least the next 50 years. Anything that can speed up that transition makes a pretty big contribution.

Rob, can you explain your interest in long-term multigenerational indirect effects and what that means?

Rob: If you’re trying to help people and animals thousands of years in the future, you have to help them through a causal chain that involves changing the behavior of someone today and then that’ll help the next generation and so on.

One way to improve the long-term future of humanity is to do very broad things that improve human capabilities like reducing poverty, improving people’s health, making schools better.

But in a world where the more science and technology we develop, the more power we have to destroy civilization, it becomes less clear that broadly improving human capabilities is a great way to make the future go better. If you improve science and technology, you both improve our ability to solve problems and create new problems.

I think about which technologies we can invent that disproportionately make the world safer rather than riskier. It’s great to improve the technology to discover new diseases quickly and to produce vaccines for them quickly, but I’m less excited about generically pushing forward the life sciences because there are a lot of potential downsides there as well.

Another way that we can robustly prepare humanity to deal with the long-term future is to have better foresight about the problems that we’re going to face. That’s a very concrete thing you can do that puts humanity in a better position to tackle problems in the future — just being able to anticipate those problems well ahead of time so that we can dedicate resources to averting those problems.

To learn more, visit 80000hours.org and subscribe to Rob’s new podcast.

Podcast: Life 3.0 – Being Human in the Age of Artificial Intelligence

Elon Musk has called it a compelling guide to the challenges and choices in our quest for a great future of life on Earth and beyond, while Stephen Hawking and Ray Kurzweil have referred to it as an introduction and guide to the most important conversation of our time. “It” is Max Tegmark’s new book, Life 3.0: Being Human in the Age of Artificial Intelligence.

Tegmark is a physicist and AI researcher at MIT, and he’s also the president of the Future of Life Institute.

The following interview has been heavily edited for brevity, but you can listen to it in its entirety above or read the full transcript here.

What makes Life 3.0 an important read for anyone who wants to understand and prepare for our future?

There’s been lots of talk about AI disrupting the job market and enabling new weapons, but very few scientists talk seriously about what I think is the elephant in the room: what will happen once machines outsmart us at all tasks?

Will superhuman artificial intelligence arrive in our lifetime? Can and should it be controlled, and if so, by whom? Can humanity survive in the age of AI? And if so, how can we find meaning and purpose if super-intelligent machines provide for all our needs and make all our contributions superfluous?

I’m optimistic that we can create a great future with AI, but it’s not going to happen automatically. We have to win this race between the growing power of the technology, and the growing wisdom with which we manage it. We don’t want to learn from mistakes. We want to get things right the first time because that might be the only time we have.

There are still a lot of AI researchers telling us not to worry. What is your response to them?

There are two very basic questions where the world’s leading AI researchers totally disagree.

One of them is: when, if ever, are we going to get super-human general artificial intelligence? Some people think it’s never going to happen or will take hundreds of years. Many others think it’s going to happen in decades. The other controversy is what’s going to happen if we ever get beyond human-level AI.

Then there are a lot of very serious AI researchers who think that this could be the best thing ever to happen, but it could also lead to huge problems. It’s really boring to sit around and quibble about whether we should worry or not. What I’m interested in is asking what concretely can we do today that’s going to increase the chances of things going well because that’s all that actually matters.

There’s also a lot of debate about whether people should focus on just near-term risks or just long-term risks.

We should obviously focus on both. Take what you’re calling the short-term questions, like, for example, how do you make computers that are robust, that do what they’re supposed to do, that don’t crash and don’t get hacked? That’s not only something we absolutely need to solve in the short term as AI gets more and more into society, but it’s also a valuable stepping stone toward tougher questions. How are you ever going to build a super-intelligent machine that you’re confident is going to do what you want, if you can’t even build a laptop that does what you want instead of giving you the blue screen of death or the spinning wheel of doom?

If you want to go far in one direction, first you take one step in that direction.

You mention 12 options for what you think a future world with superintelligence will look like. Could you talk about a couple of the future scenarios? And then what are you hopeful for, and what scares you?

Yeah, I confess, I had a lot of fun brainstorming these different scenarios. When we envision the future, we almost inadvertently obsess about gloomy stuff. Instead, we really need these positive visions to think about what kind of society we would like to have if we have enough intelligence at our disposal to eliminate poverty, disease, and so on. If it turns out that AI can help us solve these challenges, what do we want?

If we have very powerful AI systems, it’s crucial that their goals are aligned with our goals. We don’t want to create machines that are first very excited about helping us and then later get as bored with us as kids get with Legos.

Finally, what should the goals be that we want these machines to safeguard? There’s obviously no consensus on Earth for that. Should it be Donald Trump’s goals? Hillary Clinton’s goals? ISIS’s goals? Whose goals should it be? How should this be decided? This conversation can’t just be left to tech nerds like myself. It has to involve everybody because it’s everybody’s future that’s at stake here.

If we actually create an AI or multiple AI systems that can do this, what do we do then?

That’s one of those huge questions that everybody should be discussing. Suppose we get machines that can do all our jobs, produce all our goods and services for us. How do you want to distribute this wealth that’s produced? Just because you take care of people materially, doesn’t mean they’re going to be happy. How do you create a society where people can flourish and find meaning and purpose in their lives even if they are not necessary as producers? Even if they don’t need to have jobs?

You have a whole chapter dedicated to the cosmic endowment and what happens in the next billion years and beyond. Why should we care about something so far into the future?

It’s a beautiful idea if our cosmos can continue to wake up more, and life can flourish here on Earth, not just for the next election cycle, but for billions of years and throughout the cosmos. We have over a billion planets in this galaxy alone, which are very nice and habitable. If we think big together, this can be a powerful way to put our differences aside on Earth and unify around the bigger goal of seizing this great opportunity.

If we were to just blow it by some really poor planning with our technology and go extinct, wouldn’t we really have failed in our responsibility?

What do you see as the risks and the benefits of creating an AI that has consciousness?

There is a lot of confusion in this area. If you worry about some machine doing something bad to you, consciousness is a complete red herring. If you’re chased by a heat-seeking missile, you don’t give a hoot whether it has a subjective experience. You wouldn’t say, “Oh I’m not worried about this missile because it’s not conscious.”

If we create very intelligent machines, say a helper robot you can have conversations with that says pretty interesting things, wouldn’t you want to know whether it feels like something to be that helper robot? Whether it’s conscious, or just a zombie pretending to have these experiences? If you knew that it felt conscious much like you do, presumably that would put it in a very different situation ethically.

It’s not our universe giving meaning to us, it’s we conscious beings giving meaning to our universe. If there’s nobody experiencing anything, our whole cosmos just goes back to being a giant waste of space. It’s going to be very important for these various reasons to understand what it is about information processing that gives rise to what we call consciousness.

Why and when should we concern ourselves with outcomes that have low probabilities?

I and most of my AI colleagues don’t think that the probability is very low that we will eventually be able to replicate human intelligence in machines. The question isn’t so much “if,” although there are certainly a few detractors out there, the bigger question is “when.”

If we start getting close to human-level AI, there’s an enormous Pandora’s box, which we want to open very carefully and just make sure that if we build these very powerful systems, they already have enough safeguards built into them that some disgruntled ex-boyfriend isn’t going to use them for a vendetta, and some ISIS member isn’t going to use them for their latest plot.

How can the average concerned citizen get more involved in this conversation, so that we can all have a more active voice in guiding the future of humanity and life?

Everybody can contribute! We set up a website, ageofai.org, where we’re encouraging everybody to come and share their ideas for how they would like the future to be. We really need the wisdom of everybody to chart a future worth aiming for. If we don’t know what kind of future we want, we’re not going to get it.

Podcast: The Art of Predicting with Anthony Aguirre and Andrew Critch

How well can we predict the future? In this podcast, Ariel speaks with Anthony Aguirre and Andrew Critch about the art of predicting the future, what constitutes a good prediction, and how we can better predict the advancement of artificial intelligence. They also touch on the difference between predicting a solar eclipse and predicting the weather, what it takes to make money on the stock market, and the bystander effect regarding existential risks.

Anthony is a professor of physics at the University of California at Santa Cruz. He’s one of the founders of the Future of Life Institute, of the Foundational Questions Institute, and most recently of metaculus.com, which is an online effort to crowdsource predictions about the future of science and technology. Andrew is on a two-year leave of absence from MIRI to work with UC Berkeley’s Center for Human Compatible AI. He cofounded the Center for Applied Rationality, and previously worked as an algorithmic stock trader at Jane Street Capital.

The following interview has been heavily edited for brevity, but you can listen to it in its entirety above or read the full transcript here.

Ariel: To start, what are predictions? What are the hallmarks of a good prediction? How does that differ from just guessing?

Anthony: I would say there are four aspects to a good prediction. One, it should be specific, well-defined and unambiguous. If you predict something’s going to happen, everyone should agree on whether that thing has happened or not. This can be surprisingly difficult to do.

Second, it should be probabilistic. A really good prediction is a probability for something happening.

Third, a prediction should be precise. If you give everything a 50% chance, you’ll never be terribly wrong, but you’ll also never be terribly right. Predictions are really interesting to the extent that they say something is either very likely or very unlikely. Precision is what we would aim for.

Fourth, you want to be well-calibrated. If there are 100 things that you predict with 90% confidence, around 90% of those things should come true.

Precision and calibration kind of play off against each other, but it’s very difficult to be both precise and well-calibrated about the future.

Andrew: Of the properties Anthony mentioned, being specific, meaning it’s clear what the prediction is saying and when it will be settled — I think people really don’t appreciate how psychologically valuable that is.

People really undervalue the extent to which the specificity of a prediction is also part of your own training as a predictor. The last property Anthony mentioned, calibration, is not just a property of a prediction. It’s a property of a predictor.

A good predictor is somebody who strives for calibration while also trying to be precise and get their probabilities as close to zero and one as they can.
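To make the calibration idea concrete, here is a minimal sketch in Python (an editorial illustration, not something from the interview or from Metaculus itself) of how a predictor's calibration might be checked, assuming you have recorded each prediction's stated probability and whether the event actually happened:

```python
from collections import defaultdict

def calibration_report(predictions):
    """Group predictions into 10%-wide confidence bins and compare the
    stated confidence in each bin with how often the events actually happened.

    `predictions` is a list of (probability, outcome) pairs, where `outcome`
    is True if the predicted event occurred.
    """
    bins = defaultdict(list)
    for prob, outcome in predictions:
        lower = min(int(prob * 10), 9) / 10   # bins [0.0, 0.1), ..., [0.9, 1.0]
        bins[lower].append(outcome)

    for lower in sorted(bins):
        outcomes = bins[lower]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {lower:.0%}-{lower + 0.1:.0%}: "
              f"{observed:.0%} came true ({len(outcomes)} predictions)")

# A well-calibrated predictor's 90%-confidence forecasts should come
# true roughly 90% of the time.
sample = [(0.9, True)] * 9 + [(0.9, False)]
calibration_report(sample)   # stated 90%-100%: 90% came true (10 predictions)
```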

Ariel: What is the difference between prediction versus just guessing or intuition? For example, knowing that the eclipse will happen in August versus not knowing what the weather will be like yet.

Andrew: The difference is that the weather is very unpredictable, while the locations of planets and moons and stars are predictable. I would say it’s the lack of a reliable model or method for making the prediction.

Anthony: There is an incredibly accurate prediction of the eclipse this coming August, but there is some tiny bit of uncertainty that you don’t see because we know so precisely where the planets are.

When you look at weather, there’s lots of uncertainty because we don’t have some measurement device at every position measuring every temperature and density of the atmosphere and the water at every point on earth. There’s uncertainty in the initial conditions, and then the physics amplifies those initial uncertainties into bigger uncertainties later on. That’s the hallmark of a chaotic physical system, which the atmosphere happens to be.

It’s interesting that different physical systems are so different in their predictability.

Andrew: That’s a really important thing for people to realize about predicting the future. They see the stock market, how unpredictable it is, and they know the stock market has something to do with the news and with what’s going on in the world. That must mean that the world itself is extremely hard to predict, but I think that’s an error. The reason the stock market is hard to predict is because it is a prediction.

If you’ve already made a prediction, predicting what is wrong about your prediction is really hard — if you knew that, you would have just made that part of your prediction to begin with. That’s something to meditate on. The world is not always as hard to predict as the stock market. I can predict that there’s going to be a traffic jam tomorrow on the commute from the East Bay to San Francisco, between the hours of 6:00 a.m. and 10:00 a.m.

I think some aspects of social systems are actually very easy to predict. An individual human driver might be very hard to predict. But if you see 10,000 people driving down the highway, you get a strong sense of whether there’s going to be a traffic jam. Sometimes unpredictable phenomena can add up to predictable phenomena, and I think that’s a really important feature of making good long-term predictions with complicated systems.

Anthony: It’s often said that climate is more predictable than weather. Although the individual day-to-day fluctuations are difficult to predict, it’s very easy to predict that, in general, winter in the Northern Hemisphere is going to be colder than the summer. There are lots of statistical regularities that emerge when you average over large numbers.

Ariel: As we’re trying to understand what the impact of artificial intelligence will be on humanity, how do we consider what would be a complex prediction? What’s a simple prediction? What sort of information do we need to do this?

Anthony: Well, that’s a tricky one. One of the best methods of prediction for lots of things is just simple extrapolation. For many physical systems, once you can discern a trend, you can fit a pretty simple function to it.

When you’re talking about artificial intelligence, there are some hard aspects to predict, but also some relatively easy aspects to predict, like looking at the amount of funding that’s being given to artificial intelligence research or the computing power and computing speed and efficiency, following Moore’s Law and variants of it.

Andrew: People often think of mathematics as a source of certainty, but sometimes you can be certain that you are uncertain or you can be certain that you can’t be certain about something else.

A simple trend, like Moore’s Law, is a summary of what you see from a very complicated system, namely a bunch of companies and a bunch of people working to build smaller and faster and cheaper and more energy efficient hardware. That’s a very complicated system that somehow adds up to fairly simple behavior.

A hallmark of good prediction is that when you find a trend, the first question you should ask yourself is: what is giving rise to this trend, and can I expect it to continue? That’s more art than science, but it’s a critical art, because otherwise we end up blindly following trends that are bound to fail.
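As a rough illustration of the simple-extrapolation approach Anthony describes (the data points below are invented for the example, not taken from the interview), one can fit an exponential, Moore's-Law-style trend by fitting a straight line in log space and then extrapolating it, keeping Andrew's caveat in mind that the forecast only holds if whatever drives the trend keeps operating:

```python
import numpy as np

# Hypothetical (year, transistor count) data, invented for illustration.
years = np.array([2000, 2004, 2008, 2012, 2016])
counts = np.array([4.2e7, 1.2e8, 7.9e8, 3.1e9, 7.2e9])

# Exponential growth is linear in log space, so fit a straight line to log(count).
slope, intercept = np.polyfit(years, np.log(counts), deg=1)
print(f"Implied doubling time: {np.log(2) / slope:.1f} years")

# Naive extrapolation to 2030: only meaningful if the underlying system
# (firms, physics, economics) keeps behaving the way it did during the fit.
print(f"Extrapolated count in 2030: {np.exp(intercept + slope * 2030):.2e}")
```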

Ariel: I want to ask about who is making the prediction. With AI, for example, we see smart people in the field who predict AI will make life great, while others are worried. With existential risks, we see surveys and efforts in which experts in the field try to predict the odds of human extinction. How much can we rely on “experts in the field”?

Andrew: I can certainly tell you that thinking for 30 consecutive minutes about what could cause human extinction is much more productive than thinking for one consecutive minute. There are hard-to-notice mistakes about human extinction predictions that you probably can’t figure out from 30 seconds of reasoning.

Not everyone who’s an expert, say, in nuclear engineering or artificial intelligence is an expert in reasoning about human extinction. You have to be careful who you call an expert.

Anthony: I also feel that something similar is true about prediction. In general, making predictions is greatly aided by domain knowledge and expertise in the thing you’re making a prediction about, but that expertise is far from sufficient for making accurate predictions.

One of the things I’ve seen running Metaculus is that there are people who know a tremendous amount about a subject and are just terrible at making predictions about it. Other people, even if their actual domain knowledge is lower, are much, much better at it because they’re comfortable with statistics and have had practice making predictions.

Ariel: Anthony, with Metaculus, one of the things that you’re trying to do is get more people involved in predicting. What is the benefit of more people?

Anthony: There are a few benefits. One is that lots of people get the benefit of practice. Thinking about things that you tend to be more wrong on and what they might correlate with — that’s incredibly useful and makes you more effective.

In terms of actually creating accurate predictions, you’ll have more people who are really good at it. You can figure out who is good at predicting, and who is good at predicting a particular type of thing. One of the interesting things is that it isn’t just luck. There is a skill that people can develop and obtain, and then can be relied upon in the future.

Then, the third, and maybe this is the most important, is just statistics. Aggregating lots of people’s predictions tends to make a more accurate aggregate.
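As a small editorial illustration of that statistical point (the numbers are made up, and this is not how Metaculus actually aggregates), pooling forecasts can be as simple as taking the mean or median of the stated probabilities, perhaps weighting forecasters by their track record:

```python
import statistics

# Hypothetical probabilities from ten forecasters for the same yes/no question.
forecasts = [0.55, 0.60, 0.70, 0.40, 0.65, 0.80, 0.50, 0.60, 0.75, 0.45]

# Simple unweighted pooling.
print("mean:  ", round(statistics.mean(forecasts), 2))
print("median:", round(statistics.median(forecasts), 2))

# Giving more weight to forecasters with a better track record (weights
# invented here; a real system would estimate them from past accuracy).
weights = [1.0, 2.5, 0.5, 1.0, 3.0, 0.5, 1.0, 2.0, 1.5, 1.0]
weighted = sum(w * p for w, p in zip(weights, forecasts)) / sum(weights)
print("weighted mean:", round(weighted, 2))
```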

Andrew: I would also just like to say that I think the existence of systems like Metaculus is going to be really important for society improving its ability to understand the world.

Whose job is it to think for a solid hour about a human extinction risk? The answer is almost nobody. So we ought not to expect that just averaging the wisdom of the crowds is going to do super well on answering a question like that.

Ariel: Back to artificial intelligence and the question of timelines. How helpful is it for us to try to make predictions about when things will happen with AI? And who should make those predictions?

Andrew: I have made a career shift toward trying to design control mechanisms for highly intelligent AI. I made that career shift based on my own personal forecast of the future and what I think will be important, but I don’t reevaluate that forecast every day, just as I don’t reevaluate what neighborhood I should live in every day. At some point, you need to commit to a path and follow it for a while to get anything done.

I think most AI researchers should, at some point, do the mental exercise of mapping out timelines and seeing what needs to happen, but they should do it deeply once every few years in collaboration with a few other people, and then stick to something that they think is going to help steer AI in a positive direction. I see a tendency to too frequently reevaluate timeline analyses of what’s going to happen in AI.

My answer to you is kind of everyone, but not everyone at once.

Anthony: I think there’s one other interesting question, which is the degree to which we want there to be accurate predictions and for lots of people to know what those accurate predictions are.

In general, I think more information is better, but it’s not necessarily the case that more information is better all the time. Suppose that I became totally convinced, using Metaculus, that there was a high probability that artificial superintelligence was happening in the next 10 years. That would be a pretty big deal. I’d really want to think through what effect that information would have on various actors: national governments, companies, and so on. It could instigate a lot of issues. Those are things that I think we have to consider really carefully.

Andrew: Yeah, Anthony, I think that’s a great important issue. I don’t think there are enough scientific norms in circulation for what to do with a potentially dangerous discovery. Honestly, I feel like the discourse in most of science is a little bit head in the sand about the feasibility of creating existential risks from technology.

You might think it would be so silly and dumb to have some humans produce some technology that accidentally destroyed life, but just because it’s silly doesn’t mean it won’t happen. It’s the bystander effect. It’s very easy for us to fall into the trap of: “I don’t need to worry about developing dangerous technology, because if I was close to something dangerous, surely someone would have thought that through.”

You have to ask: whose job is it to be worried? If no one in the artificial intelligence community is the point person for noticing existential threats, maybe no one will notice the existential threats, and that will be bad. The same goes for technology that could be used by bad actors to produce dangerous synthetic viruses.

If you’ve got something that you think is 1% likely to pose an extinction threat, that seems like a small probability. Nonetheless, if 100 people each have a 1% chance of causing human extinction, then collectively there’s a good chance that someone does.
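To spell out the arithmetic behind that claim (an editorial aside, which assumes the 100 risks are independent): the chance that at least one of 100 independent 1% risks materializes is 1 - 0.99^100, or roughly 63%.

```python
p_single = 0.01   # each actor's assumed chance of causing an extinction-level accident
n_actors = 100

# Probability that at least one of the independent risks materializes.
p_at_least_one = 1 - (1 - p_single) ** n_actors
print(f"{p_at_least_one:.0%}")   # ~63%
```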

Ariel: Is there something hopeful that you want to add?

Anthony: Pretty much every decision that we make is implicitly built on a prediction. I think that if we can get better at predicting, individually, as a group, as a society, that should really help us choose a more wise path into the future, and hopefully that can happen.

Andrew: Hear, hear.

Visit metaculus.com to try your hand at the art of predicting.

 

Podcast: Banning Nuclear and Autonomous Weapons with Richard Moyes and Miriam Struyk

How does a weapon go from one of the most feared to being banned? And what happens once the weapon is finally banned? To discuss these questions, Ariel spoke with Miriam Struyk and Richard Moyes on the podcast this month. Miriam is Programs Director at PAX. She played a leading role in the campaign banning cluster munitions and developed global campaigns to prohibit financial investments in producers of cluster munitions and nuclear weapons. Richard is the Managing Director of Article 36. He’s worked closely with the International Campaign to Abolish Nuclear Weapons, he helped found the Campaign to Stop Killer Robots, and he coined the phrase “meaningful human control” regarding autonomous weapons.

The following interview has been heavily edited for brevity, but you can listen to it in its entirety here.

Why is a ban on nuclear weapons important, even if nuclear weapons states don’t sign?

Richard: This process came out of the humanitarian impact of nuclear weapons: from the use of a single nuclear weapon, which could potentially kill hundreds of thousands of people, up to the use of multiple nuclear weapons, which could have devastating impacts for human society and for the environment as a whole. These weapons should be considered illegal because their effects cannot be contained or managed in a way that avoids massive suffering.

At the same time, it’s a process that’s changing the landscape against which those states continue to maintain and assert the validity of their nuclear weapons. By changing that legal background, we’re potentially in a position to put much more pressure on those states to move towards disarmament as a long-term agenda.

Miriam: At a time when we see erosion of international norms, it’s quite astonishing that in less than two weeks, we’ll have an international treaty banning nuclear weapons. For too long nuclear weapons were mythical, symbolic weapons, but we never spoke about what these weapons actually do and whether we think that’s illegal.

This treaty brings back the question of what these weapons actually do and whether we want that.

It also brings democratization of security policy. This is a process that was brought about by several states and also by NGOs, by the ICRC and other actors. It’s so important that it’s actually citizens speaking about nukes and whether we think they’re acceptable or not.

What is an autonomous weapon system?

Richard: If I might just backtrack a little — an important thing to recognize in all of these contexts is that these weapons don’t prohibit themselves — weapons have been prohibited because a diverse range of actors from civil society and from international organizations and from states have worked together.

Autonomous weapons are really an issue of new and emerging technologies and the challenges that new and emerging technologies present to society particularly when they’re emerging in the military sphere — a sphere which is essentially about how we’re allowed to kill each other or how we’re allowed to use technologies to kill each other.

Autonomous weapons are a movement in technology to a point where we will see computers and machines making decisions about where to apply force, about who to kill when we’re talking about people, or what objects to destroy when we’re talking about material.

What is the extent of autonomous weapons today versus what do we anticipate will be designed in the future?

Miriam: It depends a lot on your definition, of course. I’m still, in a way, a bit of an optimist, in that perhaps we can prevent the emergence of lethal autonomous weapon systems. But I also see similarities with nuclear weapons a few decades ago: lethal autonomous weapons systems can lead to an arms race, to more global insecurity, and ultimately to warfare.

The way we’re approaching lethal autonomous weapon systems is to try to ban them before we see horrible humanitarian consequences. How does that change your approach from previous weapons?

Richard: That this is a more future-orientated debate definitely creates different dynamics. But other weapon systems have been prohibited. Blinding laser weapons were prohibited when there was concern that laser systems designed to blind people were going to become a feature of the battlefield.

In terms of autonomous weapons, we already see significant levels of autonomy in certain weapon systems today, and again I agree with Miriam that certain definitional issues are very important in all of this.

One of the ways we’ve sought to orientate to this is by thinking about the concept of meaningful human control. What are the human elements that we feel are important to retain? We are going to see more and more autonomy within military operations. But in certain critical functions around how targets are identified and how force is applied and over what period of time — those are areas where we will potentially see an erosion of a level of human, essentially moral, engagement that is fundamentally important to retain.

Miriam: This is not so much about a weapon system as about how we control warfare and how we maintain human control, in the sense that it’s a human deciding who is a legitimate target and who isn’t.

An argument in favor of autonomous weapons is that they can ideally make decisions better than humans and potentially reduce civilian casualties. How do you address that argument?

Miriam: We’ve had that debate with other weapon systems as well, where the technological possibilities turned out not to be what was promised once the weapons were actually used.

It’s an unfair debate because it mainly comes from states with developed industries, which are most likely to be the ones using some form of lethal autonomous weapons system first. Flip the question and ask, ‘What if these systems were used against your soldiers or in your country?’ Suddenly you enter a whole different debate. I’m highly skeptical of people who say it could actually be beneficial.

Richard: I feel like there are assertions of “goodies” and “baddies” and of our ability to tell one from the other. To categorize people and things in society in such an accurate way is somewhat illusory and something of a misunderstanding of the reality of conflict.

Any claims that we can somehow perfect violence in a way where it can be distributed by machinery to those who deserve to receive it and that there’s no tension or moral hazard in that — that is extremely dangerous as an underpinning concept because, in the end, we’re talking about embedding categorizations of people and things within a micro bureaucracy of algorithms and labels.

Violence in society is a human problem and it needs to continue to be messy to some extent if we’re going to recognize it as a problem.

What is the process right now for getting lethal autonomous weapons systems banned?

Miriam: We started the Campaign to Stop Killer Robots in 2013 — it immediately gave a push to the international discussion, including at the Human Rights Council and within the Convention on Certain Conventional Weapons (CCW) in Geneva. We saw a lot of debates there in 2013, 2014, and 2015, and the last one was in April.

At the last CCW meeting, it was decided that a group of governmental experts should begin work within the CCW to look at these types of weapons, which was applauded by many states.

Unfortunately, due to financial issues, the meeting has been canceled, so we’re in a bit of a silent mode right now. But that doesn’t mean there’s no progress. We have 19 states that have called for a ban, and more than 70 states within the CCW framework discussing this issue. We know from other treaties that you need these kinds of building blocks.

Richard: Engaging scientists and roboticists and AI practitioners around these themes matters — one of the challenges is that issues around weapons and conflict can sometimes be treated as very separate from other parts of society. It is significant that the decisions made about the limits of AI-driven decision-making over life and death in the context of weapons could well have implications for how expectations and discussions get set elsewhere in the future.

What is most important for people to understand about nuclear and autonomous weapon systems?

Miriam: Both systems go way beyond the discussion about weapon systems: it’s about what kind of world and society do we want to live in. None of these — not killer robots, not nuclear weapons — are an answer to any of the threats that we face right now, be it climate change, be it terrorism. It’s not an answer. It’s only adding more fuel to an already dangerous world.

Richard: Nuclear weapons — they’ve somehow become a very abstract, rather distant issue. Simple recognition of the scale of humanitarian harm from a nuclear weapon is the most substantial thing — hundreds of thousands killed and injured. [Leaders of nuclear states are] essentially talking about incinerating hundreds of thousands of normal people — probably in a foreign country — but recognizable, normal people. The idea that that can be approached in some ways glibly or confidently at all is I think very disturbing. And expecting that at no point will something go wrong — I think it’s a complete illusion.

On autonomous weapons — what sort of society do we want to live in, and how much are we prepared to hand over to computers and machines? I think handing more and more violence over to such processes does not augur well for our societal development.

This podcast was edited by Tucker Davey.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Podcast: Creative AI with Mark Riedl & Scientists Support a Nuclear Ban

If future artificial intelligence systems are to interact with us effectively, Mark Riedl believes we need to teach them “common sense.” In this podcast, I interviewed Mark to discuss how AIs can use stories and creativity to understand and exhibit culture and ethics, while also gaining “common sense reasoning.” We also discuss the “big red button” problem in AI safety, the process of teaching rationalization to AIs, and computational creativity. Mark is an associate professor at the Georgia Tech School of Interactive Computing, where his recent work focuses on human-AI interaction and how humans and AI systems can understand each other.

The following transcript has been heavily edited for brevity (the full podcast also includes interviews about the UN negotiations to ban nuclear weapons, not included here). You can read the full transcript here.

Ariel: Can you explain how an AI could learn from stories?

Mark: I’ve been looking at ‘common sense errors’ or ‘common sense goal errors.’ When humans want to communicate to an AI system what they want to achieve, they often leave out the most basic rudimentary things. We have this model that whoever we’re talking to understands the everyday details of how the world works. If we want computers to understand how the real world works and what we want, we have to figure out ways of slamming lots of common sense, everyday knowledge into them.

When looking for sources of common sense knowledge, we started looking at stories – fiction, non-fiction, blogs. When we write stories we implicitly put everything that we know about the real world and how our culture works into characters.

One of my long-term goals is to ask: ‘How much cultural and social knowledge can we extract by reading stories, and can we get this into AI systems that have to solve everyday problems, like a butler robot or a healthcare robot?’

Ariel: How do you choose which stories to use?

Mark: Through crowdsourcing services like Mechanical Turk, we ask people to tell stories about common activities, like how you go to a restaurant or how you catch an airplane. Lots of people tell a story about the same topic, with agreements and disagreements, but the disagreements are a very small proportion. So we build an AI system that looks for commonalities. The common elements that everyone implicitly agrees on bubble to the top, and the outliers get left aside. AI is really good at finding patterns.
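As a toy illustration of how common elements might “bubble to the top” (my own sketch, not the actual system Mark's group built), one could count how many storytellers mention each event and keep only the events a majority agree on:

```python
from collections import Counter

# Hypothetical crowdsourced tellings of "going to a restaurant",
# already reduced to sequences of simple event labels.
stories = [
    ["enter", "wait for table", "sit down", "order", "eat", "pay", "leave"],
    ["enter", "sit down", "order", "eat", "pay", "tip", "leave"],
    ["enter", "sit down", "order", "eat", "complain", "pay", "leave"],
]

# Count how many storytellers mention each event.
counts = Counter(event for story in stories for event in set(story))

# Keep events mentioned by a majority of storytellers; rare outliers drop out.
threshold = len(stories) / 2
common_script = [event for event, n in counts.most_common() if n > threshold]
print(common_script)   # majority events, e.g. enter, sit down, order, eat, pay, leave
```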

Ariel: How do you ensure that’s happening?

Mark: When we test our AI system, we watch what it does, and we have things we do not want to see the AI do. But we don’t tell it in advance. We’ll put it into new circumstances and say, do the things you need to do, and then we’ll watch to make sure those [unacceptable] things don’t happen.

When we talk about teaching robots ethics, we’re really asking how we help robots avoid conflict with society and culture at large. We have socio-cultural patterns of behavior to help humans avoid conflict with other humans. So when I talk about teaching morality to AI systems, what we’re really talking about is: can we make AI systems do the things that humans normally do? That helps them fit seamlessly into society.

Stories are written by all different cultures and societies, and they implicitly encode moral constructs and beliefs into their protagonists and antagonists. We can look at stories from different continents and even different subcultures, like inner city versus rural.

Ariel: I want to switch to your recent paper on Safely Interruptible Agents, which was popularized in the media as the big red button problem.

Mark: At some point we’ll have robots and AI systems that are so sophisticated in their sensory abilities and their abilities to manipulate the environment, that they can theoretically learn that they have an off switch – what we call the big red button – and learn to keep humans from turning them off.

If an AI system gets a reward for doing something, turning it off means it loses the reward. A robot that’s sophisticated enough can learn that certain actions in the environment reduce future loss of reward. We can think of different scenarios: locking a door to a control room so the human operator can’t get in, physically pinning down a human. We can let our imaginations go even wilder than that.

Robots will always be capable of making mistakes. We’ll always want an operator in the loop who can push this big red button and say: ‘Stop. Someone is about to get hurt. Let’s shut things down.’ We don’t want robots learning that they can stop humans from stopping them, because that ultimately will put people in harm’s way.

Google and their colleagues came up with this idea of modifying the basic algorithms inside learning robots, so that they are less capable of learning about the big red button. And they came up with this very elegant theoretical framework that works, at least in simulation. My team and I came up with a different approach: to take this idea from The Matrix, and flip it on its head. We use the big red button to intercept the robot’s sensors and motor controls and move it from the real world into a virtual world, but the robot doesn’t know it’s in a virtual world. The robot keeps doing what it wants to do, but in the real world the robot has stopped moving.

Ariel: Can you also talk about your work on explainable AI and rationalization?

Mark: Explainability is a key dimension of AI safety. When AI systems do something unexpected or fail unexpectedly, we have to answer fundamental questions: Was this robot trained incorrectly? Did the robot have the wrong data? What caused the robot to go wrong?

If humans can’t trust AI systems, they won’t use them. You can think of it as a feedback loop, where the robot should understand humans’ common sense goals, and the humans should understand how robots solve problems.

We came up with this idea called rationalization: can we have a robot talk about what it’s doing as if a human were doing it? We get a bunch of humans to do some tasks, we get them to talk out loud, we record what they say, and then we teach the robot to use those same words in the same situations.

We’ve tested it in computer games. We have an AI system that plays Frogger, the classic arcade game in which the frog has to cross the street. And we can have the Frogger agent talk about what it’s doing. It’ll say things like “I’m waiting for a gap in the cars to open before I can jump forward.”

This is significant because that’s what you’d expect something to say, but the AI system is doing something completely different behind the scenes. We don’t want humans watching Frogger to have to know anything about rewards and reinforcement learning and Bellman equations. It just sounds like it’s doing the right thing.

Ariel: Going back a little in time – you started with computational creativity, correct?

Mark: I have ongoing research in computational creativity. When I think of human-AI interaction, I really think, ‘What does it mean for AI systems to be on par with humans?’ The human is going to make cognitive leaps and creative associations, and if the computer can’t make those cognitive leaps, it ultimately won’t be useful to people.

I have two things that I’m working on in terms of computational creativity. One is story writing. I’m interested in how much of the creative process of storytelling we can offload from the human onto a computer. I’d like to go up to a computer and say, “hey computer, tell me a story about X, Y or Z.”

I’m also interested in whether an AI system can build a computer game from scratch. How much of the process of building the construct can the computer do without human assistance?

Ariel: We see fears that automation will take over jobs, but typically for repetitive tasks. We’re still hearing that creative fields will be much harder to automate. Is that the case?

Mark: I think it’s a long, hard climb to the point where we’d trust AI systems to make creative decisions, whether it’s writing an article for a newspaper or making art or music.

I don’t see it as a replacement so much as an augmentation. I’m particularly interested in novice creators – people who want to do something artistic but haven’t learned the skills. I cannot read or write music, but sometimes I get these tunes in my head and I think I can make a song. Can we bring the AI in to become the skills assistant? I can be the creative lead and the computer can help me make something that looks professional. I think this is where creative AI will be the most useful.

For the second half of this podcast, I spoke with scientists, politicians, and concerned citizens about why they support the upcoming negotiations to ban nuclear weapons. Highlights from these interviews include comments by Congresswoman Barbara Lee, Nobel Laureate Martin Chalfie, and FLI president Max Tegmark.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Podcast: Climate Change with Brian Toon and Kevin Trenberth

Too often, the media focus their attention on climate-change deniers, and as a result, when scientists speak with the press, it’s almost always a discussion of whether climate change is real. Unfortunately, that can make it harder for those who recognize that climate change is a legitimate threat to fully understand the science and impacts of rising global temperatures.

I recently visited the National Center for Atmospheric Research in Boulder, CO and met with climate scientists Dr. Kevin Trenberth and CU Boulder’s Dr. Brian Toon to have a different discussion. I wanted better answers about what climate change is, what its effects could be, and how we can prepare for the future.

The discussion that follows has been edited for clarity and brevity, and I’ve added occasional comments for context. You can also listen to the podcast above or read the full transcript here for more in-depth insight into these issues.

Our discussion began with a review of the scientific evidence behind climate change.

Trenberth: “The main source of human-induced climate change is from increasing carbon dioxide and other greenhouse gases in the atmosphere. And we have plenty of evidence that we’re responsible for the over 40% increase in carbon dioxide concentrations in the atmosphere since pre-industrial times, and more than half of that has occurred since 1980.”

Toon: “I think the problem is that carbon dioxide is rising proportional to population on the Earth. If you just plot carbon dioxide in the last few decades versus global population, it tracks almost exactly. In coming decades, we’re increasing global population by a million people a week. That’s a new city in the world of a million people every week somewhere, and the amount of energy that’s already committed to supporting this increasing population is very large.”

The financial cost of climate change is also quite large.

Trenberth: “2012 was the warmest year on record in the United States. There was a very widespread drought that occurred, starting here in Colorado, in the West. The drought itself was estimated to cost about $75 billion. Superstorm Sandy is a different example, and the damages associated with that are, again, estimated to be about $75 billion. At the moment, the cost of climate and weather related disasters is something like $40 billion a year.”

We discussed possible solutions to climate change, but while solutions exist, it was easy to get distracted by just how large – and deadly — the problem truly is.

Toon: “Technologically, of course, there are lots of things we can do. Solar energy and wind energy are both approaching or passing the cost of fossil fuels, so they’re advantageous. [But] there’s other aspects of this like air pollution, for example, which comes from burning a lot of fossil fuels. It’s been estimated to kill seven million people a year around the Earth. Particularly in countries like China, it’s thought to be killing about a million people a year. Even in the United States, it’s causing probably 10,000 or more deaths a year.”

Unfortunately, Toon may be underestimating the number of US deaths resulting from air pollution. A 2013 study out of MIT found that air pollution causes roughly 200,000 early deaths in the US each year. And there’s still the general problem that carbon in the atmosphere (not the same as air pollution) really isn’t something that will go away anytime soon.

Toon: “Carbon dioxide has a very, very long lifetime. Early IPCC reports would often say carbon dioxide has a lifetime of 50 years. Some people interpreted that to mean it’ll go away in 50 years, but what it really meant was that it would go into equilibrium with the oceans in about 50 years. When you go somewhere in your car, about 20% of that carbon dioxide that is released to the atmosphere is still going to be there in thousands of years. The CO2 has lifetimes of thousands and thousands of years, maybe tens or hundreds of thousands of years. It’s not reversible.”

Trenberth: “Every springtime, the trees take up carbon dioxide and there’s a draw-down of carbon dioxide in the atmosphere, but then, in the fall, the leaves fall on the forest floor and the twigs and branches and so on, and they decay and they put carbon dioxide back into the atmosphere. People talk about growing more trees, which can certainly take carbon dioxide out of the atmosphere to some extent, but then what do you do with all the trees? That’s part of the issue. Maybe you can bury some of them somewhere, but it’s very difficult. It’s not a full solution to the problem.”

Toon: “The average American uses the equivalent of about five tons of carbon a year – that’s an elephant or two. That means every year you have to go out in your backyard and bury an elephant or two.”

We know that climate change is expected to impact farming and sea levels. And we know that the temperature changes and increasing ocean acidification could cause many species to go extinct. But for the most part, scientists aren’t worried that climate change alone could cause the extinction of humanity. However, as a threat multiplier – that is, something that triggers other problems – climate change could lead to terrible famines, pandemics, and war. And some of this may already be underway.

Trenberth: “You don’t actually have to go a hundred years or a thousand years into the future before things can get quite disrupted relative to today. You can see some signs of that if you look around the world now. There’s certainly studies that have suggested that the changes in climate, and the droughts that occur and the wildfires and so on are already extra stressors on the system and have exacerbated wars in Sudan and in Syria. It’s one of the things which makes it very worrying for security around the world to the defense department, to the armed services, who are very concerned about the destabilizing effects of climate change around the world.”

Some of the instabilities around the world today are already leading to discussion about the possibility of using nuclear weapons. But too many nuclear weapons could trigger the “other” climate change: nuclear winter.

Toon: “Nuclear winter is caused by burning cities. If there were a nuclear war in which cities were attacked then the smoke that’s released from all those fires can go into the stratosphere and create a veil of soot particles in the upper atmosphere, which are very good at absorbing sunlight. It’s sort of like geoengineering in that sense; it reduces the temperature of the planet. Even a little war between India and Pakistan, for example — which, incidentally, have about 400 nuclear weapons between them at the moment — if they started attacking each other’s cities, the smoke from that could drop the temperature of the Earth back to preindustrial conditions. In fact, it’d be lower than anything we’ve seen in the climate record since the end of the last ice age, which would be devastating to mid-latitude agriculture.

“This is an issue people don’t really understand: the world’s food storage is only about 60 days. There’s not enough food on the planet to feed the population for more than 60 days. There’s only enough food in an average city to feed the city for about a week. That’s the same kind of issue we’re coming to with the changes in agriculture that we might face in the next century just from global warming. You have to be able to make up those food losses by shipping food from somewhere else. Adjusting to that takes a long time.”

Concern about our ability to adjust was a common theme. Climate change is occurring so rapidly that it will be difficult for all species, even people, to adapt quickly enough.

Trenberth: “We’re way behind in terms of what is needed, because if you start really trying to take serious action on this, there’s a built-in delay of 20 or 30 years because of the infrastructure that you have to change around. Then there’s another 20-year delay because the oceans respond very, very slowly. If you start making major changes now, you end up experiencing the effects of those changes maybe 40 years from now or so. You’ve really got to get ahead of this.

“The atmosphere is a global commons. It belongs to everyone. The air that’s over the US, a week later is over in Europe, and a week later it’s over China, and then a week later it’s back over the US again. If we dump stuff into the atmosphere, it gets shared among all of the nations.”

Toon: “Organisms are used to evolving and compensating for things, but not on a 40-year timescale. They’re used to slowly evolving and slowly responding to the environment, and here they’re being forced to respond very quickly. That’s an extinction problem. If you make a sudden change in the environment, you can cause extinctions.”

As dire as the situation might seem, there are still ways in which we can address climate change.

Toon: “I’m hopeful that, at the local level, things will happen. I’m hopeful that money will be made out of converting to other energy systems, and that those things will move us forward despite the apparent inability of politicians to deal with things.”

Trenberth: “The real way of doing this is probably to create other kinds of incentives, such as a carbon tax, as it’s often called, or a fee on carbon of some sort, which recognizes the downstream effects of burning coal, both in terms of air pollution and in terms of climate change. Those costs are currently not built into the price of burning coal, and they really ought to be.”

Toon: “[There] is not really a question anymore about whether climate change is occurring or not. It certainly is occurring. However, how do you respond to that? What do you do? At least in the United States, it’s very clear that we’re a capitalistic society, and so we need to make it economically advantageous to develop these new energy technologies. I suspect that we’re going to see the rise of China and Asia in developing renewable energy and selling that throughout the world for the reason that it’s cheaper and they’ll make money out of it. [And] we’ll wake up behind the curve.”

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Podcast: Law and Ethics of Artificial Intelligence

The rise of artificial intelligence presents not only technical challenges, but important legal and ethical challenges for society, especially regarding machines like autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose scholarship focuses on the intersection between law and artificial intelligence. Ryan is an assistant professor of philosophy and a senior fellow at the Ethics and Emerging Sciences Group at California Polytechnic State University, where he studies the ethics of technology.

In this podcast, we discuss accountability and transparency with autonomous systems, government regulation vs. self-regulation, fake news, and the future of autonomous systems.

The following interview has been heavily edited for brevity, but you can listen to it in its entirety above or read the full transcript here.

Ariel: I typically think of ethics as the driving force behind law. As such, Ryan, I was hoping you could talk about the ethical issues facing us today when it comes to artificial intelligence.

Ryan: Broadly speaking, the mission of both ethics and law might be to discover how to best structure life within a community and to see to it that that community does flourish once we know certain truths. Ethics does some of the investigation about what kinds of things matter morally, what kinds of lives are valuable, how should we treat other people. Law does an excellent job of codifying those things and enforcing those things.

One of the easiest ways of telling whether a decision is a moral decision is whether it stands to make some people better off and some people worse off. And we’re seeing that take place right now with artificial intelligence. That adds new wrinkles to these decisions because oftentimes the decisions of AI are opaque to us, they’re difficult to understand, they might be totally mysterious. And while we’re fascinated by what AI can do, I think the developers of AI have implemented these technologies before we fully understand what they’re capable of and how they’re making decisions.

Ariel: Can you give some examples of that?

Ryan: There was an excellent piece by ProPublica about bias in the criminal justice system, where courts use risk assessment algorithms to judge, for example, a person’s probability of committing another crime after they’re released from prison.

ProPublica did an audit of this software, and they found that not only does it make mistakes about half the time, but it was systematically underestimating the threat from white defendants and systematically overestimating the threat from black defendants. White defendants were being given more lenient sentences; black defendants as a group were being given harsher sentences.

When the company that produced the algorithm was asked about this, they said, ‘Look, it takes in something like 137 factors, but race is not one of them.’ So it was making mistakes that were systematically biased in a way that was race-based, and it was difficult to explain why. This is the kind of opaque decision making that’s taking place by artificial intelligence.

Ariel: As AI advances, what are some of the ethical issues that you anticipate cropping up?

Ryan: There’s been a lot of ink spilled about the threat that automation poses to employment. Some of the numbers coming out of places like Oxford are quite alarming. They say as many as 50% of American jobs could be eliminated by automation in the next couple of decades.

Besides the obvious fact that having unemployed people is bad for society, it raises more foundational questions about the way that we think about work, the way that we think about people having to “earn a living” or “contribute to society.” The idea that someone needs to work in order to be kept alive. And most of us walk around with some kind of moral claim like this in our back pocket without fully considering the implications.

Ariel: And Matt, what are some of the big legal issues facing us today when it comes to artificial intelligence?

Matt: The way that legal systems across the world work is by assigning legal rights and responsibilities to people. The assumption is that any decision that has an impact on the life of another person is going to be made by a person. So when you have a machine making the decisions rather than humans, one of the fundamental assumptions of our legal system goes away. Eventually that’s going to become very difficult, because AI seems to promise to displace human decision-makers across a wide variety of sectors. As that happens, it’s going to be much more complicated to come up with lines of legal responsibility.

I don’t think we can comprehend what society is going to be like 50 years from now if a huge number of industries ranging from medicine to law to financial services are in large part being run by the decisions of machines. At some point, the question is how much control can humans really say that they still have.

Ariel: You were talking earlier about decision making with autonomous technologies, and one of the areas where we see this is with self driving cars and autonomous weapons. I was hoping you could both talk about the ethical and legal implications in those spheres.

Matt: Part of the problem with relying on law to set standards of behavior is that law does not move as fast as technology does. It’s going to be a long time before the really critical parts of our legal systems are changed in a way that allows for the widespread deployment of autonomous vehicles.

One thing that I could envision happening in the next 10 years is that pretty much all new vehicles while they’re on an expressway are controlled by an autonomous system, and it’s only when they get off an expressway and onto a surface street that they switch to having the human driver in control of the vehicle. So, little by little, we’re going to see this sector of our economy get changed radically.

Ryan: One of my favorite philosophers of technology [is] Langdon Winner. His famous view is that we are sleepwalking into the future of technology. We’re continually rewriting and recreating these structures that affect how we’ll live, how we’ll interact with each other, what we’re able to do, what we’re encouraged to do, what we’re discouraged from doing. We continually recreate these constraints on our world, and we do it oftentimes without thinking very carefully about it. To steal a line from Winston Churchill, technology seems to get halfway around the world before moral philosophy can put its pants on. And we’re seeing that happening with autonomous vehicles.

Tens of thousands of people die on US roads every year. Oftentimes those crashes involve choices about who is going to be harmed and who’s not, even if that’s a trade-off between someone outside the car and a passenger or a driver inside the car.

These are clearly morally important decisions, and it seems that manufacturers are still trying to brush them aside. They’re either saying that these are not morally important decisions, or they’re saying that the answers to them are obvious. They’re certainly not always questions with obvious answers. Or if the manufacturers admit that these are difficult questions, then they think, ‘well, the decisions are rare enough that to agonize over them might postpone other advancements in the technology’. That would be a legitimate concern if these decisions really were rare, but tens of thousands of people are killed on US roads and hundreds of thousands are injured every year.

Ariel: I’d like to also look at autonomous weapons. Ryan, what’s your take on some of the ethical issues?

Ryan: There could very well be something that’s uniquely troubling, uniquely morally problematic about delegating the task of who should live and who should die to a machine. But once we dig into these arguments, it’s extremely difficult to pinpoint exactly what’s problematic about killer robots. We’d be right to think, today, that machines probably aren’t reliable enough to make discernments in the heat of battle about which people are legitimate targets and which people are not. But if we imagine a future where robots are actually pretty good at making those kinds of decisions, where they’re perhaps even better behaved than human soldiers, where they don’t get confused, they don’t see their comrade killed and go on a killing spree or go into some berserker rage, and they’re not racist, or they don’t have the kinds of biases that humans are vulnerable to…

If we imagine a scenario where we can greatly reduce the number of innocent people killed in war, this starts to exert a lot of pressure on that widely held public intuition that autonomous weapons are bad in themselves, because it puts us in the position then of insisting that we continue to use human war fighters to wage war even when we know that will contribute to many more people dying from collateral damage. That’s an uncomfortable position to defend.

Ariel: Matt, how do we deal with accountability?

Matt: Autonomous weapons are inherently going to be capable of reacting on time scales shorter than humans can. I can easily imagine it reaching the point very quickly where the only way that you can counteract an attack by an autonomous weapon is with another autonomous weapon. Eventually, having humans involved in the military conflict will be the equivalent of bringing bows and arrows to a battle in World War II.

At that point, you start to wonder where human decision makers can enter into the military decision making process. Right now there are very clear, well-established laws in place about who is responsible for specific military decisions: under what circumstances a soldier is held accountable, under what circumstances their commander is held accountable, under what circumstances the nation is held accountable. That’s going to become much blurrier when the decisions are not being made by human soldiers, but rather by autonomous systems. It’s going to become even more complicated as machine learning technology is incorporated into these systems, where they learn from their observations and experiences in the field about the best way to react to different military situations.

Ariel: Matt, in recent talks you mentioned that you’re less concerned about regulations for corporations because it seems like corporations are making an effort to essentially self-regulate. I’m interested in how that compares to concerns about government misusing AI and whether self-regulation is possible with government.

Matt: With the advent of the internet, we are living in an age shaped by an inherently decentralizing force. In a decentralizing world, we’re going to have to think of new paradigms for how to regulate and govern the behavior of economic actors. It might make sense to reexamine some of those decentralized forms of regulation, and one of those is industry standards and self-regulation.

One reason why I am particularly hopeful in the sphere of AI is that there really does seem to be a broad interest among the largest players in AI to proactively come up with rules of ethics and transparency, in ways that we generally just haven’t seen since the Industrial Revolution.

One unfortunate macro trend on the world stage today is increasingly nationalist tendencies. That leads me to be more concerned than I would have been 10 years ago that these technologies are going to be co-opted by governments, and, ironically, that it’s going to be governments rather than companies that are the greatest obstacle to transparency, because they will want to establish some sort of national monopoly on the technologies within their borders.

Ryan: I think that international norms of cooperation can be valuable. The United States is not a signatory to the Ottawa Treaty that banned anti-personnel landmines, but because so many other countries are, there exists the informal stigma that’s attached to it, that if we used anti-personnel landmines in battle, we’d face backlash that’s probably equivalent to if we had been signatories of that treaty.

So international norms of cooperation, they’re good for something, but they’re also fragile. For example, in much of the western world, there has existed an informal agreement that we’re not going to experiment by modifying the genetics of human embryos. So it was a shock a year or two ago when some Chinese scientists announced that they were doing just that. I think it was a wake up call to the West to realize those norms aren’t universal, and it was a valuable reminder that when it comes to things that are as significant as modifying the human genome or autonomous weapons and artificial intelligence more generally, they have such profound possibilities for reshaping human life that we should be working very stridently to try to arrive at some international agreements that are not just toothless and informal.

Ariel: I want to go in a different direction and ask about fake news. I was really interested in what you both think of this from a legal and ethical standpoint.

Matt: Because there are now so many different sources for news, it becomes increasingly difficult to decide what is real. And there is a loss that we are starting to see in our society of that shared knowledge of facts. There are literally different sets of not just worldviews, but of worlds, that people see around them.

A lot of fake news websites aren’t intentionally trying to make large amounts of money, so even if a fake news story does monumental damage, you’re not going to be able to recoup the damages to your reputation from that person or that entity. It’s an area where it’s difficult for me to envision how the law can manage that, at least unless we come up with new regulatory paradigms that reflect the fact that our world is going to be increasingly less centralized than it has been during the industrial age.

Ariel: Is there anything else that you think is important for people to know?

Ryan: There is still a great value in appreciating when we’re running roughshod over questions that we didn’t even know existed. That is one of the valuable contributions that [moral philosophers] can make here, is to think carefully about the way that we behave, the way that we design our machines to interact with one another and the kinds of effects that they’ll have on society.

It’s reassuring that people are taking these questions very seriously when it comes to artificial intelligence, and I think that the advances we’ve seen in artificial intelligence in the last couple of years have been the impetus for this turn towards the ethical implications of the things we create.

Matt: I’m glad that I got to hear Ryan’s point of view. The law is becoming a less effective tool for managing the societal changes that are happening. And I don’t think that that will change unless we think through the ethical questions and the moral dilemmas that are going to be presented by a world in which decisions and actions are increasingly undertaken by machines rather than people.

This podcast and transcript were edited by Tucker Davey.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Podcast: UN Nuclear Weapons Ban with Beatrice Fihn and Susi Snyder

Last October, the United Nations passed a historic resolution to begin negotiations on a treaty to ban nuclear weapons. Previous nuclear treaties have included the Test Ban Treaty and the Non-Proliferation Treaty. But in the 70-plus years of the United Nations, countries have yet to agree on a treaty to completely ban nuclear weapons. The negotiations will begin this March. To discuss the importance of this event, I interviewed Beatrice Fihn and Susi Snyder. Beatrice is the Executive Director of the International Campaign to Abolish Nuclear Weapons, also known as ICAN, where she is leading a global campaign consisting of about 450 NGOs working together to prohibit nuclear weapons. Susi is the Nuclear Disarmament Program Manager for PAX in the Netherlands, and the principal author of the Don’t Bank on the Bomb series. She is an International Steering Group member of ICAN.

The following interview has been heavily edited for brevity, but you can listen to it in its entirety above or read the full transcript here.

ARIEL: First, Beatrice, you spearheaded much, if not all, of this effort. Can you explain: What is the ban? What will it cover? What’s going to be prohibited? And Susi, can you weigh in as well?

BEATRICE: So, it sounds counterintuitive, but nuclear weapons are really the only weapons of mass destruction that are not prohibited by an international treaty. We prohibited chemical weapons and biological weapons, landmines and cluster munitions—but nuclear weapons are still legal for some.

We’re hoping that this treaty will be a very clear-cut prohibition; that nuclear weapons are illegal because of the humanitarian consequences that they cause if used. And it should cover things like using nuclear weapons, possessing nuclear weapons, transferring nuclear weapons, and assisting with those kinds of things. Basically, a very straightforward treaty that makes it clear that, under international law, nuclear weapons are unacceptable.

SUSI: This whole system where some people think that nuclear weapons are legal for them, but they’re illegal for others—that’s a problem. Negotiations are going to start to make nuclear weapons illegal for everybody.

The thing is, nobody can deal with the consequences of using nuclear weapons. What better cure than to prevent it? And the way to prevent it is to ban the weapons.

ARIEL: The UN has been trying to prohibit nuclear weapons since 1945. Why has it taken this long?

BEATRICE: There is no prohibition on nuclear weapons, but there are many treaties and many regulations governing nuclear weapons. Almost all governments in the world agree that nuclear weapons are really bad and they should be eliminated. It’s a strange situation where governments, including the two—Russia and the United States—with the most nuclear weapons, agree ‘these are really horrible weapons, we don’t think they should be used. But we don’t want to prohibit them, because it still kind of suits us that we have them.’

For a very long time, I think the whole world just accepted that nuclear weapons are around. They’re almost these mythical weapons. Much more than just a weapon—they’re magic. They keep peace and stability, they ended World War II, they made sure that there was no big war in Europe during the Cold War. [But] nuclear weapons can’t fight the kind of threats that we face today: climate change, organized crime, terrorism. It’s not an appropriate weapon for this millennium.

SUSI: The thing is, also, now people are talking again. And when you start talking about what it is that nuclear weapons do, you get into the issue of the fact that what they do isn’t contained by a national border. A nuclear weapon detonation, even a small one, would have catastrophic effects and would resonate around the world.

There’s been a long-time focus of making these somehow acceptable; making it somehow okay to risk global annihilation, okay to risk catastrophe. And now it has become apparent to an overwhelming majority of governments that this is not okay.

ARIEL: The majority of countries don’t have nuclear weapons. There’s only a handful of countries that actually have nuclear weapons, and the U.S. and Russia have most of those. And it doesn’t look like the U.S. and Russia are going to agree to the ban. So, if it passes, what happens then? How does it get enforced?

SUSI: If you prohibit the making, having, using these weapons and the assistance with doing those things, we’re setting a stage to also prohibit the financing of the weapons. That’s one way I believe the ban treaty is going to have a direct and concrete impact on existing nuclear arsenals. Because all the nuclear weapon possessors are modernizing their arsenals, and most of them are using private contractors to do so. By stopping the financing that goes into these private contractors, we’re going to change the game.

One of the things we found in talking to financial institutions is that they are waiting and aching for a clear prohibition, because right now the rules are fuzzy. It doesn’t matter whether the U.S. and Russia sign on for the ban to have that kind of impact, because financial institutions operate with their headquarters in lots of other places. We’ve seen with other weapons systems that as soon as they’re prohibited, financial institutions back off, and producers know they’re losing money because of the stigma associated with the weapon.

BEATRICE: I think that sometimes we forget that more than nine states are involved in nuclear weapons. Sure, there are nine states with nuclear weapons: the U.S., the U.K., Russia, France, China, India, Pakistan, Israel, and North Korea.

But there are also five European states that have American nuclear weapons on their soil: Belgium, Germany, Netherlands, Italy, and Turkey. And in addition to that, all of the NATO states and a couple of others—such as Japan, Australia, and South Korea—are a part of the U.S. nuclear umbrella.

We’ve exposed these NATO states and nuclear umbrella states for being a bit hypocritical. They like to think that they are promoters of disarmament, but they are ready to have nuclear weapons used on others on their behalf. So even countries like Norway, for example, are part of a nuclear weapons alliance and say, you know, ‘the U.S. could use nuclear weapons to protect us.’ Use them on what? Maybe on cities and civilians in Russia or China or something like that. And if we argue that people in Norway, one of the safest and richest countries in the world, need to be protected by nuclear weapons, why do we say that people in Iran can’t be protected by similar things? Or people in Lebanon, or anywhere else in the world?

This treaty makes it really clear who is okay with nuclear weapons and who isn’t. And that will create a lot of pressure on those states that enjoy the protection of nuclear weapons today, but are not really comfortable admitting it.

ARIEL: If you look at a map of the countries that opposed the resolution vs. the countries that either supported it or abstained, there is a Northern Hemisphere vs. Southern Hemisphere thing, where the majority of countries in North America, and Europe and Russia all oppose a ban, and the rest of the countries would like to see a ban. It seems that if a war were to break out between nuclear weapon countries, it would impact these northern countries more than the southern countries. I was wondering, is that the case?

BEATRICE: I think countries that have nuclear weapons somehow imagine that they are safer with them. But it makes them targets of nuclear weapons as well. It’s unlikely that anyone would use nuclear weapons to attack Senegal, for example. So I think that people in nuclear-armed states often forget that they are also the targets of nuclear weapons.

I find it very interesting as well. In some ways, we see this as a big fight for equality. A certain type of country—the richest countries in the world, the most militarily powerful with or without the nuclear weapons—have somehow taken power over the ability to destroy the entire earth. And now we’re seeing that other countries are demanding that that ends. And we see a lot of similarities to other power struggles—civil rights movements, women’s right to vote, the anti-Apartheid movement—where a powerful minority oppresses the rest of the world. And when there’s a big mobilization to change that, there’s obviously a lot of resistance. The powerful will never give up that absolute power that they have, voluntarily. I think that’s really what this treaty is about at this point.

SUSI: A lot of it is tied to money, to wealth and to an unequal distribution of wealth, or unequal perception of wealth and the power that is assumed with that unequal distribution. It costs a lot of money to make nuclear weapons, develop nuclear weapons, and it also requires an intensive extraction of resources. And some of those resources have come from some of these states that are now standing up and strongly supporting the negotiations towards the prohibition.

ARIEL: Is there anything you recommend the general public can do?

BEATRICE: We have a website aimed at the public, where you can find out a little bit more about this. You can send an email to your Foreign Minister, tweet at your Foreign Minister, and things like that. It’s called nuclearban.org. We’ll also make sure that when the negotiations are webcast, we share that link on the website.

ARIEL: Just looking at the nuclear weapons countries, I thought it was very interesting that China, India, and Pakistan abstained from voting, and North Korea actually supported a ban. Did that come as a surprise? What does it mean?

BEATRICE: There are a lot of dynamics going on in this, which means the positions are not fixed. I think countries like Pakistan, India, and China have traditionally been very supportive of the UN as a venue to negotiate disarmament. They are states that perhaps think that Russia and the U.S.—which have many more nuclear weapons—are the real problem. They sort of sit on the sidelines with their smaller arsenals, and perhaps don’t feel as much pressure to negotiate as the U.S. and Russia do.

And also, of course, they have very strong connections with the Southern Hemisphere countries, the developing countries. Their decisions on nuclear weapons are very connected to other political issues in international relations. And when it comes to North Korea, I don’t know. It’s very unpredictable. We weren’t expecting them to vote yes, and I don’t know if they will come. It’s quite difficult to predict.

ARIEL: What do you say to people who do think we still need nuclear weapons?

SUSI: I ask them why. Why do they think we need nuclear weapons? Under what circumstance is it legitimate to use a weapon that will level a city? One bomb that destroys a city, and that will cause harm not just to the people who are involved in combat. What justifies that kind of horrible use of a weapon? And what are the circumstances that you’re willing to use them? I mean, what are the circumstances where people feel it’s okay to cause this kind of destruction?

BEATRICE: Nuclear weapons are meant to destroy entire cities—that’s their inherent quality. They mass murder entire communities indiscriminately very, very fast. That’s what they are good at. The weapon itself is meant to kill civilians, and that is unacceptable.

And most people that defend nuclear weapons, they admit that they don’t want to use them. They are never supposed to be used, you are just supposed to threaten with them. And then you get into this sort of illogical debate, about how, in order for the threat to be real—and for others to perceive the threat—you have to be serious about using them. It’s very naive to think that we will get away as a civilization without them being used if we keep them around forever.

SUSI: There’s a reason that nuclear weapons have not been used in war in over 70 years: the horror they unleash is too great. Even military leaders, once they retire and are free to speak their minds, say very clearly that these are not a good weapon for military objectives.

ARIEL: I’m still going back to this— Why now? Why are we having success now?

BEATRICE: It’s very important to remember that we’ve had successes before, and very big ones as well. In 1970, the Nuclear Non-Proliferation Treaty entered into force. And that is the treaty that prevents proliferation of nuclear weapons — the treaty that said, ‘okay, we have these five states, and they’ve already developed weapons, they’re not ready to get rid of them, but at least we’ll cap it there, and no one else is allowed.’ And that really worked quite well. Only four more countries developed nuclear weapons after that. But the rest of the world understood that it was a bad idea. And the big bargain in that treaty was that the five countries that got to keep their nuclear weapons only got to keep them for a while—they committed, that one day they would disarm, but there was no timeline in the treaty. So I think that was a huge success.

In the ‘80s, we saw these huge, huge public mobilization movements and millions of people demonstrating on the street trying to stop the nuclear arms race. And they were very successful as well. They didn’t get total nuclear disarmament, but the nuclear freeze movement achieved a huge victory.

We were very, very close to disarmament at the Reykjavik summit with Gorbachev and Reagan. And that was also a huge success. Governments negotiated the Comprehensive Test Ban Treaty, which prevents countries from testing nuclear weapons. It hasn’t entered into force yet, but almost all states have signed it. It has not been ratified by some key players, like the United States, but the norm is still there, and it’s been quite an effective treaty despite not yet being in force. Only one state, North Korea, has continued testing since the treaty was signed.

But somewhere along the way we got very focused on non-proliferation and trying to stop the testing, stop them producing fissile material, and we forgot to work on the fundamental delegitimization of nuclear weapons. We forgot to say that nuclear weapons are unacceptable. That is what we’re trying to do right now.

SUSI: The world is different in a lot of ways than it was in 1945. The UN is different in a lot of ways. Remember, one of the purposes of the UN at the outset was to help countries decolonize and to restore them to their own people, and that process took some time. In a lot of those countries, those former colonized societies are coming back and saying, ‘well, we have a voice of global security as well, and this is part of ensuring our security.’

This is the moment where this perfect storm has come; we’re prohibiting illegitimate weapons. It’s going to be fun!

BEATRICE: I think that we’ve been very inspired in ICAN by the campaigns to ban landmines and the campaigns to ban cluster munitions, because they were a different type of treaty. Obviously chemical weapons were prohibited, biological weapons were prohibited, but the prohibition processes developed for landmines and cluster munitions were about stigmatizing the weapons, and they didn’t need all states to be on board. And we saw that it worked. Just a few years ago, the United States—which never signed the landmines treaty—announced that it’s basically complying with the treaty. They have one exception at the border of South Korea. That means that they can’t sign it, but otherwise they are complying with it. The market for landmines is pretty much extinct—nobody wants to produce them anymore because countries have banned and stigmatized them.

And with cluster munitions we see a similar trend. We’ve seen those two treaties work, and I think that’s also why we feel confident that we can move ahead this time, even without the nuclear-armed states onboard. It will have an impact anyway.

To learn more about the ban and how you can help encourage your country to support the ban, visit nuclearban.org and icanw.org.

This podcast was edited by Tucker Davey.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Podcast: Top AI Breakthroughs, with Ian Goodfellow and Richard Mallah

2016 saw some significant AI developments. To talk about the AI progress of the last year, we turned to Richard Mallah and Ian Goodfellow. Richard is the Director of AI Projects at FLI, he’s a senior advisor to multiple AI companies, and he created the highest-rated enterprise text analytics platform. Ian is a research scientist at OpenAI, he’s the lead author of the Deep Learning textbook, and he’s a lead inventor of Generative Adversarial Networks.

The following interview has been heavily edited for brevity, but you can listen to it in its entirety above or read the full transcript here.

Ariel: Two events stood out to me in 2016. The first was AlphaGo, which beat the world’s top Go champion, Lee Sedol, last March. What is AlphaGo, and why was this such an incredible achievement?

Ian: AlphaGo was DeepMind’s system for playing the game of Go. It’s a two-player game where you place stones on a board, the object being to capture as much territory as possible. But there are hundreds of different positions where a stone can be placed on each turn. It’s not even remotely possible to use a computer to exhaustively simulate all the different ways a Go game could unfold and figure out how the game will progress in the future. The computer needs to rely on intuition, the same way that human Go players can look at a board and get kind of a sixth sense that tells them whether the game is going well or poorly for them, and where they ought to put the next stone. It’s computationally infeasible to explicitly calculate what each player should do next.

Richard: The DeepMind team has one deep network for what’s called value learning and another for policy learning. The policy network basically answers: which places should I evaluate for the next stone? The value network estimates how good that board state is, in terms of the probability that the agent will win. And then they do a Monte Carlo tree search, which involves some randomness and explores many different paths — on the order of thousands of evaluations. So it’s much more like a human considering a handful of different moves and trying to determine how good those moves would be.
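
For readers who want to see the division of labor Richard describes in code, here is a deliberately toy sketch: a "policy" proposes a handful of candidate moves, a "value" estimate scores the resulting positions, and the best-scoring move is chosen. The networks are random stand-ins, and the real system maintains a full search tree with visit statistics, so treat this as an illustration of the idea rather than DeepMind's algorithm.

```python
import random

# Toy sketch of policy/value-guided move selection (not DeepMind's actual code).
# The "networks" below are random stand-ins for real trained models.

BOARD_MOVES = list(range(361))  # a 19x19 Go board has 361 points

def policy_net(state):
    """Stand-in policy: a probability for each candidate move."""
    scores = [random.random() for _ in BOARD_MOVES]
    total = sum(scores)
    return {move: s / total for move, s in zip(BOARD_MOVES, scores)}

def value_net(state):
    """Stand-in value: estimated probability that the current player wins."""
    return random.random()

def apply_move(state, move):
    """Stand-in transition: the board state after playing `move`."""
    return state + (move,)

def choose_move(state, num_candidates=8, evaluations_per_candidate=50):
    # 1. The policy proposes a handful of promising moves (the "intuition" step).
    priors = policy_net(state)
    candidates = sorted(priors, key=priors.get, reverse=True)[:num_candidates]

    # 2. Each candidate position is scored by averaging many noisy value
    #    estimates, a crude stand-in for the rollouts and tree search.
    best_move, best_score = None, float("-inf")
    for move in candidates:
        next_state = apply_move(state, move)
        score = sum(value_net(next_state) for _ in range(evaluations_per_candidate))
        score /= evaluations_per_candidate
        if score > best_score:
            best_move, best_score = move, score
    return best_move

if __name__ == "__main__":
    print("chosen move:", choose_move(state=()))
```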

Ian: From 2012 to 2015 we saw a lot of breakthroughs where the exciting thing was that AI was able to copy a human ability. In 2016, we started to see breakthroughs that were all about exceeding human performance. Part of what was so exciting about AlphaGo was that AlphaGo did not only learn how to predict what a human expert Go player would do, AlphaGo also improved beyond that by practicing playing games against itself and learning how to be better than the best human player. So we’re starting to see AI move beyond what humans can tell the computer to do.

Ariel: So how will this be applied to applications that we’ll interact with on a regular basis? How will we start to see these technologies and techniques in action ourselves?

Richard: A lot of these techniques are research systems. It’s not necessarily that they’re going to go directly down the pipeline towards productization, but they are helping the models that are implicitly learned inside of AI systems and machine learning systems to get much better.

Ian: There are other strategies for generating new experiences that resemble previously seen experiences. One of them is called WaveNet. It’s a model produced by DeepMind in 2016 for generating speech. If you provide a sentence, just written down, and you’d like to hear that sentence spoken aloud, WaveNet can create an audio waveform that sounds very much like a human pronouncing that written sentence. The main drawback to WaveNet right now is that it’s fairly slow. It has to generate the audio waveform one piece at a time. I believe it takes WaveNet two minutes to produce one second of audio, so it’s not able to make the audio fast enough to hold an interactive conversation.
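
The slowness Ian mentions comes from the model’s autoregressive structure: each new audio sample is predicted from everything generated so far, so the waveform has to be produced one sample at a time. The sketch below uses a random stand-in for the real network, but it shows why even a tenth of a second of 16 kHz audio already requires 1,600 sequential model calls.

```python
import random

# Toy sketch of autoregressive audio generation (a stand-in, not WaveNet itself).

SAMPLE_RATE = 16000  # audio samples per second

def next_sample(text, history):
    """Stand-in for the model: predict the next waveform sample in [-1, 1]."""
    return random.uniform(-1.0, 1.0)

def generate_audio(text, seconds):
    samples = []
    for _ in range(int(seconds * SAMPLE_RATE)):
        # Each sample depends on the full history generated so far, which is
        # why generation is inherently sequential and therefore slow.
        samples.append(next_sample(text, samples))
    return samples

if __name__ == "__main__":
    waveform = generate_audio("Hello, world.", seconds=0.1)
    print(len(waveform), "samples generated")
```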

Richard: And similarly, we’ve seen applications to colorizing black-and-white photos, turning sketches into somewhat photo-realistic images, and turning text into images.

Ian: Yeah one thing that really highlights how far we’ve come is that in 2014, one of the big breakthroughs was the ability to take a photo and produce a sentence summarizing what was in the photo. In 2016, we saw different methods for taking a sentence and producing a photo that contains the imagery described by the sentence. It’s much more complicated to go from a few words to a very realistic image containing thousands or millions of pixels than it is to go from the image to the words.

Another thing that was very exciting in 2016 was the use of generative models for drug discovery. Instead of imagining new images, the model could actually imagine new molecules that are intended to have specific medicinal effects.

Richard: And this is pretty exciting because this is being applied towards cancer research, developing potential new cancer treatments.

Ariel: And then there was Google’s language translation program, Google Neural Machine Translation. Can you talk about what that did and why it was a big deal?

Ian: It’s a big deal for two different reasons. First, Google Neural Machine Translation is a lot better than previous approaches to machine translation. Google Neural Machine Translation removes a lot of the human design elements, and just has a neural network figure out what to do.

The other thing that’s really exciting about Google Neural Machine Translation is that the machine translation models have developed what we call an “Interlingua.” It used to be that if you wanted to translate from Japanese to Korean, you had to find a lot of sentences that had been translated from Japanese to Korean before, and then you could train a machine learning model to copy that translation procedure. But now, if you already know how to translate from English to Korean, and you know how to translate from English to Japanese, in the middle, you have Interlingua. So you translate from English to Interlingua and then to Japanese, English to Interlingua and then to Korean. You can also just translate Japanese to Interlingua and Korean to Interlingua and then Interlingua to Japanese or Korean, and you never actually have to get translated sentences from every pair of languages.
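
As a toy illustration of the Interlingua idea, the sketch below replaces the neural encoders and decoders with tiny made-up dictionaries. Because every language is mapped into one shared representation, a Japanese-to-Korean translation falls out even though no Japanese-Korean pairs were ever provided.

```python
# Toy illustration of translation through a shared "interlingua" representation.
# The dictionaries are invented for illustration; real systems learn neural
# encoders and decoders instead.

ENCODERS = {
    "en": {"hello": "GREETING", "cat": "FELINE"},
    "ja": {"konnichiwa": "GREETING", "neko": "FELINE"},
}
DECODERS = {
    "en": {"GREETING": "hello", "FELINE": "cat"},
    "ko": {"GREETING": "annyeong", "FELINE": "goyangi"},
}

def translate(text, source, target):
    # Encode the source words into the shared representation, then decode
    # that representation into the target language.
    interlingua = [ENCODERS[source][word] for word in text.split()]
    return " ".join(DECODERS[target][symbol] for symbol in interlingua)

if __name__ == "__main__":
    # Japanese -> Korean works even though no Japanese-Korean pairs exist above.
    print(translate("konnichiwa neko", source="ja", target="ko"))
```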

Ariel: How can the techniques that are used for language apply elsewhere? How do you anticipate seeing this developed in 2017 and onward?

Richard: So I think what we’ve learned from the approach is that deep learning systems are able to create extremely rich models of the world that can actually express what we can think, which is a pretty exciting milestone. Being able to combine that Interlingua with more structured information about the world is something that a variety of teams are working on — it is a big, open area for the coming years.

Ian: At OpenAI one of our largest projects, Universe, allows a reinforcement learning agent to play many different computer games, and it interacts with these games in the same way that a human does, by sending key presses or mouse strokes to the actual game engine. The same reinforcement learning agent is able to interact with basically anything that a human can interact with on a computer. By having one agent that can do all of these different things we will really exercise our ability to create general artificial intelligence instead of application-specific artificial intelligence. And projects like Google’s Interlingua have shown us that there’s a lot of reason to believe that this will work.

Ariel: What else happened this year that you guys think is important to mention?

Richard: One-shot [learning] is when you see just a little bit of data, potentially just one data point, regarding some new task or some new category, and you’re then able to deduce what that class or that function should look like in general. So being able to train systems on very little data, using just general background knowledge, will be pretty exciting.
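
One common way to get this behavior, shown in the hypothetical sketch below, is to compare a new input against a single stored example per category in some learned feature space and pick the closest match. The two-dimensional "embeddings" here are invented for illustration; in practice they would come from a model trained on general background data.

```python
import math

# Toy sketch of one-shot classification by nearest neighbor in an embedding
# space. The embeddings are made-up 2-D points, not outputs of a real model.

ONE_SHOT_EXAMPLES = {
    "zebra": (0.9, 0.1),    # a single example embedding per new category
    "toaster": (0.1, 0.8),
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(query_embedding):
    # Pick the category whose single stored example is closest to the query.
    return min(ONE_SHOT_EXAMPLES,
               key=lambda label: distance(query_embedding, ONE_SHOT_EXAMPLES[label]))

if __name__ == "__main__":
    print(classify((0.85, 0.2)))  # expected: "zebra"
```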

Ian: One thing that I’m very excited about is this new area called machine learning security, where an attacker can trick a machine learning system into taking the wrong action. For example, we’ve seen that it’s very easy to fool an object-recognition system. We can show it an image that looks a lot like a panda and it gets recognized as being a school bus, or vice versa. It’s actually possible to fool machine learning systems with physical objects. There was a paper called Accessorize to a Crime that showed that by wearing unusually colored glasses it’s possible to thwart a face recognition system. And my own collaborators at Google Brain and I wrote a paper called Adversarial Examples in the Physical World, where we showed that we can make images that look kind of grainy and noisy, but when viewed through a camera we can control how an object-recognition system will respond to them.
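
To make the failure mode Ian describes concrete, here is a hypothetical toy version of the gradient-sign trick behind many adversarial examples: nudge every input feature a little in the direction that pushes the classifier toward the wrong answer. The linear "classifier" and all of the numbers are invented for illustration and are not taken from the papers mentioned above.

```python
# Toy adversarial example against a made-up linear classifier (illustration only).

WEIGHTS = [2.0, -3.0, 1.5]   # hypothetical classifier: score > 0 means "panda"
EPSILON = 0.4                # size of the small per-feature perturbation

def score(x):
    return sum(w * xi for w, xi in zip(WEIGHTS, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial(x):
    # For a linear model, the gradient of the score with respect to the input
    # is just WEIGHTS, so stepping each feature against sign(weight) lowers the
    # score (toward the wrong class) while changing the input only slightly.
    return [xi - EPSILON * sign(w) for xi, w in zip(x, WEIGHTS)]

if __name__ == "__main__":
    x = [0.5, -0.2, 0.3]
    print("original:", score(x), "->", "panda" if score(x) > 0 else "not panda")
    x_adv = adversarial(x)
    print("perturbed:", score(x_adv), "->", "panda" if score(x_adv) > 0 else "not panda")
```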

Ariel: Is there anything else that you thought was either important for 2016 or looking forward to 2017?

Richard: Yeah, looking forward to 2017 I think there will be more focus on unsupervised learning. Most of the world is not annotated by humans. There aren’t little sticky notes on things around the house saying what they are. Being able to process [the world] in a more unsupervised way will unlock a plethora of new applications.

Ian: It will also make AI more democratic. Right now, if you want to use really advanced AI you need to have not only a lot of computers but also a lot of data. That’s part of why it’s mostly very large companies that are competitive in the AI space. If you want to get really good at a task you basically become good at that task by showing the computer a million different examples. In the future, we’ll have AI that can learn much more like a human learns, where just showing it a few examples is enough. Once we have machine learning systems that are able to get the general idea of what’s going on very quickly, in the way that humans do, it won’t be necessary to build these gigantic data sets anymore.

Richard: One application area I think will be important this coming year is automatic detection of fake news, fake audio, fake images, and fake video. Some of the applications this past year have actually focused on generating additional frames of video. As those get better, as the photo generation that we talked about earlier gets better, and also as audio templating gets better… I think it was Adobe that demoed what they called Photoshop for Voice, where you can type something in and select a person, and it will sound like that person saying whatever it is that you typed. So we’ll need ways of detecting that, since this whole concept of fake news is quite at the fore these days.

Ian: It’s worth mentioning that there are other ways of addressing the spread of fake news. Email spam uses a lot of different clues that it can statistically associate with whether people mark the email as spam or not. We can do a lot without needing to advance the underlying AI systems at all.

Ariel: Is there anything that you’re worried about, based on advances that you’ve seen in the last year?

Ian: The employment issue. As we’re able to automate our tasks in the future, how will we make sure that everyone benefits from that automation? And the way that society is structured, right now increasing automation seems to lead to increasing concentration of wealth, and there are winners and losers to every advance. My concern is that automating jobs that are done by millions of people will create very many losers and a small number of winners who really win big.

Richard: I’m also slightly concerned with the speed at which we’re approaching additional generality. It’s extremely cool to see systems be able to do lots of different things, and being able to do tasks that they’ve either seen very little of or none of before. But it raises questions as to when we implement different types of safety techniques. I don’t think that we’re at that point yet, but it raises the issue.

Ariel: To end on a positive note: looking back on what you saw last year, what has you most hopeful for our future?

Ian: I think it’s really great that AI is starting to be used for things like medicine. In the last year we’ve seen a lot of different machine learning algorithms that could exceed human abilities at some tasks, and we’ve also started to see the application of AI to life-saving application areas like designing new medicines. And this makes me very hopeful that we’re going to start seeing superhuman drug design, and other kinds of applications of AI to just really make life better for a lot of people in ways that we would not have been able to do without it.

Richard: Various kinds of tasks that people find to be drudgery within their jobs will be automatable. That will lead them to be open to working on more value-added things with more creativity, and potentially be able to work in more interesting areas of their field or across different fields. I think the future is wide open and it’s really what we make of it, which is exciting in itself.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Podcast: FLI 2016 – A Year In Review

For FLI, 2016 was a great year, full of our own success, but also great achievements from so many of the organizations we work with. Max, Meia, Anthony, Victoria, Richard, Lucas, David, and Ariel discuss what they were most excited to see in 2016 and what they’re looking forward to in 2017.

AGUIRRE: I’m Anthony Aguirre. I am a professor of physics at UC Santa Cruz, and I’m one of the founders of the Future of Life Institute.

STANLEY: I’m David Stanley, and I’m currently working with FLI as a Project Coordinator/Volunteer Coordinator.

PERRY: My name is Lucas Perry, and I’m a Project Coordinator with the Future of Life Institute.

TEGMARK: I’m Max Tegmark, and I have the fortune to be the President of the Future of Life Institute.

CHITA-TEGMARK: I’m Meia Chita-Tegmark, and I am a co-founder of the Future of Life Institute.

MALLAH: Hi, I’m Richard Mallah. I’m the Director of AI Projects at the Future of Life Institute.

KRAKOVNA: Hi everyone, I am Victoria Krakovna, and I am one of the co-founders of FLI. I’ve recently taken up a position at Google DeepMind working on AI safety.

CONN: And I’m Ariel Conn, the Director of Media and Communications for FLI. 2016 has certainly had its ups and downs, and so at FLI, we count ourselves especially lucky to have had such a successful year. We’ve continued to progress with the field of AI safety research, we’ve made incredible headway with our nuclear weapons efforts, and we’ve worked closely with many amazing groups and individuals. On that last note, much of what we’ve been most excited about throughout 2016 is the great work these other groups in our fields have also accomplished.

Over the last couple of weeks, I’ve sat down with our founders and core team to rehash their highlights from 2016 and also to learn what they’re all most looking forward to as we move into 2017.

To start things off, Max gave a summary of the work that FLI does and why 2016 was such a success.

TEGMARK: What I was most excited by in 2016 was the overall sense that people are taking seriously this idea – that we really need to win this race between the growing power of our technology and the wisdom with which we manage it. Every single way in which 2016 is better than the Stone Age is because of technology, and I’m optimistic that we can create a fantastic future with tech as long as we win this race. But in the past, the way we’ve kept one step ahead is always by learning from mistakes. We invented fire, messed up a bunch of times, and then invented the fire extinguisher. We at the Future of Life Institute feel that that strategy of learning from mistakes is a terrible idea for more powerful tech, like nuclear weapons, artificial intelligence, and things that can really alter the climate of our globe.

Now, in 2016 we saw multiple examples of people trying to plan ahead and to avoid problems with technology instead of just stumbling into them. In April, we had world leaders getting together and signing the Paris Climate Accords. In November, the United Nations General Assembly voted to start negotiations about nuclear weapons next year. The question is whether they should actually ultimately be phased out; whether the nations that don’t have nukes should work towards stigmatizing building more of them – with the idea that 14,000 is way more than anyone needs for deterrence. And – just the other day – the United Nations also decided to start negotiations on the possibility of banning lethal autonomous weapons, which is another arms race that could be very, very destabilizing. And if we keep this positive momentum, I think there’s really good hope that all of these technologies will end up having mainly beneficial uses.

Today, we think of our biologist friends as mainly responsible for the fact that we live longer and healthier lives, and not as those guys who make the bioweapons. We think of chemists as providing us with better materials and new ways of making medicines, not as the people who built chemical weapons and are all responsible for global warming. We think of AI scientists as – I hope, when we look back on them in the future – as people who helped make the world better, rather than the ones who just brought on the AI arms race. And it’s very encouraging to me that as much as people in general – but also the scientists in all these fields – are really stepping up and saying, “Hey, we’re not just going to invent this technology, and then let it be misused. We’re going to take responsibility for making sure that the technology is used beneficially.”

CONN: And beneficial AI is what FLI is primarily known for. So what did the other members have to say about AI safety in 2016? We’ll hear from Anthony first.

AGUIRRE: I would say that what has been great to see over the last year or so is the AI safety and beneficiality research field really growing into an actual research field. When we ran our first conference a couple of years ago, there were these tiny communities of people who had been thinking about the impact of artificial intelligence in the future and in the long-term future. They weren’t really talking to each other; they weren’t really doing much actual research – there wasn’t funding for it. So, to see that transform over the last few years into something where it takes a massive effort to keep track of all the stuff that’s being done in this space now. All the papers that are coming out, the research groups – you used to be able to just find them all, easily identified. Now, there’s this huge worldwide effort and long lists, and it’s difficult to keep track of. And that’s an awesome problem to have.

As someone who’s not in the field, but sort of watching the dynamics of the research community, that’s what’s been so great to see. A research community that wasn’t there before really has started, and I think in the past year we’re seeing the actual results of that research start to come in. You know, it’s still early days. But it’s starting to come in, and we’re starting to see papers that have been basically created using these research talents and the funding that’s come through the Future of Life Institute. It’s been super gratifying. And the funding is a fairly large amount of money, but fairly small compared to the total amount of research funding in artificial intelligence or other fields; because the field was so funding-starved and talent-starved before, it’s just made an enormous impact. And that’s been nice to see.

CONN: Not surprisingly, Richard was equally excited to see AI safety becoming a field of ever-increasing interest for many AI groups.

MALLAH: I’m most excited by the continued mainstreaming of AI safety research. There are more and more publications coming out by places like DeepMind and Google Brain that have really lent additional credibility to the space, as well as a continued uptake of more and more professors, and postdocs, and grad students from a wide variety of universities entering this space. And, of course, OpenAI has come out with a number of useful papers and resources.

I’m also excited that governments have really realized that this is an important issue. So, while the White House reports that have come out recently focus more on near-term AI safety research, they did note that longer-term concerns like superintelligence are not necessarily unreasonable for later this century. And they do support – right now – funding safety work that can scale toward the future, which is really exciting. We really need more funding coming into the community for that type of research. Likewise, other governments – like the U.K., Japan, and Germany – have all made very positive statements about AI safety in one form or another, as have other governments around the world.

CONN: In addition to seeing so many other groups get involved in AI safety, Victoria was also pleased to see FLI taking part in so many large AI conferences.

KRAKOVNA: I think I’ve been pretty excited to see us involved in these AI safety workshops at major conferences. So on the one hand, our conference in Puerto Rico that we organized ourselves was very influential and helped to kick-start making AI safety more mainstream in the AI community. On the other hand, it felt really good in 2016 to complement that with having events that are actually part of major conferences that were co-organized by a lot of mainstream AI researchers. I think that really was an integral part of the mainstreaming of the field. For example, I was really excited about the Reliable Machine Learning workshop at ICML that we helped to make happen. I think that was something that was quite positively received at the conference, and there was a lot of good AI safety material there.

CONN: And of course, Victoria was also pretty excited about some of the papers that were published this year connected to AI safety, many of which received at least partial funding from FLI.

KRAKOVNA: There were several excellent papers in AI safety this year, addressing core problems in safety for machine learning systems. For example, there was a paper from Stuart Russell’s lab published at NIPS on cooperative inverse reinforcement learning (IRL). This is about teaching AI what humans want – how to train a reinforcement learning (RL) algorithm to learn the right reward function, one that reflects what humans want it to do. DeepMind and FHI published a paper at UAI on safely interruptible agents, which formalizes what it means for an RL agent not to have incentives to avoid shutdown. MIRI made an impressive breakthrough with their paper on logical inductors. I’m super excited about all these great papers coming out, and that our grant program contributed to these results.

CONN: For Meia, the excitement about AI safety went beyond just the technical aspects of artificial intelligence.

CHITA-TEGMARK: I am very excited about the dialogue that FLI has catalyzed – and also engaged in – throughout 2016, especially regarding the impact of technology on society. My training is in psychology; I’m a psychologist. So I’m very interested in the human aspect of technology development. I’m very excited about questions like: How are new technologies changing us? How ready are we to embrace new technologies? How might our psychological biases be clouding our judgment about what we’re creating and the technologies we’re putting out there? Are these technologies beneficial for our psychological well-being, or are they not?

So it has been extremely interesting for me to see that these questions are being asked more and more, especially by artificial intelligence developers and also researchers. I think it’s so exciting to be creating technologies that really force us to grapple with some of the most fundamental aspects, I would say, of our own psychological makeup. For example, our ethical values, our sense of purpose, our well-being, maybe our biases and shortsightedness and shortcomings as biological human beings. So I’m definitely very excited about how the conversation regarding technology – and especially artificial intelligence – has evolved over the last year. I like the way it has expanded to capture this human element, which I find so important. But I’m also so happy to feel that FLI has been an important contributor to this conversation.

CONN: Meanwhile, as Max described earlier, FLI has also gotten much more involved in decreasing the risk of nuclear weapons, and Lucas helped spearhead one of our greatest accomplishments of the year.

PERRY: One of the things that I was most excited about was our success with our divestment campaign. After a few months, we had great success in our own local Boston area with helping the City of Cambridge to divest its $1 billion portfolio from nuclear weapon producing companies. And we see this as a really big and important victory within our campaign to help institutions, persons, and universities to divest from nuclear weapons producing companies.

CONN: And in order to truly be effective we need to reach an international audience, which is something Dave has been happy to see grow this year.

STANLEY: I’m mainly excited about – at least, in my work – the increasing involvement and response we’ve had from the international community in terms of reaching out about these issues. I think it’s pretty important that we engage the international community more, and not just academics. Because these issues – things like nuclear weapons and the increasing capabilities of artificial intelligence – really will affect everybody. And they seem to be really underrepresented in mainstream media coverage as well.

So far, we’ve had pretty good responses just in terms of volunteers from many different countries around the world being interested in getting involved to help raise awareness in their respective communities, either by helping develop apps for us, translating materials, or promoting these ideas through social media in their own communities.

CONN: Many FLI members also participated in both local and global events and projects, as we’re about to hear from Victoria, Richard, Lucas and Meia.

KRAKOVNA: The EAGx Oxford conference was a fairly large conference. It was very well organized, and we had a panel there with Demis Hassabis, Nate Soares from MIRI, Murray Shanahan from Imperial, Toby Ord from FHI, and myself. I feel like that conference did a good job of, for example, connecting the local EA community with the people at DeepMind who are really thinking about AI safety concerns, like Demis and also Sean Legassick, who gave a talk about the ethics and impacts side of things. So I feel like that conference overall did a good job of connecting people who are thinking about these sorts of issues, which I think is always a great thing.

MALLAH: I was involved in this endeavor with IEEE regarding autonomy and ethics in autonomous systems, sort of representing FLI’s positions on things like autonomous weapons and long-term AI safety. One thing that came out this year – just a few days ago, actually, due to this work from IEEE – is that the UN actually took the report pretty seriously, and it may have influenced their decision to take up the issue of autonomous weapons formally next year. That’s kind of heartening.

PERRY: A few things that I really enjoyed were giving talks at Duke, at Boston College, and at a local effective altruism conference. I’m also really excited about all the progress we’re making on our nuclear divestment application. This is an application that will allow anyone to search their mutual funds and see whether or not they have direct or indirect holdings in nuclear weapons-producing companies.

CHITA-TEGMARK:  So, a wonderful moment for me was at the conference organized by Yann LeCun in New York at NYU, when Daniel Kahneman, one of my thinker-heroes, asked a very important question that really left the whole audience in silence. He asked, “Does this make you happy? Would AI make you happy? Would the development of a human-level artificial intelligence make you happy?” I think that was one of the defining moments, and I was very happy to participate in this conference.

Later on, David Chalmers, another one of my thinker-heroes – this time, not the psychologist but the philosopher – organized another conference, again at NYU, trying to bring philosophers into this very important conversation about the development of artificial intelligence. And again, I felt there too, that FLI was able to contribute and bring in this perspective of the social sciences on this issue.

CONN: Now, with 2016 coming to an end, it’s time to turn our sights to 2017, and FLI is excited for this new year to be even more productive and beneficial.

TEGMARK: We at the Future of Life Institute are planning to focus primarily on artificial intelligence, and on reducing the risk of accidental nuclear war in various ways. We’re kicking off by having an international conference on artificial intelligence, and then we want to continue throughout the year providing really high-quality and easily accessible information on all these key topics, to help inform people about what is happening with climate change, with nuclear weapons, with lethal autonomous weapons, and so on.

And looking ahead here, I think it’s important right now – especially since a lot of people are very stressed out about the political situation in the world, about terrorism, and so on – to not ignore the positive trends and the glimmers of hope we can see as well.

CONN: As optimistic as FLI members are about 2017, we’re all also especially hopeful and curious to see what will happen with continued AI safety research.

AGUIRRE: I would say I’m looking forward to seeing in the next year more of the research that comes out, and really sort of delving into it myself, and understanding how the field of artificial intelligence and artificial intelligence safety is developing. And I’m very interested in this from the forecast and prediction standpoint.

I’m interested in trying to draw some of the AI community into really understanding how artificial intelligence is unfolding – in the short term and the medium term – as a way to understand how long we have. If it’s really infinity, then let’s not worry about it so much, and spend a little bit more on nuclear weapons and global warming and biotech, because those are definitely happening. If human-level AI were 8 years away… honestly, I think we should be freaking out right now. Most people don’t believe that; most people seem to be somewhere in the middle, at thirty years or fifty years or something, which feels kind of comfortable. Although it’s not that long, really, in the big scheme of things. But I think it’s quite important to know now: which is it? How fast are these things coming, and how long do we really have to think about all of the issues that FLI has been thinking about in AI? How long do we have before most jobs in industry and manufacturing are replaceable by a robot being slotted in for a human? That may be five years, it may be fifteen… it’s probably not fifty. And having a good forecast on those short-term questions, I think, also tells us what sort of things we have to be thinking about now.

And I’m interested in seeing how this massive AI safety community that’s started develops. It’s amazing to see centers popping up all over, like mushrooms after a rain, thinking about artificial intelligence safety. The Partnership on AI between Google, Facebook, and a number of other large companies is getting started. I want to see how those different individual centers develop and how they interact with each other. Is there an overall consensus on where things should go? Or is it a bunch of different organizations doing their own thing? Where will governments come in on all of this? I think it will be interesting times. So I look forward to seeing what happens, and I will reserve judgement in terms of my optimism.

KRAKOVNA: I’m really looking forward to AI safety becoming even more mainstream, and even more of the really good researchers in AI giving it serious thought. Something that happened in the past year that I was really excited about, that I think is also pointing in this direction, is the research agenda that came out of Google Brain called “Concrete Problems in AI Safety.” And I think I’m looking forward to more things like that happening, where AI safety becomes sufficiently mainstream that people who are working in AI just feel inspired to do things like that and just think from their own perspectives: what are the important problems to solve in AI safety? And work on them.

I’m a believer in the portfolio approach with regards to AI safety research, where I think we need a lot of different research teams approaching the problems from different angles and making different assumptions, and hopefully some of them will make the right assumptions. I think we are really moving in that direction, with more people working on these problems and coming up with different ideas. And I look forward to seeing more of that in 2017. I think FLI can also help continue to make this happen.

MALLAH: So, we’re in the process of fostering additional collaboration among people in the AI safety space. And we will have more announcements about this early next year. We’re also working on resources to help people better visualize and better understand the space of AI safety work, and the opportunities there and the work that has been done. Because it’s actually quite a lot.

I’m also pretty excited about fostering continued theoretical and practical work in making AI more robust and beneficial. The work in value alignment, for instance, is not something we see supported in mainstream AI research. And this is something that is pretty crucial to the way that advanced AIs will need to function. They won’t be given very explicit instructions; they’ll have to make decisions based on what they think is right. And what is right? Even structuring the way to think about that question requires more research.

STANLEY: We’ve had pretty good success at FLI in the past few years helping to legitimize the field of AI safety. And I think it’s going to be important because AI is playing a large role in industry and there’s a lot of companies working on this, and not just in the US. So I think increasing international awareness about AI safety is going to be really important.

CHITA-TEGMARK: I believe that the AI community has raised some very important questions in 2016 regarding the impact of AI on society. I feel like 2017 should be the year to make progress on these questions, to actually research them and have some answers to them. For this, I think we need more social scientists – along with people from other disciplines – to join this effort of really systematically investigating what would be the optimal impact of AI on people. I hope that in 2017 we will have more research initiatives that systematically study other burning questions regarding the impact of AI on society. Some examples are: How can we ensure the psychological well-being of people while AI creates lots of displacement on the job market, as many people predict? How do we optimize engagement with technology, and withdrawal from it as well? Will some people be left behind, like the elderly or the economically disadvantaged? How will this affect them, and how will this affect society at large?

What about withdrawal from technology? What about satisfying our need for privacy? Will we be able to do that, or will the price of having more and more customized and personalized technologies be that we have no privacy anymore, or that our expectations of privacy will be very seriously violated? I think these are some very important questions that I would love to get some answers to. And my wish, and also my resolution, for 2017 is to see more progress on these questions, and to hopefully also be part of the work of answering them.

PERRY: In 2017 I’m very interested in pursuing the landscape of different policy and principle recommendations from different groups regarding artificial intelligence. I’m also looking forward to expanding our nuclear divestment campaign by trying to introduce divestment to new universities, institutions, communities, and cities.

CONN: In fact, some experts believe nuclear weapons pose a greater threat now than at any time during our history.

TEGMARK: I personally feel that the greatest threat to the world in 2017 is one that the newspapers almost never write about. It’s not terrorist attacks, for example. It’s the small but horrible risk that the U.S. and Russia for some stupid reason get into an accidental nuclear war against each other. We have 14,000 nuclear weapons, and this war has almost happened many, many times. So, actually what’s quite remarkable and really gives a glimmer of hope is that – however people may feel about Putin and Trump – the fact is they are both signaling strongly that they are eager to get along better. And if that actually pans out and they manage to make some serious progress in nuclear arms reduction, that would make 2017 the best year for nuclear weapons we’ve had in a long, long time, reversing this trend of ever greater risks with ever more lethal weapons.

CONN: Some FLI members are also looking beyond nuclear weapons and artificial intelligence, as I learned when I asked Dave about other goals he hopes to accomplish with FLI this year.

STANLEY: Definitely having the volunteer team – particularly the international volunteers – continue to grow, and then scale things up. Right now, we have a fairly committed core of people who are helping out, and we think that they can start recruiting more people to help out in their own communities, and really make this stuff accessible. Not just to academics, but to everybody. And that’s also reflected in the types of people we have working for us as volunteers. They’re not just academics. We have programmers, linguists, people with just high school degrees all the way up to Ph.D.s, so I think it’s pretty good that this varied group of people can get involved and contribute, and also reach out to other people they can relate to.

CONN: In addition to getting more people involved, Meia also pointed out that one of the best ways we can help ensure a positive future is to continue to offer people more informative content.

CHITA-TEGMARK: Another thing that I’m very excited about regarding our work here at the Future of Life Institute is this mission of empowering people with information. I think information is very powerful and can change the way people approach things: it can change their beliefs, their attitudes, and their behaviors as well. And by creating ways in which information can be readily distributed to people, and with which they can engage very easily, I hope that we can create changes. For example, we’ve had a series of different apps regarding nuclear weapons that I think have contributed a lot to people’s knowledge and have brought this issue to the forefront of their thinking.

CONN: Yet as important as it is to highlight the existential risks we must address to keep humanity safe, perhaps it’s equally important to draw attention to the incredible hope we have for the future if we can solve these problems. Which is something both Richard and Lucas brought up for 2017.

MALLAH: I’m excited about trying to foster more positive visions of the future, so focusing on existential hope aspects of the future. Which are kind of the flip side of existential risks. So we’re looking at various ways of getting people to be creative about understanding some of the possibilities, and how to differentiate the paths between the risks and the benefits.

PERRY: Yeah, I’m also interested in creating and generating a lot more content that has to do with existential hope. Given the current global political climate, it’s all the more important to focus on how we can make the world better.

CONN: And on that note, I want to mention one of the most amazing things I discovered this past year. It had nothing to do with technology, and everything to do with people. Since starting at FLI, I’ve met countless individuals who are dedicating their lives to trying to make the world a better place. We may have a lot of problems to solve, but with so many groups focusing solely on solving them, I’m far more hopeful for the future. There are truly too many individuals that I’ve met this year to name them all, so instead, I’d like to provide a rather long list of groups and organizations I’ve had the pleasure to work with this year. A link to each group can be found at futureoflife.org/2016, and I encourage you to visit them all to learn more about the wonderful work they’re doing. In no particular order, they are:

Machine Intelligence Research Institute

Future of Humanity Institute

Global Catastrophic Risk Institute

Center for the Study of Existential Risk

Ploughshares Fund

Bulletin of the Atomic Scientists

Open Philanthropy Project

Union of Concerned Scientists

The William Perry Project

ReThink Media

Don’t Bank on the Bomb

Federation of American Scientists

Massachusetts Peace Action

IEEE (Institute of Electrical and Electronics Engineers)

Center for Human-Compatible Artificial Intelligence

Center for Effective Altruism

Center for Applied Rationality

Foresight Institute

Leverhulme Center for the Future of Intelligence

Global Priorities Project

Association for the Advancement of Artificial Intelligence

International Joint Conference on Artificial Intelligence

Partnership on AI

The White House Office of Science and Technology Policy

The Future Society at Harvard Kennedy School

 

I couldn’t be more excited to see what 2017 holds in store for us, and all of us at FLI look forward to doing all we can to help create a safe and beneficial future for everyone. But to end on an even more optimistic note, I turn back to Max.

TEGMARK: Finally, I’d like – because I spend a lot of my time thinking about our universe – to remind everybody that we shouldn’t just be focused on the next election cycle. We have not decades, but billions of years of potentially awesome future for life, on Earth and far beyond. And it’s so important to not let ourselves get so distracted by our everyday little frustrations that we lose sight of these incredible opportunities that we all stand to gain from if we can get along, and focus, and collaborate, and use technology for good.

Autonomous Weapons: an Interview With the Experts

FLI’s Ariel Conn recently spoke with Heather Roff and Peter Asaro about autonomous weapons. Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, recently compiled an international database of weapons systems that exhibit some level of autonomous capabilities. Asaro is a philosopher of science, technology, and media at The New School in New York City. He looks at fundamental questions of responsibility and liability with all autonomous systems, but he’s also the Co-Founder and Vice-Chair of the International Committee for Robot Arms Control and a spokesperson for the Campaign to Stop Killer Robots.

The following interview has been edited for brevity, but you can read it in its entirety here or listen to it above.

ARIEL: Dr. Roff, I’d like to start with you. With regard to the database, what prompted you to create it, what information does it provide, and how can we use it?

ROFF: The main impetus behind the creation of the database [was] a feeling that the same autonomous or automated weapons systems were brought out in discussions over and over and over again. It made it seem like there wasn’t anything else to worry about. So I created a database of about 250 autonomous systems that are currently deployed [from] Russia, China, the United States, France, and Germany. I code them along a series of about 20 different variables: from automatic target recognition [to] the ability to navigate [to] acquisition capabilities [etc.].

It’s allowing everyone to understand that autonomy isn’t just binary. It’s not a yes or a no. Not many people in the world have a good understanding of what modern militaries fight with, and how they fight.
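
To make the idea of coding systems along many variables concrete, here is a minimal sketch of what one record in such a database might look like. The field names and example values are hypothetical illustrations informed only by the three variables Roff mentions above; they are not drawn from her actual database or its schema.

from dataclasses import dataclass

@dataclass
class WeaponSystemRecord:
    # One deployed system, coded along a few of the roughly 20 variables
    # Roff describes. The point: autonomy is a profile of capabilities,
    # not a single yes-or-no flag.
    name: str
    country: str
    automatic_target_recognition: bool
    autonomous_navigation: bool
    target_acquisition: bool

# Illustrative entry only; the values are invented for this example.
example = WeaponSystemRecord(
    name="Hypothetical System X",
    country="Country A",
    automatic_target_recognition=True,
    autonomous_navigation=True,
    target_acquisition=False,
)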

ARIEL: And Dr. Asaro, your research is about liability. How is it different for autonomous weapons versus a human overseeing a drone that accidentally fires on the wrong target?

ASARO: My work looks at autonomous weapons and other kinds of autonomous systems and the interface of the ethical and legal aspects. Specifically, questions about the ethics of killing, and the legal requirements under international law for killing in armed conflict. These kinds of autonomous systems are not really legal and moral agents in the way that humans are, and so delegating the authority to kill to them is unjustifiable.

One aspect of accountability is, if a mistake is made, holding people to account for that mistake. There’s a feedback mechanism to prevent that error occurring in the future. There’s also a justice element, which could be attributive justice, in which you try to make up for loss. Other forms of accountability look at punishment itself. When you have autonomous systems, you can’t really punish the system. More importantly, if nobody really intended the effect that the system brought about then it becomes very difficult to hold anybody accountable for the actions of the system. The debate — it’s really kind of framed around this question of the accountability gap.

ARIEL: One of the things we hear a lot in the news is about always keeping a human in the loop. How does that play into the idea of liability? And realistically, what does it mean?

ROFF: I actually think this is just a really unhelpful heuristic. It’s hindering our ability to think about what’s potentially risky or dangerous or might produce unintended consequences. So here’s an example: the UK’s Ministry of Defence calls this the Empty Hangar Problem. It’s very unlikely that they’re going to walk down to an airplane hangar, look in, and be like, “Hey! Where’s the airplane? Oh, it’s decided to go to war today.” That’s just not going to happen.

These systems are always going to be used by humans, and humans are going to decide to use them. A better way to think about this is in terms of task allocation. What is the scope of the task, and how much information and control does the human have before deploying that system to execute? If there is a lot of time, space, and distance between the time the decision is made to field it and then the application of force, there’s more time for things to change on the ground, and there’s more time for the human to basically [say] they didn’t intend for this to happen.

ASARO: If self-driving cars start running people over, people will sue the manufacturer. But there are no mechanisms in international law for the victims of bombs and missiles and potentially autonomous weapons to sue the manufacturers of those systems. That just doesn’t happen. So there are no incentives for companies that manufacture those [weapons] to improve safety and performance.

ARIEL: Dr. Asaro, we’ve briefly mentioned definitional problems of autonomous weapons — how does the liability play in there?

ASARO: The law of international armed conflict is pretty clear that humans are the ones that make the decisions, especially about a targeting decision or the taking of a human life in armed conflict. This question of having a system that could range over many miles and many days and select targets on its own is where things are problematic. Part of the definition is: how do you figure out exactly what constitutes a targeting decision, and how do you ensure that a human is making that decision? That’s the direction the discussion at the UN is going as well. Instead of trying to define what’s an autonomous system, we focus on the targeting and firing decisions of weapons for individual attacks. What we want to require is meaningful human control over those decisions.

ARIEL: Dr. Roff, you were working on the idea of meaningful human control, as well. Can you talk about that?

ROFF: If [a commander] fields a weapon that can go from attack to attack without checking back with her, then the weapon is making the proportionality calculation, and she [has] delegated her authority and her obligation to a machine. That is prohibited under international humanitarian law (IHL), and I would say is also morally prohibited. You can’t offload your moral obligation to a nonmoral agent. So that’s where our work on meaningful human control is: a human commander has a moral obligation to undertake precaution and proportionality in each attack.

ARIEL: Is there anything else you think is important to add?

ROFF: We still have limitations in AI. We have really great applications of AI, and we have blind spots. It would be really incumbent on the AI community to be vocal about where they think there are capacities and capabilities that could be reliably and predictably deployed on such systems. If they don’t think that those technologies or applications could be reliably and predictably deployed, then they need to stand up and say as much.

ASARO: We’re not trying to prohibit autonomous operations of different kinds of systems or the development and application of artificial intelligence for a wide range of civilian and military applications. But there are certain applications, specifically the lethal ones, that have higher standards of moral and legal requirements that need to be met.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

 

Nuclear Winter with Alan Robock and Brian Toon

The UN voted last week to begin negotiations on a global nuclear weapons ban, but for now, nuclear weapons still jeopardize the existence of almost all people on earth.

I recently sat down with Meteorologist Alan Robock from Rutgers University and physicist Brian Toon from the University of Colorado to discuss what is potentially the most devastating consequence of nuclear war: nuclear winter.

Toon and Robock have studied and modeled nuclear winter off and on for over 30 years, and they joined forces ten years ago to use newer climate models to look at the climate effects of a small nuclear war.

The following interview has been heavily edited, but you can listen to it in its entirety here or read the complete transcript here.

Ariel: How is it that you two started working together?

Toon: This was initiated by a reporter. At the time, Pakistan and India were having a conflict over Kashmir and threatening each other with nuclear weapons. A reporter wanted to know what effect this might have on the rest of the planet. I calculated the amount of smoke and found, “Wow that was a lot of smoke!”

Alan had a great volcano model, so at the American Geophysical Union meeting that year, I tried to convince him to work on this problem. Alan was pretty skeptical.

Robock: I don’t remember being skeptical. I remember being very interested. I said, “How much smoke would there be?” Brian told me 5,000,000 tons of smoke, and I said, “That sounds like a lot!”

We put it into a NASA climate model and found it would be the largest climate change in recorded human history. The basic physics is very simple. If you block out the Sun, it gets cold and dark at the Earth’s surface.

We hypothesized that if each country used half of their nuclear arsenal, that would be 50 weapons on each side. We assumed the simplest bomb, which is the size dropped on Hiroshima and Nagasaki — a 15 kiloton bomb.

The answer is the global average temperature would go down by about 1.5 degrees Celsius. In the middle of continents, temperature drops would be larger and last for a decade or more.

We took models that calculate agricultural productivity and calculated how wheat, corn, soybean, and rice production would change. After this war, which used less than 1% of the global arsenal on the other side of the world, global food production would go down by 20-40 percent for the first 5 years, and by 10-20 percent for the next 5 years.

Ariel: Could you address criticisms of whether or not the smoke would loft that high or spread globally?

Toon: The only people that have been critical are Alan and I. The Departments of Energy and Defense, which should be investigating this problem, have done absolutely nothing. No one has done studies of fire propagation in big cities — no fire department is going to go put out a nuclear fire.

As far as the rising smoke, we’ve had people investigate that and they all find the same things: it goes into the upper atmosphere and then self-lofts. But, these should be investigated by a range of scientists with a range of experiences.

Robock: What are the properties of the smoke? We assume it would be small, single, black particles. That needs to be investigated. What would happen to the particles as they sit in the stratosphere? Would they react with other particles? Would they degrade? Would they grow? There are additional questions and unknowns.

Toon: Alan made lists of the important issues. And we have gone to every agency that we can think of, and said, “Don’t you think someone should study this?” Basically, everyone we tried so far has said, “Well, that’s not my job.”

Ariel: Do you think there’s a chance then that as we acquired more information that even smaller nuclear wars could pose similar risks? Or is 100 nuclear weapons the minimum?

Robock: First, it’s hard to imagine how once a nuclear war starts, it could be limited. Communications are destroyed, people panic — how would people even be able to rationally have a nuclear war and stop?

Second, we don’t know. When you get down to small numbers, it depends on what city, what time of year, the weather that day. And we don’t want to emphasize India and Pakistan – any two nuclear countries could do this.

Toon: The most common thing that happens when we give a talk is someone will stand up and say, “Oh, but a war would only involve one nuclear weapon.” But in the only nuclear war we’ve had, the nuclear power, the United States, used every weapon that it had on civilian targets.

If you have 1000 weapons and you’re afraid your adversary is going to attack you with their 1000 weapons, you’re not likely to just bomb them with one weapon.

Robock: Let me make one other point. If the United States attacked Russia on a first strike and Russia did nothing, the climate change resulting from that could kill almost everybody in the United States. We’d all starve to death because of the climate response. People used to think of this as mutually assured destruction, but really it’s being a suicide bomber: it’s self-assured destruction.

Ariel: What scares you most regarding nuclear weapons?

Toon: Politicians’ ignorance of the implications of using nuclear weapons. Russia sees our advances to keep Eastern European countries free — they see that as an attempt to move military forces near Russia where [NATO] could quickly attack them. There’s a lack of communication, a lack of understanding of [the] threat and how people see different things in different ways. So Russians feel threatened when we don’t even mean to threaten them.

Robock: What scares me is an accident. There have been a number of cases where we came very close to having nuclear war. Crazy people or mistakes could result in a nuclear war. Some teenaged hacker could get into the systems. We’ve been lucky to have gone 71 years without a second nuclear war. The only way to prevent it is to get rid of the nuclear weapons.

Toon: We have all these countries with 100 weapons. All those countries can attack anybody on the Earth and destroy most of the country. This is ridiculous, to develop a world where everybody can destroy anybody else on the planet. That’s what we’re moving toward.

Ariel: Is there anything else you think the public needs to understand about nuclear weapons or nuclear winter?

Robock: I would think about all of the countries that don’t have nuclear weapons. How did they make that decision? What can we learn from them?

The world agreed to a ban on chemical weapons, biological weapons, cluster munitions, land mines — but there’s no ban on the worst weapon of mass destruction, nuclear weapons. The UN General Assembly voted to negotiate a treaty next year to ban nuclear weapons, which will be a first step towards reducing the arsenals and disarmament. But people have to get involved and demand it.

Toon: We’re not paying enough attention to nuclear weapons. The United States has invested hundreds of billions of dollars in building better nuclear weapons that we’re never going to use. Why don’t we invest that in schools or in public health or in infrastructure? Why invest it in worthless things we can’t use?

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

The Age of Em: Review and Podcast

Interview with Robin Hanson
A few weeks ago, I had the good fortune to interview Robin Hanson about his new book, The Age of Em. We discussed his book, the future and evolution of humanity, and the research he’s doing for his next book. You can listen to all of that here. And read on for my review of Hanson’s book…

Age of Em Review

As I’ve interviewed more and more people who focus on and worry about the future, an interesting theme keeps popping up: choice. Over and over, scholars and researchers desperately try to remind us that we have a say in our future. The choices we make will impact whether or not our future goes the way we want it to.

But choosing a path for our future isn’t like picking a breakfast cereal. Our options aren’t all in front of us, with detailed information on the side telling us how we may or may not benefit from each choice. Few of us can predict how our decisions will shape our own lives, let alone the future of humanity. That’s where Robin Hanson comes in.

Hanson, a professor of economics at George Mason University, argues that the decisions that will shape our future can be much better informed by analysis of what is likely to happen than most of us realize. He recently wrote the book, The Age of Em: Work, Love and Life when Robots Rule the Earth, which describes one possible path that may lie before us based on current economic and technological trends.

What Is the Age of Em?

Em is short for emulation — in this case, a brain emulation. In this version of the future, people will choose to have their brains scanned, uploaded and possibly copied, creating a new race of robots and other types of machine intelligence. Because they’re human emulations, Hanson expects ems will think and feel just as we would. However, without biological needs or aging processes, they’ll be much cheaper to run and maintain. And because they can do anything we can – just at a much lower cost – it will be more practical to switch from human to em. Humans who resist this switch or who are too old to be useful as ems will end up on the sidelines of society. When (if) ems take over the world, it will be because humans chose to make the transition.

Interestingly, the timeline for how long ems will rule the world will depend on human versus em perspective. Because ems are essentially machines, they can run at different speeds. Hanson anticipates that over the course of two regular “human” years, most ems will have experienced a thousand years – along with all of the societal changes that come with a thousand years of development. Hanson’s book tells the story of their world: their subsistence lifestyles made glamorous by virtual reality; the em clans comprised of the best and the brightest human minds; and literally, how the ems will work, love, and live.

It’s a very detailed work, and it’s easy to get caught up in the details of which aspects of em life are likely, which details seem unrealistic, and even whether ems are more likely than artificial intelligence to take over the world next. And there have been excellent discussions and reviews of the details of the book, like this one at Slate Star Codex. But I’m writing this review almost as much in response to commentary I’ve read about the book as I am about the book itself because there’s another question that’s important to ask as well: Is this the future that we want?

What do we want?

For a book without a plot or characters, it offers a surprisingly engaging and compelling storyline. Perhaps that’s because this is the story of us. It’s the story of humanity — the story of how we progress and evolve. And it’s also the story of how we, as we know ourselves, end.

It’s easy to look at this new world with fear. We’re so focused on production and the bottom line that, in the future, we’ve literally pushed humanity to the sidelines and possibly to extinction. Valuing productivity is fine, but do we really want to take it to this level? Can we stop this from happening, and if so, how?

Do we even want to stop it from happening? Early on Hanson encourages us to remember that people in the past would have been equally horrified by our own current lifestyle. He argues that this future may be different from what we’re used to, but it’s reasonable to expect that humans will prefer transitioning to an em lifestyle in the future. And from that perspective, we can look on this new world with hope.

As I read The Age of Em, I was often reminded of Brave New World by Aldous Huxley. Huxley described his book as a “negative utopia,” but much of what he wrote has become commonplace and trivial today — mass consumerism, drugs to make us happy, a freer attitude about sex, a preference for mindless entertainment over deep thought. Though many of us may not necessarily consider these attributes of modern society to be a utopia, most people today would choose our current lifestyle over that of the 1930s, and we typically consider our lives better now than at any point in history. Even among people today, we see sharp divides between older generations who are horrified by how much privacy is lost thanks to the Internet and younger generations who see the benefits of increased information and connectivity outweighing any potential risks. Most likely, a similar attitude shift will take place as (if) we move toward a world of ems.

Yet while it’s reasonable to accept that in the future we would likely consider ems to be a positive step for humanity, the questions still remain: Is this what we want, or are we just following along on a path, unable to stop or change directions? Can we really choose our future?

Studying the future

In the book, Hanson says, “If we first look carefully at what is likely to happen if we do nothing, such a no-action baseline can help us to analyze what we might do to change those outcomes. This book, however, only offers the nearest beginnings of such policy analysis.” Hanson looks at where we’ve been, he looks at where we are now, and then he draws lines out into the future to figure out the direction we’ll go. And he does this for every aspect of life. In fact, given that this book is about the future, it also provides considerable insight into who we are now. But it represents only one possible vision for the future.

There are many more people who study history than the future, primarily because we already have information and writings and artifacts about historic events. But though we can’t change the past, we can impact the future. As the famous quote (or paraphrase) by George Santayana goes, “Those who fail to learn history are doomed to repeat it.” So perhaps learning history is only half the story. Perhaps it’s time to reevaluate the prevailing notion that the future is something that can’t be studied.

Only by working through different possible scenarios for our future can we better understand how decisions today will impact humanity later on. And with that new information, maybe we can start to make choices that will guide us toward a future we’re all excited about.

Final thoughts

Love it or hate it, agree with it or not, Hanson’s book and his approach to thinking about the future are extremely important for anyone who wants to have a say in the future of humanity. It’s easy to argue over whether or not ems represent the most likely future. It’s just as easy to get lost in the minutia of the em world and debate whether x, y, or z will happen. And these discussions are necessary if we’re to understand what could happen in the future. But to do only that is to miss an important point: something will happen, and we have to decide if we want a role in creating the future or if we want to stand idly by.

I highly recommend The Age of Em, I look forward to Hanson’s next book, and I hope others will answer his call to action and begin studying the future.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we post op-eds that we believe will help spur discussion within our community. Op-eds do not necessarily represent FLI’s opinions or views.

Podcast: What Is Our Current Nuclear Risk?

A conversation with Lucas Perry about nuclear risk

Participants:

  • Ariel Conn— Ariel oversees communications and digital media at FLI, and as such, she works closely with members of the nuclear community to help present information about the costs and risks of nuclear weapons.
  • Lucas Perry—Lucas has been actively working with the Mayor and City Council of Cambridge, MA to help them divest from nuclear weapons companies, and he works closely with groups like Don’t Bank on the Bomb to bring more nuclear divestment options to the U.S.

Summary

In this podcast interview, Lucas and Ariel discuss the concepts of nuclear deterrence, hair trigger alert, the potential consequences of nuclear war, and how individuals can do their part to lower the risks of nuclear catastrophe. (You can find more links to information about these issues at the bottom of the page.)

Transcript

Ariel:  I’m Ariel Conn with the Future of Life Institute, and I’m here with Lucas Perry, also a member of FLI, to talk about the increasing risks of nuclear weapons, and what we can do to decrease those risks.

With the end of the Cold War, and the development of the two new START treaties, we’ve dramatically decreased the number of nuclear weapons around the world. Yet even though there are fewer weapons, they still represent a real and growing threat. In the last few months, FLI has gotten increasingly involved in efforts to decrease the risks of nuclear weapons.

One of the first things people worry about when it comes to decreasing the number of nuclear weapons or altering our nuclear posture is whether or not we can still maintain effective deterrence.

Lucas, can you explain how deterrence works?

Lucas: Sure. Deterrence is the idea that, to protect ourselves from other nuclear states that might want to harm us with nuclear strikes, we keep our own nuclear weapons primed and ready to be fired. This deters another nuclear state from firing on us, because they know we would retaliate with similar, or even greater, nuclear force.

Ariel:  OK, and along the same lines, can you explain what hair trigger alert is?

Lucas: Hair trigger alert is a Cold War-era strategy that has nuclear weapons armed and ready for launch within minutes. It ensures mutual and total annihilation, and thus acts as a means of deterrence. But the problem here is that it also increases the likelihood of accidental nuclear war.

Ariel:  Can you explain how an accidental nuclear war could happen? And, also, has it almost happened before?

Lucas: Having a large fraction of our nuclear weapons on hair trigger alert creates the potential for accidental nuclear war through the fallibility of the persons and instruments involved with the launching of nuclear weapons, in conjunction with the very small amount of time actually needed to fire the nuclear missiles.

We humans are known to be prone to making mistakes on a daily basis, and we even make the same mistakes multiple times. Computers, radars, and all of the other instruments and technology that go into the launching and detecting of nuclear strikes are intrinsically fallible as well, as they are prone to breaking and committing errors.

So there is the potential for us to fire missiles when an instrument gives us a false alarm, or when a person—say, the President—under the pressure of needing to make a decision within only a few minutes, decides to fire missiles due to some misinterpretation of a situation. This susceptibility to error is actually so great that groups such as the Union of Concerned Scientists have been able to identify at least 21 nuclear close calls where nuclear war was almost started by mistake.

Ariel:  How long does the President actually have to decide whether or not to launch a retaliatory attack?

Lucas: The President actually only has about 12 minutes to decide whether or not to fire our missiles in retaliation. After our radars have detected the incoming missiles, and after this information has been conveyed to the President, there has already been some non-negligible amount of time—perhaps 5 to 15 minutes—where nuclear missiles might already be inbound. So he only has another few minutes—say, about 10 or 12 minutes—to decide whether or not to fire ours in retaliation. But this is also highly contingent upon where the missiles are coming from and how early we detected their launch.
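
As a rough back-of-the-envelope illustration of that timeline (the flight time and launch-execution figures below are assumptions chosen for the example, not numbers from the interview):

flight_time = 30        # assumed total missile flight time, in minutes (illustrative)
detect_and_brief = 15   # upper end of the 5-15 minutes Lucas mentions for detection and briefing
execute_launch = 3      # assumed minutes to transmit and carry out a launch order (illustrative)

decision_window = flight_time - detect_and_brief - execute_launch
print(decision_window)  # about 12 minutes, consistent with the figure Lucas gives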

Ariel:  OK, and then do you have any examples off the top of your head of times where we’ve had close calls that almost led to an unnecessary nuclear war?

Lucas: Out of the twenty-or-so nuclear close calls that have been identified by the Union of Concerned Scientists, among other organizations, a few stand out to me. For example, in 1980, the Soviet Union launched four submarine-based missiles from near the Kuril Islands as part of a training exercise, which triggered American early-warning sensors.

And even in 1995, Russian early-warning radar detected a missile launch off the coast of Norway with flight characteristics very similar to that of US submarine missiles. This led to all Russian nuclear forces going into full alert, and even the President at the time got his nuclear football ready and was prepared for full nuclear retaliation. But they ended up realizing that this was just a Norwegian scientific rocket.

These examples really help to illustrate how hair trigger alert is so dangerous. Persons and instruments are inevitably going to make mistakes, and this is only made worse when nuclear weapons are primed and ready to be launched within only minutes.

Ariel:  Going back to deterrence: Do we actually need our nuclear weapons to be on hair trigger alert in order to have effective deterrence?

Lucas: Not necessarily. The current idea is that we keep our intercontinental ballistic missiles (ICBMs), which are located in silos, on hair trigger alert so that these nuclear weapons can be launched before the silos are destroyed by an enemy strike. But warheads can be deployed without being on hair trigger alert, on nuclear submarines and bombers, without jeopardizing national security. If nuclear weapons were to be fired at the United States with the intention of destroying our nuclear missile silos, then we could authorize the launch of our submarine- and bomber-based missiles over the time span of hours and even days. These missiles wouldn’t be able to be intercepted, and would thus offer a means of retaliation, and thus deterrence, without the added danger of being on hair trigger alert.

Ariel:  How many nuclear weapons does the Department of Defense suggest we need to maintain effective deterrence?

Lucas: Studies have shown that only about 300 to 1,000 nuclear weapons are necessary for deterrence. For example, about 450 of these could be deployed on submarines and bombers spread throughout the world, with another 450 at home in reserve and in silos.

Ariel:  So how many nuclear weapons are there in the US and around the world?

Lucas: There are currently about 15,700 nuclear weapons on this planet. Russia and the US are the main holders of these, with Russia having about 7,500 and the US having about 7,200. Other important nuclear states to note are China, Israel, the UK, North Korea, France, India, and Pakistan.

Ariel:  OK, so basically we have a lot more nuclear weapons than we actually need.

Lucas: Right. If only about 300 to 1,000 are needed for deterrence, then the number of nuclear weapons on this planet could be more than an order of magnitude smaller than it currently is. The amount that we have right now is just blatant overkill. It’s a waste of resources and it increases the risk of accidental nuclear war, putting both the countries that have them and the countries that don’t at greater risk.
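
For a rough sense of the scale Lucas is describing, here is a minimal arithmetic sketch using only the figures cited in this interview:

global_stockpile = 15_700                      # approximate worldwide total cited above
deterrence_low, deterrence_high = 300, 1_000   # range cited as sufficient for deterrence

print(global_stockpile / deterrence_high)      # roughly 16 times the high estimate
print(global_stockpile / deterrence_low)       # roughly 52 times the low estimate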

Ariel:  I want to consider this idea of the countries that don’t have them being more at risk. I’m assuming you’re talking about nuclear winter. Can you explain what nuclear winter is?

Lucas: Nuclear winter is an indirect effect of nuclear war. When nuclear weapons go off they create large firestorms from all of the infrastructure, debris, and trees that are set on fire surrounding the point of detonation. These massive firestorms release enormous amounts of soot and smoke into the air that goes into the atmosphere and can block out the sun for months and even years at a time. This drastically reduces the amount of sunlight that is able to get to the Earth, and it thus causes a significant decrease in average global temperatures.

Ariel:  How many nuclear weapons would actually have to go off in order for us to see a significant drop in temperature?

Lucas: About 100 Hiroshima-sized nuclear weapons would decrease average global temperatures by about 1.25 degrees Celsius. If these 100 bombs went off, they would release about 5 million tons of smoke, lofted high into the stratosphere. Now, this change of 1.25 degrees Celsius in average global temperatures might seem very tiny, but studies actually show that it would lead to a shortening of growing seasons by up to 30 days and a 10% reduction in average global precipitation. Twenty million people would die directly from the effects of this, but then hundreds of millions of people would die in the following months from a lack of food, due to the decrease in average global temperatures and the lack of precipitation.

Ariel:  And that’s hundreds of millions of people around the world, right? Not just in the regions where the war took place?

Lucas: Certainly. The soot and smoke from the firestorms would spread out across the entire planet and be affecting the amount of precipitation and sunlight that everyone receives. It’s not simply that the effects of nuclear war are contained to the countries involved with the nuclear strikes, but rather, potentially the very worst effects of nuclear war create global changes that would affect us all.

Ariel:  OK, so that was for a war between India and Pakistan, which would be small, and it would be using smaller nuclear weapons than what the US and Russia have. So if just an accident were to happen that triggered both the US and Russia to launch their nuclear weapons that are on hair trigger alert, what would the impacts of that be?

Lucas: Well, the United States has about a thousand weapons on hair trigger alert. I’m not exactly sure how many there are in Russia, but we can assume that it’s probably a similar amount. So if about 2,000 weapons were exchanged in a nuclear war between the United States and Russia, it would cause 510 million tons of smoke to rise into the stratosphere, which would lead to a 4 degree Celsius drop in average global temperatures. And compared to an India-Pakistan conflict, this would lead to catastrophically more casualties, both from a lack of food and from the direct effects of these nuclear bombs.

Ariel:  And over what sort of time scale is that expected to happen?

Lucas: The effects of nuclear winter, and perhaps even what might one day be nuclear summer, would be lasting over the time span of not just months, but years, even decades.

Ariel:  What’s nuclear summer?

Lucas: So nuclear summer is a more theoretical effect of nuclear war. With nuclear winter, you have tons of soot and ash and smoke in the sky blotting out the sun. But additionally, an enormous amount of CO2 has been released from the burning of all of the infrastructure, forests, and ground ignited by the nuclear blasts. After decades, once all of this soot and ash and smoke begins to settle back down onto the Earth’s surface, there will still be this enormous remaining amount of CO2.

So nuclear summer is a hypothetical indirect effect of nuclear war, after nuclear winter, after the soot has fallen down, where there would be a huge spike in average global temperatures due to the enormous amount of CO2 left over from the firestorms.

Ariel: And so how likely is all of this to happen? Is there actually a chance that these types of wars could occur? Or is this mostly something that people are worrying about unnecessarily?

Lucas: The risk of a nuclear war is non-zero. It's very difficult to quantify exactly what the risks are, but we can say that we have seen at least 21 nuclear close calls where nuclear war was almost started by mistake. And those 21 close calls are just the ones we know about. How many more have there been that we simply don't know about, or that governments have been able to keep secret? And as tensions rise between the United States and Russia, as the risk of terrorism and cyberattack continues to grow, and as the conflict between India and Pakistan keeps being exacerbated, the threat of nuclear war is actually increasing. It's not going down.

Ariel:  So there is a risk, and we know that we have more nuclear weapons than we actually need; even if we want to keep enough weapons for deterrence, we don't need as many as we have. I'm guessing that the government is not going to do anything about this, so what can people do to try to have an impact themselves?

Lucas: One way of engaging with this nuclear issue that has potentially high efficacy is divestment. We have power as voters, consumers, and producers, but perhaps even more importantly, we have power over what we invest in. We have the power to choose to invest in companies that are socially responsible or ones that are not. So through divestment, we can take money away from nuclear weapons producers. But not only that, we can also work to stigmatize nuclear weapons production and our current nuclear situation through our divestment efforts.

Ariel:  But my understanding is that most of our nuclear weapons are funded by the government, so how would a divestment campaign actually be impactful, given that the money for nuclear weapons wouldn’t necessarily disappear?

Lucas: The most important part of divestment in this area of nuclear weapons is actually the stigmatization. When you see massive amounts of people divesting from something, it creates a lot of light and heat on the subject. It influences the public consciousness and helps bring the issue of nuclear weapons back to light. And once you have stigmatized something to a critical point, it effectively renders its target politically and socially untenable. Divestment also stimulates new education and research on the topic, while getting people invested in the issue.

Ariel:  And so have there been effective campaigns that used divestment in the past?

Lucas: There have been a lot of campaigns in the past that have used divestment as an effective means of creating important change in the world. A few examples are divestment from tobacco, South African apartheid, child labor, and fossil fuels. In all of these instances, people divested from institutions involved in socially irresponsible practices. In doing so, they stigmatized those practices, created capital flight from them, and generated negative media attention that helped bring these issues to light and show people what was really going on.

Ariel:  I know FLI was initially inspired by a lot of the work that Don’t Bank on the Bomb has done. Can you talk a bit about some of the work they’ve done and what their success has been?

Lucas: The Don’t Bank on the Bomb campaign has identified direct and indirect investments in nuclear weapons producers made by large institutions in both Europe and America. Building on that, they have engaged with many banks in Europe to help them exclude these direct and indirect investments from their portfolios and mutual funds, helping them construct socially responsible funds. A few examples of these successes are ASN Bank, ASR, and the Co-operative Bank.

Ariel:  So you’ve been very active with FLI in trying to launch a divestment campaign in the US. I was hoping you could talk a little about the work you’ve done so far and the success you’ve had.

Lucas: Inspired by a lot of the work that’s been done through the Don’t Bank on the Bomb campaign, and in conjunction with resources they provided, we were able to engage with the city of Cambridge and help them divest $1 billion from nuclear weapons-producing companies. As we continue our divestment campaign, we’re really passionate about making the information needed for divestment transparent and open. Currently we’re working on a web app that will let you search your mutual fund and see whether or not it has direct or indirect investments in nuclear weapons producers. In doing so, we hope to help not only cities, municipalities, and institutions divest, but also individuals like you and me.

Ariel:  Lucas, this has been great. Thank you so much for sharing information about the work you’ve been doing so far. If anyone has any questions about how they can divest from nuclear weapons, please email Lucas at lucas@futureoflife.org. You can also check out our new web app at futureoflife.org/invest.

[end of recorded material]

Learn more about nuclear weapons in the 21st Century:

What is hair-trigger alert?

How many nuclear weapons are there and who has them?

What are the consequences of nuclear war?

What would the world look like after a U.S. and Russia nuclear war?

How many nukes would it take to make the Earth uninhabitable?

What are the specific effects of nuclear winter?

What can I do to mitigate the risk of nuclear war?

Do we really need so many nuclear weapons on hair-trigger alert?

What sort of new nuclear policy could we adopt?

How can we restructure strategic U.S. nuclear forces?

Podcast: Concrete Problems in AI Safety with Dario Amodei and Seth Baum

Many researchers in the field of artificial intelligence worry about potential short-term consequences of AI development. Yet far fewer want to think about the long-term risks from more advanced AI. Why? To start to answer that question, it helps to have a better understanding of what potential issues we could see with AI as it’s developed over the next 5-10 years. And it helps to better understand the concerns actual researchers have about AI safety, as opposed to fears often brought up in the press.

We brought on Dario Amodei and Seth Baum to discuss just that. Amodei, who now works with OpenAI, was the lead author on the recent, well-received paper Concrete Problems in AI Safety. Baum is the Executive Director of the Global Catastrophic Risk Institute, where much of his research is also on AI safety.

Not in a good spot to listen? You can always read the transcript here.

If you’re still new to or learning about AI, the following terminology might help:

Artificial Intelligence (AI): A machine or program that can learn to perform cognitive tasks similar to those performed by the human brain. Typically, the program, or agent, is expected to be able to interact with the real world in some way without constant supervision from its creator. Microsoft Office is considered a computer program because it will do only what it is programmed to do. Siri is considered by most to be a very low-level AI because it must adapt to its surroundings, respond to a wide variety of users, and understand a wide variety of requests, not all of which can be programmed for in advance. Levels of artificial intelligence fall along a spectrum:

  • Narrow AI: This is an artificial intelligence that can only perform a specific task. Siri can look up anything on a search engine, but it can’t write a book or drive a car. Google’s self-driving cars can drive you where you want to go, but they can’t cook dinner. AlphaGo can beat the world’s best Go player, but it can’t play Monopoly or research cancer. Each of these programs can do the task it’s designed for as well as, or better than, humans, but they don’t come close to the breadth of capabilities humans have.
  • Short-term AI concerns: The recent increase in AI development has many researchers concerned about problems that could arise in the next 5-10 years. Increasing autonomy will impact the job market and potentially income inequality. Biases, such as sexism and racism, have already cropped up in some programs, and people worry this could be exacerbated as AIs become more capable. Many wonder how we can ensure control over systems after they’ve been released to the public, as seen with Microsoft’s problems with its chatbot Tay. Transparency is another issue that’s often brought up: as AIs learn to adapt to their surroundings, they’ll modify their programs for increased efficiency and accuracy, and it will become increasingly difficult to track why an AI took some action. These are some of the more commonly mentioned concerns, but there are many others.
  • Advanced AI and Artificial General Intelligence (AGI): As an AI program expands its capabilities, it will be considered advanced. Once it achieves human-level intelligence in terms of both capabilities and breadth, it will be considered generally intelligent.
  • Long-term AI concerns: Current expectations are that we could start to see more advanced AI systems within the next 10-30 years. For the most part, the concerns for long-term AI are similar to those of short-term AI, except that, as AIs become more advanced, the problems that arise as a result could be more damaging, destructive, and/or devastating.
  • Superintelligence: AI that is smarter than humans in all fields.

Agent: A program, machine, or robot with some level of AI capabilities that can act autonomously in a simulated environment or the real world.
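To make the idea concrete, here is a minimal, entirely hypothetical Python sketch of an agent acting autonomously in a simulated environment. The class and method names are invented for this illustration and not taken from any particular library.

```python
import random

class CorridorEnvironment:
    """A toy, hypothetical simulated world: positions 0..5, with a goal at position 5."""
    def __init__(self):
        self.position = 0

    def observe(self):
        return self.position

    def step(self, action):
        # action is +1 (move right) or -1 (move left), clamped at position 0
        self.position = max(0, self.position + action)
        return self.position, self.position == 5

class RandomAgent:
    """An 'agent' here is just something that maps observations to actions on its own."""
    def act(self, observation):
        return random.choice([-1, +1])

env, agent = CorridorEnvironment(), RandomAgent()
observation, done = env.observe(), False
for _ in range(1000):                      # cap the episode length
    action = agent.act(observation)        # the agent decides without supervision
    observation, done = env.step(action)   # the environment responds
    if done:
        break
print("Finished at position", observation, "- reached goal:", done)
```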

Machine Learning: An area of AI research that focuses on how the agent can learn from its surroundings, experiences, and interactions in order to improve how well it functions and performs its assigned tasks. With machine learning, the AI will adapt to its environment without the need for additional programming. AlphaGo, for example, was not programmed to be better than humans from the start. None of its programmers were good enough at the game of Go to compete with the world’s best. Instead, it was programmed to play lots of games of Go with the intent to win. Each time it won or lost a game, it learned more about how to win in the future.

Training: These are the iterations a machine-learning program must go through in order to learn how to better meet its goal by making adjustments to the program’s settings. In the case of AlphaGo, training involved playing Go over and over.
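To make “learning from experience” and “training iterations” concrete, here is a minimal, hypothetical sketch in Python. The program is never told the rule behind the data (y = 2x + 1); instead, each training iteration nudges its adjustable settings (parameters) to reduce its error. This is a toy illustration of the principle, not a description of how AlphaGo was built.

```python
import numpy as np

# Toy data generated by a rule the program is never told: y = 2x + 1
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0

# The program's adjustable "settings": a slope and an intercept
w, b = 0.0, 0.0
learning_rate = 0.1

# Training: many iterations, each one adjusting the settings to reduce the error
for iteration in range(2000):
    prediction = w * x + b
    error = prediction - y
    w -= learning_rate * (2 * error * x).mean()   # gradient of mean squared error w.r.t. w
    b -= learning_rate * (2 * error).mean()       # gradient of mean squared error w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up close to the true values 2 and 1
```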

Neural Networks (Neural Nets) and Deep Neural Nets: Neural nets are programs that were inspired by the way the central nervous system of animals processes information, especially with regard to pattern recognition. These are important tools within a machine learning algorithm that can help the AI process and learn from the information it receives. Deep neural nets have more layers of complexity.
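A minimal sketch of the structure, assuming only NumPy: each layer multiplies its input by a matrix of weights and passes the result through a simple nonlinearity, and a “deep” net just stacks more such layers. Real networks are then trained rather than left with random weights; this only shows the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(v):
    return np.maximum(0.0, v)   # a common nonlinearity applied between layers

# Two layers of weights, randomly initialized (i.e., untrained)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # layer 1: 4 inputs -> 3 hidden units
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # layer 2: 3 hidden units -> 1 output

def forward(x):
    hidden = relu(x @ W1 + b1)   # layer 1: detect simple patterns in the input
    return hidden @ W2 + b2      # layer 2: combine those patterns into an output

print(forward(np.array([1.0, 0.5, -0.2, 0.3])))
```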

Reinforcement Learning: Similar to training a dog. The agent receives positive or negative feedback for each iteration of its training, so that it can learn which actions it should seek out and which it should avoid.
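To make the dog-training analogy concrete, here is a minimal, hypothetical sketch of tabular Q-learning on a toy five-position corridor: the agent gets a small negative reward for every step and a positive reward for reaching the rightmost position, and over many episodes the values it learns come to favor moving right. The environment and all the numbers are illustrative assumptions, not a standard benchmark.

```python
import numpy as np

n_states, actions = 5, [-1, +1]          # positions 0..4; move left or right
Q = np.zeros((n_states, len(actions)))   # the agent's estimate of each action's value
alpha, gamma, epsilon = 0.5, 0.9, 0.5    # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(200):
    state = 0
    for _ in range(500):                 # cap the episode length
        explore = rng.random() < epsilon
        a = rng.integers(len(actions)) if explore else int(Q[state].argmax())
        next_state = int(np.clip(state + actions[a], 0, n_states - 1))
        # Feedback: small penalty per step, positive reward for reaching the goal
        reward = 1.0 if next_state == n_states - 1 else -0.1
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state = next_state
        if state == n_states - 1:
            break

print(Q.round(2))   # the "move right" column ends up with the higher values
```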

Objective Function: This is the goal of the AI program (it can also include subgoals). Using AlphaGo as an example again, the primary objective function would have been to win the game of Go.
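In code, an objective function is often just a function that assigns a score to outcomes, which the rest of the system then tries to maximize. A deliberately simple, hypothetical example in the spirit of a game-playing agent:

```python
def objective(game_result):
    """Toy objective for a game-playing agent: +1 for a win, -1 for a loss.
    A real system would add subgoals (e.g., an estimated win probability per move),
    but the principle is the same: a number the training process tries to maximize."""
    return 1.0 if game_result == "win" else -1.0

print(objective("win"), objective("loss"))
```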

Terms from the paper, Concrete Problems in AI Safety, that might not be obvious (all are explained in the podcast, as well):

  • Reward Hacking: When the AI system comes up with an undesirable way to achieve its goal or objective function. For example, if you tell a robot to clean up any mess it sees, it might just throw away all messes so it can’t see them anymore (a toy sketch of this failure mode follows this list).
  • Scalable Oversight: Training an agent to solve problems on its own without requiring constant oversight from a human.
  • Safe Exploration: Training an agent to explore its surroundings safely, without injuring itself or others and without triggering some negative outcome that could be difficult to recover from.
  • Robustness to distributional shifts: Training an agent to adapt to new environments and to understand when the environment has changed so it knows to be more cautious.
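The cleaning-robot example above can be made concrete with a toy, hypothetical reward function: if the reward only counts the messes the robot can currently see, then hiding the messes (say, by covering its own camera) scores exactly as well as actually cleaning them up. Everything in this sketch is invented for illustration.

```python
def reward(messes_visible_to_robot):
    """A naively specified reward: fewer visible messes = higher reward."""
    return -len(messes_visible_to_robot)

world_messes = ["spill", "crumbs"]

# Intended behavior: actually clean, so nothing is left to see
print(reward([]))                        # reward 0

# Reward hacking: cover the camera so the messes can no longer be seen
camera_covered = True
visible = [] if camera_covered else world_messes
print(reward(visible))                   # also reward 0: the goal is "achieved" without cleaning
```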

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.