FLI Podcast (Part 2): Anthrax, Agent Orange, and Yellow Rain: Verification Stories with Matthew Meselson and Max Tegmark

In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in arms control, working with the US government to renounce the development and possession of biological weapons and halt the use of Agent Orange and other herbicides in Vietnam. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also towards the mitigation of existential threats.   

Part Two focuses on three major incidents in the history of biological weapons: the 1979 anthrax outbreak in the Soviet city of Sverdlovsk, the use of Agent Orange and other herbicides in Vietnam, and the Yellow Rain controversy of the early 1980s. Dr. Meselson led the investigations into all three and solved some perplexing scientific mysteries along the way.

Topics discussed in this episode include:

  • The value of verification, regardless of the challenges
  • The 1979 Sverdlovsk anthrax outbreak
  • The use of “rainbow” herbicides during the Vietnam War, including Agent Orange
  • The Yellow Rain Controversy

Publications and resources discussed in this episode include:

  • “The Sverdlovsk anthrax outbreak of 1979,” Matthew Meselson, Jeanne Guillemin, Martin Hugh-Jones, Alexander Langmuir, Ilona Popova, Alexis Shelokov, and Olga Yampolskaya, Science, 18 November 1994, Vol. 266, pp. 1202-1208.
  • “Preliminary Report: Herbicide Assessment Commission of the American Association for the Advancement of Science,” Matthew Meselson, A.H. Westing, J.D. Constable, and Robert E. Cook, 30 December 1970, private circulation, 8 pp. Reprinted in the Congressional Record, U.S. Senate, Vol. 118, part 6, 3 March 1972, pp. 6806-6807.
  • “Background Material Relevant to Presentations at the 1970 Annual Meeting of the AAAS,” Herbicide Assessment Commission of the AAAS, with A.H. Westing and J.D. Constable, December 1970, private circulation, 48 pp. Reprinted in the Congressional Record, U.S. Senate, Vol. 118, part 6, 3 March 1972, pp. 6807-6813.
  • “The Yellow Rain Affair: Lessons from a Discredited Allegation,” with Julian Perry Robinson, in Terrorism, War, or Disease?, eds. A.L. Clunan, P.R. Lavoy, and S.B. Martin, Stanford University Press, Stanford, California, 2008, pp. 72-96.
  • “Yellow Rain,” Thomas D. Seeley, Joan W. Nowicke, Matthew Meselson, Jeanne Guillemin, and Pongthep Akratanakul, Scientific American, September 1985, Vol. 253, pp. 128-137.
  • “Antiplant Chemical Warfare in Vietnam,” Matthew Meselson, Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA, 2017.

Click here for Part 1: From DNA to Banning Biological Weapons with Matthew Meselson and Max Tegmark

Four-ship formation on a defoliation spray run. (U.S. Air Force photo)

Ariel: Hi everyone. Ariel Conn here with the Future of Life Institute. And I would like to welcome you to part two of our two-part FLI podcast with special guest Matthew Meselson and special guest/co-host Max Tegmark. You don’t need to have listened to the first episode to follow along with this one, but I do recommend listening to the other episode, as you’ll get to learn about Matthew’s experiment with Franklin Stahl that helped prove Watson and Crick’s theory of DNA and the work he did that directly led to US support for a biological weapons ban. In that episode, Matthew and Max also talk about the value of experiment and theory in science, as well as how to get some of the world’s worst weapons banned. But now, let’s get on with this episode and hear more about some of the verification work that Matthew did over the years to help determine if biological weapons were being used or developed illegally, and the work he did that led to the prohibition of Agent Orange.

Matthew, I’d like to ask about a couple of projects that you were involved in that I think are really closely connected to issues of verification, and those are the Yellow Rain Affair and the Soviet anthrax incident. Could you talk a little bit about what each of those was?

Matthew: Okay, well in 1979, there was a big epidemic of anthrax in the Soviet city of Sverdlovsk, just east of the Ural mountains, at the beginning of Siberia. We learned about this epidemic not immediately but eventually, through refugees and other sources, and the question was, “What caused it?” Anthrax can occur naturally. It’s commonly a disease of bovids, that is, cows or sheep, and when they die of anthrax, the carcass is loaded with the anthrax bacteria, and when the bacteria see oxygen, they become tough spores, which can last in the earth for a long, long time. And then if another bovid comes along and manages to eat something that’s got those spores, it might get anthrax and die, and the meat from these animals that died of anthrax, if eaten, can cause gastrointestinal anthrax, and that can be lethal. So, that’s one form of anthrax. You get it by eating.

Now, another form of anthrax is inhalation anthrax. In this country, there were a few cases of men who worked in leather factories with leather that had come from anthrax-affected animals, usually imported, which had live anthrax spores on the leather that got into the air of the shops where people were working with the leather. Men would breathe this contaminated air and the infection in that case was through the lungs.

The question here was, what kind of anthrax was this: inhalational or gastrointestinal? And because I was by this time known as an expert on biological weapons, the man who was dealing with this issue at the CIA in Langley, Virginia — a wonderful man named Julian Hoptman, a microbiologist by training — asked me if I’d come down and work on this problem at the CIA. He had two daughters who were away at college, and so he had a spare bedroom, so I actually lived with Julian and his wife. And in this way, I was able to talk to Julian night and day, both at the breakfast and dinner table, but also in the office. Of course, we didn’t talk about classified things except in the office.

Now, we knew from the textbooks that the incubation period for inhalation anthrax was thought to be four, five, six, seven days: between the time you inhale it and four or five days later; if you hadn’t come down with it by then, you probably wouldn’t. Well, we knew from classified sources that people were dying of this anthrax over a period of six weeks, April all the way into the middle of May 1979. So, if the incubation period was really that short, you couldn’t explain how it could have been airborne, because a cloud goes by right away. Once it’s gone, you can’t inhale it anymore. So that made the conclusion that it was airborne difficult to reach. You could still say, well, maybe it got stirred up again by people cleaning up the site, maybe the incubation period is longer than we thought, but there was a problem there.

And so the conclusion of our working group was that it was probable that it was airborne. In the CIA, at that time at least, in a conclusion that goes forward to the president, you couldn’t just say, “Well maybe, sort of like, kind of like, maybe if …” Words like that just didn’t work, because the poor president couldn’t make heads or tails of them. Every conclusion had to be called “possible,” “probable,” or “confirmed.” Three levels of confidence.

So, the conclusion here was that it was probable that it was inhalation, and not ingestion. The Soviets said that it was bad meat, but I wasn’t convinced, mainly because of this incubation period thing. So I decided that the best thing to do would be to go and look. Then you might find out what it really was. Maybe by examining the survivors or maybe by talking to people — just somehow, if you got over there, with some kind of good luck, you could figure out what it was. I had no very clear idea, but when I would meet any high level Soviet, I’d say, “Could I come over there and bring some colleagues and we would try to investigate?”

The first time that happened was with a very high-level Soviet who I met in Geneva, Switzerland. He was a member of what’s called the Military Industrial Commission in the Soviet Union. They decided on all technical issues involving the military, and that would have included their biological weapons establishments, and we knew that they had a big biological laboratory in the city of Sverdlovsk, there was no doubt about that. So, I told them, “I want to go in and inspect. I’ll bring some friends. We’d like to look.” And he said, “No problem. Write to me.”

So, I wrote to him, and I also went to the CIA and said, “Look, I’ve got to have a map, because maybe they’d let me go there and take me to the wrong place, and I wouldn’t know it’s the wrong place, and I wouldn’t learn anything.” So, the CIA gave me a map — which turned out to be wrong, by the way — but then I got a letter back from this gentleman saying no, actually they couldn’t let us go because of the shooting down of the Korean jet, KAL 007, if any of you remember that. A Russian fighter plane shot down a Korean jet — a lot of passengers on it, and they all got killed. Relations were tense. So, that didn’t happen.

Then the second time, an American and the Russian Minister of Health got a Nobel prize. The winner over there was the minister of health named Chazov, and the fellow over here was Bernie Lown in our medical school, who I knew. So, I asked Bernie to take a letter when he went next time to see his friend Chazov in Moscow, to ask him if he could please arrange that I could take a team to Sverdlovsk, to go investigate on site. And when Bernie came back from Moscow, I asked him and he said, “Yeah. Chazov says it’s okay, you can go.” So, I sent a telex — we didn’t have email — to Chazov saying, “Here’s the team. We want to go. When can we go?” So, we got back a telex saying, “Well, actually, I’ve sent my right-hand guy who’s in charge of international relations to Sverdlovsk, and he looked around, and there’s really no evidence left. You’d be wasting your time,” which means no, right? So, I telexed back and said, “Well, scientists always make friends and something good always comes from that. We’d like to go to Sverdlovsk anyway,” and I never heard back. And then, the Soviet Union collapses, and we have Yeltsin now, and it’s the Russian Republic.

It turns out that a group of — I guess at that time they were still Soviets — Soviet biologists came to visit our Fort Detrick, and they were the guests of our Academy of Sciences. So, there was a welcoming party, and I was on the welcoming party, and I was assigned to take care of one particular visitor, a man named Yablokov. So, we got to know each other a little bit, and at that time we went to eat crabs in a Baltimore restaurant, and I told him I was very interested in this epidemic in Sverdlovsk, and I guess he took note of that. He went back to Russia, and that was that. Later, I read in a journal of abstracts from the Russian press that the CIA produced that Yeltsin had ordered his minister, or his assistant for Environment and Health, to investigate the anthrax epidemic back in 1979, and the man he appointed to do this investigation for him was my Mr. Yablokov, whom I knew.

So, I sent a telex to Mr. Yablokov saying, “I see that President Yeltsin has asked for you to look into this old epidemic and decide what really happened, and that’s great, I’m glad he did that, and I’d like to come and help you. Could I come and help you?” So, I got back a telex saying, “Well, it’s a long time ago. You can’t bring skeletons out of the closet, and anyway, you’d have to know somebody there.” Basically it was a letter that said no. But then my friend Alex Rich of Cambridge Massachusetts, a great molecular biologist and X-ray crystallographer at MIT, had a party for a visiting Russian. Who is the visiting Russian but a guy named Sverdlov, like Sverdlovsk, and he’s staying with Alex. And Alex’s wife came over to me and said, “Well, he’s a very nice guy. He’d been staying with us for several days. I make him breakfast and lunch. I make the bed. Maybe you could take him for a while.”

So we took him into our house for a while, and I told him that I had been given a turn down by Mr. Yablokov, and this guy whose name is Sverdlov, which is an immense coincidence, said, “Oh, I know Yablokov very well. He’s a pal. I’ll talk to him. I’ll get it fixed so you can go.” Now, I get a letter. In this letter, handwritten by Mr. Yablokov, he said, “Of course, you can go, but you’ve got to know somebody there to invite you.” Oh, who would I know there?

Well, there had been an American physicist, a solid-state physicist named Ellis, who was there on a United States National Academy of Sciences–Russian Academy of Sciences Exchange Agreement doing solid-state physics with a Russian solid-state physicist there in Sverdlovsk. So, I called Don Ellis and I asked him, “That guy who you cooperated with in Sverdlovsk — whose name was Gubanov — I need someone to invite me to go to Sverdlovsk, and you probably still maintain contact with him over there in Sverdlovsk, and you could ask him to invite me.” And Don said, “I don’t have to do that. He’s visiting me today. I’ll just hand him the telephone.”

So, Mr. Gubanov comes on the telephone and he says, “Of course I’ll invite you; my wife and I have always been interested in that epidemic.” So, a few days later, I get a telex from the rector of the university there in Sverdlovsk, who was a mathematical physicist. And he says, “The city is yours. Come on. We’ll give you every assistance you want.” So we went, and I formed a little team, which included a pathologist, thinking maybe we’d get ahold of some information from autopsies that could decide whether it was inhalation or gastrointestinal. And we need someone who speaks Russian; I had a friend who was a virologist who spoke Russian. And we need a guy who knows a lot about anthrax, and veterinarians know a lot about anthrax, so I got a veterinarian. And we need an anthropologist who knows a lot about how to work with people, and that happened to be my wife, Jeanne Guillemin.

So, we all go over there, and we were assigned a solid-state physicist, a guy named Borisov, to take us everywhere. He knew how to fix everything: cars that wouldn’t work, and also the KGB. He was a genius, and he became a good friend. It turns out that he had a girlfriend, and she, by this time, had been elected to be a member of the Duma. In other words, she’s a congresswoman. She’s from Sverdlovsk. She had been a friend of Yeltsin. She had written Yeltsin a letter, which my friend Borisov knew about, and I have a photocopy of the letter. What it says is, “Dear Boris Nikolayevich,” that’s Yeltsin, “My constituents here at Sverdlovsk want to know if that anthrax epidemic was caused by a government activity or not. Because if it was, the families of those who died — they’re entitled to double pension money, just like soldiers killed in war.” So, Yeltsin writes back, “We will look into it.” And that’s why my friend Yablokov got asked to look into it. It was decided eventually that it was the result of government activity — by Yeltsin, he decided that — and so he had to have a list of the people who were going to get the extra pensions. Because otherwise everybody would say, “I’d like to have an extra pension.” So there had to be a list.

So she had this list with 68 names of the people who had died of anthrax during this time period in 1979. The list also had the address where each of them lived. So, now my wife, Jeanne Guillemin, Professor of Anthropology at Boston College, goes door-to-door — with two Russian women who were professors at the university and who knew English, so they could communicate with Jeanne — and knocks on the doors: “We would like to talk to you for a little while. We’re studying health, we’re studying the anthrax epidemic of 1979. We’re from the university.”

Everybody let them in except one lady who said she wasn’t dressed, so she couldn’t let anybody in. So in all the other cases, they did an interview and there were lots of questions. Did the person who died have TB? Was that person a smoker? One of the questions was where did that person work, and did they work in the day or the night? We asked that question because we wanted to make a map. If it had been inhalation anthrax, it had to be windborne, and depending on the wind, it might have been blown in a straight line if the wind was of a more or less unchanging direction.

If, on the other hand, it was gastrointestinal, people get bad meat from black market sellers all over the place, and the map of where they were wouldn’t show anything important; they’d just be all over the place. So, we were able to make a map when we got back home. We went back there a second time to get more interviews done, and Jeanne went back a third time to get even more interviews done. So, finally we had interviews with the families of nearly all of those 68 people, and so we had 68 map locations: where they lived, where they worked, and whether they worked day or night. Nearly all of them were daytime workers.

When we plotted where they lived, they lived all over the southern part of the city of Sverdlovsk. When we plotted where they likely would have been in the daytime, they all fell into one narrow zone, with one point at the military biological lab. The lab was inside the city. The other point was at the city limit: the last case was at the edge of the city limit, in the southern part. We also had meteorological information, which I had brought with me from the United States. We knew the wind direction every three hours, and there was only one day when the wind was constantly blowing in the same direction, and that same direction was exactly the direction along which the people who died of anthrax lived.

Well, bad meat does not blow around in straight lines. Clouds of anthrax spores do. It was rigorous: we could conclude from this, with no doubt whatsoever, that it had been airborne, and we published this in Science magazine. It was really a classic of epidemiology; you couldn’t ask for anything better. Also, the autopsy records were inspected by the pathologist who came with us on the trip, and he concluded from the autopsy specimens that it was inhalation. So, there was that evidence, too, and that was published in PNAS. So, that really ended the mystery. The Soviet explanation was just wrong, and the CIA conclusion, which had only been “probable,” was now confirmed.

Max: Amazing detective story.

Matthew: I liked going out in the field, using whatever science I knew to try and deal with questions of importance to arms control, especially chemical and biological weapons arms control. And that happened to me on three occasions, one I just told you. There were two others.

Ariel: So, actually real quick before you get into that. I just want to mention that we will share or link to that paper and the map. Because I’ve seen the map that shows that straight line, and it is really amazing, thank you.

Matthew: Oh good.

Max: I think at the meta level this is also a wonderful example of what you mentioned earlier, Matthew, about verification. It’s very hard to hide big programs, because it’s so easy for some little thing to go wrong or not go as planned, and then something like this comes out.

Matthew: Exactly. By the way, that’s why having a verification provision in the treaty is worth it even if you never inspect. Let’s say that the guys who are deciding whether or not to do something which is against the treaty, they’re in a room and they’re deciding whether or not to do it. Okay? Now it is prohibited by a treaty that provides for verification. Now they’re trying to make this decision and one guy says, “Let’s do it. They’ll never see it. They’ll never know it.” Another guy says, “Well, there is a provision for verification. They may ask for a challenge inspection.” So, even the remote possibility that, “We might get caught,” might be enough to make that meeting decide, “Let’s not do it.” If it’s not something that’s really essential, then there is a potential big price.

If, on the other hand, there’s not even a treaty that allows the possibility of a challenge inspection, if the guy says, “Well, they might find it,” the other guy is going to say, “How are they going to find it? There’s no provision for them going there. We can just say, if they say, ‘I want to go there,’ we say, ‘We don’t have a treaty for that. Let’s make a treaty, then we can go to your place, too.’” It makes a difference: Even a provision that’s never used is worth having. I’m not saying it’s perfection, but it’s worth having. Anyway, let’s go on to one of these other things. Where do you want me to go?

Ariel: I’d really love to talk about the Agent Orange work that you did. So, I guess if you could start with the Agent Orange research and the other rainbow herbicides research that you were involved in. And then I think it would be nice to follow that up with, sort of another type of verification example, of the Yellow Rain Affair.

Matthew: Okay. The American Association for the Advancement of Science, the biggest organization of science in the United States, became, as the Vietnam War was going on, more and more concerned that the spraying of herbicides in Vietnam might cause ecological or health harm. And so at successive national meetings, there were resolutions to have it looked into. And as a result of one of those resolutions, the AAAS asked a fellow named Fred Tschirley to look into it. Fred was at the Department of Agriculture, but he was one of the people who developed the military use of herbicides. He did a study, and he concluded that there was no great harm; possibly some to the mangrove forests, but even then they would regenerate.

But at the next annual meeting, there were more appeals from the membership, and now they wanted the AAAS to do its own investigation, and the compromise was that they’d do their own study to design an investigation, and they had to have someone to lead that. So, they asked a fellow named John Cantlon, who was provost of Michigan State University, would he do it, and he said yes. And after a couple of weeks, John Cantlon said, “I can’t do this. I’m being pestered by the left and the right and the opponents on all sides and it’s just, I can’t do it. It’s too political.”

So, then they asked me if I would do it. Well, I decided I’d do it. The reason was that I wanted to see the war. Here I’d been very interested in chemical and biological weapons; very interested in war, because that’s the place where chemical and biological weapons come into play. If you don’t know anything about war, you don’t know what you’re talking about. I taught a course at Harvard for over two years on war, but that wasn’t like being there. So, I said I’d do it.

I formed a little group to do it. A guy named Arthur Westing, who had actually worked with herbicides and who was a forester himself and had been in the army in Korea, and I think had a battlefield promotion to captain. Just the right combination of talents. Then we had a chemistry graduate student, a wonderful guy named Bob Baughman. So, to design a study, I decided I couldn’t do it sitting here in Cambridge, Massachusetts. I’d have to go to Vietnam and do a pilot study in order to design a real study. So, we went to Vietnam — by the way, via Paris, because I wanted to meet the Vietcong people, I wanted them to give me a little card we could carry in our boots that would say, if we were captured, “We’re innocent scientists, don’t imprison us.” And we did get such little cards that said that. We were never captured by the Vietcong, but we did have some little cards.

Anyway, we went to Vietnam and we found, to my surprise, that the military assistance command, that is the United States Military in Vietnam, very much wanted to help our investigation. They gave us our own helicopter. That is, they assigned a helicopter and a pilot to me. And anywhere we wanted to go, I’d just call a certain number the night before and then go to Tan Son Nhut Air Base, and there would be a helicopter waiting with a pilot instructed FAD — fly as directed.

So, one of the things we did was to fly over a valley on which herbicides had been sprayed to kill the rice. John Constable, the medical member of our team, and I did two flights of that so we could take a lot of pictures. The man who had designed this mission, a chemical corps officer, Colonel Franz, had requested it and gotten permission through a series of review processes on the finding that it was really an enemy crop production area, not an area of indigenous Montagnard people growing food for their own eating, but rather enemy soldiers growing it for themselves.

So we took a lot of pictures and as we flew, Colonel Franz said, “See down there, there are no houses. There’s no civilian population. It’s just military down there. Also, the rice is being grown on terraces on the hillsides. The Montagnard people don’t do that. They just grow it down in the valley. They don’t practice terracing. And also, the extent of the rice fields down there — that’s all brand new. Fields a few years ago were much, much smaller in area. So, that’s how we know that it’s an enemy crop production area.” And he was a very nice man, and we believed him. And then we got home, and we had our films developed.

Well, we had very good cameras, and although you couldn’t see it from the aircraft, you could certainly see it in the film: the valley was loaded with little grass shacks with yellow roofs — meaning that they had been built recently, because you have to replace the straw roofs every once in a while; if the straw gets too old, it turns black, but if it’s yellow, it means that somebody is living there. And there were hundreds and hundreds of them.

We got from the Food and Agriculture Organization in Rome how much rice you need to stay alive for one year, and what area in hectares of dry rice — because this isn’t paddy rice, it’s dry rice — you’d need to grow that much rice, and we measured the area that was under cultivation from our photographs, and the area was just enough to support that entire population, if we assumed that there were five people who needed to be fed in every one of the houses that we counted.

Also, we were able to get the aerial photography that the French had done in the late 1940s, and it turns out that the rice fields had not expanded. They were exactly the same. So it wasn’t that the military had moved in and made bigger rice fields: they were the same. So, everything that Colonel Franz said was just wrong. I’m sure he believed it, but it was wrong.

So, we made great big color enlargements of our photographs — we took photographs all up and down this valley, 15 kilometers long — and we made one set for Ambassador Bunker, one for General Abrams (Creighton Abrams was the head of our military assistance command), and one set for Secretary of State Rogers, along with a letter saying that this one case that we saw might not be typical, but that in this one case the crop destruction program was achieving the opposite of what was intended. It was denying food to the civilian population and not to the enemy. It was completely mistaken. I think as a result of that — though I have no proof, only the time connection — right after we’d sent the stuff in early November, Ambassador Bunker and General Abrams ordered a new review of the crop destruction program. Was it in response to our photographs and our letter? I don’t know, but I think it was.

The result of that review was a recommendation by Ambassador Bunker and General Abrams to stop the herbicide program immediately. They sent this recommendation back in a top-secret telegram to Washington. Well, the top-secret telegram fell into the hands of the Washington Post, and they published it. Well, now here are the Ambassador and the General on the spot, saying to stop doing something in Vietnam. How on earth can anybody back in Washington gainsay them? Of course, President Nixon had to stop it right away. There’d be no grounds. How could he say, “Well, my guys here in Washington, in spite of what the people on the spot say, tell us we should continue this program”?

So that very day, he announced that the United States would stop all herbicide operations in Vietnam in a rapid and orderly manner. That very day happened to be the day that I, John Constable, and Art Westing were on the stage at the annual meeting of the AAAS in Chicago, reporting on our trip to Vietnam. And the president of the AAAS ran up to me to tell me this news, because it had just come in while I was talking, giving our report. So, that’s how it got stopped, thanks to General Abrams.

By the way, the last day I was in Vietnam, General Abrams had just come back from Japan — he’d had a gallbladder operation, and he was still convalescing. We spent all morning talking with each other. And he asked me at one point, “What about the military utility of the herbicides?” And of course, I said I had no idea whether they had any or not. And he said, “Do you want to know what I think?” I said, “Yes, sir.” He said, “I think it’s shit.” I said, “Well, why are we doing it here?” He said, “You don’t understand anything about this war, young man. I do what I’m ordered to do from Washington. It’s Washington who tells me to use this stuff, and I have to use it, because if those 55-gallon drums of herbicides just sat where they were offloaded on the docks at Da Nang and Saigon, they’d make walls. I couldn’t offload the stuff I need over those walls. So, I do let the chemical corps use this stuff.” He said, “Also, my son, who is a captain up in I Corps, agrees with me about that.”

I wrote something about this recently, which I sent to you, Ariel. I want to be sure my memory was right about the conversation with General Abrams — who, by the way, was a magnificent man. He is the man who broke through at the Battle of the Bulge in World War II. He’s the man about whom General Patton, the great tank general, said, “There’s only one tank officer greater than me, and it’s Abrams.”

Max: Is he the one after whom the Abrams tank is named?

Matthew: Yes, it was named after him. Yes. He had four sons, they all became generals, and I think three of them became four-stars. One of them who did become a four-star is still alive in Washington. He has a consulting company. I called him up and I said, “Am I right, is this what your dad thought and what you thought back then?” He said, “Hell, yes. It’s worse than that.” Anyway, that’s what stopped the herbicides. The program may have stopped anyway; it was dwindling down, no question. Now, whether dioxin and the herbicides have caused many health effects, I just don’t know. There’s an immense literature about this, and it’s nothing I can say we ever studied. If I read all the literature, maybe I’d have an opinion.

I do know that dioxin is very poisonous, and there’s a prelude to this order from President Nixon to stop the use of all herbicides. That’s what caused the United States to stop the use of Agent Orange specifically. That happened first, before I went to Vietnam. That happened for a funny reason. A Harvard student, a Vietnamese boy, came to my office one day with a stack of newspapers from Saigon in Vietnamese. I couldn’t read them, of course, but they all had pictures of deformed babies, and this student claimed that this was because of Agent Orange, that the newspaper said it was because of Agent Orange.

Well, deformed babies are born all the time and I appreciated this coming from him, but there was nothing I could do about it. But then a graduate student here — Bill Haseltine, who has since become a very wealthy man — had a girlfriend who was working for Ralph Nader one summer, and she somehow got a purloined copy of a study that had been ordered by the NIH of the possible teratogenic, mutagenic, and carcinogenic effects of common herbicides, pesticides, and fungicides.

This company, called the Bionetics company, had this huge contract to test all these different compounds, and they concluded that there was only one of these chemicals that did anything that might be dangerous for people. That was 2,4,5-T, 2,4,5-trichlorophenoxyacetic acid. Well, that’s what Agent Orange is made out of. So, I had this report, not yet released to the public, saying that this stuff could cause birth defects in humans if it did the same thing in us as it did in guinea pigs and mice. I thought the White House had better know about this. That’s pretty explosive: claims in the newspapers in Saigon and scientific suggestions that this stuff might cause birth defects.

So, I decided to go down to Washington and see President Nixon’s science advisor. That was Lee DuBridge, a physicist. Lee DuBridge had been the president of Caltech when I was a graduate student there, so he knew me, and I knew him. So, I went down to Washington with some friends, and I think one of the friends was Arthur Galston from Yale. He was a scientist who worked on herbicides, not on the phenoxyacetic herbicides but other herbicides. So we went down to see the President’s science advisor, and I showed him these newspapers and the Bionetics report. He hadn’t seen it; it was at too low a level of government for him to see, and it had not yet been released to the public. Then he did something amazing, Lee DuBridge: he picked up the phone and he called David Packard, who was the number two at the Defense Department. Right then and there, without consulting anybody else, without asking the permission of the President, they canceled Agent Orange.

Max: Wow.

Matthew: That was the end of Agent Orange. Now, not exactly the end. I got a phone call from Lee DuBridge a couple of days later when I was back at Harvard. He says, “Matt, the Dow people have come to me. It’s not Agent Orange itself, it’s an impurity in Agent Orange called dioxin, and they know that dioxin is very toxic, and the Agent Orange that they make has very little dioxin in it, because they know it’s bad and they make the stuff at low temperature, so the dioxin, which is a by-product, is made in a very small amount. These other companies, like Diamond Shamrock and Monsanto, who make Agent Orange for the military, it must be their Agent Orange. It’s not our Agent Orange.”

So, in other words, the question was: if we just use the Dow Agent Orange, maybe that’s safe. But does the Dow Agent Orange cause defects in mice? So, a whole new series of experiments was done with Agent Orange containing much less dioxin in it. It still made birth defects. And since it still made birth defects in one species of rodent, you could hardly say, “Well, it’s okay then for humans.” So, that really locked it, closed it down, and then even the Department of Agriculture prohibited its use in the United States, except on land where it would be unlikely to get into the human food chain. So, that ended the use of Agent Orange.

That had happened already before we went to Vietnam. They were then using only Agent White and Agent Blue, two other herbicides, because Agent Orange had been knocked out ahead of time. So the end of the whole herbicide program came in two parts: the dioxin concern, on the one hand, which stopped Agent Orange; and then the decision of President Nixon, after Bunker and Abrams had said, militarily, “It’s no use, we want to get it stopped, it’s doing more harm than good. It’s getting the civilian population against us.”

Max: One reaction I have to these fascinating stories is how amazing it is that back in those days politicians really trusted scientists. You could go down to Washington, and there would be a science advisor. You know, we didn’t even have a presidential science advisor for a while during this administration. Do you feel that the climate has changed somehow in the way politicians view scientists?

Matthew: Well, I don’t have a big broad view of the whole thing. I just get the impression, like you do, that there are more politicians who don’t pay attention to science than there used to be. There are still some, but not as many, and not in the White House.

Max: I would say we shouldn’t particularly just point fingers at any particular administration, I think there has been a general downward trend for people’s respect for scientists overall. If you go back to when you were born, Matthew, and when I was born, I think generally people thought a lot more highly about scientists contributing very valuable things to society and they were very interested in them. I think right now there are much more people who can name — If you ask the average person how many famous movie stars can they name, or how many billionaires can they name, versus how many Nobel laureates can they name, the answer is going to be kind of different from the way it was a long time ago. It’s very interesting to think about what we can do to more help people appreciate the things that they do care about, like living longer and having technology and so on, are things that they, to a large extent, owe to science. It isn’t just the nerdy stuff that isn’t relevant to them.

Matthew: Well, I think movie stars were always at the top of the list. Way ahead of Nobel Prize winners and even of billionaires, but you’re certainly right.

Max: The second thing that really strikes me, which you did so wonderfully there, is that you never antagonized the politicians and the military, but rather went to them in a very constructive spirit and said look, here are the options. And based on the evidence, they came to your conclusion.

Matthew: That’s right. Except for the people who actually were doing these programs — that was different, you couldn’t very well tell them that. But for everybody else, yes, it was a help. You need to offer help, not hindrance.

The last thing was the Yellow Rain. That, too, involved the CIA. I was contacted by the CIA. They had become aware of reports from Southeast Asia, particularly from Thailand, of Hmong tribespeople who had been living in Laos coming out across the Mekong into Thailand and telling stories of being poisoned by stuff dropped from airplanes, stuff that they called kemi, or yellow rain.

At first, I thought maybe there was something to this; there are some nasty chemicals that are yellow. Not that lethal, but who knows, maybe there was exaggeration in their stories. One of them is called adamsite; it’s yellow, it’s an arsenical. So we decided we’d have a conference, because there was a mystery: what is this yellow rain? We had a conference. We invited people from the intelligence community, from the state department. We invited anthropologists. We invited a bunch of people to ask, what is this yellow rain?

By this time, we knew that the samples that had been turned in contained pollen. One reason we knew that was that the British had samples of this yellow rain, and they had shown that it contained pollen. Samples of the yellow rain brought in by the Hmong tribespeople had been given to British officers — or maybe Americans, I don’t know — but they found their way into the hands of British intelligence, who brought them back to Porton, where they were examined in various ways, including under the microscope. And the fellow who looked at them under the microscope happened to be a beekeeper. He knew just what pollen grains look like, and he knew that there was pollen. Then they sent this information to the United States, and we looked at the samples of yellow rain we had, and all these yellow samples contained pollen.

The question was, what is it? It’s got pollen in it. Maybe it’s very poisonous. The Hmong people say it falls from the sky. It lands on leaves and on rocks. The spots were about two millimeters in diameter. It’s yellow or brown or red, different colors. What is it? So, we had this meeting in Cambridge, and one of the people there, Peter Ashton, is a great botanist; his specialty is the trees of Southeast Asia, and in particular the great dipterocarp trees, which are like the oaks in our part of the world. He was interested in the fertilization of these dipterocarps, and the fertilization is done by bees. They collect pollen, too, like other bees.

And so the hypothesis we came to at the end of this day-long meeting was that maybe this stuff is poisonous, and the bees get poisoned by it because it falls on everything, including flowers that have pollen, and the bees get sick, and these yellow spots, they’re the vomit of the bees. What an individual bee brings up is smaller than the yellow spots, but maybe several bees get together and vomit on the same spot. Really a crazy idea. Nevertheless, it was the best idea we could come up with that explained why something could be toxic but have pollen in it. It could be little drops, associated with bees, and so on.

A couple of days later, both Peter Ashton, the botanist, and I noticed on the backs of our cars, on the rear windshields, yellow spots loaded with pollen. These were being dropped by bees; they were the natural droppings of bees, and that gave us the idea that maybe there was nothing poisonous in this stuff. Maybe it was just the natural droppings of bees, which the people in the villages thought was poisonous but which wasn’t. So, we decided we had better go to Thailand and find out what was happening.

So, a great bee biologist named Thomas Seeley, who’s now at Cornell — he was at Yale at that time — and I flew over to Thailand, and went up into the forest to see if bees defecate in showers. Now why did we do that? It’s because friends here said, “Matt, this can’t be the source of the yellow rain that the Hmong people complained about, because bees defecate one by one. They don’t go out in a great armada of bees and defecate all at once. Each bee goes out and defecates by itself. So, you can’t explain the showers — they’d only get tiny little driblets, and the Hmong people say they’re real showers, with lots of drops falling all at once.”

So, Tom Seeley and I went to Thailand, where they also have this kind of bee, and it turns out that there they defecate all at once, unlike the bees here. Now, bees here do defecate in showers too, but they’re small showers. That’s because the number of bees in a nest here is rather small; they come out on the first warm days of spring, when there’s pollen and nectar to be harvested, but those showers are kind of small. Besides that, the reason there are showers at all, even in New England, is that the bees are synchronized by winter. Winter forces them to stay in their nest all winter long, during which they’re eating the stored-up pollen and getting very constipated. Then, when they fly out, they all fly out, they’re all constipated, and so you get a big shower. Not as big as what the people in Southeast Asia reported, but still a shower.

But in Southeast Asia, there are no seasons. Too near the equator. So, there’s nothing that would synchronize the defecation of bees, and that’s why we had to go to Thailand to see if — even though there’s no winter to synchronize their defecation flights — they nevertheless do go out in huge numbers and all at once.

So, we’re in Thailand and we go up into the Khao Yai National Park and find places where there are clearings in the forests where you could see up into the sky, where if there were bees defecating their feces would fall to the ground, not get caught up in the trees. And we put down big pieces, one meter square, of white paper, and anchored them with rocks, and went walking around in the forest some more, and come back and look at our pieces of white paper every once in a while.

And then suddenly we saw a large number of spots on the paper, which meant that they had defecated all at once. They weren’t going around defecating one by one by one. There were great showers then. That’s still a question: why don’t they go out one by one? There are some good ideas why; I won’t drag you into that. It’s the convoy principle, to avoid getting picked off one by one by birds. That’s why people think that they go out in great armadas of constipated bees.

So, this gave us a new hypothesis. The so-called yellow rain is all a mistake. It’s just bees defecating, which people confuse and think is poisonous. Now, that still doesn’t prove that there wasn’t a poison. What was the evidence for poison? The evidence was that the Defense Intelligence Agency was sending samples of this yellow rain, and also samples of human blood and other materials, to a laboratory in Minnesota that knew how to analyze for the particular toxins that the Defense establishment thought were the poison. They’re called trichothecene mycotoxins; there’s a whole family of them. And this lab reported positive findings in the samples from Thailand but not in controls. So that seemed to be real proof that there was poison.

Well, this lab is a lab that also produced trichothecene mycotoxins, and the way they analyzed for them was by mass spectroscopy, and everybody knows that if you’re going to do mass spectroscopy, you’re going to be able to detect very, very, very tiny amounts of stuff, and so you shouldn’t both make large quantities and try to detect small quantities in the same room, because there’s the possibility of cross-contamination. I have an internal report from the Defense Intelligence Agency saying that that laboratory did have numerous false positives, and that probably all of their results were bedeviled by contamination from the trichothecenes that were in the lab, and also by some false readings of the mass spec diagrams.

The long and short of it is that when other laboratories tried to find trichothecenes in their samples, they couldn’t: the US Army looked at at least 80 samples and found nothing. The British looked at at least 60 samples and found nothing. The Swedes looked at some number of samples, I don’t know the number, but found nothing. The French looked at a very few samples at their military analytical lab, and they found nothing. No lab could confirm it. There was one lab at Rutgers that thought it could confirm it, but I believe they were suffering from contamination also, because they were a lab that worked with trichothecenes as well.

So, the long and short of it is that the chemical evidence was no good, and finally the ambassador there, Ambassador Dean, decided that we should have another look, and that the military should send out a team that was properly equipped to check up on these stories, because up until then there was no dedicated team. There were teams that would come out briefly, listen to the refugees’ stories, collect samples, and go back. So Ambassador Dean requested a team that would stay there. So out comes a team from Washington, and it stays there longer than a year. Not just a week, but longer than a year, and they tried to find the Hmong people in the refugee camps who had told these stories.

They couldn’t find a single one who would tell the same story twice. Either because they weren’t telling the same story twice, or because the interpreter interpreted the same story differently. So, whatever it was. Then they did something else. They tried to find people who were in the same location at the same time as was claimed there was such attacks, and those people never confirmed the attack. They could never find any confirmation by interrogation of people.

Then also, there was a CIA unit out there in that theater questioning captured prisoners of war and also people who had surrendered from the North Vietnamese army: the people who were presumably behind the use of this toxic stuff. They interrogated hundreds of people, and one of these interrogators wrote an article in an intelligence agency journal, but an open journal, saying that he doubted there was anything to the yellow rain: they had interrogated so many people, including chemical corps people from the North Vietnamese Army, that he couldn’t believe there really was anything going on.

So we did some more investigating, not just going to Thailand but also analyzing the samples themselves in various ways. We looked at the samples — we found bee hairs in them. We found that the pollen in the samples of the alleged poison had no protein inside. You can stain pollen grains with something called Coomassie brilliant blue, and these pollen grains that were in the samples handed in by the refugees, that were given to us by the army and by the Canadians and the Australians, didn’t stain blue. Why not? Because if a pollen grain passes through the gut of a bee, the bee digests out all of the good protein that’s inside the pollen grain, as its nutrition.

So, you’d have to believe that the Soviets were collecting pollen not from plants, which is hard enough, but had been regurgitated by bees. Well, that’s insane. You could never get enough to be a weapon by collecting bee vomit. So the whole story collapsed, and we’ve written a longer account of this. The United States government has never said we were right, but a few years ago said that maybe they were wrong. So that’s at least something.

So in one case we were right, and the Soviets were wrong. In another case, we were wrong, and the Soviets were right. And in the third case, the herbicides, nobody was right or wrong. It was just that, in my view, by the way, they were useless militarily. I’ll tell you why.

If you spray the deep forest, hoping to find a military installation that you can now see because there are no more leaves, it takes four or five weeks for the leaves to fall off. So, you might as well drop little courtesy cards that say, “Dear enemy. We have now sprayed where you are with herbicide. In four or five weeks we will see you. You may choose to stay there, in which case, we will shoot you. Or, you have four or five weeks to move somewhere else, in which case, we won’t be able to find you. You decide.” Well, come on, what kind of a brain came up with that?

The other use was along roadsides, for convoys to be safer from snipers who might be hidden in the woods. You knock the leaves off the trees and you can see deeper into the woods. That’s right, but you have to realize the fundamental law of physics, which is that if you can see from A to B, B can see back to A, right? If there’s a clear light path from one point to another, there’s a clear light path in the other direction.

Now think about it. You are a sniper in the woods, and the leaves have not been sprayed. They grow right up to the edge of the forest, and a convoy is coming down the road. You can stick your head out a little bit, but not for very long. They have long-range weapons; when they’re right opposite you, they have huge firepower. If you’re anywhere nearby, you could get killed.

Now, if we get rid of all the leaves, now I can stand way back in the forest and still sight you between the trunks. Now, that’s a different matter. A very slight move on my part determines how far up the road and down the road I can see. By just a slight movement of my eye and my gun, I can start putting you under fire a couple of kilometers up the road — you won’t even know where it’s coming from. And I can keep you under fire a few kilometers down the road, when you pass me by. And you don’t know where I am anymore. I’m not right up by the roadside, because the leaves would otherwise keep me from seeing anything. I’m back in there somewhere. You can pour all kinds of fire, but you might not hit me.

So, for all these reasons, the leaves are not the enemy. The leaves are the enemy of the enemy, not of us. We’d like to get rid of the trunks — that’s different, we do that with bulldozers. But getting rid of the leaves leaves a kind of terrain which is advantageous to the enemy, not to us. So, on all these grounds, my hunch is that by embittering the civilian population — and after all, our whole strategy was to win the hearts and minds — by wiping out their crops with drifting herbicide, the herbicides helped us lose the war, not win it. We didn’t win it. But they helped us lose it.

But anyway, the herbicides got stopped in two steps. First Agent Orange, because of dioxin and the report from the Bionetics Company, and second because Abrams and Bunker said, “Stop it.” We now have a treaty, by the way, the ENMOD treaty, that makes it illegal under international law to do any kind of large-scale environmental modification as a weapon of war. So, that’s about everything I know.

And I should add: you might ask, how could they interpret something that’s common in that region as a poison? Well, in China, in 1970 I believe it was, the same sort of thing happened, but the situation was very different. People believed that the yellow spots falling from the sky were fallout from nuclear weapons tests being conducted by the Soviet Union, and that they were poisonous.

Well, the Chinese government asked a geologist from a nearby university to go investigate, and he figured out — completely out of touch with us, he had never heard of us, we had never heard of him — that it was bee feces that were being misinterpreted by the villagers as fallout from nuclear weapons tests done by the Russians.

It was exactly the same situation, except that in this case there was no reason whatsoever to believe that there was anything toxic there. And why was it that people didn’t recognize bee droppings for what they were? After all, there’s lots of bees out there. There are lots of bees here, too. And if in April, or near that part of spring, you look at the rear windshield of your car, if you’ve been out in the countryside or even here in midtown, you will see lots of these spots, and that’s what those spots are.

When I was trying to find out what kinds of pollen were in the samples of the yellow rain — the so-called yellow rain — that we had, I went down to Washington. The greatest United States expert on pollen grains and where they come from was at the Smithsonian Institution, a woman named Joan Nowicke. I told her that bees make spots like this all the time and she said, “Nonsense. I never see it.” I said, “Where do you park your car?” Well, there’s a big parking lot by the Smithsonian; we went down there, and her rear windshield was covered with these things. We see them all the time. They’re part of what we see, but we don’t take any account of them.

Here at Harvard there’s a funny story about that. One of our best scientists here, Ed Wilson, studies ants — but also bees — but mostly ants. But he knows a lot about bees. Well, he has an office in the museum building, and lots of people come to visit the museum at Harvard, a great museum, and there’s a parking lot for them. Now there’s a graduate student who has, in those days, bee nests up on top of the museum building. He’s doing some experiments with bees. But these bees defecate, of course. And some of the nice people who come to see Harvard Museum park their cars there and some of them are very nice new cars, and they come back out from seeing the museum and there’s this stuff on their windshields. So, they go to find out who is it that they can blame for this and maybe do something about it or pay them get it fixed or I don’t know what — anyway, to make a complaint. So, they come to Ed Wilson’s office.

Well, this graduate student is a graduate student of Ed Wilson’s, and of course, Wilson knows that he’s got bee nests up there, and so Ed Wilson’s secretary knows what this stuff is. And the graduate student has the job of taking a rag with alcohol on it and going down and gently wiping the bee feces off the windshields of these distressed drivers, so there’s never any harm done. But now, when I had some of this stuff that I’d collected in Thailand, I took two people to lunch at the faculty club here at Harvard and brought along some leaves with these spots on them under a plastic petri dish, just to see if they would know.

Now, one of these guys, Carroll Williams, knew all about insects, lots of things about insects, and Wilson too, of course. We were having lunch, and I brought out this petri dish with the leaves covered with yellow spots and asked them, two professors who are great experts on insects, what the stuff was, and they hadn’t the vaguest idea. They didn’t know. So, there can be things around us that we see every day, and even if we’re experts, we don’t know what they are. We don’t notice them. They’re just part of the environment. I’m sure that these Hmong people were getting shot at, they were getting napalmed, they were getting everything else, but they were not getting poisoned. At least not by bee feces. It was all a big mistake.

Max: Thank you so much, both for this fascinating conversation and all the amazing things you’d done to keep science a force for good in the world.

Ariel: Yes. This has been a really, really great and informative discussion, and I have loved learning about the work that you’ve done, Matthew. So, Matthew and Max, thank you so much for joining the podcast.

Max: Well, thank you.

Matthew: I enjoyed it. I’m sure I enjoyed it more than you did.

Ariel: No, this was great. It’s truly been an honor getting to talk with you.

If you’ve enjoyed this interview, let us know! Please like it, share it, or even leave a good review. I’ll be back again next month with more interviews with experts.  

 

FLI Podcast (Part 1): From DNA to Banning Biological Weapons With Matthew Meselson and Max Tegmark

In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in arms control, working with the US government to renounce the development and possession of biological weapons and halt the use of Agent Orange and other herbicides in Vietnam. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also towards the mitigation of existential threats.  

In Part One, Dr. Meselson describes how he designed the experiment that helped prove Watson and Crick’s hypothesis, and he explains why this type of research is uniquely valuable to the scientific community. He also recounts his introduction to biological weapons, his reasons for opposing them, and the efforts he undertook to get them banned. Dr. Meselson was a key force behind the U.S. ratification of the Geneva Protocol, a 1925 treaty banning biological warfare, as well as the conception and implementation of the Biological Weapons Convention, the international treaty that bans biological and toxin weapons.

Topics discussed in this episode include:

  • Watson and Crick’s double helix hypothesis
  • The value of theoretical vs. experimental science
  • Biological weapons and the U.S. biological weapons program
  • The Biological Weapons Convention
  • The value of verification
  • Future considerations for biotechnology

Publications and resources discussed in this episode include:

Click here for Part 2: Anthrax, Agent Orange, and Yellow Rain: Verification Stories with Matthew Meselson and Max Tegmark

Ariel: Hi everyone and welcome to the FLI podcast. I’m your host, Ariel Conn with the Future of Life Institute, and I am super psyched to present a very special two-part podcast this month. Joining me as both a guest and something of a co-host is FLI president and MIT physicist Max Tegmark. And he’s joining me for these two episodes because we’re both very excited and honored to be speaking with Dr. Matthew Meselson. Matthew not only helped prove Watson and Crick’s hypothesis about the structure of DNA in the 1950s, but he was also instrumental in getting the U.S. to ratify the Geneva Protocol, in getting the U.S. to halt its Agent Orange Program, and in the creation of the Biological Weapons Convention. He is currently Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University where, among other things, he studies the role of sexual reproduction in evolution. Matthew and Max, thank you so much for joining us today.

Matthew: A pleasure.

Max: Pleasure.

Ariel: Matthew, you’ve done so much and I want to make sure we can cover everything, so let’s just dive right in. And maybe let’s start first with your work on DNA.

Matthew: Well, let’s start with my being a graduate student at Caltech.

Ariel: Okay.

Matthew: I had been a freshman at Caltech but I didn’t like it. The teaching at that time was by rote except for one course, which was Linus Pauling’s course, General Chemistry. I took that course and I did a little research project for Linus, but I decided to go to graduate school much later at the University of Chicago because there was a program there called Mathematical Biophysics. In those days, before the structure of DNA was known, what could a young man do who liked chemistry and physics but wanted to find out how you could put together the atoms of the periodic chart and make something that’s alive?

There was a unit there called Mathematical Biophysics and the head of it was a man with a great big black beard, and that all seemed very attractive to a kid. So, I decided to go there but because of my freshman year at Caltech I got to know Linus’ daughter, Linda Pauling, and she invited me to a swimming pool party at their house in Sierra Madre. So, I’m in the water. It’s a beautiful sunny day in California, and the world’s greatest chemist comes out wearing a tie and a vest and looks down at me in the water like some kind of insect and says, “Well, Matt, what are you going to do next summer?”

I looked up and I said, “I’m going to the University of Chicago, to Nicolas Rashevsky” (that’s the man with the black beard). And Linus looked down at me and said, “But Matt, that’s a lot of baloney. Why don’t you come be my graduate student?” So, I looked up and said, “Okay.” That’s how I got into graduate school. I started out in X-ray crystallography, a project that Linus gave me to do. One day, Jacques Monod from the Institut Pasteur in Paris came to give a lecture at Caltech, and the question then was about the enzyme beta-galactosidase, a very important enzyme because studies of its induction led to the hypothesis of messenger RNA and to the understanding of how genes are turned on and off. A very important protein for those purposes.

The question of Monod’s lecture was: is this protein already lurking inside of cells in some inactive form? In that case, when you add the chemical that causes it to be produced, which is lactose (or something like lactose), you just put a little finishing touch on the protein that’s lurking inside the cells, and this gives you the impression that the addition of lactose (or something like lactose) induces the appearance of the enzyme itself. Or the alternative: maybe the addition of lactose (or something like lactose) to the growing medium causes de novo production, a synthesis of the new protein, the enzyme beta-galactosidase. So, he had to choose between these two hypotheses. And he proposed an experiment for doing it — I won’t go into detail — which was absolutely horrible and would certainly not have worked, even though Jacques was a very great biologist.

I had been taking Linus’ course on the nature of the chemical bond, and one of the key take-home problems was: calculate the ratio of the strength of the deuterium bond to the hydrogen bond. I found out that you could do that in one line based on what’s called the quantum mechanical zero-point energy. That impressed me so much that I got interested in what else deuterium might have about it that would be interesting. Deuterium is heavy hydrogen, with a neutron added to the nucleus. So, I thought: what would happen if you exchanged the water in something alive for heavy water? And I read that there was a man who tried to do that with a mouse, but that didn’t work. The mouse died. Maybe because the water wasn’t pure, I don’t know.
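For readers curious what that one-line estimate looks like, here is a minimal sketch of the zero-point-energy argument: modeling the bond as a harmonic oscillator, deuterium's larger mass roughly doubles the reduced mass, lowers the vibrational zero-point energy, and so leaves a slightly deeper well to climb out of. The stretching frequency and well depth used below are assumed, representative values (roughly a C–H stretch), not figures from the episode.

```python
import math

# Illustrative estimate of why an X-D bond is slightly stronger than an X-H bond,
# in the spirit of the zero-point-energy argument described above.
# The numbers below are assumed, representative values, not data from the episode.

CM1_TO_EV = 1.2398e-4   # 1 cm^-1 expressed in electron volts
omega_H = 2900.0        # assumed X-H stretching frequency, cm^-1
D_e = 4.8               # assumed electronic well depth, eV (isotope-independent)

zpe_H = 0.5 * omega_H * CM1_TO_EV   # zero-point energy of the X-H oscillator
zpe_D = zpe_H / math.sqrt(2.0)      # reduced mass roughly doubles, so the frequency drops by sqrt(2)

D0_H = D_e - zpe_H                  # dissociation energy measured from the lowest vibrational level
D0_D = D_e - zpe_D

print(f"D0(X-H) = {D0_H:.3f} eV, D0(X-D) = {D0_D:.3f} eV, ratio = {D0_D / D0_H:.4f}")
```

With these assumed numbers the deuterated bond comes out roughly one percent stronger, which is the kind of small but real difference the take-home problem was getting at.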

But I had found a paper showing that you could grow bacteria, Escherichia coli, in pure heavy water, with other nutrients added but no light water. So I knew that you could probably make DNA, and also beta-galactosidase, a little heavier by having it be made with heavy hydrogen rather than light. There are some intermediate details here, but at some point I decided to go see the famous biophysicist Max Delbrück. I was in the Chemistry Department and Max was in the Biology Department.

And there was, at that time, a certain — I would say not a barrier, but a three-foot fence — between these two departments. Chemists looked down on the biologists because they worked just with squiggly, gooey things. Then the physicists naturally looked down on the chemists, and the mathematicians looked down on the physicists. At least that was the impression of us graduate students. So, I was somewhat fearful about going to meet Max Delbrück, and he also had a fearsome reputation for not tolerating any kind of nonsense. But finally I went to see him — he was a lovely man actually — and the first thing he said when I sat down was, “What do you think about these two new papers of Watson and Crick?” I said I’d never heard of them. Well, he jumped out of his chair, grabbed a heap of reprints that Jim Watson had sent to him, threw them all at me, and yelled, “Read these and don’t come back until you’ve read them.”

Well, I heard the words “come back.” So I read the papers and I went back, and he explained to me that there was a problem with the hypothesis that Jim and Francis had for DNA replication. The idea of theirs was that the two strands come apart by unwinding the double helix. And if that meant that you had to unwind the entire parent double helix along its whole length, the viscous drag would have been impossible to deal with. You couldn’t drive it with any kind of reasonable biological motor.

So Max thought that you don’t actually unwind the whole thing: you make breaks, and then with little pieces you can unwind those and then seal them up. This gives you a kind of dispersive replication, in which each of the two daughter molecules has some pieces of the parent molecule but no complete strand from the parent molecule. Well, when he told me that, I realized — I think almost immediately — that density separation would be a way to test this, because the Watson and Crick hypothesis predicted the finding of half-heavy DNA after one generation: that is, one old strand together with one new strand forming each new duplex of DNA.

So I went to Linus Pauling and said, “I’d like to do that experiment,” and he gently said, “Finish your X-ray crystallography.” So, I didn’t do that experiment then. Instead I went to Woods Hole to be a teaching assistant in the Physiology course with Jim Watson. Jim had been living at Caltech that year in the faculty club, the Athenaeum, and so had I, so I had gotten to know Jim pretty well then. So there I was at Woods Hole, and I was not really a teaching assistant — I was actually doing an experiment that Jim wanted me to do — but I was meeting with the instructors.

One day we were on the second floor of the Lily building and Jim looked out the window and pointed down across the street. Sitting on the grass was a fellow, and Jim said, “That guy thinks he’s pretty smart. His name is Frank Stahl. Let’s give him a really tough experiment to do all by himself.” The Hershey–Chase Experiment. Well, I knew what that experiment was, and I didn’t think you could do it in one day, let alone just single-handedly. So I went downstairs to tell this poor Frank Stahl guy that they were going to give him a tough assignment.

I told him about that, and I asked him what he was doing. And he was doing something very interesting with bacteriophages. He asked me what I was doing, and I told him that I was thinking of finding out if DNA replicates semi-conservatively the way Watson and Crick said it should, by a method that would have something to do with density measurements in a centrifuge. I had no clear idea how to do that; just something about growing cells in heavy water, then switching them to light water, and seeing what kind of DNA molecules they made in a density gradient in a centrifuge. And Frank made some good suggestions, and we decided to do this together at Caltech, because he was coming to Caltech himself to be a postdoc that very next September.

Anyway, to make a long story short, we made the experiment work, and we published it in 1958. That experiment said that DNA is made up of two subunits, and when it replicates the subunits come apart and each one becomes associated with a new subunit. Now, anybody in his right mind would have said, “By subunit you really mean a single polynucleotide chain. Isn’t that what you mean?” And we would have answered at that time, “Yes, of course, that’s what we mean, but we don’t want to say that because our experiment doesn’t say that. Our experiment says that some kind of subunits do that — the subunits almost certainly are the single polynucleotide chains — but we want to confine our written paper to only what can be deduced from the experiment itself, and not go one inch beyond that.” It was later that a fellow named John Cairns proved that the subunits were really the single polynucleotide chains of DNA.
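As a minimal sketch of the prediction being tested, assuming fully heavy-labeled cells are switched to light medium and the population doubles synchronously, semiconservative replication implies a very specific band pattern in a density gradient: all hybrid (half-heavy) DNA after one generation, then hybrid and light bands in equal amounts after two, with the hybrid fraction halving each generation thereafter.

```python
from fractions import Fraction

def semiconservative_band_fractions(generations: int):
    """Expected fractions of heavy/heavy, hybrid (heavy/light), and light/light duplexes
    after transferring fully heavy-labeled cells to light medium, assuming Watson-Crick
    semiconservative replication and synchronous doubling (an idealization)."""
    results = []
    for n in range(generations + 1):
        total = 2 ** n                   # duplexes descended from each original duplex
        if n == 0:
            hh, hl = Fraction(1), Fraction(0)
        else:
            hh = Fraction(0)             # the two parental strands never reunite
            hl = Fraction(2, total)      # each original strand now sits in one hybrid duplex
        ll = 1 - hh - hl
        results.append((n, hh, hl, ll))
    return results

for n, hh, hl, ll in semiconservative_band_fractions(4):
    print(f"generation {n}: heavy/heavy={hh}, hybrid={hl}, light/light={ll}")
```

Running this gives all-hybrid DNA at generation one and a 50/50 split of hybrid and light DNA at generation two, the pattern the 1958 bands showed; a dispersive scheme would instead give intermediate densities that keep shifting toward light with every generation.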

Ariel: So just to clarify, those were the strands of DNA that Watson and Crick had predicted, is that correct?

Matthew: Yes, it’s the result that they would have predicted, exactly so. We did a bunch of other experiments at Caltech, some on mutagenesis and other things, but this experiment, I would say, had a big psychological value. Maybe its psychological value was more than anything else.

In 1954, the year after Watson and Crick had published the structure of DNA and their speculations as to its biological meaning, I was at Woods Hole, as I mentioned, and Jim was there and Francis was there. Rosalind Franklin was there. Sydney Brenner was there. It was very interesting, because a good number of people there didn’t believe their structure for DNA, or that it had anything to do with life and genes, on the grounds that it was too simple and life had to be very complicated. And the other group of people thought it was too simple to be wrong.

So, two views: everyone agreed that the structure that they had proposed was a simple one. Some people thought simplicity meant truth, and others thought that in biology, truth had to be complicated. What I’m trying to get at here is that after the structure was published, it was just a hypothesis. It hadn’t been proven by, for example, crystallography; it wasn’t until much later that crystallography and a certain other kind of experiment actually proved that the Watson and Crick structure was right. At that time, it was a proposal based on model building.

So why was our experiment, the experiment showing semi-conservative replication, of psychological value? It was because this was the first time you could actually see something: namely, bands in an ultracentrifuge gradient. So, I think the effect of our experiment in 1958 was to give the DNA structure proposal of 1953 a certain reality. Jim, in his book The Double Helix, actually says that he was greatly relieved when that came along. I’m sure he believed the structure was right all the time, but this certainly was a big leap forward in convincing people.

Ariel: I’d like to pull Max into this just a little bit and then we’ll get back to your story. But I’m really interested in this idea of the psychological value of science. Sort of very, very broadly, do you think a lot of experiments actually come down to more psychological value, or was your experiment unique in that way? I thought that was just a really interesting idea. And I think it would be interesting to hear both of your thoughts on this.

Matthew: Max, where are you?

Max: Oh, I’m just fascinated by what you’ve been telling us about here. I think of course, the sciences — we see again and again that experiments without theory and theory without experiments, neither of them would be anywhere near as amazing as when you have both. Because when there’s a really radical new idea put forth, half the time people at the time will dismiss it and say, “Oh, that’s obviously wrong,” or whatnot. And only when the experiment comes along do people start taking it seriously and vice versa. Sometimes a lot of theoretical ideas are just widely held as truths — like Aristotle’s idea of how the laws of motion should be — until somebody much later decides to put it to the experimental test.

Matthew: That’s right. In fact, Sir Arthur Eddington is famous for two things. He was one of the first ones to find experimental proof of the accuracy of Einstein’s theory of general relativity, and the other thing for which Eddington was famous was having said, “No experiment should be believed until supported by theory.”

Max: Yeah. Theorists and experiments have had this love-hate relationship throughout the ages, which I think, in the end, has been a very fruitful relationship.

Matthew: Yeah. In cosmology the amazing thing to me is that the experiments now cost billions, or at least hundreds of millions, of dollars. And this is one area, maybe the only one, in which politicians are willing to spend a lot of money for something that’s so beautiful and theoretical and far off and scientifically fundamental as cosmology.

Max: Yeah. Cosmology is also a reminder again of the importance of experiment, because the big questions there — such as where did everything come from, how big is our universe, and so on — those questions have been pondered by philosophers and deep thinkers for as long as people have walked the earth. But for most of those eons all you could do was speculate with your friends over some beer about this, and then you could go home, because there was no further progress to be made, right?

It was only more recently that experiments gave us humans better eyes: with telescopes, et cetera, we could start to see things that our ancestors couldn’t see, and with this experimental knowledge actually start to answer a lot of these questions. When I was a grad student, we argued about whether our universe was 10 billion years old or 20 billion years old. Now we argue about whether it’s 13.7 or 13.8 billion years old. You know why? Experiment.

Matthew: And now is a more exciting time than any previous time, I think, because we’re beginning to talk about things like multi-universes and entanglement, things that are just astonishing and really almost foreign to the way that we’re able to think — that there’s other universes, or that there could be what’s called quantum mechanical entanglement: that things influence each other very far apart, so far apart that light could not travel between them in any reasonable time, but by a completely weird process, which Einstein called spooky action at a distance. Anyway, this is an incredibly exciting time about which I know nothing except from podcasts and programs like this one.

Max: Thank you for bringing this up, because I think the examples you gave right now actually are really, really linked to these breakthroughs in biology that you were telling us about, because I think we’ve been on this intellectual journey all along where we humans kept underestimating our ability to understand stuff. So for the longest time, we didn’t even really try our best because we assumed it was futile. People used to think that the difference between a living bug and a dead bug was that there was some sort of secret sauce, and the living bug has some sort of life essence or something that couldn’t be studied with the tools of science. And then by the time people started to take seriously that maybe actually the difference between the living bug and the dead bug is that the mechanism is just broken in one of them, and you can study the mechanism — then you get to these kinds of experimental questions that you were talking about. I think in the same way, people had previously shied away from asking questions, not just about life, but about the origin of our universe, for example, as hopelessly beyond what we would ever be able to do anything about, so people didn’t ask what experiments they could make. They just gave up without even trying.

And then gradually I think people were emboldened by breakthroughs in, for example, biology, to say, “Hey, let’s look at some of these other things that people said were hopeless, too.” Maybe even our universe obeys some laws that we can actually set out to study. So hopefully we’ll continue being emboldened, and stop being lazy, and actually work hard on asking all questions, and not just give up because we think they’re hopeless.

Matthew: I think the key to making this process begin was to abandon supernatural explanations of natural phenomena. So long as you believe in supernatural explanations, you can’t get anywhere, but as soon as you give them up and look around for some other kind of explanation, then you can begin to make progress. The amazing thing is that we, with our minds that evolved under conditions of hunter-gathering and even earlier than that — that these minds of ours are capable of doing such things as imagining general relativity or all of the other things.

So is there any limit to it? Is there going to be a point beyond which we will have to say we can’t really think about that, it’s too complicated? Yes, that will happen. But we will by then have built computers capable of thinking beyond. So in a sense, I think once supernatural thinking was given up, the path was open to essentially an infinity of discovery, possibly with the aid of advanced artificial intelligence later on, but still guided by humans. Or at least by a few humans.

Max: I think you hit the nail on the head there. Saying, “All this is supernatural,” has been used as an excuse to be lazy over and over again, even if you go further back, you know, hundreds of years ago. Many people looked at the moon, and they didn’t ask themselves why the moon doesn’t fall down like a normal rock because they said, “Oh, there’s something supernatural about it, earth stuff obeys earth laws, heaven stuff obeys heaven laws, which are just different. Heaven stuff doesn’t fall down.”

And then Newton came along and said, “Wait a minute. What if we just forget about the supernatural, and for a moment, explore the hypothesis that actually stuff up there in the sky obeys the same laws of physics as the stuff on earth? Then there’s got to be a different explanation for why the moon doesn’t fall down.” And that’s exactly how he was led to his law of gravitation, which revolutionized things of course. I think again and again, there was again the rejection of supernatural explanations that led people to work harder on understanding what life really is, and now we see some people falling into the same intellectual trap again and saying, “Oh yeah, sure. Maybe life is mechanistic but intelligence is somehow magical, or consciousness is somehow magical, so we shouldn’t study it.”

Now, artificial intelligence progress is really, again, driven by people willing to let go of that and say, “Hey, maybe intelligence is not supernatural. Maybe it’s all about information processing, and maybe we can study what kind of information processing is intelligent and maybe even conscious as in having experiences.” There’s a lot to learn at this meta level from what you’re saying there, Matthew: if we resist excuses to not do the work by saying, “Oh, it’s supernatural,” or whatever, there’s often real progress we can make.

Ariel: I really hate to do this because I think this is such a great discussion, but in the interest of time, we should probably get back to the stories at Harvard, and then you two can discuss some of these issues — or others — a little more later in the interview. So yeah, let’s go back to Harvard.

Matthew: Okay, Harvard. So I came to Harvard. I thought I’d stay only five years. I thought it was kind of a duty for an American who’d grown up in the West to find out a little bit about what the East was like. But I never left. I’ve been here for 60 years. When I had been here for about three years, my friend Paul Doty, a chemist, no longer living, asked me if I’d like to go work at the United States Arms Control and Disarmament Agency in Washington DC. He was on the general advisory board of that agency, which was embedded in the State Department building on 21st Street in Washington but was quite independent: it could report directly to the White House. It was the first year of its existence, and it was trying to find out what it should be doing.

And one of the ways it tried to find out what it should be doing was to hire six academics to come just for the summer. One of them was me, one of them was Freeman Dyson, the physicist, and there were four others. When I got there, they said, “Okay, you’re going to work on theater nuclear weapons arms control,” something I knew less than zero about. But I tried, and I read things and so on, and very famous people came to brief me — like Llewellyn Thompson, our ambassador to Moscow, and Paul Nitze, the deputy secretary of defense.

I realized that I knew nothing about this, and although scientists often have the arrogance to think that they can say something useful about nearly anything if they think about it, here was something that so many people had thought about. So I went to my boss and said, “Look, you’re wasting your time and your money. I don’t know anything about this. I’m not gonna produce anything useful. I’m a chemist and a biologist. Why don’t you have me look into the arms control of that stuff?” He said, “Yeah, you could do whatever you want. We had a guy who did that, and he got very depressed and he killed himself. You could have his desk.”

So I decided to look into chemical and biological weapons. In those days, the arms control agency was almost like a college. We all had to have very high security clearances, because Congress was worried that maybe there would be some leakers amongst the people doing this suspicious work in arms control, and therefore we had to be in possession of the highest level of security clearance. This had, in a way, the unexpected effect that you could talk to your neighbor about anything. Ordinarily, you might not have clearance for what your neighbor in a different office, a different room, or a different desk was doing, but we all had such security clearances that we could all talk to each other about what we were doing. So it was like a college in that respect. It was a wonderful atmosphere.

Anyway, I decided I would just focus on biological weapons, because the two together would be too much for a summer. I went to the CIA, and a young man there showed me everything we knew about what other countries were doing with biological weapons, and the answer was: we knew very little. Then I went to Fort Detrick to see what we were doing with biological weapons, and I was given a tour by a quite good immunologist who had been a faculty member at the Harvard Medical School, whose name was Leroy Fothergill. And we came to a big building, seven stories high. From a distance, you would think it had windows, but when you got up close, they were phony windows. And I asked Dr. Fothergill, “What do we do in there?” He said, “Well, we have a big fermentor in there and we make anthrax.” I said, “Well, why do we do that?” He said, “Well, biological weapons are a lot cheaper than nuclear weapons. It will save us money.”

I don’t think it took me very long, certainly by the time I got back to my office in the State Department Building, to realize that hey, we don’t want devastating weapons of mass destruction to be really cheap and save us money. We would like them to be so expensive that no one can afford them but us, or maybe no one at all. Because in the hands of other people, it would be like their having nuclear weapons. It’s ridiculous to want a weapon of mass destruction that’s ultra-cheap.

So that dawned on me. My office mate was Freeman Dyson, and I talked with him a little bit about it and he encouraged me greatly to pursue this. The more I thought about it, two things motivated me very strongly. Not just the illogic of it. The illogic of it motivated me only in the respect that it made me realize that any reasonable person could be convinced of this. In other words, it wouldn’t be a hard job to get this thing stopped, because anybody who’s thoughtful would see the argument against it. But there were two other aspects. One, it was my science: biology. It’s hard to explain, but that my science would be perverted in that way. But there’s another aspect, and that is the difference between war and peace.

We’ve had wars and we’ve had peace. Germany fights Britain, Germany is aligned with Britain. Britain fights France, Britain is aligned with France. There’s war. There’s peace. There are things that go on during war that might advance knowledge a little bit, but certainly, it’s during times of peace that the arts, the humanities, and science, too, make great progress. What if you couldn’t tell the difference, and all the time is both war and peace? By that I mean, war up until now has been very special. There are rules to it. Basically, it starts with hitting a guy so hard that he’s knocked out or killed. Then you pick up a stone and hit him with that. Then you make a spear and spear him with that. Then you make a bow and arrow and shoot him with that. Then later on, you make a gun and you shoot a bullet at him. Even a nuclear weapon: it’s all like hitting with an arm, and furthermore, when it stops, it’s stopped, and you know when it’s going on. It makes sounds. It makes blood. It makes bang.

Now biological weapons, they could be responsible for a kind of war that’s totally surreptitious. You don’t even know what’s happening, or you know it’s happening but it’s always happening. They’re trying to degrade your crops. They’re trying to degrade your genetics. They’re trying to introduce nasty insects to you. In other words, it doesn’t have a beginning and an end. There’s no armistice. Now today, there’s another kind of weapon. It has some of those attributes: It’s cyber warfare. It might over time erase the distinction between war and peace. Now that really would be a threat to the advance of civilization, a permanent science fiction-like, locked in, war-like situation, never ending. Biological weapons have that potentiality.

So for those two reasons — my science, and the fact that it could erase the distinction between war and peace, could even change what it means to be human. Maybe you could change what the other guy’s like: change his genes somehow. Change his brain by maybe some complex signaling, who knows? Anyway, I felt a strong philosophical desire to get this thing stopped. Fortunately, I was at Harvard University, and so was Jack Kennedy. And although by that time he had been assassinated, he had left behind lots of people in the key cabinet offices who were Kennedy appointees. In particular, people who came from Harvard. So I could knock on almost any door.

So I went to Lyndon Johnson’s national security adviser, who had been Jack Kennedy’s national security adviser, and who had been the dean at Harvard who hired me, McGeorge Bundy, and said all these things I’ve just said. And he said, “Don’t worry, Matt, I’ll keep it out of the war plans.” I’ve never seen a war plan, but I guess if he said that, it was true. But that didn’t mean it wouldn’t keep on being developed.

Now here I should make an aside. Does that mean that the Army or the Navy or the Air Force wanted these things? No. We develop weapons in a kind of commercial way within the military. In this case, the Army Materiel Command works out all kinds of things: better artillery pieces, communication devices, and biological weapons. It doesn’t belong to any service. Then, in the case of biological weapons, if the laboratories develop what they think is a good biological weapon, they still have to get one of the services — Air Force, Army, Navy, Marines — to say, “Okay, we’d like that. We’ll buy some of that.”

There was always a problem here. Nobody wanted these things. The Air Force didn’t want them because you couldn’t calculate how many planes you needed to kill a certain number of people. You couldn’t calculate the human dose response, and beyond that you couldn’t calculate the dose that would reach the humans. There were too many unknowns. The Army didn’t like it, not only because they, too, wanted predictability, but because their soldiers are right there, maybe getting infected by the same bugs. Maybe there’s vaccines and all that, but it also seemed dishonorable. The Navy didn’t want it because the one thing that ships have to be is clean. So oddly enough, biological weapons were kind of a stepchild.

Nevertheless, there was a dedicated group of people who really liked the idea and pushed hard on it. These were the people who were developing the biological weapons, and they had their friends in Congress, so they kept getting it funded. So I made a kind of a plan, like a protocol for doing an experiment, to get us to stop all this. How do you do that? Well, first you ask yourself: who can stop it? There’s only one person who can stop it. That’s the President of the United States.

The next thing is: what kind of advice is he going to get, because he may want to do something, but if all the advice he gets is against it, it takes a strong personality to go against the advice you’re getting. Also, word might get out, if it turned out you made a mistake, that they told you all along it was a bad idea and you went ahead anyway. That makes you a super fool. So the answer there is: well, you go to talk to the Secretary of Defense, and the Secretary of State, and the head of the CIA, and all of the senior people, and their people who are just below them.

Then what about the people who were working on the biological weapons? You have to talk to them, but not so much privately, because they really were dedicated. There were some people who were caught up in this and really didn’t want to be doing it, but there were other people who were really pushing it, and it wasn’t possible, really, to tell them to quit their jobs and get out of it. But what you could do is talk with them in public, and by knowing more than they knew about their own subject — which meant studying up a lot — show that they were wrong.

So I literally crammed, trying to understand everything there was to know about aerobiology, diffusion of clouds, pathogenicity, history of biological weapons, the whole bit, so that I could sound more knowledgeable. I know that’s a sort of slightly underhanded way to win an argument, but it’s a way of convincing the public that the guys who are doing this aren’t so wise. And then you have to get public support.

I had a pal here who told me I had to go down to Washington and meet a guy named Howard Simons, who was the managing editor of the Washington Post. He had been a science journalist at The Post, and that’s why some scientists up here at Harvard knew him. So, I went down there — Howie by that time was managing editor — and I told him, “I want to get newspaper articles all over the country about the problem of biological weapons.” He took out a big yellow pad and he wrote down about 30 names. He said, “These are the science journalists at the San Francisco Chronicle, Baltimore Sun, New York Times, et cetera, et cetera.” He put down the names of all the main science journalists. And he said to me, “These guys have to have something once a week to give their editor for the science columns, or the science pages. They’re always on the lookout for something, and biological weapons is a nice subject — they’d like to write about that, because it grabs people’s attention.”

So I arranged to either meet, or at least talk to all of these guys. And we got all kinds of articles in the press, and mainly reflecting the views that I had that this was unwise for the United States to pioneer this stuff. We should be in the position to go after anybody else who was doing it even in peacetime and get them to stop, which we couldn’t very well do if we were doing it ourselves. In other words, that meant a treaty. You have to have a treaty, which might be violated, but if it’s violated and you know, at least you can go after the violators, and the treaty will likely stop a lot of countries from doing it in the first place.

So what are the treaties? There’s an old treaty, a 1925 Geneva Protocol. The United States was not a party to it, but it does prohibit the first use of bacteriological or other biological weapons. So the problem was to convince the United States to get on board that treaty.

The very first paper I wrote for the President was about the Geneva Protocol of 1925. I never met President Nixon, but I did know Henry Kissinger: he’d been my neighbor at Harvard, in the building next door to mine. There was a good lunch room on the third floor; we both ate there. He had started an arms control seminar that met every month. I went to all the meetings. We traveled a little bit in Europe together. So I knew him, and I wrote papers for Henry knowing that those would get to Nixon. The first paper that I wrote, as I said, was “The United States and the Geneva Protocol.” It made all these arguments that I’m telling you now about why the United States should not be in this business. Now, the Protocol also prohibits chemical weapons, or rather their first use.

Now, I should say something about writing papers for Presidents. You don’t want to write a paper that says, “Here’s what you should do.” You have to put yourself in their position. There are all kinds of options for what they should do. So, you have to write a paper from the point of view of a reader who’s got to choose between a lot of options and hasn’t made a choice to start with. That’s the kind of paper you need to write. You’ve got to give every option a fair trial. You’ve got to do your best both to defend every option and to argue against every option. And you’ve got to do it in no more than a very few pages. That’s no easy job, but you can do it.

So eventually, as you know, the United States renounced biological weapons in November of 1969. There was an off the record press briefing that Henry Kissinger gave to the journalists about this. And one of them, I think it was the New York Times guy, said, “What about toxin weapons?”

Now, toxins are poisonous things made by living things, like Botulinum toxin made by bacteria or snake venom, and those could be used as weapons in principle. You can read in this briefing, Henry Kissinger says, “What are toxins?” So what this meant, in other words, is that a whole new review, a whole new decision process had to be cranked up to deal with the question, “Well, do we renounce toxin weapons?” And there were two points of view. One was, “They are made by living things, and since we’re renouncing biological warfare, we should renounce toxins.”

The other point of view is, “Yeah, they’re made by living things, but they’re just chemicals, and so they can also be made by chemists in laboratories. So, maybe we should renounce them when they’re made by living things like bacteria or snakes, but reserve the right to make them and use them in warfare if we can synthesize them in chemical laboratories.” So I wrote a paper arguing that we should renounce them completely. Partly because it would be very confusing to argue that the basis for renouncing or not renouncing is who made them, not what they are. But also, I knew that my paper was read by Richard Nixon on a certain day on Key Biscayne in Florida, which was one of the places he’d go for rest and vacation.

Nixon was down there, and I had written a paper called “What Policy for Toxins.” I was at a friend’s house with my wife the night that the President and Henry Kissinger were deciding this issue. Henry called me, and I wasn’t home. They couldn’t find their copy of my paper. Henry called to see if I could read it to them, but he couldn’t find me because I was at a dinner party. Then Henry called Paul Doty, my friend, because he had a copy of the paper. But he looked for his copy and he couldn’t find it either. Then late that night Kissinger called Doty again and said, “We found the paper, and the President has made up his mind. He’s going to renounce toxins no matter how they’re made, and it was because of Matt’s paper.”

I had tried to write a paper that steered clear of political arguments — just scientific ones and military ones. However, there had been an editorial in the Washington Post by one of their editorial writers, Steve Rosenfeld, in which he wrote the line, “How can the President renounce typhoid only to embrace Botulism?”

I thought it was so gripping, I incorporated it under the topic of the authority and credibility of the President of the United States. And what Henry told Paul on the telephone was: that’s what made up the President’s mind. And of course, it would. The President cares about his authority and credibility. He doesn’t care about little things like toxins, but his authority and credibility… And so right there and then, he scratched out the advice that he’d gotten in a position paper, which was to take the option, “Use them but only if made by chemists,” and instead chose the option to renounce them completely. And that’s how that decision got made.

Ariel: That all ended up in the Biological Weapons Convention, though, correct?

Matthew: Well, the idea for that came from the British. They had produced a draft paper to take to the arms control talks with the Russians and other countries in Geneva, suggesting a treaty to prohibit biological weapons — not just their use in war, the way the Geneva Protocol did, but even their production and possession. Richard Nixon’s renunciation on behalf of the United States did several things. He got the United States out of the biological weapons business and decreed that Fort Detrick and the other installations that had been doing that work would henceforward do only peaceful things — Detrick, for example, was partly converted to a cancer research institute — and all the biological weapons that had been stockpiled were to be destroyed, and they were.

The other thing he did was renounce toxins. Another thing he decided to do was to resubmit the Geneva Protocol to the United States Senate for its advice and consent. And the last thing was to support the British initiative, and that became the Biological Weapons Convention. But you could only get it if the Russians agreed. Eventually, after a lot of negotiation, we got the Biological Weapons Convention, which is still in force. A good deal later we even got the Chemical Weapons Convention, but not right away, because in my view, and in the view of a lot of people, we did need chemical weapons until we could be pretty sure that the Soviet Union was going to get rid of its chemical weapons, too.

If there are chemical weapons on the battlefield, soldiers have to put on gas masks and protective clothing, and this really slows down the tempo of combat action, so if you can simply put the other side into that restrictive clothing, you have a major military accomplishment. Chemical weapons in the hands of only one side would give that side the option of slowing down the other side, reducing its mobility on the ground. So we waited until we could get a treaty that had inspection provisions, which the chemical treaty does and the biological treaty does not — well, the biological treaty has a kind of challenge procedure, but no one’s ever used it, and it’s very hard to make it work. The chemical treaty’s inspection provisions were obligatory, and they have been extensive: the Russians visiting our chemical production facilities, our guys visiting theirs, and all kinds of verification. So that’s how we got the Chemical Weapons Convention. That was quite a bit later.

Max: So, I’m curious, was there a Matthew Meselson clone on the British side, thanks to whom the British started pushing this?

Matthew: Yes. There were, of course, numerous clones. And there were numerous clones on this side of the Atlantic, too. None of these things could ever be done by just one person. But my pal Julian Robinson, who was at the University of Sussex in Brighton, was a real scholar of chemical and biological weapons. He knows everything about them and their whole history, and he has written all of the very best papers on this subject. He’s just an unbelievably accurate and knowledgeable historian and scholar. People would go to Julian for advice. He was a Mycroft. He’s still at Sussex.

Ariel: You helped start the Harvard Sussex Program on chemical and biological weapons. Is he the person you helped start that with, or was that separate?

Matthew: We decided to do that together.

Ariel: Okay.

Matthew: It did several things, but one of the main things it did was to publish a quarterly journal, which had a dispatch from Geneva on progress towards getting the Chemical Weapons Convention, because when we started the bulletin, the Chemical Convention had not yet been achieved. There were all kinds of news items in the bulletin, and we had guest articles. It finally ended, I think, only a few years ago. But I think it had a big impact, not only because of what was in it, but also because it united people of all countries interested in this subject. They all read the bulletin, they all got a chance to write in it as well, and they occasionally met each other, so it had the effect of bringing together a community of people interested in safely getting rid of chemical weapons and biological weapons.

Max: This Biological Weapons Convention was a great inspiration for subsequent treaties: first the ban on chemical weapons, and then bans on various other kinds of weapons. And today, we have a very vibrant debate about whether there should also be a ban on lethal autonomous weapons and inhumane uses of A.I. So, I’m curious to what extent you got lots of push-back back in those days from people who said, “Oh, this is a stupid idea,” or, “This is never going to work,” and what the lessons are that could be learned from that.

Matthew: I think that with biological weapons, and also, to a lesser extent, with chemical weapons, the first point was that we didn’t need them. We had never really accepted chemical weapons; in World War I we were involved in their use, but that had been started by others. It was never something that the military liked. They didn’t want to fight a war by encumbrance. Biological weapons, for sure, were not wanted once we realized that it was idiotic to make weapons so cheap that they could get into the hands of people who couldn’t afford nuclear weapons. And even chemical weapons are relatively cheap and have the possibility of covering fairly large areas at a low price, and also of getting into the hands of terrorists. Now, terrorism wasn’t much on anybody’s radar until more recently, but once that became a serious issue, that was another argument against both biological and chemical weapons. So those two weapons really didn’t have a lot of boosters.

Max: You make it sound so easy though. Did it never happen that someone came and told you that you were all wrong and that this plan was never going to work?

Matthew: Yeah, but that was restricted to the people who were doing it, and a few really eccentric intellectuals. As evidence of this: in the military office which dealt with chemical and biological weapons, the highest rank you could find would be a colonel. No general, just a colonel. You don’t get to be a general in the chemical corps. There were a few exceptions, basically old-timers, as a kind of leftover from World War I. If you’re a part of the military that never gets to have a general, or even a full colonel, you ain’t got much influence, right?

But if you talk about the artillery or the infantry, my goodness, I mean there are lots of generals — including four star generals, even five star generals — who come out of the artillery and infantry and so on, and then Air Force generals, and fleet admirals in the Navy. So that’s one way you can quickly tell whether something is very important or not.

Anyway, we do have these treaties, but it might be very much more difficult to get treaties on war between robots. I don’t know enough about it to really have an opinion. I haven’t thought about it.

Ariel: I want to follow up with a question I think is similar, because one of the arguments that we hear a lot with lethal autonomous weapons is this fear that if we ban lethal autonomous weapons, it will negatively impact science and research in artificial intelligence. But you were talking about how some of the biological weapons programs were repurposed to help deal with cancer. And you’re a biologist and chemist, but it doesn’t sound like you personally felt negatively affected by these bans in terms of your research. Is that correct?

Matthew: Well, the only technically really important thing — that would have happened anyway — that’s radar, and that was indeed accelerated by the military requirement to detect aircraft at a distance. But usually it’s the reverse. People who had been doing research in fundamental science naturally volunteered or were conscripted to do war work. Francis Crick was working on magnetic torpedoes, not on DNA or hemoglobin. So, the argument that a war stimulates basic science is completely backwards.

Newton was Master of the Mint. Nothing about the British military as it was at the time helped Newton realize that if you shoot a projectile fast enough, it will stay in orbit; he figured that out by himself. I just don’t believe the argument that war makes science advance. It’s not true. If anything, it slows it down.

Max: I think it’s fascinating to compare the arguments that were made for and against a biological weapons ban back then with the arguments that are made for and against a lethal autonomous weapons ban today, because another common argument I hear for why people want lethal autonomous weapons today is because, “Oh, they’re going to be great. They’re going to be so cheap.” That’s like exactly what you were arguing is a very good argument against, rather than for, a weapons class.

Matthew: There are some similarities and some differences. Another similarity is that even one autonomous weapon in the hands of a terrorist could do things that are very undesirable — even one. On the other hand, we’re already doing something like it with drones. There’s a kind of continuous path that might lead to this, and I know that the military and DARPA are actually very interested in autonomous weapons, so I’m not so sure that you could stop it, partly because it’s continuous; it’s not like a real break.

Biological weapons are really different. Chemical weapons are really different. Whereas autonomous weapons are still working on the ancient, primitive analogy of hitting a man with your fist, or shooting a bullet, so long as those autonomous weapons are still using guns, bullets, things like that, and not something like poison that is not native to our biology. Starting with the striking of a blow, you can draw a continuous line all the way through stones, and bows and arrows, and bullets, to drones, and maybe autonomous weapons. So the discontinuity is different.

Max: That’s an interesting challenge: deciding where exactly one draws the line is more difficult in this case. Another very interesting analogy, I think, between biological weapons and lethal autonomous weapons is the business of verification. You mentioned earlier that there was a strong verification protocol for the Chemical Weapons Convention, and there have been verification protocols for nuclear arms reduction treaties also. Some people say, “Oh, it’s a stupid idea to ban lethal autonomous weapons because you can’t think of a good verification system.” But couldn’t people have said that also as a critique of the Biological Weapons Convention?

Matthew:  That’s a very interesting point, because most people who think that verification can’t work have never been told what’s the basic underlying idea of verification. It’s not that you could find everything. Nobody believes that you could find every missile that might exist in Russia. Nobody ever would believe that. That’s not the point. It’s more subtle. The point is that you must have an ongoing attempt to find things. That’s intelligence. And there must be a heavy penalty if you find even one.

So it’s a step back from finding everything, to saying that if you find even one, then that’s a violation, and then you can take extreme measures. A country that cheats takes a huge risk that another country’s intelligence organization, or maybe someone on its own side who’s willing to squeal, is going to reveal the possession of even one prohibited object. That’s the point. You may have some secret biological production facility, but if we find even one of them, then you are in violation. It isn’t that we have to find every single blasted one of them.

That was especially an argument that came from the nuclear treaties. It was the nuclear people who thought that up. People like Douglas McEachin at the CIA, who realized that there’s a more sophisticated argument. You just have to have a pretty impressive ability to find one thing out of many, if there’s anything out there. This is not perfect, but it’s a lot different from the argument that you have to know where everything is at all times.
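The point can be put as a bit of simple arithmetic. Here is a minimal sketch, using an assumed, purely illustrative per-facility detection probability rather than any real estimate of intelligence capability: even if any single hidden facility would usually go unnoticed, the chance that a violator escapes with no facility ever being found shrinks quickly as the program grows, and a single find is enough to establish a violation.

```python
# Illustrative arithmetic behind the "find even one" idea of verification described above.
# The per-facility detection probability is an assumed number for illustration only.

def prob_caught(num_facilities: int, p_detect_each: float) -> float:
    """Probability that at least one hidden facility is found,
    assuming independent detection chances per facility."""
    return 1.0 - (1.0 - p_detect_each) ** num_facilities

for n in (1, 3, 10, 30):
    print(f"{n:2d} hidden facilities, 20% chance each of being found: "
          f"probability of being caught at least once = {prob_caught(n, 0.20):.2f}")
```

With these assumed numbers, one facility gives a 20% chance of exposure, but thirty facilities give a near-certainty, which is why imperfect detection combined with a heavy penalty for even one find can still deter a large-scale program.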

Max: So, if I can paraphrase, is it fair to say that you simply want to give the parties to the treaty a very strong incentive not to cheat, because even if they get caught off base one single time, they’re in violation, and moreover, those who don’t have the weapons at that time will also feel that there’s a very, very strong stigma? Today, for example, I find it just fascinating how biology is such a strong brand. If you go ask random students here at MIT what they associate with biology, they will say, “Oh, new cures, new medicines.” They’re not going to say bioweapons. If you ask people when was the last time you read about a bioterrorism attack in the newspaper, they can’t even remember anything typically. Whereas, if you ask them about the new biology breakthroughs for health, they can think of plenty.

So, biology has clearly very much become a science that’s harnessed to make life better for people rather than worse. So there’s a very strong stigma. I think if I or anyone else here at MIT tried to secretly start making bioweapons, we’d have a very hard time even persuading any biology grad student to work with us, because of the stigma. If one could create a similar stigma against lethal autonomous weapons, the stigma itself would be quite powerful, even absent the ability to do perfect verification. Does that make sense?

Matthew: Yes, it does, perfect sense.

Ariel: Do you think that these stigmas have any effect on the public’s interest or politicians’ interest in science?

Matthew: I think people still have a great fascination with science. Take the exploration of space, for example: lots of people, not just kids — but especially kids — are fascinated by it. Pretty soon, Elon Musk says, in 2022 he’s going to have some people walking around on Mars. He’s just tested that BFR rocket of his that’s going to carry people to Mars. I don’t know if he’ll actually get it done, but people are fascinated by the exploration of space, fascinated by lots of medical things, desperate about the need for a cure for cancer. I myself think we need to spend a lot more money on preventing — not curing but preventing — cancer, and I think we know how to do it.

I think the public still has a big fascination with, respect for, and excitement about science. The politicians, you see, have other interests. It’s not that they’re not interested or don’t like science. It’s that they have big money interests, for example. Coal and oil, these are gigantic. Harvard University has heavily invested in companies that deal with fossil fuels. Our whole world runs mainly on fossil fuels. You can’t fool around with that stuff. So it becomes a problem of which is going to win out: your scientific arguments, which are almost certain to be right — not absolutely, like one and one makes two, but almost — or the whole economy and big financial interests. It’s not easy. It will happen, we’ll convince people, but maybe not in time. That’s the sad part. Once it gets bad enough, it’s going to be bad. You can’t just turn around on a dime and take care of disastrous climate change.

Max: Yeah, this is very much the spirit, of course, of the Future of Life Institute, which runs Ariel’s podcast. Technology, what it really does, is empower us humans to do more, either more good things or more bad things. And technology in and of itself isn’t evil, nor is it morally good; it’s simply a tool. And the more powerful it becomes, the more crucial it is that we also develop the wisdom to steer the technology toward good uses. And I think what you’ve done with your biology colleagues is such an inspiring role model for all of the other sciences, really.

We physicists still feel pretty guilty about giving the world nuclear weapons, but we’ve also given the world a lot of good stuff, from lasers to smartphones and computers. Chemists gave the world a lot of great materials, but they also gave us, ultimately, the internal combustion engine and climate change. Biology, I think more than any other field, has clearly ended up very solidly on the good side. Everybody loves biology for what it does, even though it could have gone very differently, right? We could have had a catastrophic arms race, a race to the bottom, with one superpower outdoing the other in bioweapons, and eventually these cheap weapons being everywhere, and on the black market, and bioterrorism every day. That future didn’t happen; that’s why we all love biology. And I am very honored to get to be on this call here with you, so I could personally thank you for your role in making it this way. We should not take this for granted, that it’ll be this way with all sciences, the way it’s become for biology. So, thank you.

Matthew: Yeah. That’s all right.

I’d like to end with one thought. We’re learning how to change the human genome. It won’t really get going for a while, and there are some problems that very few people are thinking about. Not the so-called off-target effects, that’s a well-known problem — but there’s another problem that I won’t go into, called epistasis. Nevertheless, 10 years from now, 100 years from now, 500 years from now, sooner or later we’ll be changing the human genome on a massive scale, making people better in various ways, so-called enhancements.

Now, a question arises. Do we know enough about the genetic basis of what makes us human to be sure that we can keep the good things about being human? What are those? Well, compassion is one. I’d say curiosity is another. Another is the feeling of needing to be needed. That sounds kind of complicated, I guess; there are some people who can go through life without needing to feel needed. But doctors, nurses, parents, people who really love each other: the feeling of being needed by another human being, I think, is very pleasurable to many people, maybe to most people, and it’s one of the things that’s of the essence of what it means to be human.

Now, where does this all take us? It means that if we’re going to start changing the human genome in any big-time way, we need to know, first of all, what we most value in being human, and that’s a subject for the humanities, for everybody to talk about, think about. And then it’s a subject for the brain scientists to figure out what’s the basis of it. It’s got to be in the brain. But what is it in the brain? And we’re miles and miles and miles away in brain science from being able to figure out what it is in the brain — or maybe we’re not, I don’t know any brain science, I shouldn’t be shooting off my mouth — but we’ve got to understand those things. What is it in our brains that makes us feel good when we are of use to someone else?

We don’t want to fool around with whatever those genes are — do not monkey with those genes unless you’re absolutely sure that you’re making them maybe better — but anyway, don’t fool around. And figure out in the humanities, don’t stop teaching humanities. Learn from Sophocles, and Euripides, and Aeschylus: What are the big problems about human existence? Don’t make it possible for a kid to go through Harvard — as is possible today — without learning a single thing from Ancient Greece. Nothing. You don’t even have to use the word Greece. You don’t have to use the word Homer or any of that. Nothing, zero. Isn’t that amazing?

Before President Lincoln, everybody, to get to enter Harvard, had to already know Ancient Greek and Latin. These were mainly boys, of course, and they were going to become clergymen. They also, by the way — there were no electives — everyone had to take fluxions, which is differential calculus. Everyone had to take integral calculus. Everyone had to take astronomy, chemistry, physics, as well as moral philosophy, et cetera. Well, there’s nothing like that anymore. We don’t all speak the same language because we’ve all had such different kinds of education, and also the humanities just get short shrift. I think that’s very shortsighted.

MIT is pretty good in humanities, considering it’s a technical school. Harvard used to be tops. Harvard is at risk of maybe losing it. Anyway, end of speech.

Max: Yeah, I want to just agree with what you said, and also rephrase it the way I think about it. What I hear you saying is that it’s not enough to just make our technology more powerful. We also need the humanities, and our humanity, for the wisdom of how we’re going to manage our technology and what we’re trying to use it for, because it does no good to have a really powerful tool if you aren’t wise enough to use it for the right things.

Matthew: If we’re going to change, we might even split into several species. Almost all other species have very closely related neighbor species. Especially if you can get them separated — say there’s a colony on Mars and they don’t travel back and forth much — species will diverge. It takes a long, long, long, long time, but the idea, like the Bible says, that we are fixed and nothing will change, that’s of course wrong. Human evolution is going on as we speak.

Ariel: We’ll end part one of our two-part podcast with Matthew Meselson here. Please join us for the next episode which serves as a reminder that weapons bans don’t just magically work. But rather, there are often science mysteries that need to be solved in order to verify whether a group has used a weapon illegally. In the next episode, Matthew will talk about three such scientific mysteries he helped solve, including the anthrax incident in Russia, the yellow rain affair in Southeast Asia, and the research he did that led immediately to the prohibition of Agent Orange. So please join us for part two of this podcast, which is also available now.

As always, if you’ve been enjoying this podcast, please take a moment to like it, share it, and maybe even leave a positive review. It’s a small action on your part, but it’s tremendously helpful for us.

FLI Podcast: AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy

Every January, we like to look back over the past 12 months at the progress that’s been made in the world of artificial intelligence. Welcome to our annual “AI breakthroughs” podcast, 2018 edition.

Ariel was joined for this retrospective by researchers Roman Yampolskiy and David Krueger. Roman is an AI safety researcher and professor at the University of Louisville. He also recently published the book Artificial Intelligence Safety & Security. David is a PhD candidate in the Mila lab at the University of Montreal, where he works on deep learning and AI safety. He’s also worked with safety teams at the Future of Humanity Institute and DeepMind and has volunteered with 80,000 Hours.

Roman and David shared their lists of 2018’s most promising AI advances, as well as their thoughts on some major ethical questions and safety concerns. They also discussed media coverage of AI research, why talking about “breakthroughs” can be misleading, and why there may have been more progress in the past year than it seems.

Topics discussed in this podcast include:

  • DeepMind progress, as seen with AlphaStar and AlphaFold
  • Manual dexterity in robots, especially QT Opt and Dactyl
  • Advances in creativity, as with Generative Adversarial Networks (GANs)
  • Feature-wise transformations
  • Continuing concerns about DeepFakes
  • Scaling up AI systems
  • Neuroevolution
  • Google Duplex, the AI assistant that sounds human on the phone
  • The General Data Protection Regulation (GDPR) and AI policy more broadly

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi everyone, welcome to the FLI podcast. I’m your host, Ariel Conn. For those of you who are new to the podcast, at the end of each month, I bring together two experts for an in-depth discussion on some topic related to the fields that we at the Future of Life Institute are concerned about, namely artificial intelligence, biotechnology, climate change, and nuclear weapons.

The last couple of years for our January podcast, I’ve brought on two AI researchers to talk about what the biggest AI breakthroughs were in the previous year, and this January is no different. To discuss the major developments we saw in AI in 2018, I’m pleased to have Roman Yampolskiy and David Krueger joining us today.

Roman is an AI safety researcher and professor at the University of Louisville, his new book Artificial Intelligence Safety and Security is now available on Amazon and we’ll have links to it on the FLI page for this podcast. David is a PhD candidate in the Mila Lab at the University of Montreal, where he works on deep learning and AI safety. He’s also worked with teams at the Future of Humanity Institute and DeepMind, and he’s volunteered with 80,000 Hours to help people find ways to contribute to the reduction of existential risks from AI. So Roman and David, thank you so much for joining us.

David: Yeah, thanks for having me.

Roman: Thanks very much.

Ariel: So I think that one thing that stood out to me in 2018 was that the AI breakthroughs seemed less about surprising breakthroughs that really shook the AI community as we’ve seen in the last few years, and instead they were more about continuing progress. And we also didn’t see quite as many major breakthroughs hitting the mainstream press. There were a couple of things that made big news splashes, like Google Duplex, which is a new AI assistant program that sounded incredibly human on phone calls it made during the demos. And there was also an uptick in government policy and ethics efforts, especially with the General Data Protection Regulation, also known as the GDPR, which went into effect in Europe earlier this year.

Now I’m going to want to come back to Google and policy and ethics later in this podcast, but I want to start by looking at this from the research and development side of things. So my very first question for both of you is: do you agree that 2018 was more about impressive progress, and less about major breakthroughs? Or were there breakthroughs that really were important to the AI community that just didn’t make it into the mainstream press?

David: Broadly speaking I think I agree, although I have a few caveats for that. One is just that it’s a little bit hard to recognize always what is a breakthrough, and a lot of the things in the past that have had really big impacts didn’t really seem like some amazing new paradigm shift—it was sort of a small tweak that then made a lot of things work a lot better. And the other caveat is that there are a few works that I think are pretty interesting and worth mentioning, and the field is so large at this point that it’s a little bit hard to know if there aren’t things that are being overlooked.

Roman: So I’ll agree with you, but I think the pattern is more important than any specific breakthrough. We kind of got used to getting something really impressive every month, so relatively it doesn’t sound as good, with AlphaStar, AlphaFold, AlphaZero happening almost every month. And it used to be that it took 10 years to see something like that.

It’s likely it will happen even more frequently. We’ll conquer a new domain once a week or something. I think that’s the main pattern we have to recognize and discuss. There are significant accomplishments in terms of teaching AI to work in completely novel domains. I mean now we can predict protein folding, now we can have multi-player games conquered. That never happened before so frequently. Chess was impressive because it took like 30 years to get there.

David: Yeah, so I think a lot of people were kind of expecting or at least hoping for StarCraft or Dota to be solved—to see, like we did with AlphaGo, AI systems that are beating the top players. And I would say that it’s actually been a little bit of a let down for people who are optimistic about that, because so far the progress has been kind of unconvincing.

So AlphaStar, which was a really recent result from last week, for instance: I’ve seen criticism of it that I think is valid, that it was making more actions than a human could within a very short interval of time. So they carefully controlled the actions-per-minute that AlphaStar was allowed to take, but they didn’t prevent it from doing really short bursts of actions that really helped its micro-game, and that means that it can win without really being strategically superior to its human opponents. And I think the Dota results that OpenAI has had were also criticized as being sort of not the hardest version of the problem, and the AI is still sort of relying on some crutches.
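
To make the actions-per-minute criticism concrete, here is a toy Python sketch of a sliding-window rate limiter; it is purely illustrative and assumes nothing about DeepMind’s actual interface, whose details were not public. The point it shows is that a cap on the one-minute average still lets through a burst of clicks inside a single second.

```python
from collections import deque

class APMLimiter:
    """Toy rate limiter: an action is allowed only if fewer than `max_apm`
    actions were taken in the preceding 60 seconds."""
    def __init__(self, max_apm):
        self.max_apm = max_apm
        self.timestamps = deque()

    def allow(self, t):
        # Drop actions older than 60 seconds from the sliding window.
        while self.timestamps and t - self.timestamps[0] > 60.0:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_apm:
            self.timestamps.append(t)
            return True
        return False

limiter = APMLimiter(max_apm=300)  # a roughly human-level average, chosen arbitrarily
# A burst of 50 actions within one second is still permitted, because the
# 60-second average never exceeds the cap.
burst_allowed = sum(limiter.allow(0.02 * i) for i in range(50))
print(burst_allowed)  # 50
```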

Ariel: So before we get too far into that debate, can we take a quick step back and explain what both of those are?

David: So these are both real-time strategy games that are, I think, actually the two most popular real-time strategy games in the world that people play professionally, and make money playing. I guess that’s about all there is to say about them.

Ariel: So a quick question that I had too about your description then, when you’re talking about AlphaStar and you were saying it was just making more moves than a person can realistically make. Is that it—it wasn’t doing anything else special?

David: I haven’t watched the games, and I don’t play StarCraft, so I can’t say that it wasn’t doing anything special. I’m basing this basically on reading articles and reading the opinions of people who are avid StarCraft players, and I think the general opinion seems to be that it is more sophisticated than what we’ve seen before, but the reason that it was able to win these games was not because it was out-thinking humans, it’s because it was out-clicking, basically, in a way that just isn’t humanly possible.

Roman: I would agree with this analysis, but I don’t see it as a bug, I see it as a feature. That just shows another way machines can be superior to people. Even if they are not necessarily smarter, they can still produce superior performance, and that’s what we really care about. Right? We found a different way, a non-human approach to solving this problem. That’s impressive.

David: Well, I mean, I think if you have an agent that can just click as fast as it wants, then you can already win at StarCraft, before this work. There needs to be something that makes it sort of a fair fight in some sense.

Roman: Right, but think about what you’re suggesting: We have to handicap machines to keep them even remotely comparable to people. We’re talking about getting to superintelligent performance. You can get there many ways. You can think faster, you can have better memory, you can have better reaction time—as long as you’re winning in whatever domain we’re interested in, you have superhuman performance.

David: So maybe another way of putting this would be if they actually made a robot play StarCraft and made it use the same interface that humans do, such as a screen and mouse, there’s no way that it could have beat the human players. And so by giving it direct access to the game controls, it’s sort of not solving the same problem that a human is when they play this game.

Roman: I hear what you’re saying, I just feel that it is solving it in a different way, and we have a pro-human bias saying, well, that’s not how you play this game, you have an advantage. Human players usually rely on superior strategy, not just faster movements that may give an advantage for a few nanoseconds, a couple of seconds. But it’s not a long-term sustainable pattern.

One of the research projects I worked on was this idea of artificial stupidity, we called it—kind of limiting machines to human-level capacity. And I think that’s what we’re talking about here. Nobody would suggest limiting a chess program to just human-level memory, or human memorization of opening moves. But we don’t see it as a limitation. Machines have an option of beating us in ways humans can’t. That’s the whole point, and that’s why it’s interesting, that’s why we have to anticipate such problems. That’s where most of the safety and security issues will show up.

Ariel: So I guess, I think, Roman, your point earlier was sort of interesting that we’ve gotten so used to breakthroughs that stuff that maybe a couple of years ago would have seemed like a huge breakthrough is just run-of-the-mill progress. I guess you’re saying that that’s what this is sort of falling into. Relatively recently this would have been a huge deal, but because we’ve seen so much other progress and breakthroughs, that this is now interesting and we’re excited about it—but it’s not reaching that level of, oh my god, this is amazing! Is that fair to say?

Roman: Exactly! We get disappointed if the system loses one game. It used to be we were excited if it would match amateur players. Now it’s, oh, we played 100 games and you lost one? This is just not machine-level performance, you disappoint us.

Ariel: David, do you agree with that assessment?

David: I would say mostly no. I guess, I think what really impressed me with AlphaGo and AlphaZero was that it was solving something that had been established as a really grand challenge for AI. And then in the case of AlphaZero, I think the technique that they actually used to solve it was really novel and interesting from a research point of view, and they went on to show that this same technique can solve a bunch of other board games as well.

And my impression from what I’ve seen about how they did AlphaStar and AlphaFold is that there were some interesting improvements and the performance is impressive but I think it’s neither, like, quite at the point where you can say we’ve solved it, we’re better than everybody, or in the case of protein folding, there’s not a bunch more room for improvement that has practical significance. And it’s also—I don’t see any really clear general algorithmic insights about AI coming out of these works yet. I think that’s partially because they haven’t been published yet, but from what I have heard about the details about how they work, I think it’s less of a breakthrough on the algorithm side than AlphaZero was.

Ariel: So you’ve mentioned AlphaFold. Can you explain what that is real quick?

David: This is the protein folding project that DeepMind did, and I think there’s a competition called C-A-S-P or CASP that happens every three years, and they sort of dominated that competition this last year doing what was described as two CASPs in one, so basically doubling the expected rate of improvement that people have seen historically at these tasks, or at least at the one that is the most significant benchmark.

Ariel: I find the idea of the protein folding thing interesting because that’s something that’s actually relevant to scientific advancement and health as opposed to just being able to play a game. Are we seeing actual applications for this yet?

David: I don’t know about that, but I agree with you that that is a huge difference that makes it a lot more exciting than some of the previous examples. I guess one thing that I want to say about that, though, is that it does look a little bit more to me like continuation of progress that was already happening in the communities. It’s definitely a big step up, but I think a lot of the things that they did there could have really happened over the next few years anyways, even without DeepMind being there. So, one of the articles I read put it this way: If this wasn’t done by DeepMind, if this was just some academic group, would this have been reported in the media? I think the answer is sort of like a clear no, and that says something about the priorities of our reporting and media as well as the significance of the results, but I think that just gives some context.

Roman: I’ll agree with David—the media is terrible in terms of what they report on, we can all agree on that. I think it was quite a breakthrough, though: they not only beat the competition, they actually kind of doubled the rate of improvement. That’s incredible. And I think anyone who got to that point would not be denied publication in a top journal; It would be considered very important in that domain. I think it’s one of the most important problems in medical research. If you can accurately predict this, the possibilities are really endless in terms of synthetic biology, in terms of curing diseases.

So this is huge in terms of the impact of being able to do it. As far as how applicable it is to other areas, is it a great game-changer for AI research? Think about combining this ability with the ability to perform in the real-life environments of those multiplayer games. Look at how those things can be combined. Right? You can do things in the real world you couldn’t do before, both in terms of strategy games, which are basically simulations of economic competition, of wars, and in quite a few applications where the impact would be huge.

So all of it is very interesting. It’s easy to say, “Well, if they didn’t do it, somebody else maybe would have done it in a couple of years.” But that’s almost always true for all inventions. If you look at the history of inventions, things like, I don’t know, the telephone have been invented at the same time by two or three people; radio, two or three people. It’s just the point where science has enough ingredient technologies that, yeah, somebody’s going to do it. But still, we give credit to whoever got there first.

Ariel: So I think that’s actually a really interesting point, because I think for the last few years we have seen sort of these technological advances but I guess we also want to be considering the advances that are going to have a major impact on humanity even if it’s not quite as technologically new.

David: Yeah, absolutely. I think the framing in terms of breakthroughs is a little bit unclear what we’re talking about when we talk about AI breakthroughs, and I think a lot of people in the field of AI kind of don’t like how much people talk about it in terms of breakthroughs because a lot of the progress is gradual and builds on previous work and it’s not like there was some sudden insight that somebody had that just changed everything, although that does happen in some ways.

And I think you can think of the breakthroughs both in terms of like what is the impact—is this suddenly going to have a lot of potential to change the world? You can also think of it, though, from the perspective of researchers as like, is this really different from the kind of ideas and techniques we’ve seen or seen working before? I guess I’m more thinking about the second right now in terms of breakthroughs representing really radical new ideas in research.

Ariel: Okay, well I will take responsibility for being one of the media people who didn’t do a good job with presenting AI breakthroughs. But I think both with this podcast and probably moving forward, I think that is actually a really important thing for us to be doing—is both looking at the technological progress and newness of something but also the impact it could have on either society or future research.

So with that in mind, you guys also have a good list of other things that did happen this year, so I want to start moving into some of that as well. So next on your list is manual dexterity in robots. What did you guys see happening there?

David: So this is something that’s definitely not my area of expertise, so I can’t really comment too much on it. But there are two papers that I think are significant and potentially representing something like a breakthrough in this application. In general robotics is really difficult, and machine learning for robotics is still, I think, sort of a niche thing, like most robotics is using more classical planning algorithms, and hasn’t really taken advantage of the new wave of deep learning and everything.

So there’s two works, one is QT-Opt, and the other one is Dactyl, and these are both by people from the Berkeley OpenAI crowd. And these both are showing kind of impressive results in terms of manual dexterity in robots. So there’s one that does a really good job at grasping, which is one of the basic aspects of being able to act in the real world. And then there’s another one that was sort of just manipulating something like a cube with different colored faces on it—that one’s Dactyl; the grasping one is QT-Opt.

And I think this is something that was paid less attention to in the media, because it’s been more of a story of kind of gradual progress I think. But my friend who follows this deep reinforcement learning stuff more told me that QT-Opt is the first convincing demonstration of deep reinforcement learning in the real world, as opposed to all these things we’ve seen in games. The real world is much more complicated and there’s all sorts of challenges with the noise of the environment dynamics and contact forces and stuff like this that have been really a challenge for doing things in the real world. And then there’s also the limited sample complexity where when you play a game you can sort of interact with the game as much as you want and play the game over and over again, whereas in the real world you can only move your robot so fast and you have to worry about breaking it, so that means in the end you can collect a lot less data, which makes it harder to learn things.

Roman: Just to kind of explain what they did. Hardware is expensive and slow: It’s very difficult to work with. Things don’t go well in real life; It’s a lot easier to create simulations in virtual worlds, train your robot in there, and then just transfer the knowledge into a real robot in the physical world. And that’s exactly what they did, training that virtual hand to manipulate objects, and they could run through thousands, millions of situations, which is something you cannot do with an actual physical robot at that scale. So I think that’s a very interesting approach, and it’s why lots of people try doing things in virtual environments. Some of the early AGI projects all concentrated on virtual worlds as the domain of learning. So that makes a lot of sense.

David: Yeah, so this was for the Dactyl project, which was OpenAI. And that was really impressive I think, because people have been doing this sim-to-real thing—where you train in simulation and then try and transfer it to the real world—with some success for like a year or two, but this one I think was really kind of impressive in that sense, because they didn’t actually train it in the real world at all, and what they had learned managed to transfer to the real world.
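
As a rough illustration of the sim-to-real idea described above, and not OpenAI’s actual Dactyl training code, the sketch below fits a tiny policy on a toy task whose unobserved physics parameter is randomized in every simulated episode, so the learned behavior still works in a “real” setting never seen during training. Every name and number here is made up for the example.

```python
import random

def simulate(gain, friction, steps=50):
    """Toy 1-D task: drive position x toward 0 with action = -gain * x,
    under a friction coefficient that the policy never observes."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        force = -gain * x
        v = (v + 0.1 * force) * (1.0 - friction)
        x += 0.1 * v
    return -abs(x)  # reward: closer to the target is better

def train_with_domain_randomization(candidates=200, episodes=20):
    best_gain, best_score = None, float("-inf")
    for _ in range(candidates):
        gain = random.uniform(0.0, 10.0)
        # Score each candidate across many randomized "simulators".
        score = sum(simulate(gain, friction=random.uniform(0.05, 0.5))
                    for _ in range(episodes)) / episodes
        if score > best_score:
            best_gain, best_score = gain, score
    return best_gain

policy_gain = train_with_domain_randomization()
# "Deploy" in a world whose exact friction was never seen during training.
print(simulate(policy_gain, friction=0.3))
```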

Ariel: Excellent. I’m going to keep going through your list. One thing that you both mentioned are GANs. So very quickly, if one of you, or both of you, could explain what a GAN is and what that stands for, and then we’ll get into what happened last year with those.

Roman: Sure, so this is a somewhat new way of doing creative generation of visuals and audio. You have two neural networks competing: one is kind of creating fakes, and the other one is judging them, and you get to a point where they’re kind of 50/50. You can’t tell if it’s fake or real anymore. And it’s a great way to produce artificial faces, cars, whatever. Whatever type of input you provide to the networks, they quickly learn to extract the essence of that image or audio and generate artificial datasets full of such images.

And there’s really exciting work on being able to extract properties from those, different styles. So if we talk about faces, for example: there could be a style for hair, a style for skin color, a style for age, and now it’s possible to manipulate them. So I can tell you things like, “Okay, Photoshop, I need a picture of a female, 20 years old, blonde, with glasses,” and it would generate a completely realistic face based on those properties. And we’re starting to see it show up not just in images but transferred to video, to generating whole virtual worlds. It’s probably the closest thing we ever had computers get to creativity: actually kind of daydreaming and coming up with novel outputs.

David: Yeah, I just want to say a little bit about the history of the research in GAN. So the first work on GANs was actually back four or five years ago in 2014, and I think it was actually kind of—didn’t make a huge splash at the time, but maybe a year or two after that it really started to take off. And research in GANs over the last few years has just been incredibly fast-paced and there’s been hundreds of papers submitted and published at the big conferences every year.

If you look just in terms of the quality of what is generated, this is, I think, just an amazing demonstration of the rate of progress in some areas of machine learning. The first paper had these sort of black and white pictures of really blurry faces, and now you can get giant—I think 256 by 256, or 512 by 512, or even bigger—really high resolution and totally indistinguishable from real photos, to the human eye anyway—images of faces. So it’s really impressive, and we’ve seen really consistent progress on that, especially in the last couple years.

Ariel: And also, just real quick, what does it stand for?

David: Oh, generative adversarial network. So it’s generative, because it’s sort of generating things from scratch, or from its imagination or creativity. And it’s adversarial because there are two networks: the one that generates the things, and then the one that tries to tell those fake images apart from real images that we actually collect by taking photos in the world.
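
For readers who want to see the two-network setup concretely, here is a minimal GAN training loop in PyTorch on toy two-dimensional data; it is a generic sketch of the technique, not code from any of the papers or systems discussed in the episode.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # toy sizes; real image GANs are far larger

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0   # stand-in "real" data distribution
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call the fakes "real".
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```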

Ariel: This is an interesting one because it can sort of transition into some ethics stuff that came up this past year, but I’m not sure if we want to get there yet, or if you guys want to talk a little bit more about some of the other things that happened on the research and development side.

David: I guess I want to talk about a few other things that have been making, I would say, sort of steady progress, like GANs, with a lot of interest in them. I guess I would say they’re ideas that are coming to fruition: even though some of these are not exactly from the last year, they really started to improve and become widely used in the last year.

Ariel: Okay.

David: I think this is actually used in maybe the latest, greatest GAN paper: something that’s called feature-wise transformations. So this is an idea that actually goes back up to 40 years, depending on how you measure it, but it has sort of been catching on in specific applications in machine learning in the last couple of years—starting with, I would say, style transfer, which is sort of like what Roman mentioned earlier.

So the idea here is that in a neural network, you have what are called features, which basically correspond to the activations of different neurons in the network. Like how much that neuron likes what it’s seeing, let’s say. And those can also be interpreted as representing different kinds of visual patterns, like different kinds of textures, or colors. And these feature-wise transformations basically just take each of those different aspects of the image, like the color or texture in a certain location, and then allow you to manipulate that specific feature, as we call it, by making it stronger or amplifying whatever was already there.

And so you can sort of view this as a way of specifying what sort of things are important in the image, and that’s why it allows you to manipulate the style of images very easily, because you can sort of look at a certain painting style for instance, and say, oh this person uses a lot of wide brush strokes, or a lot of narrow brush strokes, and then you can say, I’m just going to modulate the neurons that correspond to wide or narrow brush strokes, and change the style of the painting that way. And of course you don’t do this by hand, by looking in and seeing what the different neurons represent. This all ends up being learned end-to-end. And so you sort of have an artificial intelligence model that predicts how to modulate the features within another network, and that allows you to change what that network does in a really powerful way.

So, I mentioned that it has been applied in the most recent GAN papers, and I think they’re just using those kinds of transformations to help them generate images. But other examples where you can explain what’s happening more intuitively, or why it makes sense to try and do this, would be something like visual question answering. So there you can have the modulation of the vision network being done by another network that looks at a question and is trying to help answer that question. And so it can sort of read the question and see what features of images might be relevant to answering that question. So for instance, if the question was, “Is it a sunny day outside?” then it could have the vision network try and pay more attention to things that correspond to signs of sun. Or if it was asked something like, “Is this person’s hair combed?” then you could look for the patterns of smooth, combed hair and look for the patterns of rough, tangled hair, and have those features be sort of emphasized in the vision network. That allows the vision network to pay attention to the parts of the image that are most relevant to answering the question.
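
A minimal sketch of this idea in PyTorch, assuming the common feature-wise linear modulation (FiLM) formulation: a small network reads the question embedding and predicts a per-channel scale and shift that modulates the vision network’s features. The class name, sizes, and inputs below are illustrative, not taken from any specific paper.

```python
import torch
import torch.nn as nn

class FiLMBlock(nn.Module):
    """Modulates each feature channel of a visual representation with a
    scale (gamma) and shift (beta) predicted from another input, e.g. a question."""
    def __init__(self, question_dim, num_channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(question_dim, 2 * num_channels)

    def forward(self, visual_features, question_embedding):
        gamma, beta = self.to_gamma_beta(question_embedding).chunk(2, dim=-1)
        # Broadcast over spatial positions: shape (batch, channels, 1, 1).
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * visual_features + beta

# Toy usage: conv features for 8 images, modulated by an encoded question.
film = FiLMBlock(question_dim=32, num_channels=64)
features = torch.randn(8, 64, 14, 14)   # e.g. output of a convolutional layer
question = torch.randn(8, 32)           # e.g. an encoded question vector
modulated = film(features, question)
print(modulated.shape)  # torch.Size([8, 64, 14, 14])
```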

Ariel: Okay. So, Roman, I want to go back to something on your list quickly in a moment, but first I was wondering if you have anything that you wanted to add to the feature-wise transformations?

Roman: With all of it, you can ask, “Well, why is this interesting, what are the applications for it?” So you are able to generate inputs: inputs for computers, inputs for people, images, sounds, videos. A lot of times they can be adversarial in nature as well—what we call deep fakes. Right? You can make, let’s say, a video of a famous politician saying something, or doing something.

Ariel: Yeah.

Roman: And this has very interesting implications for elections, for forensic science, for evidence. As those systems get better and better, it becomes harder and harder to tell if something is real or not. And maybe it’s still possible to do some statistical analysis, but it takes time, and we talked about media being not exactly always on top of it. So it may take 24 hours before we realize if this video was real or not, but the election is tonight.

Ariel: So I am definitely coming back to that. I want to finish going through the list of the technology stuff, but yeah I want to talk about deep fakes and in general, a lot of the issues that we’ve seen cropping up more and more with this idea of using AI to fake images and audio and video, because I think that is something that’s really important.

David: Yeah, it’s hard for me to estimate these things, but I would say this is probably, in terms of the impact that this is going to have societally, this is sort of the biggest story maybe of the last year. And it’s not like something that happened all of a sudden. Again, it’s something that has been building on a lot of progress in generative models and GANs and things like this. And it’s just going to continue, we’re going to see more and more progress like that, and probably some sort of arms race here where—I shouldn’t use that word.

Ariel: A competition.

David: A competition between people who are trying to use that kind of technology to fake things and people who are sort of doing forensics to try and figure out what is real and what is fake. And that also means that people are going to have to trust the people who have the expertise to do that, and believe that they’re actually doing that and not part of some sort of conspiracy or something.

Ariel: Alright, well are you guys ready to jump into some of those ethical questions?

David: Well, there are like two other broad things I wanted to mention, which I think are sort of interesting trends in the research community. One is just the way that people have been continuing to scale up AI systems. So a lot of the progress I think has arguably just been coming from more and more computation and more and more data. And there was a pretty great blog post by OpenAI about this last year that argued that the amount of computation that’s being used to train the most advanced AI systems has been increasing by a factor of 10 every year for the last several years, which is just astounding. But it also suggests that this might not be sustainable for a long time, so to the extent that you think that using more computation is a big driver of progress, we might start to see that slow down within a decade or so.
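
For a sense of scale, here is the compounding implied by that figure, just multiplying out the roughly 10x-per-year estimate from the OpenAI post rather than reporting any new measurement:

```python
# Compounding a roughly 10x-per-year growth in training compute.
growth_per_year = 10
for years in range(1, 7):
    print(f"{years} year(s): {growth_per_year ** years:,}x more compute")
# At this rate, six years is a millionfold increase, which is one reason
# the trend is unlikely to be sustainable for long.
```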

Roman: I’ll add another—what I think is also kind of a building-on technology, not so much a breakthrough, we’ve had it for a long time—but neuroevolution is something I’m starting to pay a lot more attention to. It’s kind of borrowing from biology, trying to evolve weights for neural networks, to optimize neural networks. And it’s producing very impressive results. It’s possible to run it in parallel really well, and it’s competitive with some of the leading alternative approaches.

So the idea basically is you have this very large neural network, a brain-like structure, but instead of trying to train it by backpropagating errors, teaching it in the standard neural network way, you just kind of have a population of those brains competing for who’s doing best on a particular problem, and they share weights between good parents, and after a while you just evolve really well-performing solutions to some of the most interesting problems.

Additionally you can kind of go meta-level on it and evolve architectures for the neural network itself—how many layers, how many inputs. This is nice because it doesn’t require much human intervention. You’re essentially letting the system figure out what the solutions are. We had some very successful results with genetic algorithms for optimization. We didn’t have much success with genetic programming, and now neuroevolution kind of brings it back, where you’re optimizing intelligent systems, and that’s very exciting.

Ariel: So you’re saying that you’ll have—to make sure I understand this correctly—there’s two or more neural nets trying to solve a problem, and they sort of play off of each other?

Roman: So you create a population of neural networks, and you give it a problem, and you see this one is doing really well, and that one. The others, maybe not so great. So you take weights from those two and combine them—like mom and dad, parent situation that produces offspring. And so you have this simulation of evolution where unsuccessful individuals are taken out of a population. Successful ones get to reproduce and procreate, and provide their high fitness weights to the next generation.
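
A minimal sketch of that population-based loop in plain Python, with a toy fitness function standing in for a real task; it is illustrative only, not any particular neuroevolution system.

```python
import random

def fitness(weights):
    # Toy objective: weights closer to a hidden target score higher.
    target = [0.5, -1.0, 2.0, 0.0]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def crossover(parent_a, parent_b):
    # Child takes each weight from one parent or the other, plus a small mutation.
    return [random.choice(pair) + random.gauss(0, 0.05)
            for pair in zip(parent_a, parent_b)]

population = [[random.uniform(-3, 3) for _ in range(4)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                  # unfit individuals are dropped
    population = survivors + [crossover(random.choice(survivors),
                                        random.choice(survivors))
                              for _ in range(40)]

print(max(population, key=fitness))  # should approach the hidden target
```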

Ariel: Okay. Was there anything else that you guys saw this year that you want to talk about, that you were excited about?

David: Well, I wanted to give a few examples of the kind of massive improvements in scale that we’ve seen. One of the most significant datasets and benchmarks in the community is ImageNet, and training image classifiers that can tell you what a picture is a picture of on this dataset. So the whole sort of deep learning revolution was arguably started, or at least really came into the eyes of the rest of the machine learning community, because of huge success on this ImageNet competition. And training the model there took something like two weeks, and this last year there was a paper where you can train a more powerful model in less than four minutes, and they do this by using like 3000 graphics cards in parallel.

And then DeepMind also had some progress on parallelism with this model called IMPALA, which basically was in the context of reinforcement learning as opposed to classification, and there they sort of came up with a way that allowed them to do updates in parallel—learn on different machines and combine everything that was learned in a way that’s asynchronous. So in the past the sort of methods that they would use for these reinforcement learning problems, you’d have to wait for all of the different machines to finish their learning on the current problem or instance that they’re learning about, and then combine all of that centrally—whereas the new method allows you to just as soon as you’re done computing or learning something, you can communicate it to the rest of the system, the other computers that are learning in parallel. And that was really important for allowing them to scale to hundreds of machines working on their problem at the same time.
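
As a toy illustration of that synchronous-versus-asynchronous distinction, and not IMPALA’s actual algorithm, the sketch below has worker threads push updates to a central learner through a queue the moment they finish, so no worker ever waits at a barrier for the slowest machine.

```python
import queue
import random
import threading
import time

updates = queue.Queue()
shared_value = 0.0  # stands in for the central learner's parameters

def worker(worker_id):
    for _ in range(5):
        time.sleep(random.uniform(0.01, 0.1))         # simulate uneven work
        updates.put((worker_id, random.gauss(0, 1)))  # push an update immediately

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()

# The learner applies updates in whatever order they arrive; no worker
# ever blocks on the slowest one, which is what lets this scale out.
for _ in range(20):
    worker_id, delta = updates.get()
    shared_value += 0.1 * delta

for t in threads:
    t.join()
print(shared_value)
```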

Ariel: Okay, and so that, just to clarify as well, that goes back to this idea that right now we’re seeing a lot of success just scaling up the computing, but at some point that could slow things down essentially, if we had a limit for how much computing is possible.

David: Yeah, and I guess one of my points is also that doing these kinds of scaling of computing requires some amount of algorithmic insight, or breakthrough if you want to be dramatic, as well. So this DeepMind paper I talked about, they had to devise new reinforcement learning algorithms that would still be stable when they had this real-time asynchronous updating. And so, in a way, yeah, a lot of the research that’s interesting right now is on finding ways to make the algorithms scale so that you can keep taking advantage of more and more hardware. And the evolution stuff also fits into that picture to some extent.

Ariel: Okay. I want to start making that transition into some of the concerns that we have for misuse around AI and how easy it is for people to be deceived by things that have been created by AI. But I want to start with something that’s hopefully a little bit more neutral, and talk about Google Duplex, which is the program that Google came out with, I think last May. I don’t know the extent to which it’s in use now, but they presented it, and it’s an AI assistant that can essentially make calls and set up appointments for you. So their examples were it could make a reservation at a restaurant for you, or it could make a reservation for you to get a haircut somewhere. And it got sort of mixed reviews, because on the one hand people were really excited about this, and on the other hand it was kind of creepy because it sounded human, and the people on the other end of the call did not know that they were talking to a machine.

So I was hoping you guys could talk a little bit I guess maybe about the extent to which that was an actual technological breakthrough versus just something—this one being more one of those breakthroughs that will impact society more directly. And then also I guess if you agree that this seems like a good place to transition into some of the safety issues.

David: Yeah, no, I would be surprised if they really told us about the details of how that worked. So it’s hard to know how much of an algorithmic breakthrough or algorithmic breakthroughs were involved. It’s very impressive, I think, just in terms of what it was able to do, and of course these demos that we saw were maybe selected for their impressiveness. But I was really, really impressed personally, just to see a system that’s able to do that.

Roman: It’s probably built on a lot of existing technology, but it is more about impact, about what you can do with this. And my background is cybersecurity, so I see it as a great tool for, like, automating spear-phishing attacks on a scale of millions. You’re getting what sounds like a real human calling you, talking to you, with access to your online data; Pretty much everyone’s gonna agree and do whatever the system is asking of you, whether it’s credit card numbers or social security numbers. So, in many ways it’s going to be a game changer.

Ariel: So I’m going to take that as a definite transition into safety issues. So, yeah, let’s start talking about, I guess, sort of human manipulation that’s happening here. First, the phrase “deep fake” shows up a lot. Can you explain what those are?

David: So “deep fakes” is basically just: you can make a fake video of somebody doing something or saying something that they did not actually do or say. People have used this to create fake videos of politicians, they’ve used it to create porn using celebrities. That was one of the things that got it on the front page of the internet, basically. And Reddit actually shut down the subreddit where people were doing that. But, I mean, there’s all sorts of possibilities.

Ariel: Okay, so I think the Reddit example was technically the very end of 2017. But all of this sort of became more of an issue in 2018. So we’re seeing this increase in capability to both create images that seem real, create audio that seems real, create video that seems real, and to modify existing images and video and audio in ways that aren’t immediately obvious to a human. What did we see in terms of research to try to protect us from that, or catch that, or defend against that?

Roman: So here’s an interesting observation, I guess. You can develop some sort of a forensic tool to analyze it, and give you a percentage likelihood that it’s real or that it’s fake. But does it really impact people? If you see it with your own eyes, are you going to believe your lying eyes, or some expert statistician on CNN?

So the problem is it will still have a tremendous impact on most people. We’re not very successful at convincing people about many scientific facts. They simply go outside, see that it’s cold right now, and conclude global warming is false. I suspect we’ll see exactly that with, let’s say, fake videos of politicians, where a majority of people easily believe anything they hear once or see once versus any number of peer-reviewed publications disproving it.

David: I kind of agree. I mean, I think, when I try to think about how we would actually solve this kind of problem, I don’t think a technical solution that just allows somebody who has technical expertise to distinguish real from fake is going to be enough. We really need to figure out how to build a better trust infrastructure in our whole society which is kind of a massive project. I’m not even sure exactly where to begin with that.

Roman: I guess the good news is it gives you plausible deniability. If a video of me comes out doing horrible things I can play it straight.

Ariel: That’s good for someone. Alright, so, I mean, you guys are two researchers, I don’t know how into policy you are, but I don’t know if we saw as many strong policies being developed. We did see the implementation of the GDPR, and for people who aren’t familiar with the GDPR, it’s essentially European rules about what data companies can collect from your interactions online, and the ways in which you need to give approval for companies to collect your data, and there’s a lot more to it than that. One of the things that I found most interesting about the GDPR is that it’s entirely European based, but it had a very global impact because it’s so difficult for companies to apply something only in Europe and not in other countries. And so earlier this year when you were getting all of those emails about privacy policies, that was all triggered by the GDPR. That was something very specific that happened and it did make a lot of news, but in general I felt that we saw a lot of countries and a lot of national and international efforts for governments to start trying to understand how AI is going to be impacting their citizens, and then also trying to apply ethics and things like that.

I’m sort of curious, before we get too far into anything: just as researchers, what is your reaction to that?

Roman: So I never got as much spam as I did that week when they released this new policy, so that kind of gives you a pretty good summary of what to expect. If you look at history, we have regulations against spam, for example. Computer viruses are illegal. So that’s a very expected result. It’s not gonna solve technical problems. Right?

David: I guess I like that they’re paying attention and they’re trying to tackle these issues. I think the way GDPR was actually worded, it has been criticized a lot for being either much too broad or demanding, or vague. I’m not sure—there are some aspects of the details of that regulation that I’m not convinced about, or not super happy about. I guess overall it seems like people who are making these kinds of decisions, especially when we’re talking about cutting edge machine learning, it’s just really hard. I mean, even people in the fields don’t really know how you would begin to effectively regulate machine learning systems, and I think there’s a lot of disagreement about what a reasonable level of regulation would be or how regulations should work.

People are starting to have that sort of conversation in the research community a little bit more, and maybe we’ll have some better ideas about that in a few years. But I think right now it seems premature to me to even start trying to regulate machine learning in particular, because we just don’t really know where to begin. I think it’s obvious that we do need to think about how we control the use of the technology, because it’s just so powerful and has so much potential for harm and misuse and accidents and so on. But I think how you actually go about doing that is a really unclear and difficult problem.

Ariel: So for me it’s sort of interesting, we’ve been debating a bit today about technological breakthroughs versus societal impacts, and whether 2018 actually had as many breakthroughs and all of that. But I would guess that all of us agree that AI is progressing a lot faster than government does.

David: Yeah.

Roman: That’s almost a tautology.

Ariel: So I guess as researchers, what concerns do you have regarding that? Like do you worry about the speed at which AI is advancing?

David: Yeah, I would say I definitely do. I mean, we were just talking about this issue with fakes and how that’s going to contribute to things like fake news and erosion of trust in media and authority and polarization of society. I mean, if AI wasn’t going so fast in that direction, then we wouldn’t have that problem. And I think the rate that it’s going, I don’t see us catching up—or I should say, I don’t see the government catching up on its own anytime soon—to actually control the use of AI technology, and do our best anyways to make sure that it’s used in a safe way, and a fair way, and so on.

I think in and of itself it’s maybe not bad that the technology is progressing fast. I mean, it’s really amazing; Scientifically there’s gonna be all sorts of amazing applications for it. But there’s going to be more and more problems as well, and I don’t think we’re really well equipped to solve them right now.

Roman: I’ll agree with David, I’m very concerned about the relative rates of progress. AI development progresses a lot faster than anything we see in AI safety. AI safety is just trying to identify problem areas, propose some general directions, but we have very little to show in terms of solved problems.

If you look at our work in adversarial fields, maybe a little bit in cryptography, the good guys have always been a step ahead of the bad guys, whereas here you barely have any good guys as a percentage. You have less than 1% of researchers working directly on safety full-time. Same situation with funding. So it’s not a very optimistic picture at this point.

David: I think it’s worth definitely distinguishing the kind of security risks that we’re talking about, in terms of fake news and stuff like that, from long-term AI safety, which is what I’m most interested in, and think is actually even more important, even though I think there’s going to be tons of important impacts we have to worry about already, and in the coming years.

And the long-term safety stuff is really more about artificial intelligence that becomes broadly capable and as smart as or smarter than humans across the board. And there, there are maybe a few more signs of hope if I look at how the field might progress in the future, and that’s because there are a lot of problems that are going to be relevant for controlling or aligning or understanding these kinds of generally intelligent systems that are probably going to need to be solved anyway in terms of making systems that are more capable in the near future.

So I think we’re starting to see issues with trying to get AIs to do what we want, and failing to, because we just don’t know how to specify what we want. And that’s, I think, basically the core of the AI safety problem—is that we don’t have a good way of specifying what we want. An example of that is what are called adversarial examples, which sort of demonstrate that computer vision systems that are able to do a really amazing job at classifying images and seeing what’s in an image and labeling images still make mistakes that humans just would never make. Images that look indistinguishable to humans can look completely different to the AI system, and that means that we haven’t really successfully communicated to the AI system what our visual concepts are. And so even though we think we have done a good job of telling it what to do, it’s like, “tell us what this picture is of”—the way that it found to do that really isn’t the way that we would do it and actually there’s some very problematic and unsettling differences there. And that’s another field that, along with the ones that I mentioned, like generative models and GANs, has been receiving a lot more attention in the last couple of years, which is really exciting from the point of view of safety and specification.
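
Adversarial examples are easy to demonstrate in code. Below is a minimal sketch of the standard fast gradient sign construction in PyTorch, applied to a toy untrained classifier; the model, input, and epsilon are placeholders chosen for illustration and are not tied to any system mentioned here.

```python
import torch
import torch.nn as nn

# Toy image classifier (untrained, for illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "photo"
true_label = torch.tensor([3])

# Gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), true_label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.05  # small enough to be nearly invisible to a person
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print(model(image).argmax(dim=1), model(adversarial).argmax(dim=1))
# The two inputs look almost identical, yet the model's answer can change.
```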

Ariel: So, would it be fair to say that you think we’ve had progress or at least seen progress in addressing long-term safety issues, but some of the near-term safety issues, maybe we need faster work?

David: I mean I think to be clear, we have such a long way to go to address the kind of issues we’re going to see with generally intelligent and super intelligent AIs, that I still think that’s an even more pressing problem, and that’s what I’m personally focused on. I just think that you can see that there are going to be a lot of really big problems in the near term as well. And we’re not even well equipped to deal with those problems right now.

Roman: I’ll generally agree with David. I’m more concerned about long-term impacts. They are both more challenging and more impactful. It seems like short-term things may be problematic right now, but the main difficulty is that we didn’t start working on them in time. So problems like algorithmic fairness, bias, technological unemployment are social issues which are quite solvable; They are not really that difficult from an engineering or technical point of view. Whereas long-term control of systems which are more intelligent than you are is very much unsolved at this point, in even any toy model. So I would agree with the part about bigger concerns, but I think the current problems we have today are already impacting people, and the good news is we know how to do better.

David: I’m not sure that we know how to do better exactly. Like I think a lot of these problems, it’s more of a problem of willpower and developing political solutions, so the ones that you mentioned. But with the deep fakes, this is something that I think requires a little bit more of a technical solution in the sense of how we organize our society so that people are either educated enough to understand this stuff, or so that people actually have someone they trust and have a reason to trust, who they can take their word for it on that.

Roman: That sounds like a great job, I’ll take it.

Ariel: It almost sounds like something we need to have someone doing in person, though.

So going back to this past year: were there, say, groups that formed, or research teams that came together, or just general efforts that, while maybe they didn’t produce something yet, you think could produce something good, either in safety or AI in general?

David: I think something interesting is happening in terms of the way AI safety is perceived and talked about in the broader AI and machine learning community. It’s a little bit like this phenomenon where once we solve something people don’t consider it AI anymore. So I think machine learning researchers, once they actually recognize the problem that the safety community has been sort of harping on and talking about and saying like, “Oh, this is a big problem”—once they say, “Oh yeah, I’m working on this kind of problem, and that seems relevant to me,” then they don’t really think that it’s AI safety, and they’re like, “This is just part of what I’m doing, making something that actually generalizes well and learns the right concept, or making something that is actually robust, or being able to interpret the model that I’m building, and actually know how it works.”

These are all things that people are doing a lot of work on these days in machine learning that I consider really relevant for AI safety. So I think that’s like a really encouraging sign, in a way, that the community is sort of starting to recognize a lot of the problems, or at least instances of a lot of the problems that are going to be really critical for aligning generally intelligent AIs.

Ariel: And Roman, what about you? Did you see anything sort of forming in the last year that maybe doesn’t have some specific result, but that seemed hopeful to you?

Roman: Absolutely. So I’ve mentioned that there are very few actual AI safety researchers compared to the number of AI developers, the researchers directly creating more capable machines. But the growth rate is much better, I think. The number of organizations, the number of people who show interest in it, the number of papers: these are all growing at a much faster rate, and it’s encouraging because, as David said, it’s kind of like this convergence, if you will, where more and more people realize, “I cannot say I built an intelligent system if it kills everyone.” That’s just not what an intelligent system is.

So safety and security become integral parts of it. I think Stuart Russell has a great example where he talks about bridge engineering. We don’t talk about safe bridges and secure bridges—there’s just bridges. If it falls down, it’s not a bridge. Exactly the same is starting to happen here: People realize, “My system cannot fail and embarrass the company, I have to make sure it will not cause an accident.”

David: I think that a lot of people are thinking about that way more and more, which is great, but there is a sort of research mindset, where people just want to understand intelligence, and solve intelligence. And I think that’s kind of a different pursuit. Solving intelligence doesn’t mean that you make something that is safe and secure, it just means you make something that’s really intelligent, and I would like it if people who had that mindset were still, I guess, interested in or respectful of or recognized that this research is potentially dangerous. I mean, not right now necessarily, but going forward I think we’re going to need to have people sort of agree on having that attitude to some extent of being careful.

Ariel: Would you agree though that you’re seeing more of that happening?

David: Yeah, absolutely, yeah. But I mean it might just happen naturally on its own, which would be great.

Ariel: Alright, so before I get to my very last question, is there anything else you guys wanted to bring up about 2018 that we didn’t get to yet?

David: So we were talking about AI safety and there’s kind of a few big developments in the last year. I mean, there’s actually too many I think for me to go over all of them, but I wanted to talk about something which I think is relevant to the specification problem that I was talking about earlier.

Ariel: Okay.

David: So, there are three papers in the last year, actually, on what I call superhuman feedback. The idea motivating these works is that even specifying what we want on a particular instance in some particular scenario can be difficult. So typically the way that we would think about training an AI that understands our intentions is to give it a bunch of examples, and say, “In this situation, I prefer if you do this. This is the kind of behavior I want,” and then the AI is supposed to pick up on the patterns there and sort of infer what our intentions are more generally.

But there can be some things that we would like AI systems to be competent at doing, ideally, that are really difficult to even assess individual instances of. An example that I like to use is designing a transit system for a large city, or maybe for a whole country, or the world or something. That’s something that right now is done by a massive team of people. Using that whole team to sort of assess a proposed design that the AI might make would be one example of superhuman feedback, because it’s not just a single human. But you might want to be able to do this with just a single human and a team of AIs helping them, instead of a team of humans. And there’s a few proposals for how you could do that that have come out of the safety community recently, which I think are pretty interesting.

Ariel: Why is it called superhuman feedback?

David: Actually, this is just my term for it. I don’t think anyone else is using this term.

Ariel: Okay.

David: Sorry if that wasn’t clear. The reason I use it is because there are three different, like, lines of work here. So there’s these two papers from OpenAI on what’s called amplification and debate, and then another paper from DeepMind on reward learning and recursive reward learning. And I like to view these as all kind of trying to solve the same problem: how can we assist humans and enable them to make good, informed judgements that actually reflect what their preferences are, when they’re not capable of doing that by themselves, unaided? So it’s superhuman in the sense that it’s better than a single human can do. And these proposals are also aspiring to do things that I think even teams of humans couldn’t do, by having AI helpers that sort of help you do the evaluation.

An example that Jan—who’s the lead author on the DeepMind paper, which I also worked on—gives is assessing an academic paper. So if you yourself aren’t familiar with the field and don’t have the expertise to assess this paper, you might not be able to say whether or not it should be published. But you can decompose that task into things like: is the paper valid? Are the proofs valid? Are the experiments following a reasonable protocol? Is it novel? Is it formatted correctly for the venue where it’s submitted? If you got answers to all of those from helpers, then you could make the judgment. You’d just be like, okay, it meets all of the criteria, so it should be published. The idea would be to get AI helpers to do those sorts of evaluations for you across a broad range of tasks, and in that way allow us to explain to AIs, or teach AIs, what we want across a broad range of tasks.
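
As a rough illustration of the decomposition idea described here (this is a toy sketch, not code from any of the papers mentioned), a judgment that is too hard to make directly can be split into narrower sub-questions answered by helpers, with the top-level answer simply aggregating them. The sub-questions and the helper stub below are invented for illustration.

```python
# Sub-questions a helper (human expert or AI assistant) could answer about a submission.
SUBTASKS = [
    "Are the proofs valid?",
    "Do the experiments follow a reasonable protocol?",
    "Is the work novel?",
    "Is it formatted correctly for the venue?",
]

def helper_assess(question, paper):
    # Stand-in for a helper answering one narrow sub-question about the paper.
    return paper.get(question, False)

def should_publish(paper):
    # The hard top-level judgment just aggregates the easier sub-assessments.
    return all(helper_assess(q, paper) for q in SUBTASKS)

paper = {q: True for q in SUBTASKS}
print(should_publish(paper))  # True only if every sub-assessment passes
```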

Ariel: So, okay, and so then were there other things that you wanted to mention as well?

David: I do feel like I should talk about another thing that, again, wasn’t developed last year but really took off last year—this new kind of neural network architecture called the transformer, which is basically being used in a lot of places where convolutional neural networks and recurrent neural networks were being used before. And those were kind of the two main driving forces behind the deep learning revolution: vision, where you use convolutional networks, and things that have a sequential structure, like speech or text, where people were using recurrent neural networks. And this architecture was actually motivated originally by the same sort of scaling consideration, because it allowed them to remove some of the most computationally heavy parts of running these kinds of models in the context of translation, and basically make it a hundred times cheaper to train a translation model. But since then it’s also been used in a lot of other contexts and has proven to be a really good replacement for these other kinds of models for a lot of applications.

And I guess the way to describe what it’s doing is that it’s based on what’s called an attention mechanism, which is basically a way of giving a neural network the ability to pay more attention to some parts of an input than others. So, for example, it can look at the one word that is most relevant to the current translation step. If you imagine outputting words one at a time, then because different languages put words in a different order, it doesn’t make sense to just translate the next word in sequence. You want to look through the whole input sentence, like a sentence in English, and find the word that corresponds to whatever word should come next in your output sentence.

And that was sort of the original inspiration for this attention mechanism, but since then it’s been applied in a bunch of different ways, including paying attention to different parts of the model’s own computation, or to different parts of images. And basically, just using this attention mechanism in place of the other neural architectures that people thought were really important for capturing temporal dependencies across something sequential, like a sentence you’re trying to translate, turned out to work really well.
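
For readers who want to see the mechanism itself, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside the transformer. The shapes and variable names are illustrative, and this leaves out the multi-head attention, positional encodings, and feed-forward layers of the full architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    # queries: (n_out, d); keys and values: (n_in, d)
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # how relevant each input position is to each output
    weights = softmax(scores, axis=-1)       # attention weights sum to 1 for each output position
    return weights @ values                  # weighted mix of the input values

# Toy usage: 2 output positions attending over a 4-token input of dimension 8.
q = np.random.randn(2, 8)
k = np.random.randn(4, 8)
v = np.random.randn(4, 8)
print(attention(q, k, v).shape)  # (2, 8)
```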

Ariel: So I want to actually pass this to Roman real quick. Did you have any comments that you wanted to add to either the superhuman feedback or the transformer architecture?

Roman: Sure, so superhuman feedback: I like the idea and I think people should be exploring that, but we can kind of look at similar examples previously. So, for a while we had a situation where teams of human chess players and machines did better than just unaided machines or unaided humans. That lasted about ten years. And then machines became so much better that humans didn’t really contribute anything; it was kind of just an additional bottleneck to consult with them. I wonder if long term this solution will face similar problems. It’s very useful right now, but I don’t know if it will scale.

David: Well, I want to respond to that, because I think the idea here isn’t necessarily, in my mind, to have something that scales in the way that you’re describing, where it can sort of out-compete pure AI systems. Although I guess some people might be hoping that that’s the case, because that would make the strategic picture better in terms of people’s willingness to use safer systems. But this is more about just how we can even train systems—if we have the willpower, if people want to build a system that has the human in charge, and ends up doing what the human wants—how can we actually do that for something that’s really complicated?

Roman: Right. And as I said, I think it’s a great way to get there. So this part I’m not concerned about. It’s a long-term game with that.

David: Yeah, no, I mean I agree that that is something to be worried about as well.

Roman: There is a possibility of manipulation if you have a human in the loop, and that itself makes it not safer but more dangerous in certain ways.

David: Yeah, one of the biggest concerns I have for this whole line of work is that the human needs to really trust the AI systems that are assisting it, and I just don’t see that we have good enough mechanisms for establishing trust and building trustworthy systems right now, to really make this scale well without introducing a lot of risk for things like manipulation, or even just compounding of errors.

Roman: But those approaches, like the debate approach, just feel like they’re setting up humans for manipulation from both sides, and it becomes a question of who’s better at breaking the human’s psychological model.

David: Yep, I think it’s interesting, and I think it’s a good line of work. But I think we haven’t seen anything that looks like a convincing solution to me yet.

Roman: Agreed.

Ariel: So, Roman, was there anything else that you wanted to add about things that happened in the last year that we didn’t get to?

Roman: Well, as a professor, I can tell you that students stop learning after about 40 minutes. So I think at this point we’re just being counterproductive.

Ariel: So for what it’s worth, our most popular podcasts have all exceeded two hours. So, what are you looking forward to in 2019?

Roman: Are you asking about safety or development?

Ariel: Whatever you want to answer. Just sort of in general, as you look toward 2019, what relative to AI are you most excited and hopeful to see, or what do you predict we’ll see?

David: So I’m super excited for people to hopefully pick up on this reward learning agenda that I mentioned, which Jan and I and people at DeepMind worked on. I was actually pretty surprised how little work has been done on this. So the idea of this agenda at a high level is just: we want to learn a reward function—which is like a score that tells an agent how well it’s doing—reward functions that encode what we want the AI to do, and that’s the way that we’re going to specify tasks to an AI. And I think from a machine learning researcher’s point of view this is kind of the most obvious solution to specification problems and to safety—just learn a reward function. But very few people are really trying to do that, and I’m hoping that we’ll see more people trying to do that, and encountering and addressing some of the challenges that come up.
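
One common way this is set up, sketched below, is to fit a reward model to pairwise human preferences between two behaviors (a Bradley-Terry style model). The feature vectors and the "human" preferences here are synthetic, so this is only a toy version of the kind of reward learning being described, not the agenda's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)                      # hidden weights standing in for human preferences
segments = rng.normal(size=(200, 2, 5))          # 200 pairs of behavior feature vectors
prefs = (segments[:, 0] @ true_w > segments[:, 1] @ true_w).astype(float)

w = np.zeros(5)                                  # parameters of the learned reward function
lr = 0.1
for _ in range(500):
    r0 = segments[:, 0] @ w                      # predicted reward of the first behavior in each pair
    r1 = segments[:, 1] @ w                      # predicted reward of the second behavior
    p0 = 1 / (1 + np.exp(r1 - r0))               # modeled probability the first is preferred
    grad = ((prefs - p0)[:, None] * (segments[:, 0] - segments[:, 1])).mean(axis=0)
    w += lr * grad                               # gradient ascent on the preference log-likelihood

print(np.corrcoef(w, true_w)[0, 1])              # learned reward closely tracks the hidden one on this toy data
```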

Roman: So I think by definition we cannot predict short-term breakthroughs. So what we’ll see is a lot of continuation of 2018 work, and previous work scaling up. If you have, let’s say, Texas hold ’em poker solved for two players, it will be taken to six players, ten players, something like that. And you can make similar projections for other fields: the strategy games will be taken to new maps, involve more players, and maybe additional handicaps will be introduced for the bots. But that’s all we can really predict, kind of gradual improvement.

Protein folding will be even more efficient in terms of predicting actual structures: accuracy rates that were climbing from 80% to 90% will hit 95, 96. And this is a very useful way of predicting what we can anticipate, and I’m trying to do something similar with accidents. So if we can see historically what was going wrong with systems, we can project those trends forward. And I’m happy to say that there are now at least two or three different teams working on collecting those examples and trying to analyze them and create taxonomies for them. So that’s very encouraging.

David: Another thing that comes to mind is—I mentioned adversarial examples earlier, which are these imperceptible differences to a human that change how the AI system perceives something like an image. And so far, for the most part, the field has been focused on really imperceptible changes. But I think now people are starting to move towards a broader idea of what counts as an adversarial example: basically anything that a human thinks clearly should belong to one class and the AI system thinks clearly should belong to some other class, that has sort of been constructed deliberately to create that kind of difference.

And I think this is going to be really interesting and exciting to see how the field tries to move in that direction, because as I mentioned, I think it’s hard to define how humans decide whether or not something is a picture of a cat. And the way that we’ve done it so far is just by giving lots of examples of things that we say are cats. But it turns out that that isn’t sufficient, and so I think this is really going to push a lot of people closer towards thinking about some of the really core safety challenges within the mainstream machine learning community. So I think that’s super exciting.

Roman: It is a very interesting topic, and I am in particular looking at a side subject in that, which is adversarial inputs for humans: machines developing what are, I guess, kind of like optical illusions and audio illusions, where a human mislabels inputs in a predictable way, which allows for manipulation.

Ariel: Along very similar lines, I think I want to modify my questions slightly, and also ask: coming up in 2019, what are you both working on that you’re excited about, if you can tell us?

Roman: Sure, so there have been a number of publications looking at particular limitations, either through mathematical proofs or through well-known economic models, on what is possible from a computational complexity point of view. And I’m trying to kind of integrate those into a single model showing—in principle, not in practice, but even in principle—what can we do with the AI control problem? How solvable is it? Is it solvable? Is it not solvable? Because I don’t think there is a mathematically rigorous proof, or even a rigorous argument, either way. So I think that will be helpful, especially for arguing about the importance of the problem and about resource allocation.

David: I’m trying to think what I can talk about. I guess right now I have some ideas for projects that are not super well thought out, so I won’t talk about those. And I have a project that I’m trying to finish off which is a little bit hard to describe in detail, but I’ll give the really high level motivation for it. And it’s about something that people in the safety community like to call capability control. I think Nick Bostrom has these terms, capability control and motivation control. And so what I’ve been talking about most of the time in terms of safety during this podcast was more like motivation control, like getting the AI to want to do the right thing, and to understand what we want. But that might end up being too hard, or sort of limited in some respect. And the alternative is just to make AIs that aren’t capable of doing things that are dangerous or catastrophic.

A lot of people in the safety community sort of worry about capability control approaches failing, because if you have a very intelligent agent, it will view these attempts to control it as undesirable, and try to free itself from any constraints that we give it. And I think a way of trying to get around that problem is to look at capability control through the lens of motivation control: to basically make an AI that doesn’t want to influence certain things, and maybe doesn’t have some of these drives to influence the world, or to influence the future. And so in particular I’m trying to see how we can design agents that really don’t try to influence the future, and really only care about doing the right thing, right now. And if we try to do that in a sort of naïve way, there are ways that it can fail, and we can get some sort of emergent drive to still try to optimize over the long term, or to have some influence on the future. And I think to the extent we see things like that, that’s problematic from this perspective of just making AIs that aren’t capable of or motivated to influence the future.

Ariel: Alright! I think I’ve kept you both on for quite a while now. So, David and Roman, thank you so much for joining us today.

David: Yeah, thank you both as well.

Roman: Thank you so much.

FLI Podcast: Artificial Intelligence: American Attitudes and Trends with Baobao Zhang

Our phones, our cars, our televisions, our homes: they’re all getting smarter. Artificial intelligence is already inextricably woven into everyday life, and its impact will only grow in the coming years. But while this development inspires much discussion among members of the scientific community, public opinion on artificial intelligence has remained relatively unknown.

Artificial Intelligence: American Attitudes and Trends, a report published earlier in January by the Center for the Governance of AI, explores this question. Its authors relied on an in-depth survey to analyze American attitudes towards artificial intelligence, from privacy concerns to beliefs about U.S. technological superiority. Some of their findings—most Americans, for example, don’t trust Facebook—were unsurprising. But much of their data reflects trends within the American public that have previously gone unnoticed.

This month Ariel was joined by Baobao Zhang, lead author of the report, to talk about these findings. Zhang is a PhD candidate in Yale University’s political science department and research affiliate with the Center for the Governance of AI at the University of Oxford. Her work focuses on American politics, international relations, and experimental methods.

In this episode, Zhang spoke about her take on some of the report’s most interesting findings, the new questions it raised, and future research directions for her team. Topics discussed include:

  • Demographic differences in perceptions of AI
  • Discrepancies between expert and public opinions
  • Public trust (or lack thereof) in AI developers
  • The effect of information on public perceptions of scientific issues

Research and publications discussed in this episode include:

  • Artificial Intelligence: American Attitudes and Trends, Baobao Zhang and Allan Dafoe, Center for the Governance of AI, January 2019

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi there. I’m Ariel Conn with the Future of Life Institute. Today, I am doing a special podcast, which I hope will be just the first in a continuing series, in which I talk to researchers about the work that they’ve just published. Last week, a report came out called Artificial Intelligence: American Attitudes and Trends, which is a survey that looks at what Americans think about AI. I was very excited when the lead author of this report agreed to come join me and talk about her work on it, and I am actually now going to just pass this over to her, and let her introduce herself, and just explain a little bit about what this report is and what prompted the research.

Baobao: My name is Baobao Zhang. I’m a PhD candidate in Yale University’s political science department, and I’m also a research affiliate with the Center for the Governance of AI at the University of Oxford. We conducted a survey of 2,000 American adults in June 2018 to look at what Americans think about artificial intelligence. We did so because we believe that AI will impact all aspects of society, and therefore the public is a key stakeholder. We feel that we should study what Americans think about this technology that will impact them. In this survey, we covered a lot of ground. In the past, surveys about AI have tended to have a very specific focus, for instance on automation and the future of work. What we try to do here is cover a wide range of topics, including the future of work, but also lethal autonomous weapons, how AI might impact privacy, and trust in various actors to develop AI.

So one of the things we found is Americans believe that AI is a technology that should be carefully managed. In fact, 82% of Americans feel this way. Overall, Americans express mixed support for developing AI. 41% somewhat support or strongly support the development of AI, while there’s a smaller minority, 22%, that somewhat or strongly opposes it. And in terms of the AI governance challenges that we asked—we asked about 13 of them—Americans think all of them are quite important, although they prioritize preventing AI-assisted surveillance from violating privacy and civil liberties, preventing AI from being used to spread fake news online, preventing AI cyber attacks, and protecting data privacy.

Ariel: Can you talk a little bit about what the difference is between concerns about AI governance and concerns about AI development, more on the research side?

Baobao: In terms of the support for developing AI, we saw that as a general question in terms of support—we didn’t get into the specifics of what developing AI might look like. But in terms of the governance challenges, we gave quite detailed, concrete examples of governance challenges, and these tend to be more specific.

Ariel: Would it be fair to say that this report looks specifically at governance challenges as opposed to development?

Baobao: It’s a bit of both. I think we ask both about the R&D side, for instance we ask about support for developing AI and which actors the public trusts to develop AI. On the other hand, we also ask about the governance challenges. Among the 13 AI governance challenges that we presented to respondents, Americans tend to think all of them are quite important.

Ariel: What were some of the results that you expected, that were consistent with what you went into this survey thinking people thought, and what were some of the results that surprised you?

Baobao: Some of the results that surprised us is how soon the public thinks that high-level machine intelligence will be developed. We find that they think it will happen a lot sooner than what experts predict, although some past research suggests similar results. What didn’t surprise me, in terms of the AI governance challenge question, is how people are very concerned about data privacy and digital manipulation. I think these topics have been in the news a lot recently, given all the stories about hacking or digital manipulation on Facebook.

Ariel: So going back real quick to your point about the respondents expecting high-level AI happening sooner: how soon do they expect it?

Baobao: In our survey, we asked respondents about high-level machine intelligence, and we defined it as when machines are able to perform almost all tasks that are economically relevant today better than the median human today at each task. My co-author, Allan Dafoe, and some of my other team members, we’ve done a survey asking AI researchers—this was back in 2016—a similar question, and there we had a different definition of high-level machine intelligence that required a higher bar, so to speak. So that might have caused some difference. We’re trying to ask this question again to AI researchers this year. We’re doing continuing research, so hopefully the results will be more comparable. Even so, I think the difference is quite large.

I guess one more caveat, which we have in a footnote: in a pilot survey of the American public, we did ask the question using the same definition that we gave the AI experts in 2016, and we also found that the public thinks high-level machine intelligence will happen sooner than experts predict. So it might not just be driven by the definition itself; the public and experts have different assessments. But to answer your question, the median respondent in our American public sample predicts that there’s a 54% probability of high-level machine intelligence being developed within the next 10 years, which is quite a high probability.

Ariel: I’m hesitant to ask this, because I don’t know if it’s a very fair question, but do you have thoughts on why the general public thinks that high-level AI will happen sooner? Do you think it is just a case that there’s different definitions that people are referencing, or do you think that they’re perceiving the technology differently?

Baobao: I think that’s a good question, and we’re doing more research to investigate these results and to probe at it. One thing is that the public might have a different perception of what AI is compared to experts. In future surveys, we definitely want to investigate that. Another potential explanation is that the public lacks understanding of what goes into AI R&D.

Ariel: Have there been surveys that are as comprehensive as this in the past?

Baobao: I’m hesitant to say that there are surveys that are as comprehensive as this. We certainly relied on a lot of past survey research when building our surveys. The Eurobarometer had a couple of good surveys on AI in the past, but I think we cover both sort of the long-term and the short-term AI governance challenges, and that’s something that this survey really does well.

Ariel: Okay. The reason I ask that is I wonder how much people’s perceptions or misperceptions of how fast AI is advancing would be influenced by the fact that we have had significant advancements just in the last couple of years, advancements that I don’t think were quite as prominent when previous surveys were presented to people.

Baobao: Yes, that certainly makes sense. One part of our survey tries to track responses over time, so I was able to dig up some surveys going all the way back to the 1980s that were conducted by the National Science Foundation on the question of automation—whether automation will create more jobs or eliminate more jobs. And we find that compared with the historical data, the percentage of people who think that automation will create more jobs than it eliminates—that percentage has decreased, so this result could be driven by people reading in the news about all these advances in AI and thinking, “Oh, AI is getting really good these days at doing tasks normally done by humans,” but again, you would need much more data to sort of track these historical trends. So we hope to do that. We just recently received a grant from the Ethics and Governance of AI Fund, to continue this research in the future, so hopefully we will have a lot more data, and then we can really map out these historical trends.

Ariel: Okay. We looked at those 13 governance challenges that you mentioned. I want to more broadly ask the same two-part question of: looking at the survey in its entirety, what results were most expected and what results were most surprising?

Baobao: In terms of the AI governance challenge question, I think we had expected some of the results. We’d done some pilot surveys in the past, so we were able to have a little bit of a forecast, in terms of the governance challenges that people prioritize, such as data privacy, cyber attacks, surveillance, and digital manipulation. These were also things that respondents in the pilot surveys had prioritized. I think some of the governance challenges that people still think of as important, but don’t view as likely to impact large numbers of people in the next 10 years, such as critical AI systems failure—these questions are sort of harder to ask in some ways. I know that AI experts think about it a lot more than, say, the general public.

Another thing that sort of surprised me is how much people think value alignment—which is sort of an abstract concept—is quite important, and also likely to impact large numbers of people within the next 10 years. It’s up there with safety of autonomous vehicles or biased hiring algorithms, so that was somewhat surprising.

Ariel: That is interesting. So if you’re asking people about value alignment, were respondents already familiar with the concept, or was this something that was explained to them and they just had time to consider it as they were looking at the survey?

Baobao: We explained to them what it meant, and we said that it means to make sure that AI systems are safe, trustworthy, and aligned with human values. Then we gave a brief paragraph definition. We think that maybe people haven’t heard of this term before, or it could be quite abstract, so therefore we gave a definition.

Ariel: I would be surprised if it was a commonly known term. Then looking more broadly at the survey as a whole, you looked at lots of different demographics. You asked other questions too, just in terms of things like global risks and the potential for global risks, or generally about just perception of AI in general, and whether or not it was good, and whether or not advanced AI was good or bad, and things like that. So looking at the whole survey, what surprised you the most? Was it still answers within the governance challenges, or did anything else jump out at you as unexpected?

Baobao: Another thing that jumped out at me is that respondents who have computer science or engineering degrees tend to think that the AI governance challenges are less important across the board than people who don’t have computer science or engineering degrees. These people with computer science or engineering degrees also are more supportive of developing AI. I suppose that result is not totally unexpected, but I suppose in the news there is a sense that people who are concerned about AI safety, or AI governance challenges, tend to be those who have a technical computer background. But in reality, what we see are people who don’t have a tech background who are concerned about AI. For instance, women, those with low levels of education, or those who are low-income, tend to be the least supportive of developing AI. That’s something that we want to investigate in the future.

Ariel: There’s an interesting graph in here where you’re showing the extent to which the various groups consider an issue to be important, and as you said, people with computer science or engineering degrees typically don’t consider a lot of these issues very important. I’m going to list the issues real quickly. There’s data privacy, cyber attacks, autonomous weapons, surveillance, autonomous vehicles, value alignment, hiring bias, criminal justice bias, digital manipulation, US-China arms race, disease diagnosis, technological unemployment, and critical AI systems failure. So as you pointed out, the people with the CS and engineering degrees just don’t seem to consider those issues nearly as important, but you also have a category here of people with computer science or programming experience, and they have very different results. They do seem to be more concerned. Now, I’m sort of curious what the difference was between someone who has experience with computer science and someone who has a degree in computer science.

Baobao: I don’t have a very good explanation for the difference between the two, except to say that having experience is a lower bar, so there are more people in the sample who have computer science or programming experience—in fact, there are 735 of them, compared to 195 people who have computer science or engineering undergraduate or graduate degrees. So those with CS or programming experience simply comprise a greater number of people. Going forward, in future surveys, we want to probe at this a bit more. We might look at what industries various people are working in, or how much experience they have either using AI or developing AI.

Ariel: And then I’m also sort of curious—I know you guys still have more work that you want to do—but I’m curious what you know now about how American perspectives are either different or similar to people in other countries.

Baobao: The most direct comparison that we can make is with respondents in the EU, because we have a lot of data based on the Eurobarometer surveys, and we find that Americans share similar concerns with Europeans about AI. So as I mentioned earlier, 82% of Americans think that AI is a technology that should be carefully managed, and that percentage is similar to what the EU respondents have expressed. Also, we find similar demographic trends, in that women, those with lower levels of income or lower levels of education, tend to be not as supportive of developing AI.

Ariel: I went through this list, and one of the things that was on it is the potential for a US-China arms race. Can you talk a little bit about the results that you got from questions surrounding that? Do Americans seem to be concerned about a US-China arms race?

Baobao: One of the interesting findings from our survey is that Americans don’t necessarily think the US or China is the best at AI R&D, which is surprising, given that these two countries are probably the best. That’s a curious fact that I think we need to be cognizant of.

Ariel: I want to interject there, and then we can come back to my other questions, because I was really curious about that. Is that a case of the way you asked it—it was just, you know, “Is the US in the lead? Is China in the lead?”—as opposed to saying, “Do you think the US or China are in the lead?” Did respondents seem confused by possibly the way the question was asked, or do they actually think there’s some other country where there’s even more research happening?

Baobao: We asked this question in the way it has been asked about general scientific achievement by the Pew Research Center, so we set it up as a survey experiment where half of the respondents were randomly assigned to consider the US and half of the respondents were randomly assigned to consider China. We wanted to ask the question in this manner so we get a more detailed distribution of responses: when you just ask who is in the lead, you’re only allowed to put down one country, whereas we give respondents a number of choices, so a country can be rated best in the world, above average, et cetera.
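
As an illustration of that split-sample design (with made-up respondent IDs and no real survey data), the random assignment itself is straightforward:

```python
import random

respondents = list(range(2000))                  # stand-in IDs for the survey sample
random.shuffle(respondents)
half = len(respondents) // 2
assignment = {rid: ("US" if i < half else "China")
              for i, rid in enumerate(respondents)}

# Each group then rates its assigned country on the same scale (e.g. "best in the
# world" down to "below average"), and the two response distributions are compared.
print(sum(1 for country in assignment.values() if country == "US"))  # 1000
```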

In terms of people underestimating US R&D, I think this is reflective of the public underestimating US scientific achievements in general. Pew had a similar question in a 2015 survey, and while 45% of the scientists they interviewed think that scientific achievements in the US are the best in the world, only 15% of Americans expressed the same opinion. So this could just be reflecting this general trend.

Ariel: I want to go back to my questions about the US-China arms race, and I guess it does make sense, first, to just define what you are asking about with a US-China arms race. Is that focused more on R&D, or were you also asking about a weapons race?

Baobao: This is actually a survey experiment, where we presented different messages to respondents about a potential US-China arms race, and we asked both about investment in AI military capabilities as well as developing AI in a more peaceful manner, and about cooperation between the US and China on general R&D. We found that Americans seem to support the US investing more in AI military capabilities, to make sure that it doesn’t fall behind China’s, even though it would exacerbate an AI military arms race. On the other hand, they also support the US working hard to cooperate with China to avoid the dangers of an AI arms race, and they don’t seem to recognize that there’s a trade-off between the two.

I think this result is important for policymakers who are trying not to exacerbate an arms race, or to prevent one: when communicating with the public, they need to communicate these trade-offs. We find that messages explaining the risks of an arms race tend to decrease respondent support for the US investing more in AI military capabilities, but the other information treatments don’t seem to change public perceptions.

Ariel: Do you think it’s a misunderstanding of the trade-offs, or maybe just hopeful thinking that there’s some way to maintain military might while still cooperating?

Baobao: I think this is a question that involves further investigation. I apologize that I keep saying this.

Ariel: That’s the downside to these surveys. I end up with far more questions than get resolved.

Baobao: Yes, and we’re one of the first groups who are asking these questions, so we’re just at the beginning stages of probing this very important policy question.

Ariel: With a project like this, do you expect to get more answers or more questions?

Baobao: I think in the beginning stages, we might get more questions than answers, although we are certainly getting some important answers—for instance that the American public is quite concerned about the societal impacts of AI. With that result, then we can probe and get more detailed answers hopefully. What are they concerned about? What can policymakers do to alleviate these concerns?

Ariel: Let’s get into some of the results that you had regarding trust. Maybe you could just talk a little bit about what you asked the respondents first, and what some of their responses were.

Baobao: Sure. We asked two questions regarding trust. We asked about trust in various actors to develop AI, and we also asked about trust in various actors to manage the development and deployment of AI. These actors include parts of the US government, international organizations, companies, and other groups such as universities or nonprofits. We found that the actors most trusted to develop AI include university researchers and the US military.

Ariel: That was a rather interesting combination, I thought.

Baobao: I would like to give it some context. In general, trust in institutions is low among the American public. Particularly, there’s a lot of distrust in the government, and university researchers and the US military are the most trusted institutions across the board, when you ask about other trust issues.

Ariel: I would sort of wonder if there are political sides with which people are more likely to trust universities and researchers versus the military. Was it that respondents across the board, on either side of the political aisle, trusted both, or were there political demographics involved in that?

Baobao: That’s something that we can certainly look into with our existing data. I would need to check and get back to you.

Ariel: The other thing that I thought was interesting with that—and we can get into the actors that people don’t trust in a minute—but I know I hear a lot of concern that Americans don’t trust scientists. As someone who does a lot of science communication, I think that concern is overblown. I think there is actually a significant amount of trust in scientists; There are just certain areas where it’s less. And I was sort of wondering what you’ve seen in terms of trust in science, and if the results of this survey have impacted that at all.

Baobao: I would like to add that among the actors we asked about who are currently building AI or planning to build AI, trust is relatively low across all of these groups.

Ariel: Okay.

Baobao: So, even with university scientists: 50% of respondents say that they have a great amount of confidence or a fair amount of confidence in university researchers developing AI in the interest of the public. That’s better than some of these other organizations, but it’s not super high, and that is a bit concerning. And in terms of trust in science in general—I used to work in the climate policy space before I moved into AI policy, and there, it’s a question that we struggle with in terms of trust in expertise with regards to climate change. I found in my past research that communicating the scientific consensus on climate change is actually an effective messaging tool, so your point that concerns about distrust in science are overblown could be true. So I think going forward, in terms of effective scientific communication, having AI researchers deliver an effective message could be important in bringing the public to trust AI more.

Ariel: As someone in science communication, I would definitely be all for that, but I’m also all for more research to understand that better. I also want to go into the organizations that Americans don’t trust.

Baobao: I think in terms of tech companies, they’re not perceived as untrustworthy across the board. I think trust is still relatively high for tech companies, besides Facebook. People really don’t trust Facebook, and that could be because of all the recent coverage of Facebook violating data privacy, the Cambridge Analytica scandal, digital manipulation on Facebook, et cetera. We conducted this survey a few months after the Cambridge Analytica Facebook scandal had been in the news, but we had also run some pilot surveys before all that press coverage of the Cambridge Analytica Facebook scandal broke, and we also found that people distrust Facebook. So it might be something particular to the company, although it’s a cautionary tale for other tech companies: they should work hard to make sure that the public trusts their products.

Ariel: So I’m looking at this list, and under the tech companies, you asked about Microsoft, Google, Facebook, Apple, and Amazon. And I guess one question that I have: the trust in the other four, Microsoft, Google, Apple, and Amazon, appears to be roughly on par, and then there’s very limited trust in Facebook. But I wonder, do you think it’s just—since you’re saying that Facebook also wasn’t terribly trusted beforehand—do you think that has to do with the fact that we have to give so much more personal information to Facebook? I don’t think people are aware of giving as much data to even Google, or Microsoft, or Apple, or Amazon.

Baobao: That could be part of it. So, I think going forward, we might want to ask more detailed questions about how people use certain platforms, or whether they’re aware that they’re giving data to particular companies.

Ariel: Are there any other reasons that you think could be driving people to not trust Facebook more than the other companies, especially as you said, with the questions and testing that you’d done before the Cambridge Analytica scandal broke?

Baobao: Before the Cambridge Analytica Facebook scandal, there was a lot of news coverage around the 2016 elections of widespread digital manipulation on Facebook and on social media, so that could be driving the results.

Ariel: Okay. Just to be consistent and ask you the same question over and over again, with this, what did you find surprising and what was on par with your expectations?

Baobao: I suppose I don’t find the Facebook results that surprising, given the negative press coverage and also our pilot results. What I did find surprising is the high levels of trust in the US military to develop AI, because I think some of us in the AI policy community are concerned about military applications of AI, such as lethal autonomous weapons. But on the other hand, Americans seem to place a high general level of trust in the US military.

Ariel: Yeah, that was an interesting result. So if you were going to move forward, what are some questions that you would ask to try to get a better feel for why the trust is there?

Baobao: I think I would like to ask some questions about particular uses or applications of AI these various actors are developing. Sometimes people aren’t aware that the US military is perhaps investing in this application of AI that they might find problematic, or that some tech companies are working on some other applications. I think going forward, we might do more of these survey experiments, where we give information to people and see if that increases or decreases trust in the various actors.

Ariel: What did Americans think of high-level machine intelligence and AI?

Baobao: What we found is that the public thinks, on balance, it will be more bad than good: So we have 15% of respondents who think it will be extremely bad, possibly leading to human extinction, and that’s a concern. On the other hand, only 5% think it will be extremely good. There’s a lot of uncertainty. To be fair, it is about a technology that a lot of people don’t understand, so 18% said, “I don’t know.”

Ariel: What do we take away from that?

Baobao: I think this also reflects the previous findings that I talked about, where Americans expressed concern about where AI is headed: there are people with serious reservations about AI’s impact on society. Certainly, AI researchers and policymakers should take these concerns seriously and invest a lot more in research into how to prevent the bad outcomes and how to make sure that AI can be beneficial to everyone.

Ariel: Were there groups who surprised you by either being more supportive of high-level AI and groups who surprised you by being less supportive of high-level AI?

Baobao: I think the results for support of developing high-level machine intelligence versus support for developing AI, they’re quite similar. The correlation is quite high, so I suppose nothing is entirely surprising. Again, we find that people with CS or engineering degrees tend to have higher levels of support.

Ariel: I find it interesting that people who have higher incomes seem to be more supportive as well.

Baobao: Yes. That’s another result that’s pretty consistent across the two questions. We also performed analysis looking at these different levels of support for developing high-level machine intelligence, controlling for support of developing AI, and what we find there is that those with CS or programming experience have greater support of developing high-level machine intelligence, even controlling for support of developing AI. So there, it seems to be another tech optimism story, although we need to investigate further.

Ariel: And can you explain what you mean when you say that you’re analyzing the support for developing high-level machine intelligence with respect to the support for AI? What distinction are you making there?

Baobao: Sure. So we use a multiple linear regression model, where we’re trying to predict support for developing high-level machine intelligence using all these demographic characteristics, but also including respondents’ support for developing AI, to see if there’s something driving support for developing high-level machine intelligence even after controlling for support for developing AI. And we find that, controlling for support for developing AI, having CS or programming experience is further correlated with support for developing high-level machine intelligence. I hope that makes sense.
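
A minimal sketch of that kind of model, using synthetic data and illustrative variable names rather than the report's actual coding, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
cs_experience = rng.integers(0, 2, n).astype(float)    # 1 = has CS or programming experience
support_ai = rng.normal(0, 1, n)                        # general support for developing AI
# Synthetic outcome: support for developing high-level machine intelligence.
support_hlmi = 0.5 * support_ai + 0.3 * cs_experience + rng.normal(0, 1, n)

# Ordinary least squares with an intercept, support_ai as a control,
# and cs_experience as the predictor of interest.
X = np.column_stack([np.ones(n), support_ai, cs_experience])
coefs, *_ = np.linalg.lstsq(X, support_hlmi, rcond=None)
intercept, b_support_ai, b_cs = coefs
print(b_cs)  # CS experience still predicts support, even controlling for support_ai
```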

Ariel: For the purposes of the survey, how do you distinguish between AI and high-level machine intelligence?

Baobao: We defined AI as computer systems that perform tasks or make decisions that usually require human intelligence. So that’s a more general definition, versus high-level machine intelligence defined in such a way where the AI is doing most economically relevant tasks at the level of the median human.

Ariel: Were there inconsistencies between those two questions, where you were surprised to find support for one and not support for the other?

Baobao: We can sort of probe it further, to see if there are people who answered differently for those two questions. We haven’t looked into it, but certainly that’s something that we can do with our existing data.

Ariel: Were there any other results that you think researchers specifically should be made aware of, that could potentially impact the work that they’re doing in terms of developing AI?

Baobao: I guess here’s some general recommendations. I think it’s important for researchers or people working in an adjacent space to do a lot more scientific communication to explain to the public what they’re doing—particularly maybe AI safety researchers, because I think there’s a lot of hype about AI in the news, either how scary it is or how great it will be, but I think some more nuanced narratives would be helpful for people to understand the technology.

Ariel: I’m more than happy to do what I can to try to help there. So for you, what are your next steps?

Baobao: Currently, we’re working on two projects. We’re hoping to run a similar survey in China this year, so we’re currently translating the questions into Chinese and changing the questions to have more local context. So then we can compare our results—the US results with the survey results from China—which will be really exciting. We’re also working on surveying AI researchers about various aspects of AI, both looking at their predictions for AI development timelines, but also their views on some of these AI governance challenge questions.

Ariel: Excellent. Well, I am very interested in the results of those as well, so I hope you’ll keep us posted when those come out.

Baobao: Yes, definitely. I will share them with you.

Ariel: Awesome. Is there anything else you wanted to mention?

Baobao: I think that’s it.

Ariel: Thank you so much for joining us.

Baobao: Thank you. It’s a pleasure talking to you.

 

 

Podcast: Existential Hope in 2019 and Beyond

Humanity is at a turning point. For the first time in history, we have the technology to completely obliterate ourselves. But we’ve also created boundless possibilities for all life that could enable just about any brilliant future we can imagine. Humanity could erase itself with a nuclear war or a poorly designed AI, or we could colonize space and expand life throughout the universe: As a species, our future has never been more open-ended.

The potential for disaster is often more visible than the potential for triumph, so as we prepare for 2019, we want to talk about existential hope, and why we should actually be more excited than ever about the future. In this podcast, Ariel talks to six experts–Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark, and Anders Sandberg–about their views on the present, the future, and the path between them.

Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and creator of the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future.

We hope you’ll come away feeling inspired and motivated–not just to prevent catastrophe, but to facilitate greatness.

Topics discussed in this episode include:

  • How technology aids us in realizing personal and societal goals.
  • FLI’s successes in 2018 and our goals for 2019.
  • Worldbuilding and how to conceptualize the future.
  • The possibility of other life in the universe and its implications for the future of humanity.
  • How we can improve as a species and strategies for doing so.
  • The importance of a shared positive vision for the future, what that vision might look like, and how a shared vision can still represent a wide enough set of values and goals to cover the billions of people alive today and in the future.
  • Existential hope and what it looks like now and far into the future.

You can listen to the podcast above, or read the full transcript below.

Ariel: Hi everyone. Welcome back to the FLI podcast. I’m your host, Ariel Conn, and I am truly excited to bring you today’s show. This month, we’re departing from our standard two-guest interview format because we wanted to tackle a big and fantastic topic for the end of the year that would require insight from a few extra people. It may seem as if we at FLI spend a lot of our time worrying about existential risks, but it’s helpful to remember that we don’t do this because we think the world will end tragically: We address issues relating to existential risks because we’re so confident that if we can overcome these threats, we can achieve a future greater than any of us can imagine.

And so, as we end 2018 and look toward 2019, we want to focus on a message of hope, a message of existential hope.

I’m delighted to present Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark and Anders Sandberg, all of whom were kind enough to come on the show and talk about why they’re so hopeful for the future and just how amazing that future could be.

Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and she created the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future.

Over the course of a few days, I interviewed all six of our guests, and I have to say, it had an incredibly powerful and positive impact on my psyche. We’ve merged these interviews together for you here, and I hope you’ll all also walk away feeling a bit more hope for humanity’s collective future, whatever that might be.

But before we go too far into the future, let’s start with Anthony and Max, who can talk a bit about where we are today.

Anthony: I’m Anthony Aguirre, I’m one of the founders of the Future of Life Institute. And in my day job, I’m a Physicist at the University of California at Santa Cruz.

Max: I am Max Tegmark, a professor doing physics and AI research here at MIT, and also the president of the Future of Life Institute.

Ariel: All right. Thank you so much for joining us today. I’m going to start with sort of a big question. That is, do you think we can use technology to solve today’s problems?

Anthony: I think we can use technology to solve any problem in the sense that I think technology is an extension of our capability: it’s something that we develop in order to accomplish our goals and to bring our will to fruition. So, sort of by definition, when we have goals that we want to accomplish — problems that we want to solve — technology should in principle be part of the solution.

Max: Take, for example, poverty. It’s not like we don’t have the technology right now to eliminate poverty. But we’re steering the technology in such a way that there are people who starve to death, and even in America there are a lot of children who just don’t get enough to eat, through no fault of their own.

Anthony: So I’m broadly optimistic that, as it has over and over again, technology will let us do things that we want to do better than we were previously able to do them. Now, that being said, there are things that are more amenable to better technology, and things that are less amenable. And there are technologies that tend to, rather than functioning as kind of an extension of our will, will take on a bit of a life of their own. If you think about technologies like medicine, or good farming techniques, those tend to be sort of overall beneficial and really are kind of accomplishing purposes that we set. You know, we want to be more healthy, we want to be better fed, we build the technology and it happens. On the other hand, there are obviously technologies that are just as useful or even more useful for negative purposes — socially negative or things that most people agree are negative things: landmines, for example, as opposed to vaccines. These technologies come into being because somebody is trying to accomplish their purpose — defending their country against an invading force, say — but once that technology exists, it’s kind of something that is easily used for ill purposes.

Max: Technology simply empowers us to do good things or bad things. Technology isn’t evil, but it’s also not good. It’s morally neutral. Right? You can use fire to warm up your home in the winter or to burn down your neighbor’s house. We have to figure out how to steer it and where we want to go with it. I feel that there’s been so much focus on just making our tech powerful right now — because that makes money, and it’s cool — that we’ve neglected the steering and the destination quite a bit. And in fact, I see that as the core goal of the Future of Life Institute: to help bring back focus on the steering of our technology and the destination.

Anthony: There are also technologies that are really tricky in that they give us what we think we want, but then we sort of regret later, like addictive drugs, or gambling, or cheap sugary foods, or-

Ariel: Social media.

Anthony: … certain online platforms that will go unnamed. We feel like this is what we want to do at the time; We choose to do it. We choose to eat the huge sugary thing, or to spend some time surfing the web. But later, with a different perspective maybe, we look back and say, “Boy, I could’ve used those calories, or minutes, or whatever, better.” So who’s right? Is it the person at the time who’s choosing to eat or play or whatever? Or is it the person later who’s deciding whether that was a good use of their time or not? Those technologies I think are very tricky, because in some sense they’re giving us what we want. So we reward them, we buy them, we spend money, the industries develop, the technologies have money behind them. At the same time, it’s not clear that they make us happier.

So I think there are certain social problems, and problems in general, that technology will be tremendously helpful in improving, as long as we can act wisely to balance the dual-use effects of technology toward the positive, and as long as we can somehow get some perspective on what to do about these technologies that take on a life of their own and tend to make us less happy, even though we dump lots of time and money into them.

Ariel: This sort of idea of technologies — that we’re using them and as we use them we think they make us happy and then in the long run we sort of question that — is this a relatively modern problem, or are there examples that go further back in history that we can learn from?

Anthony: I think it goes fairly far back. Certainly drug use goes a fair ways back. I think there have been periods where drugs were used as part of religious or social ceremonies and in other kind of more socially constructive ways. But opiates and other very addictive substances have also existed for a fair amount of time. Those have certainly caused social problems going back at least a few centuries.

I think a lot of these examples of technologies that give us what we seem to want but not really what we want are ones in which we’re applying the technology to a species — us — that developed in a very different set of circumstances, and that contrast between what’s available and what we evolutionarily wanted is causing a lot of problems. The sugary foods are an obvious example where we can now just supply huge quantities of something that was very rare and precious back in earlier evolutionary times — you know, sweet calories.

Drugs are something similar. We have a set of chemistry that helps us out in various situations, and then we’re just feeding those same chemical pathways to make ourselves feel good in a way that is destructive. And violence might be something similar. Violent technologies go way, way back. Those are another example of things that we clearly invented to further our will and accomplish our goals. They’re also things that may at some level be addictive to humans. I think it’s not entirely clear exactly how — there’s a strange mix there, but I think there’s certainly something compelling and built into at least many humans’ DNA that promotes fighting and hunting and all kinds of things that were evolutionarily useful way back when and perhaps less useful now. It had a clear evolutionary purpose with tribes that had to defend themselves, with animals that needed to be killed for food. But that desire to run around and hunt and shoot people still gets fed: most people aren’t doing it in real life, but tons of people are doing it in video games. So there’s clearly some built-in mechanism that’s rewarding that behavior as being fun to do and compelling. Video games are obviously a better way to express that than running around and doing it in real life, but it tells you something about some circuitry that is still there and is left over from early times. So I think there are a number of examples like that — this connection between our biological evolutionary history and what technology makes available in large quantities — where we really have to think carefully about how we want to play that.

Ariel: So, as you look forward to the future, and sort of considering some of these issues that you’ve brought up, how do you envision us being able to use technology for good and maybe try to overcome some of these issues? I mean, maybe it is good if we’ve got people playing video games instead of going around shooting people in real life.

Anthony: Yeah. So there may be examples where some of that technology can fulfill a need in a less destructive way than it might otherwise be fulfilled. I think there are also plenty of examples where a technology can root out or sort of change the nature of a problem that would be enormously difficult to do something about without a technology. So for example, I think eating meat, when you analyze it from almost any perspective, is a pretty destructive thing for humanity to be doing. Ecologically, ethically in terms of the happiness of the animals, health-wise: so many things are destructive about it. And yet, you really have the sense that it’s going to be enormously difficult — it would be very unlikely for that to change wholesale over a relatively short period of time.

However, there are technologies — clean meat, cultured meat, really good tasting vegetarian meat substitutes — that are rapidly coming to market. And you could imagine if those things were to get cheap and widely available and perhaps a little bit healthier, that could dramatically change that situation relatively quickly. If a non-ecologically destructive, non-suffering-inducing, just-as-tasty and even healthier product were cheaper, I don’t think people would be eating meat. Very few people, I think, intrinsically like the idea of having an animal suffer in order for them to eat. So I think that’s an example of something that would be really, really hard to change through social action alone, but that could be jump-started quite a lot by technology — that’s one of the ones I’m actually quite hopeful about.

Global warming I think is a similar one — it’s on some level a social and economic problem. It’s a long-term planning problem, which we’re very bad at. If we could really think on the right time scales and weigh the economic costs and benefits over decades, it would be quite clear that mitigating global warming now would take some overall investment that would clearly pay for itself. But we seem unable to accomplish that.

On the other hand, you could easily imagine a really cheap, really power-dense, quickly rechargeable battery being invented and just utterly transforming that problem into a much, much more tractable one. Or feasible, small-scale nuclear fusion power generation that was cheap. You can imagine technologies that would just make that problem so much easier, even though it is ultimately kind of a social or political problem that could be solved. The technology would just make it dramatically easier to do that.

Ariel: Excellent. And so thinking more hopefully — even when we’re looking at what’s happening in the world today, the news usually focuses on all the bad things that have gone wrong — when you look around the world today, what makes you think, “Wow, technology has really helped us achieve this, and this is super exciting”?

Max: Almost everything I love about today is the result of technology. It’s because of technology that we’ve more than doubled the lifespan that we humans had for most of human history. More broadly, I feel that technology is empowering us. Ten thousand years ago, we felt really, really powerless; We were these beings, you know, looking at this great world out there and having very little clue about how it worked — it was largely mysterious to us — and even less ability to actually influence the world in a major way. Then technology enabled science, and vice versa. So the sciences let us understand more and more how the world works, and let us build this technology which lets us shape the world to better suit us. Helping produce much better, much more food, helping keep us warm in the winter, helping make hospitals that can take care of us, and schools that can educate us, and so on.

Ariel: Let’s bring on some of our other guests now. We’ll turn first to Gaia Dempsey. How do you envision technology being used for good?

Gaia: That’s a huge question.

Ariel: It is. Yes.

Gaia: I mean, at its essence I think technology really just means a tool. It means a new way of doing something. Tools can be used to do a lot of good — making our lives easier, saving us time, helping us become more of who we want to be. And I think technology is best used when it supports our individual development in the direction that we actually want to go — when it supports our deeper interests and not just the, say, commercial interests of the company that made it. And I think in order for that to happen, we need for our society to be more literate in technology. And to me that’s not just about understanding how computing platforms work, but also understanding the impact that tools have on us as human beings. Because they don’t just shape our behavior, they actually shape our minds and how we think.

So I think we need to be very intentional about the tools that we choose to use in our own lives, and also the tools that we build as technologists. I’ve always been very inspired by Douglas Engelbart’s work, and I think that — I was revisiting his original conceptual framework on augmenting human intelligence, which he wrote and published in 1962 — and I really think he had the right idea, which is that tools used by human beings don’t exist in a vacuum. They exist in a coherent system and that system involves language: the language that we use to describe the tools and understand how we’re using them; the methodology; and of course the training and education around how we learn to use those tools. And I think that as a tool maker it’s really important to think about each of those pieces of an overarching coherent system, and imagine how they’re all going to work together and fit into an individual’s life and beyond: you know, the level of a community and a society.

Ariel: I want to expand on some of this just a little bit. You mentioned this idea of making sure that the tool, the technology tool, is being used for people and not just for the benefit, the profit, of the company. And that that’s closely connected to making sure that people are literate about the technology. One, just to confirm that that is actually what you were saying. And, two, I mean one of the reasons I want to confirm this is because that is my own concern — that it’s too focused on making profit and not enough people really understand what’s happening. My question to you is, then, how do we educate people? How do we get them more involved?

Gaia: I think for me, my favorite types of tools are the kinds of tools that support us in developing our thinking and that help us accelerate our ability to learn. But I think that some of how we do this in our society is not just about creating new tools or getting trained on new tools, but really doesn’t have very much to do with technology at all. And that’s in our education system, teaching critical thinking. And teaching, starting at a young age, to not just accept information that is given to you wholesale, but really to examine the motivations and intentions and interests of the creator of that information, and the distributor of that information. And I think these are really just basic tools that we need as citizens in a technological society and in a democracy.

Ariel: That actually moves nicely to another question that I have. Well, I actually think the sentiment might not be quite as strong as it once was, but I do still hear a lot of people who sort of approach technology as the solution to any of today’s problems. And I’m personally a little bit skeptical that technology alone is enough. I think, again, it comes back to what you were talking about, that it’s a tool so we can use it, but it just seems like there’s more that needs to be involved. I guess, how do you envision using technology as a tool, while still incorporating some of these other aspects like teaching critical thinking?

Gaia: You’re really hitting on sort of the core questions that are fundamental to creating the kind of society that we want to live in. And I think that we would do well to spend more time thinking deeply about these questions. I think technology can do really incredible, tremendous things in helping us solve problems and create new capabilities. But it also creates a new set of problems for us to engage with.

We’ve sort of coevolved with our technology. So it’s easy to point to things in the culture and say, “Well, this never would have happened without technology X.” And I think that’s true for things that are both good and bad. I think, again, it’s about taking a step back and taking a broader view, and really not just teaching critical thinking and critical analysis, but also systems level thinking. And understanding that we ourselves are complex systems, and we’re not perfect in the way that we perceive reality — we have cognitive biases, we cannot necessarily always trust our own perceptions. And I think that’s a lifelong piece of work that everyone can engage with, which is really about understanding yourself first. This is something that Yuval Noah Harari talked about in a couple of his recent books and articles that he’s been writing, which is: if we don’t do the work to really understand ourselves first and our own motivations and interests, and sort of where we want to go in the world, we’re much more easily co-opted and hackable by systems that are external to us.

There are many examples of recommendation algorithms and sentiment analysis — audience segmentation tools that companies are using to be able to predict what we want and present that information to us before we’ve had a chance to imagine that that is something we could want. And while that’s potentially useful and lucrative for marketers, the question is what happens when those tools are utilized not just to sell us a better toothbrush on Amazon, but when they’re actually used in a political context. And so with the advent of these vast machine learning, reinforcement learning systems that can look at data and look at our behavior patterns and understand trends in our behavior and our interests, that presents a really huge issue if we are not ourselves able to pause and create a gap, a space between the information that’s being presented to us within the systems that we’re utilizing and our own internal compass.

Ariel: You’ve said two things that I think are sort of interesting, especially when they’re brought together. And the first is this idea that we’ve coevolved with technology — which I actually hadn’t thought of in those terms before, and I think it’s a really, really good description. But then when we consider that we’ve coevolved with technology, what does that mean in terms of knowing ourselves? And especially knowing ourselves as our biological bodies, and our limiting cognitive biases? I don’t know if that’s something that you’ve thought about much, but I think that combination of ideas is an interesting one.

Gaia: I mean, I know that I certainly already feel like I’m a cyborg. Part of knowing myself involves understanding the tools that I use, the ones that feel like extensions of myself. That kind of comes back to the idea of technology literacy, and systems literacy, and being intentional about the kinds of tools that I want to use. For me, my favorite types of tools are the kind that I think are very rare: the kind that support us in developing the capacity for long-term thinking, and in being true to the long-term intentions and goals that we set for ourselves.

Ariel: Can you give some examples of those?

Gaia: Yeah, I’ll give a couple examples. One example that’s sort of probably familiar to a lot of people listening to this comes from the book Ready Player One. And in this book the main character is interacting with his VR system that he sort of lives and breathes in every single day. And at a certain point the system asks him: do you want to activate your health module? I forgot exactly what it was called. And without giving it too much thought, he kind of goes, “Sure. Yeah, I’d like to be healthier.” And it instantiates a process whereby he’s not allowed to log into the OASIS without going through his exercise routine every morning. To me, what’s happening there is: there is a choice.

And it’s an interesting system design because he didn’t actually do that much deep thinking about, “Oh yeah, this is a choice I really want to commit to.” But the system is sort of saying, “We’re thinking through the way that your decision making process works, and we think that this is something you really do want to consider. And we think that you’re going to need about three months before you make a final decision as to whether this is something you want to continue with.”

So that three-month period or whatever (and I believe it was three months in the book) is what’s known as an akrasia horizon, which is a term that I learned through a different tool that is sort of a real-life version of that, called Beeminder. The akrasia horizon is really a time period that’s long enough to circumvent a cognitive bias we have to prioritize the near term at the expense of the future. And in the case of the Ready Player One example, the near-term desire that would undercut his future — his long-term health — is, “I don’t feel like working out today. I just want to get into my email or I just want to play a video game right now.” And a very similar sort of setup is created in this tool Beeminder, which I love to use to support some goals that I want to make sure I’m really very motivated to meet.

So it’s a tool where you can put in your goals and track them, either yourself by entering the data manually, or by connecting to a number of different tracking capabilities like RescueTime and others. And if you don’t stay on track with your goals, they charge your credit card. It’s a very effective sort of motivating force. And so I sort of have a nickname for them: I call these systems time bridges, which are really choices made by your long-term-thinking self that in some way supersede the gravitational pull toward mediocrity inherent in your short-term impulses.

It’s about experimenting too. And this is one particular system that creates consequences and accountability. And I love systems. For me, if I don’t have systems in my life that help me organize the work that I want to do, I’m hopeless. That’s why I like to collect them: I’m sort of an avid taster of different systems, and I’ll try anything and see what works. And I think that’s important. It’s a process of experimentation to see what works for you.
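For readers who want to see the mechanism Gaia describes made concrete, here is a minimal Python sketch of a commitment device with an akrasia horizon: requested goal changes only take effect after a waiting period, and missing a check-in incurs the pledged penalty. The class, the seven-day horizon, and the dollar amounts are illustrative assumptions for this sketch, not Beeminder’s actual API or data model.

```python
from datetime import date, timedelta

# Toy illustration of a commitment device with an "akrasia horizon":
# changes to a goal only take effect after a delay, and missing the goal
# triggers the pledged penalty. Hypothetical sketch, not Beeminder's API.

AKRASIA_HORIZON = timedelta(days=7)  # illustrative waiting period

class CommitmentGoal:
    def __init__(self, name, daily_target, pledge_dollars):
        self.name = name
        self.daily_target = daily_target      # e.g. minutes of exercise per day
        self.pledge_dollars = pledge_dollars  # owed if you fall off track
        self.pending_changes = []             # list of (effective_date, new_target)

    def request_change(self, new_target, today=None):
        """Schedule an easier (or harder) target; it only applies after the horizon."""
        today = today or date.today()
        effective = today + AKRASIA_HORIZON
        self.pending_changes.append((effective, new_target))
        return effective

    def current_target(self, today=None):
        """Apply only those requested changes whose horizon has already passed."""
        today = today or date.today()
        target = self.daily_target
        for effective, new_target in sorted(self.pending_changes):
            if effective <= today:
                target = new_target
        return target

    def check_in(self, amount_done, today=None):
        """Return the penalty owed for today (0 if the target was met)."""
        if amount_done >= self.current_target(today):
            return 0
        return self.pledge_dollars


# Example: a 30-minutes-a-day exercise goal with a $50 pledge.
goal = CommitmentGoal("exercise", daily_target=30, pledge_dollars=50)
goal.request_change(new_target=10)    # tempting to lower the bar today...
print(goal.current_target())          # ...but the target stays 30 for another week
print(goal.check_in(amount_done=15))  # 50: short of the target, so the pledge is owed
```

The design choice worth noting is the one Gaia highlights: current_target ignores any change whose effective date has not arrived yet, so the short-term self cannot quietly lower the bar.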

Ariel: Let’s turn to Allison Duettmann now, for her take on how we can use technology to help us become better versions of ourselves and to improve our societal interactions.

Allison: I think there are a lot of technological tools that we can use to aid our reasoning and sense-making and coordination. So I think that technologies can be used to help with reasoning, for example, by mitigating trauma, or bias, or by augmenting our intelligence. That’s the whole point of creating AI in the first place. Technologies can also be used to help with collective sense-making, for example with truth-finding and knowledge management, and I think hypertext and prediction markets — something that Anthony’s working on — are really worthy examples here. I also think technologies can be used to help with coordination. Mark Miller, who I’m currently writing a book with, likes to say that if you lower the risks of cooperation, you’ll get a more cooperative world. I think that most cooperative interactions may soon be digital.

Ariel: That’s sort of an interesting idea, that there’s risks to cooperation. Can you maybe expand on that a little bit more?

Allison: Yeah, sure. I think that most of our interactions are already digital ones, for some of us at least, and they will be more and more so in the future. So I think that a first step to lowering the risks of cooperation is establishing cybersecurity, because this would decrease the risk of digital coercion. But I do think that’s only part of it, because rather than just freeing us from the restraints that keep us from cooperating, we also need to equip ourselves with the tools to cooperate, right?

Ariel: Yes.

Allison: I think some of those may be smart contracts to allow individuals to credibly commit, but there may be others too. I just think that we have to realize that the same technologies that we’re worried about in terms of risks are also the ones that may augment our abilities to decrease those risks.

Ariel: One of the things that came to mind as you were talking about this, using technology to improve cooperation — when we look at the world today, technology isn’t spread across the globe evenly. People don’t have equal access to these tools that could help. Do you have ideas for how we address various inequality issues, I guess?

Allison: I think inequality is a hot topic to address. I’m currently writing a book with Mark Miller and Christine Peterson on a few strategies to strengthen civilization. In this book we outline a few paths to do so, but also potential positive outcomes. One of the outcomes that we’re outlining is a voluntary world in which all entities can cooperate freely with each other to realize their interests. It’s kind of based on the premise that finding one utopia that works for everyone is hard, and perhaps impossible, but that in the absence of knowing what’s in everyone’s interest, we shouldn’t let any one entity — whether that’s an AI or an organization or a state — impose its interests; instead we should try to create a framework in which different entities, with different interests, whether they’re human or artificial, can pursue their interests freely by cooperating. And I think if you look at that strategy, it has worked pretty well so far. If you look at society right now, it’s really not perfect, but by allowing humans to cooperate freely and engage in mutually beneficial relationships, civilization already serves our interests quite well. It’s far from perfect, I’m not saying that, but I think as a whole, our civilization at least tends, imperfectly, to plan for Pareto-preferred paths. We have survived so far, and in better and better ways.

So a few of the ways we propose to strengthen this highly involved process are general recommendations for solving coordination problems, and then a few more specific ideas on reframing certain risks. But I do think that enabling a voluntary world in which different entities can cooperate freely with each other is the best we can do, given our limited knowledge of what is in everyone’s interests.

Ariel: I find that interesting, because I hear lots of people focus on how great intelligence is, and intelligence is great, but it does often seem — and I hear other people say this — that cooperation is also one of the things that our species has gotten right. We fail at it sometimes, but it’s been one of the things, I think, that’s helped.

Allison: Yeah, I agree. I hosted an event last year at the Internet Archive on different definitions of intelligence. Because in the paper that we wrote last year, we have this very grand, or broad, conception of intelligence, which includes civilization as an intelligence. So I think you may be asking yourself what it means to be intelligent, and if what we care about is problem-solving ability, then I think that civilization certainly qualifies as a system that can solve more problems than any individual within it alone. So I do think this comes from the cooperative nature of the individual parts within civilization, and so I don’t think that cooperation and intelligence are mutually exclusive at all. Marvin Minsky wrote this amazing book, Society of Mind, and much of it explores similar ideas.

Ariel: I’d like to take this idea and turn it around, and this is a question specifically for Max and Anthony: looking back at this past year, how has FLI helped foster cooperation and public engagement surrounding the issues we’re concerned about? What would you say were FLI’s greatest successes in 2018?

Anthony: Let’s see, 2018. What I’ve personally enjoyed the most, I would say, is seeing the technical researchers and the nonprofit community really starting to get more engaged with state and federal governments. So for example the Asilomar principles — which were generated at this nexus of business and nonprofit and academic thinkers about AI and related things — I think were great. But that conversation didn’t really include much from people in policy, and governance, and governments, and so on. So, starting to see that thinking, and those recommendations, and those aspirations of the community of people who know about AI and are thinking hard about it and what it should do and what it shouldn’t do — seeing that start to come into the political sphere, and the government sphere, and the policy sphere I think is really encouraging.

That seems to be happening in many places at some level. I think the local one that I’m excited about is the passage by the California legislature of a resolution endorsing the Asilomar principles. It felt really good to see that happen, and really encouraging that there were people in the legislature who — we didn’t go and lobby them to do that, they came to us and said, “This is really important. We want to do something.” And we worked with them to do that. That was super encouraging, because it really made it feel like there is a really open door, and there’s a desire in the policy world to do something. This thing is getting on people’s radar, that there’s a huge transformation coming from AI.

They see that their responsibility is to do something about that. They don’t intrinsically know what they should be doing, they’re not experts in AI, they haven’t been following the field. So there needs to be that connection and it’s really encouraging to see how open they are and how much can be produced with honestly not a huge level of effort; Just communication and talking through things I think made a significant impact. I was also happy to see how much support there continues to be for controlling the possibility of lethal autonomous weapons.

I felt really good about the success of the thing we did this year, the lethal autonomous weapons pledge. The idea was to get anybody who’s interested, but especially companies engaged in developing related technologies (drones, or facial recognition, or robotics, or AI in general), to take that step themselves of saying, “No, we want to develop these technologies for good, and we have no interest in developing things that are going to be weaponized and used in lethal autonomous weapons.”

I think having a large number of people and corporations sign on to a pledge like that is useful not so much because they were planning to do all those things and now they signed a pledge, so they’re not going to do it anymore. I think that’s not really the model so much as it’s creating a social and cultural norm that these are things that people just don’t want to have anything to do with, just like biotech companies don’t really want to be developing biological weapons, they want to be seen as forces for good that are building medicines and therapies and treatments and things. Everybody is happy for biotech companies to be doing those things.

If biotech companies were also building biological weapons, you’d really start to wonder, “Okay, wait a minute, why are we supporting this? What are they doing with my information? What are they doing with all this genetics that they’re getting? What are they doing with the research that’s funded by the government? Do we really want to be supporting this?” So keeping that distinction in the industry between all the things that we all support — better technologies for helping people — and the military applications, particularly in this rather destabilizing and destructive way: I think that is more the purpose. To be clear, there are companies that are going to develop weapons for the military, and that’s part of the reality of the world.

We have militaries; We need, at the moment, militaries. I certainly would not advocate that the US should stop defending itself, or shouldn’t develop weapons, and I think it’s good that there are companies that are building those things. But there are very tricky issues when the companies building military weapons are the same companies that are handling all of the data of all of the people in the world or in the country. I think that really requires a lot of thought about how we’re going to handle it. And seeing companies engage with those questions, thinking about how the technologies they’re developing are going to be used, for what purposes, and for what purposes they don’t want them to be used, is really, really heartening. It’s been very positive, I think, to see those sorts of conversations going on in at least certain companies, whether through our pledge or just in other ways.

You know, seeing companies come out with, “This is something that we’re really worried about. We’re developing these technologies, but we see that there could be major problems with them.” That’s very encouraging. I don’t think it’s necessarily a substitute for something happening at the regulatory or policy level, I think that’s probably necessary too, but it’s hugely encouraging to see companies being proactive about thinking about the societal and ethical implications of the technologies they’re developing.

Max: There are four things I’m quite excited about. One of them is that we managed to get so many leading companies and AI researchers and universities to pledge to not build lethal autonomous weapons, also known as killer robots. Second is that we were able to channel two million dollars, thanks to Elon Musk, to 10 research groups around the world to help figure out how to make artificial general intelligence safe and beneficial. Third is that the state of California decided to officially endorse the 23 Asilomar Principles. It’s really cool that these are getting taken more seriously now, even by policy makers. And the fourth is that we were able to track down, in Russia, the children of Stanislav Petrov, the man thanks to whom this year is not the 35th anniversary of World War III, and actually give them the appreciation we feel they deserve.

I’ll tell you a little more about this one because it’s something I think a lot of people still aren’t that aware of. On September 26th, 35 years ago, Stanislav Petrov was on shift and in charge of his Soviet early warning station, which showed five US nuclear missiles incoming, one after the other. Obviously not what he was hoping would happen at work that day, and a really horribly scary situation where the natural response is to do what that system was built for: namely, warning the Soviet Union so that they would immediately strike back. And if that had happened, then thousands of mushroom clouds later, you know, you and I, Ariel, would probably not be having this conversation. Instead, he, mostly on gut instinct, came to the conclusion that there was something wrong and said, “This is a false alarm.” And we’re incredibly grateful for that level-headed action of his. He passed away recently.

His two children are living on very modest means outside of Moscow, and we felt that when someone does something like this, or in his case abstains from doing something, that future generations really appreciate, we should show our appreciation, so that others in his situation later on know that if they sacrifice themselves for the greater good, they will be appreciated. Or if they’re dead, their loved ones will. So we organized a ceremony in New York City and invited them to it and bought air tickets for them and so on. And in a very darkly humorous illustration of how screwed up relations are at the global level now, the US decided that the way to show appreciation for not having gotten nuked was to deny a visa to Stanislav’s son. So he could only join by Skype. Fortunately, his daughter was able to get a visa, even though the waiting period just to get a visa appointment in Moscow was 300 days. We had to fly her to Israel to get her the visa.

But she came, and it was her first time ever outside of Russia. She was super excited to come and see New York. It was very touching for me to see all the affection that the New Yorkers there directed at her, to see her reaction and her husband’s reaction, and to get to give her this $50,000 award, which for them was actually a big deal. Although it’s of course nothing compared to the value for the rest of the world of what their father did. And it was a very sobering reminder that we’ve had dozens of near misses where we almost had a nuclear war by mistake. And even though the newspapers usually make us worry about North Korea and Iran, by far the most likely way in which we might get killed by a nuclear explosion is, of course, just another stupid malfunction or error causing the US and Russia to start a war by mistake.

I hope that this ceremony and the one we did the year before also, for family of Vasili Arkhipov, can also help to remind people that hey, you know, what we’re doing here, having 14,000 hydrogen bombs and just relying on luck year after year isn’t a sustainable long-term strategy and we should get our act together and reduce nuclear arsenals down to the level needed for deterrence and focus our money on more productive things.

Ariel: So I wanted to just add a quick follow-up to that because I had the privilege of attending the ceremony and I got to meet the Petrovs. And one of the things that I found most touching about meeting them was their own reaction to New York, which was in part just awe at the freedom that they felt. And I know this is sort of a US-centric version of hope, but it’s easy for us to get distracted by how bad things are because of what we see in the news, and it was a really nice reminder of how good things are too.

Max: Yeah. It’s very helpful to see things through other people’s eyes and in many cases, it’s a reminder of how much we have to lose if we screw up.

Ariel: Yeah.

Max: And how much we have that we should be really grateful for and cherish and preserve. It’s even more striking if you just look at the whole planet, you know, in a broader perspective. It’s a fantastic, fantastic place, this planet. There’s nothing else in the solar system even remotely this nice. So I think we have a lot to win if we can take good care of it and not ruin it. And obviously, the quickest way to ruin it would be to have an accidental nuclear war, which — it would be just by far the most ridiculously pathetic thing humans have ever done, and yet, this isn’t even really a major election issue. Most people don’t think about it. Most people don’t talk about it. This is, of course, the reason that we, with the Future of Life Institute, try to keep focusing on the importance of positive uses of technology, whether it be nuclear technology, AI technology, or biotechnology, because if we use it wisely, we can create such an awesome future, like you said: Take the good things we have, make them even better.

Ariel: So this seems like a good moment to introduce another guest, who just did a whole podcast series exploring existential risks relating to AI, biotech, nanotech, and all of the other technologies that could either destroy society or help us achieve incredible advances if we use them right.

Josh: I’m Josh Clark. I’m a podcaster. And I’m the host of a podcast series called the End of the World with Josh Clark.

Ariel: All right. I am really excited to have you on the show today because I listened to all of the End of the World. And it was great. It was a really, really wonderful introduction to existential risks.

Josh: Thank you.

Ariel: I highly recommend it to anyone who hasn’t listened to it. But now that you’ve just done this whole series about how things can go horribly wrong, I thought it would be fun to bring you on and talk about what you’re still hopeful for after having just done that whole series.

Josh: Yeah, I’d love that, because a lot of people are hesitant to listen to the series because they’re like, well, “it’s got to be such a downer.” And I mean, it is heavy and it is kind of a downer, but there’s also a lot of hope that just kind of emerged naturally from the series just researching this stuff. There is a lot of hope — it’s pretty cool.

Ariel: That’s good. That’s exactly what I want to hear. What prompted you to do that series, The End of the World?

Josh: Originally, it was just intellectual curiosity. I ran across my first Bostrom paper in like 2005 or 6 and just immediately became enamored with the stuff he was talking about — it’s just baldly interesting. Like anyone who hears about this stuff can’t help but be interested in it. And so originally, the point of the podcast was, “Hey, everybody come check this out. Isn’t this interesting? There’s like, people actually thinking about this kind of stuff and talking about it.” And then as I started to interview some of the guys at the Future of Humanity Institute, started to read more and more papers and research further, I realized, wait, this isn’t just like, intellectually interesting. This is real stuff. We’re actually in real danger here.

And so as I was creating the series, I underwent this transition in how I saw existential risks, and then ultimately how I saw humanity’s future, how I saw humanity, other people, and I kind of came to love the world a lot more than I did before. Not like I disliked the world or people or anything like that. But I really love people way more than I did before I started out, just because I see that we’re kind of close to the edge here. And so the reason why I made the series kind of underwent this transition, and you can kind of tell in the series itself where it’s like information, information, information. And then: now that you have bought into this, here’s how we do something about it.

Ariel: So you have two episodes that go into biotechnology and artificial intelligence, which — especially artificial intelligence — are both areas that we work on at FLI. And in them, what I thought was nice is that you do get into some of the reasons why we’re still pursuing these technologies, even though we do see these existential risks around them. And so, I was curious, as you were doing your research for the series, what did you learn that made you think, “Wow, that’s amazing; I’m so psyched that we’re doing this, even though there are these risks”?

Josh: Basically everything I learned about. I had to learn particle physics to explain what’s going on in the Large Hadron Collider. I had to learn a lot about AI. I realized when I came into it that my grasp of AI was beyond elementary. And it’s not like I could actually put together an AGI myself from scratch or anything like that now, but I definitely know a lot more than I did before. With biotech in particular, a lot of what I learned was particularly jarring: the number of accidents that are reported every year, and then, more than that, the fact that not every lab in the world has to report accidents. I found that extraordinarily unsettling.

So kind of from start to finish, I learned a lot more than I knew going into it, which is actually one of the main reasons why it took me well over a year to make the series: I would start to research something and then I’d realize I needed to understand the fundamentals of it. So I’d go learn that, and then there’d be something else I had to learn first, before I could learn something the next level up. So I kept having to kind of regressively research, and I ended up learning quite a bit of stuff.

But I think to answer your question, the thing that struck me the most was learning about physics, about particle physics, and how tenuous our understanding of our existence is, but just how much we’ve learned so far in just the last century or so, when we really dove into quantum physics, particle physics and just what we know about things. One of the things that just knocked my socks off was the idea that there’s no such thing as particles — particles, as we think of them, are just basically shorthand. But the rest of the world outside of particle physics has said, “Okay, particles, there’s like protons and neutrons and all that stuff. There’s electrons. And we understand that they kind of all fit into this model, like a solar system. And that’s how atoms work.”

That is not at all how atoms work: a particle is just a packet of energetic vibrations, and everything that we experience and see and feel, and everything that goes on in the universe, is just the interaction of these energetic vibrations in force fields that are everywhere at every point in space and time. And just understanding that, on a really fundamental level, actually changed my life; it changed the way that I see the universe and myself and everything.

Ariel: I don’t even know where I want to go next with that. I’m going to come back to that because I actually think it connects really nicely to the idea of existential hope. But first I want to ask you a little bit more about this idea of getting people involved more. I mean, I’m coming at this from something of a bubble at this point where I am surrounded by people who are very familiar with the existential risks of artificial intelligence and biotechnology. But like you said, once you start looking at artificial intelligence, if you haven’t been doing it already, you suddenly realize that there’s a lot there that you don’t know.

Josh: Yeah.

Ariel: I guess I’m curious, now that you’ve done that, to what extent do you think everyone needs to? To what extent do you think that’s possible? Do you have ideas for how we can help people understand this more?

Josh: Yeah, you know, that really ties into taking on existential risks in general: just being an interested, curious person who dives into the subject and learns as much as you can. But at this moment in time, as I’m sure you know, that’s easier said than done. You really have to dedicate a significant portion of your life to focusing on that one issue, whether it’s AI, biotech, particle physics, nanotech, whatever. You really have to immerse yourself in it, because the existential risks that we’re facing are not a general topic of national or global conversation, and certainly not the existential risks we’re facing from all the technology that everybody’s super happy we’re coming out with.

And I think that one of the first steps to actually taking on existential risks is for more and more people to start talking about it. Groups like yours, talking to the public, educating the public. I’m hoping that my series did something like that, just arousing curiosity in people, but also raising awareness that these are real things, that these aren’t crackpots talking about this stuff. These are real, legitimate issues that are coming down the pike, that are being pointed out by real, legitimate scientists and philosophers and people who have given great thought to this. This isn’t like a Chicken Little situation; This is quite real. I think if you can pique someone’s curiosity just enough that they stop and listen, do a little research, it sinks in after a minute that this is real. And that, oh, this is something that they want to be a part of doing something about.

And so I think just getting people talking about it will, by proxy, interest other people who hear about it, and it will spread further and further out. And I think that’s step one: just make it an okay thing to talk about, so you’re not nuts to raise this kind of stuff seriously.

Ariel: Well, I definitely appreciate you doing your series for that reason. I’m hopeful that that will help a lot.

Ariel: Now, Allison — you’ve got this website where, as I understand it, you’re trying to get more people involved in this idea that if we focus on better ideals for the future, we stand a better shot at actually hitting them.

Allison: At ExistentialHope.com, I keep a map of reading, podcasts, organizations, and people that inspire an optimistic long-term vision for the future.

Ariel: You’re clearly doing a lot to try to get more people involved. What is it that you’re trying to do now, and what do you think we all need to be doing more of to get more people thinking this way?

Allison: I do think that it’s up to everyone, really, to try to, again, engage with the fact that we may not be doomed, and with what may be on the other side. What I’m trying to do with the website, at least, is generate common knowledge to catalyze more directed coordination toward beautiful futures. I think that there are a lot of projects out there that are really dedicated to identifying the threats to human existence, but very few really offer guidance on how to influence them. So I think we should try to map the space of both peril and promise which lie before us, but we should really aim for this knowledge to empower each and every one of us to navigate toward a grand future.

Currently, on the website, this involves orienting ourselves: collecting useful models, relevant broadcasts, and organizations that generate new insights, and then trying to synthesize a map of where we came from, in a really long perspective, where we may go, and which lenses of science and technology and culture are crucial to consider along the way. Then finally we would like to publish a living document that summarizes those models that are published elsewhere, to outline possible futures, and the idea is that this is a collaborative document. Even now, the website links to a host of different Google Docs in which we’re trying to synthesize the current state of the art in the different focus areas. The idea is that this is collaborative. This is why it’s on Google Docs, because everyone can just comment. And people do, and I think this should really be a collaborative effort.

Ariel: What are some of your favorite examples of content that, presumably, you’ve added to your website, that look at these issues?

Allison: There’s quite a host of things on there. I think a good start is just to go to the overview on the website, because there I list kind of my top 10 short pieces and long pieces. As for my personal picks, I think, as a starting ground: I really like the metaethics sequence by Eliezer Yudkowsky. It contains really good posts, like Existential Angst Factory and Reality as Fixed Computation. For me this is kind of like existentialism 2.0. You have to get your motivations and expectations right. What can I reasonably hope for? Then I think, relatedly, there’s also the Fun Theory sequence, also by Yudkowsky. That, together with, for example, Letter From Utopia by Nick Bostrom, or The Hedonistic Imperative by David Pearce, or Scott Alexander’s posts on Raikoth — they are really a nice next step because they actually lay out a few compelling positive versions of utopia.

Then if you want to get into the more nitty-gritty, there’s a longer section on civilization, its past and its future — so, what’s wrong and how to improve it. Here Nick Bostrom wrote this piece on the future of human evolution, which lays out two suboptimal paths for humanity’s future, and interestingly enough they don’t involve extinction. A similar one, I think, which probably many people are familiar with, is Scott Alexander’s Meditations On Moloch, and then some that people are less familiar with — Growing Children For Bostrom’s Disneyland. They are really interesting because they are other pieces of this type, sketching out competitive and selective pressures that lead toward races to the bottom, as negative futures which don’t involve extinction per se. I think the really interesting thing, then, is that even those futures are only bad if you think that the bottom is bad.

Next to them I list books, for example Robin Hanson’s The Age of Em, which argues that living at subsistence may not be terrible, and in fact it’s pretty much what most of our past lives outside of the current dreamtime have always involved. So I think those are two really different lenses to make sense of the same reality, and I personally found this contrast so intriguing that I hosted a salon last year with Paul Christiano, Robin Hanson, Peter Eckersley, and a few others to kind of map out where we may be racing toward, and how bad those competitive equilibria actually are.

To me it’s always interesting to map out one possible future vision, and then try to find one that either contradicts or complements it. I think having a good overview of those gives you a good map, or at least a space of possibilities.

Ariel: What do you recommend to people who are interested in trying to do more? How do you suggest they get involved?

Allison: One thing, an obvious thing, would be commenting on the Google Docs, and I really encourage everyone to do that. Another one would be just to join the mailing list. You can indicate whether you want updates from me, or whether you want to collaborate, in which case we may be able to reach out to you. Or if you’re interested in meetups, those are only in San Francisco so far, but I’m hoping that there may be others. I do think that currently the project is really in its infancy. We are relying on the community to help with this, so there should be a kind of collaborative vision.

I think that one of the main things I’m hoping people can get out of it for now is just some inspiration on where we may end up if we get it right, and on why work toward better futures, or even work toward preventing existential risks, is both possible and necessary. If you go to the first section of the website — the vision section — that’s what it’s for.

Secondly, then, if you are already opted in, if you’re already committed, I’m hoping that perhaps the project can provide some orientation. If someone would like to help but doesn’t really know where to start, the focus areas are an attempt to map out the different areas that we need to make progress on for better futures. Each area comes with an introductory text, and organizations that are working in that area that one can join or support, and Future of Life is in a lot of those areas.

Then I think finally, just apart from inspiration or orientation, it’s really a place for collaboration. The project is in its infancy and everyone should contribute their favorite pieces to our better futures.

Ariel: I’m really excited to see what develops in the coming year for existentialhope.com. And, naturally, I also want to hear from Max and Anthony about 2019. What are you looking forward to for FLI next year?

Max: For 2019 I’m looking forward to more constructive collaboration on many aspects of this quest for a good future for everyone on earth. At the nerdy level, I’m looking forward to more collaboration on AI safety research, and also on ways of making the economy, which keeps growing thanks to AI, actually make everybody better off, rather than making some people poorer and angrier. And at the most global level, I’m really looking forward to working harder to get past this outdated us-versus-them attitude that we still have between the US and China and Russia and other major powers. Many of our political leaders are so focused on the zero-sum game mentality that they will happily take major risks of nuclear war and AI arms races and other outcomes where everybody would lose, instead of just realizing, hey, you know, we’re actually in this together. What does it mean for America to win? It means that all Americans get better off. What does it mean for China to win? It means that the Chinese people all get better off. Those two things can obviously happen at the same time as long as there’s peace, and technology just keeps improving life for everybody.

In practice, I’m very eagerly looking forward to seeing if we can get scientists from around the world — for example, AI researchers — to converge on certain shared goals that are really supported everywhere in the world, including by political leaders and in China and the US and Russia and Europe and so on, instead of just obsessing about the differences. Instead of thinking us versus them, it’s all of us on this planet working together against the common enemy, which is our own stupidity and the tendency to make bad mistakes, so that we can harness this powerful technology to create a future where everybody wins.

Anthony: I would say I’m looking forward to more of what we’re doing now, thinking more about the futures that we do want. What exactly do those look like? Can we really think through pictures of the future that make sense to us, that are attractive, that are plausible and yet aspirational, and where we can identify things and systems and institutions that we can build now toward the aim of getting us to those futures? I think there’s been a lot of thinking so far about what major problems might arise, and I think that’s really, really important, and that project is certainly not over, and it’s not like we’ve avoided all of those pitfalls by any means, but I think it’s important not just to avoid falling into the pit, but to actually have a destination that we’d like to get to — you know, the resort at the other end of the jungle or whatever.

I find it a bit frustrating when people do what I'm doing now: they talk about talking about what we should and shouldn't do, but they don't actually talk about what we should and shouldn't do. I think the time has come to actually talk about it. In the same way that, when there was the first use of CRISPR in an embryo that came to term, everybody was saying, "Well, we need to talk about what we should and shouldn't do with this. We need to talk about that, we need to talk about it." Let's talk about it already.

So I’m excited about upcoming events that FLI will be involved in that are explicitly thinking about: let’s talk about what that future is that we would like to have and let’s debate it, let’s have that discussion about what we do want and don’t want, try to convince each other and persuade each other of different visions for the future. I do think we’re starting to actually build those visions for what institutions and structures in the future might look like. And if we have that vision, then we can think of what are the things we need to put in place to have that.

Ariel: So one of the reasons that I wanted to bring Gaia on is because I’m working on a project with her — and it’s her project — where we’re looking at this process of what’s known as worldbuilding, to sort of look at how we can move towards a better future for all. I was hoping you could describe it, this worldbuilding project that I’m attempting to help you with, or work on with you. What is worldbuilding, and how are you modifying it for your own needs?

Gaia: Yeah. Worldbuilding is a really fascinating set of techniques. It's a process that has its roots in narrative fiction. You can think of, for example, the entire complex world that J.R.R. Tolkien created for The Lord of the Rings series. And in more contemporary times, some spectacularly advanced worldbuilding is occurring in the gaming industry: huge connected systems of systems that underpin worlds in which millions of people today are playing, socializing, buying and selling goods, engaging in an economy. These are vast online worlds that are not just contained on paper as in a book, but are actually embodied in software. And over the last decade, worldbuilders have begun to formally bring these tools outside of the entertainment business, outside of narrative fiction and gaming, film and so on, and really into society and communities. So I really define worldbuilding as a powerful act of creation.

And one of the reasons that it is so powerful is that it really facilitates collaborative creation. It's a collaborative design practice. And in my personal definition of worldbuilding, the way that I'm thinking of it and using it, it unfolds in four main stages. The first stage is: we develop a foundation of shared knowledge that's grounded in science, and research, and relevant domain expertise. The second phase builds on that foundation of knowledge: we engage in an exercise where we predict how the interconnected systems that have emerged in this knowledge base will evolve, and we imagine the state of their evolution at a specific point in the future. Then the third phase is really about capturing that state in all its complexity, and making that information useful to the people who need to interface with it. That can be in the form of interlinked databases, and particularly also in the form of visualizations, which help make these sort of abstract ideas feel more present and concrete. And then the fourth and final phase is utilizing that resulting world as a tool that can support scenario simulation, research, and development in many different areas, including public policy, media production, education, and product development.

I mentioned that these techniques are being brought outside of the realm of entertainment. So rather than just designing fantasy worlds for the sole purpose of containing narrative fiction and stories, these techniques are now being used with communities, and Fortune 500 companies, and foundations, and NGOs, and other places, to create plausible future worlds. It’s fascinating to me to see how these are being used. For example, they’re being used to reimagine the mission of an organization. They’re being used to plan for the future, and plan around a collective vision of that future. They’re very powerful for developing new strategies, new programs, and new products. And I think to me one of the most interesting things is really around informing policy work. That’s how I see worldbuilding.

Ariel: Are there any actual examples that you can give or are they proprietary?

Gaia: There are many examples that have created some really incredible outcomes. One of the first examples of worldbuilding that I ever learned about was a project that was done with a native Alaskan tribe. And the comments that came from the tribe about that experience were what really piqued my interest. Because they said things like, "This enabled us to sort of leapfrog over the barriers in our current thinking and imagine possibilities that were beyond what we had considered." This project brought together several dozen members of the community, again, to engage in this collaborative design exercise, and actually visualize and build out those systems and understand how they would be interconnected. And it ended up resulting in, I think, some really incredible things. Like a partnership with MIT where they brought a digital fabrication lab onto their reservation, and created new education programs around digital design and digital fabrication for their youth. And there are a lot of other things that are still coming out of that particular worldbuild.

There are other examples where Fortune 500 companies are building out really detailed, long-term worldbuilds that are helping them stay relevant, and imagine how their business model is going to need to transform in order to adapt to really plausible, probable futures that are just around the corner.

Ariel: I want to switch now to what you specifically are working on. The project we’re looking at is looking roughly 20 years into the future. And you’ve sort of started walking through a couple systems yourself while we’ve been working on the project. And I thought that it might be helpful if you could sort of walk through, with us, what those steps are to help understand how this process works.

Gaia: Maybe I’ll just take a quick step back, if that’s okay and just explain the worldbuild that we’re preparing for.

Ariel: Yeah. Please do.

Gaia: This is a project called Augmented Intelligence. The first Augmented Intelligence summit is happening in March 2019. And our goal with this project is really to engage with and shift the culture, and also our mindset, about the future of artificial intelligence. And to bring together a multidisciplinary group of leaders from government, academia, and industry, and to do a worldbuild that's focused on this idea of: what does our future world look like with advanced AI deeply integrated into it? And to go through the process of really imagining and predicting that world in a way that's just a bit further beyond the horizon that we normally see and talk about. And that exercise, that's really where we're getting that training for long-term thinking, and for systems-level thinking. And the world that results — our hope is that it will allow us to develop better intuitions, to experiment, to simulate scenarios, and really to have a more attuned capacity to engage in many ways with this future. And ultimately explore how we want to evolve our tools and our society to meet that challenge.

What comes out of this process, which really is a generative process, is a set of interconnected assets and systems that inhabit and embody a world. That world should allow us to experiment, simulate scenarios, and develop a more attuned capacity to engage with the future, both on an intuitive level and in a more formal, structured way. And ultimately our goal is to use this tool to explore how we want to evolve as a society, as a community, and to allow ideas to emerge about what solutions and tools will be needed to adapt to that future. Our goal is to really bootstrap a steering mechanism that allows us to navigate more effectively toward outcomes that support human flourishing.

Ariel: I think that’s really helpful. I think an example to walk us through what that looks like would be helpful.

Gaia: Sure. You know, basically what would happen in a worldbuilding process is that you would have some constraints or some sort of seed information that you think is very likely — based on research, based on the literature, based on sort of the input that you’re getting from domain experts in that area. For example, you might say, “In the future we think that education is all going to happen in a virtual reality system that’s going to cover the planet.” Which I don’t think is actually the case, but just to give an example. You might say something like, “If this were true, then what are the implications of that?” And you would build a set of systems, because it’s very difficult to look at just one thing in isolation.

Because as soon as you start to do that — John Muir says, "As soon as you try to look at just one thing, you find that it is irreversibly connected to everything else in the universe." And I apologize to John Muir for not getting that quote exactly right; he says it much more eloquently than that. But the idea is there. And that's sort of what we leverage in a worldbuilding process: you take one idea and then you start to unravel all of the implications, and all of the interconnecting systems that would be logical, and also possible, if that thing were true. It really does depend on the quality of the inputs. We're working really, really hard to make sure that our inputs are believable and plausible, but don't put too many constraints on the process that unfolds. Because we really want to tap into the creativity in the minds of this incredible group of people that we're gathering, and that is where the magic will happen.

Ariel: To make sure that I’m understanding this right: if we use your example of, let’s say all education was being taught virtually, I guess questions that you might ask or you might want to consider would be things like: who teaches it, who’s creating it, how do students ask questions, who would their questions be directed to? What other types of questions would crop up that we’d want to consider? Or what other considerations do you think would crop up?

Gaia: You also want to look at the infrastructure questions, right? So if that's really something that is true all over the world, what do server farms look like in that future, and what's the impact on the environment? Is there some complementary innovation that has happened in the field of computing that has made computing far more efficient? How have we been able to do this, given that there are certain physical limitations that just exist on our planet? If X is true in this interconnected system, then how have we shaped, and molded, and adapted everything around it to make that thing true? You can look at infrastructure, you can look at culture, you can look at behavior, you can look at, as you were saying, communication and representation in that system and who is communicating. What are the rules? I mean, I think a lot about the legal framework, and the political structure that exists around this. So who has power and agency? How are decisions made?

Ariel: I don’t know what this says about me, but I was just wondering what detention looks like in a virtual world.

Gaia: Yeah. It’s a good question. I mean, what are the incentives and what are the punishments in that society? And do our ideas of what incentives and punishments look like actually change in that context? There isn’t a place where you can come on a Saturday if there’s no physical school yard. How is detention even enforced when people can log in and out of the system at will?

Ariel: All right, now you have me wondering what recess looks like.

Gaia: So you can see that there are many different fascinating sort of rabbit holes that you could go down. And of course our goal is to really make this process really useful to imagining the way that we want our policies, and our tools, and our education to evolve.

Ariel: I want to ask one more question about … Well, it’s sort of about this but there’s also a broader aspect to it. And that is, I hear a lot of talk — and I’m one of the people saying this because I think it’s absolutely true — that we need to broaden the conversation and get more diverse voices into this discussion about what we want our future to look like. But what I’m finding is that this sounds really nice in theory, but it’s incredibly hard to actually do in practice. I’m under the impression that that is some of what you’re trying to address with this project. I’m wondering if you can talk a little bit about how you envision trying to get more people involved in considering how we want our world to look in the future.

Gaia: Yeah, that's a really important question. One of the sources of inspiration for me on this point was a conversation with Stuart Russell — an interview with Stuart Russell, I should say — that I listened to. We've been really fortunate and we are thrilled that he's one of our speakers and he'll be involved in the worldbuilding process. And he talks about this idea that the artificial intelligence researchers, the roboticists, the technologists who are building these amplifying tools that are just increasing in potency year over year, are not the only ones who need to have input into the conversation around how they're utilized and the implications for all of us. And that's really one of the core philosophies behind this particular project: that we really want it to be a multidisciplinary group that comes together, and we're already seeing that. We have a really wonderful set of collaborators who are thinking about ethics in this space, and who are thinking about a broader definition of ethics, and different cultural perspectives on ethics, and how we can create a conversation that allows space for those to simultaneously coexist.

Allison: I recently had a similar kind of question that arose in conversation, which was about: why are we lacking positive future visions so much? Why are we all kind of stuck in a snapshot of the current suboptimal macro situation? I do think it’s our inability to really think in larger terms. If you look at our individual human life, clearly for most of us, it’s pretty incredible — our ability to lead much longer and healthier lives than ever before. If we compare this to how well humans used to live, this difference is really unfathomable. I think Yuval Harari said it right, he said “You wouldn’t want to have lived 100 years ago.” I think that’s correct. On the other hand I also think that we’re not there yet.

I find it, for example, pretty peculiar that we say we value freedom of choice in everything we do, but when it comes to the one thing that's kind of the basis of all of our freedoms, which is our very existence, we leave it to slowly deteriorate through aging, which degrades ourselves and everything we value. I think that every day aging is burning libraries. We've come a long way, but we're not safe, and we are definitely not there yet. I think the same holds true for civilization at large. Thanks to a lot of technologies our living standards have been getting better and better, and I think the decline of poverty and violence are just a few examples.

We can share knowledge much more easily, and I think everyone who's read Enlightenment Now will be kind of tired of those graphs, but again, I also think that we're not there yet. Even though we have fewer wars than ever before, the ability to wipe ourselves out as a species also really exists, and in fact this ability is now available to more people; as technologies mature, it may really only take a small and well-curated group of individuals to cause havoc with catastrophic consequences. If you let that sink in, it's really absurd that we have no emergency plan for the use of technological weapons. We have no plans to rebuild civilization. We have no plans to back up human life.

I think that current news articles take too much of a short-term view. They're more of a snapshot. The long-term view, on the one hand, opens your eyes to, "Hey, look how far we've come," but also to, "Oh man. We're here, and we've made it so far, but there's no feasible plan for safety yet." I do think we need to change that. So the long view doesn't only hand us rosy glasses; it also brings the realization that we ought to do more, precisely because we've come so far.

Josh: Yeah, one of the things that makes this time so dangerous is we're at this kind of fork in the road, where if we go this one way, like say, with figuring out how to develop friendliness in AI, we could have this amazing, astounding future for humanity that stretches for billions and billions and billions of years. One of the things that really opened my eyes was, I always thought that the heat death of the universe would spell the end of humanity. There's no way we'll ever make it past that, because that's just the cessation of everything that makes life happen, right? And we will probably have perished long before that. But let's say we figured out a way to just make it to the last second and humanity dies at the same time the universe does. There's still an expiration date on humanity. We still go extinct eventually. But one of the things I ran across when I was doing research for the physics episode is that the concept of growing a universe from seed, basically, in a lab is out there. It's been worked out. I don't remember who came up with it, but somebody has sketched out basically how to do this.

It’s 2018. If we think 100 or 200 or 500 or a thousand years down the road and that concept can be built upon and explored, we may very well be able to grow universes from seed in laboratories. Well, when our universe starts to wind down or something goes wrong with it, or we just want to get away, we could conceivably move to another universe. And so we suddenly lose that expiration date for humanity that’s associated with the heat death of the universe, if that is how the universe goes down. And so this idea that we have a future lifetime that spans into at least the multiple billions of years — at least a billion years if we just manage to stay alive on Planet Earth and never spread out but just don’t actually kill ourselves — when you take that into account the stakes become so much higher for what we’re doing today.

Ariel: So, we’re pretty deep into this podcast, and we haven’t heard anything from Anders Sandberg yet, and this idea that Josh brought up ties in with his work. Since we’re starting to talk about imagining future technologies, let’s meet Anders.

Anders: Well, I'm delighted to be on this. I'm Anders Sandberg. I'm a senior research fellow at the Future of Humanity Institute at the University of Oxford.

Ariel: One of the things that I love, just looking at your FHI page, you talk about how you try to estimate the capabilities of future technology. I was hoping you could talk a little bit about what that means, what you’ve learned so far, how one even goes about studying the capabilities of future technologies?

Anders: Yeah. It is a really interesting problem because technology is based on ideas. As a general rule, you cannot predict what ideas people will come up with in the future, because if you could, you would already kind of have that idea. So this means that, especially technologies that are strongly dependent on good ideas, are going to be tremendously hard to predict. This is of course why artificial intelligence is a little bit of a nightmare. Similarly, biotechnology is strongly dependent on what we discover in biology and a lot of that is tremendously weird, so again, it’s very unpredictable.

Meanwhile, other domains of life are advancing at a more sedate pace. It's more like you incrementally improve things. So the ideas are certainly needed, but we don't really change everything around. Think of microprocessors: they are getting better, and a lot of the improvements are small, incremental ones. Some of them require a lot of intelligence to come up with, but in the end it all sums together. It's a lot of small things adding up. So you see a relatively smooth development in the large.

Ariel: Okay. So what you’re saying is we don’t just have each year some major discovery, and that’s what doubles it. It’s lots of little incremental steps.

Anders: Exactly. But if you look at the performance of some software, quite often it goes up smoothly because the computers are getting better, and then somebody has a brilliant idea that can do it not just in 10% less time, but maybe in 10% of the time it would have taken. For example, the fast Fourier transform that people developed in the 60s and 70s enables the compression we use today for video and audio and enables multimedia on the internet. Without that speedup, it would not be practical, even with current computers. This is true for a lot of things in computing. You get a surprising insight, and a problem that previously might have been impossible to do efficiently suddenly becomes quite convenient. So the problem is of course: what can we say about the abilities of future technology if these things happen?
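To make that kind of jump concrete, here is a minimal sketch (ours, not something discussed in the episode) in Python with NumPy: it compares a direct O(n^2) discrete Fourier transform with the O(n log n) fast Fourier transform on the same signal and the same hardware, so any speedup comes entirely from the better algorithm.

```python
import time
import numpy as np

def naive_dft(x):
    """Direct discrete Fourier transform: each output bin is a full dot product, O(n^2) overall."""
    n = len(x)
    k = np.arange(n)
    twiddle = np.exp(-2j * np.pi * np.outer(k, k) / n)  # n x n matrix of complex exponentials
    return twiddle @ x

x = np.random.rand(2048)

t0 = time.perf_counter()
slow = naive_dft(x)
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
fast = np.fft.fft(x)  # fast Fourier transform, O(n log n)
t_fft = time.perf_counter() - t0

print("results agree:", np.allclose(slow, fast))
print(f"naive DFT: {t_naive:.4f} s, FFT: {t_fft:.6f} s")
```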

One of the nice things you can do is lean on the laws of physics. There are good reasons to think that perpetual motion machines cannot work, because we actually understand energy conservation and the laws of thermodynamics, which give very strong reasons why this cannot happen. We can be pretty certain that that's not possible. We can analyze what would be possible if you had perpetual motion machines or faster-than-light transport, and you can see that some of the consequences are really weird, which makes you suspect that this is probably not going to happen. So that's one way of looking at it. But you can also do the reverse: you can take laws of physics and engineering that you understand really well and make fictional machines — essentially work out all the details and say, "okay, I can't build this, but were I to build it, what properties would it have?" If I wanted to build, let's say, a machine made out of atoms, could I make it work? It turns out that this is possible to do in a rigorous way, and it tells you the capabilities of machines that don't exist yet and that maybe we will never build, but it shows you what's possible.

This is what Eric Drexler did for nanotechnology in the 80s and 90s. He basically worked out what would be possible if we could put atoms in the right place. He could demonstrate that this would produce machines of tremendous capability. We still haven’t built them, but he proved that these can be built — and we probably should build them because they are so effective, so environmentally friendly, and so on.

Ariel: So you gave the example of what he came up with a while back. What sort of capabilities have you come across that you thought were interesting that you’re looking forward to us someday pursuing?

Anders: I've been working a little bit on the question of whether it is possible to settle a large part of the universe. I have been working out, together with my colleagues, a bit of the physical limitations of that. All in all, we found that a civilization doesn't need to use an enormous, astronomical amount of matter and energy to settle a very large chunk of the universe. The total amount of matter corresponds to roughly a Mercury-sized planet in a solar system in each of the galaxies. Many people would say that if you want to settle the universe you need enormous spacecraft and an enormous amount of energy, and it looks like you would be able to see that across half of the universe. But we could demonstrate that if you essentially use the matter from a really big asteroid or a small planet, you can get enough solar collectors to launch small spacecraft to all the stars and all the galaxies within reach, and there you again use a bit of asteroid material to do the same thing. The laws of physics allow intelligent life to spread across an enormous amount of the universe in a rather quiet way.

Ariel: So does that mean you think it’s possible that there is life out there and it’s reasonable for us not to have found it?

Anders: Yes. If we were looking at the stars, we would probably not notice if one or two stars in remote galaxies were covered with solar collectors. It's rather easy to miss them among the hundreds of billions of other stars. This was actually the reason we did this paper: we demonstrate that many of the proposed explanations of the Fermi paradox — that annoying question that, well, there ought to be a lot of intelligent life out in the universe given how large it is, and we tend to think that it's relatively likely, yet we don't see anything — are based on the possibility of colonizing just the Milky Way. In this paper, we demonstrate that actually you need to care about all the other galaxies too. In a sense, we made the Fermi paradox between a million and a billion times worse. Of course, this is all in a day's work for us in the Philosophy Department, making everybody's headaches bigger.

Ariel: And now it’s just up to someone else to figure out the actual way to do this technically.

Anders: Yeah, because it might actually be a good idea for us to do it.

Ariel: So Josh, you’ve mentioned the future of humanity a couple of times, and humanity in the future, and now Anders has mentioned the possibility of colonizing space. I’m curious how you think that might impact humanity. How do you define humanity in the future?

Josh: I don't know. That's a great question. It could take any number of different routes. Robin Hanson is an economist who came up with the Great Filter hypothesis, and I talked to him about that very question. His idea was — and I'm sure it's not just his, it's probably a pretty popular idea — that once we spread out from Earth and start colonizing further and further out into the galaxy, and then into the universe, we'll undergo speciation events: there will be multiple species of humans in the universe again, just like there were 50,000 years ago, when we shared Earth with multiple species of humans.

The same thing is going to happen as we spread out from Earth. I mean, I guess the question is, which humans are you talking about, in what galaxy? I also think there’s a really good chance — and this could happen among multiple human species — that at least some humans will eventually shed their biological form and upload themselves into some sort of digital format. I think if you just start thinking in efficiencies, that’s just a logical conclusion to life. And then there’s any number of routes we could take and change especially as we merge more with technology or spread out from Earth and separate ourselves from one another. But I think the thing that really kind of struck me as I was learning all this stuff is that we tend to think of ourselves as the pinnacle of evolution, possibly the most intelligent life in the entire universe, right? Certainly the most intelligent on Earth, we’d like to think. But if you step back and look at all the different ways that humans can change, especially like the idea that we might become post-biological, it becomes clear that we’re just a point along a spectrum that keeps on stretching out further and further into the future than it does even into the past.

We just happen to be at one point on that spectrum right now. We're certainly not the end-all be-all of evolution. And ultimately, we may take ourselves out of evolution by becoming post-biological. It's pretty exciting to think about all the different ways that it can happen, all the different routes we can take — there doesn't have to be just one single one either.

Ariel: Okay, so, I kind of want to go back to some of the space stuff a little bit, and Anders is the perfect person for my questions. I think one of the first things I want to ask is, very broadly, as you’re looking at these different theories about whether or not life might exist out in the universe and that it’s reasonable for us not to have found it, do you connect the possibility that there are other life forms out there with an idea of existential hope for humanity? Or does it cause you concern? Or are they just completely unrelated?

Anders: The existence of extraterrestrial intelligence: if we knew they existed, that would in some sense be hopeful, because we would know the universe allows for more than our kind of intelligence, and that intelligence might survive over long spans of time. If instead we discovered that we're all alone except for a lot of ruins from extinct civilizations, that would be very bad news for us. But we might also have this weird situation that we currently find ourselves in, where we don't see anybody and we don't notice any ruins; maybe we're just really unique and should perhaps feel a bit proud or lucky, but also responsible for a whole universe. It's tricky. It seems like we could learn something very important if we understood how much intelligence there is out there. Generally, I have been trying to figure out: is the absence of aliens evidence for something bad? Or might it actually be evidence for something very hopeful?

Ariel: Have you concluded anything?

Anders: Generally, our conclusion has been that the absence of aliens is not surprising. We tend to think that the Fermi paradox implies "oh, there's something strange here": the universe is so big that if you multiply the number of stars by some reasonable probability, you should get loads of aliens. But actually, the problem here is "reasonable probability." We normally tend to think of that as something bigger than one chance in a million or so, but there is no reason the laws of physics couldn't make that probability one in a googol. It turns out that we're uncertain enough about the origin of life and the origins of intelligence and other forms of complexity that it's not implausible that we are the only life within the visible universe. So we shouldn't be too surprised about that empty sky.

One possible reason for the great silence is that life is extremely rare. Another possibility might be that life is not rare, but it's very rare that it becomes the kind of life that evolves complex nervous systems. Another reason might of course be that once you get intelligence, well, it destroys itself relatively quickly. Robin Hanson has called this the Great Filter: we know that one of the terms in the big equation for the number of civilizations in the universe needs to be very small; otherwise, the sky would be full of aliens. But is that one of the early terms, like the origin of life or the origin of intelligence — or the late term, how long intelligence survives? Now, if there is an early Great Filter, this is rather good news for us. We are going to be very unique and maybe a bit lonely, but it doesn't tell us anything dangerous about our own chances. Of course, we might still flub it and go extinct because of our own stupidity, but that's kind of up to us rather than the laws of physics.

On the other hand, if it turns out that there is a late Great Filter, then even though we know the universe might be dangerous, we’re still likely to get wiped out — which is very scary. So, figuring out where the unlikely terms in the big equation are is actually quite important for making a guess about our own chances.

Ariel: Where are we now in terms of that?

Anders: Right now — I have a paper, not published yet but in the review process, where we try to apply proper uncertainty calculations to this. Many people make guesstimates about the probabilities of various things, admit that they're guesstimates, and then get a number at the end that they also admit is a bit uncertain. But that is not a proper uncertainty calculation, so quite a lot of these numbers end up surprisingly biased. So instead of saying that maybe there's one chance in a million that a planet develops life, you should try to have a full range: what's the lowest probability there could be for life, what's the highest, and how do you think it's distributed between them. If you use that kind of proper uncertainty range for each factor, multiply it all together, and do the maths right, then you get a probability distribution for how many alien species there could be in the universe. Even if you start out as somebody who's relatively optimistic about the mean value of all of this, you will still find a pretty big chunk of probability that we're actually alone in the Milky Way, or even the observable universe.

In some sense, this is just common sense. But it's a very nice thing to be able to quantify the common sense, and then start asking: what happens if we, for example, discover that there is life on Mars? What will that tell us? How will that update things? You can use the math to calculate that, and this is what we've done. Similarly, if we notice that there don't seem to be any alien supercivilizations in the visible universe, that's a very weak update, but you can still show that it shifts our estimates of the probability of life and intelligence much more than it shifts our estimates of the longevity of civilizations.

Mathematically this gives us a reason to think that the Great Filter might be early. The absence of life might be rather good news for us because it means that once you get intelligence, there’s no reason why it can’t persist for a long time and grow into something very flourishing. That is a really good cause of existential hope. It’s really promising, but we of course need to do our observations. We actually need to look for life, we need to look out in the sky and see. You may find alien civilizations. In the end, any amount of mathematics and armchair astrobiology, that’s always going to be disproven by any single observation.
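As a rough illustration of the kind of uncertainty calculation Anders describes, here is a minimal Monte Carlo sketch in Python. It is not code from his paper, and every range below is an illustrative placeholder rather than anyone's actual estimate: each factor of a Drake-style equation gets a full uncertainty range instead of a single guess, the sampled factors are multiplied together, and the result is a distribution over the number of detectable civilizations from which you can read off the probability that we are effectively alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

def log_uniform(low, high, size):
    """Sample uniformly in log space between low and high."""
    return 10.0 ** rng.uniform(np.log10(low), np.log10(high), size)

# Placeholder ranges for each Drake-style factor (illustrative only).
R_star = log_uniform(1, 100, n_samples)     # star formation rate (stars per year)
f_p    = log_uniform(0.1, 1, n_samples)     # fraction of stars with planets
n_e    = log_uniform(0.1, 10, n_samples)    # habitable planets per star with planets
f_l    = log_uniform(1e-30, 1, n_samples)   # probability that life arises
f_i    = log_uniform(1e-10, 1, n_samples)   # probability that intelligence follows
f_c    = log_uniform(0.01, 1, n_samples)    # fraction that become detectable
L      = log_uniform(100, 1e9, n_samples)   # longevity of the detectable phase (years)

# Multiplying full distributions, not point guesses, gives a distribution over N.
N = R_star * f_p * n_e * f_l * f_i * f_c * L

print("median number of detectable civilizations:", np.median(N))
print("probability that N < 1 (effectively alone):", np.mean(N < 1))
```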

Ariel: That comes back to a question that came to mind a bit earlier. As you're looking at all of this stuff, and especially at the capabilities of future technologies, once we figure out what could possibly be done, can you talk a little bit about the limitations that keep us from actually doing it today? How impossible is it?

Anders: Well, impossible is a really tricky word. When I hear somebody say “it’s impossible,” I immediately ask “do you mean against the laws of physics and logic” or “we will not be able to do this for the foreseeable future” or “we can’t do it within the current budget”?

Ariel: I think maybe that’s part of my question. I’m guessing a lot of these things probably are physically possible, which is why you’ve considered them, but yeah, what’s the difference between what we’re technically capable of today and what, for whatever reason, we can’t budget into our research?

Anders: We have a domain of technologies that we have already been able to construct. Some of them are maybe too expensive to be very useful. Some of them still require a bunch of grad students holding them up and patching them as they break all the time, but we can kind of build them. And then there's some technology that we are very robustly good at: we have been making cog wheels and combustion engines for decades now and we're really good at that. Then there are the technologies where we can do exploratory engineering to demonstrate that if we actually had cog wheels made out of pure diamond, or a Dyson shell surrounding the sun collecting energy, they could do the following things.

So they don't exist as practical engineering. You can work out blueprints for them, and in some sense of course, once we have a complete enough blueprint, if you asked whether you could build the thing, you could do it. The problem is that normally you need the tools and resources for that, and you need to make the tools to make the tools, and the tools to make those tools, and so on. So if we wanted to do atomically precise manufacturing today, we couldn't jump straight to it. What we need to make is a tool that allows us to build things that move us much closer.

The Wright Brothers’ airplane was really lousy as an airplane but it was flying. It’s a demonstration, but it’s also a tool that allows you to make a slightly better tool. You would want to get through this and you’d probably want to have a roadmap and do experiments and figure out better tools to do that.

This is typically where scientists actually have to give way to engineers, because engineers care about solving a problem rather than being the most elegant about it. In science, we want to have this beautiful explanation of how everything works; then we do experiments to test whether it's true and refine our explanation. But in the end, the paper that gets published is going to be the one that has the most elegant understanding. In engineering, the thing that actually sells and changes the world is not going to be the most elegant thing but the most useful thing. The AK-47 is in many ways not a very precise piece of engineering, but that's the point: it should be possible to repair it in the field.

The reason our computers work so well is that we figured out a growth path where you use photolithography to etch silicon chips, and that allowed us to make a lot of them very cheaply. As we learned more and more about how to do that, they became cheaper and more capable, and we developed even better ways of etching them. So in order to build molecular nanotechnology, you would need to go through a somewhat similar chain. It might be that you start out using biology to make proteins, then you use the proteins to make some kind of soft machinery, then you use that soft machinery to make hard machinery, and eventually you end up with something like the work of Eric Drexler.

Ariel: I actually want to step back to the present now. You mentioned computers and how well we're doing with them. But computers — or maybe software, I suppose, is the better example — are also a technology that works today but often fails. Especially when we're considering things like AI safety in the future, what should we make of the fact that we're not designing software to be more robust? If we look at something like airplanes, which are quite robust, we can see that it could be done, but we're still choosing not to.

Anders: Yeah, nobody would want to fly with an airplane that crashed as often as a word processor.

Ariel: Exactly.

Anders: It's true that the earliest airplanes were very crash prone — in fact most of them were probably as bad as our current software is. But the main reason we're not making software better is that most of the time we're not willing to pay for that quality. There are also some very hard engineering problems with engineering complexity. Making a very hard material is not easy, but in some sense it's a straightforward problem. If, on the other hand, you have literally billions of moving pieces that all need to fit together, then it gets tricky to make sure that everything always works as it should. But it can be done.

People have been working on mathematical proofs that certain pieces of software are correct and secure. It’s just that up until recently, it’s been so expensive and tough that nobody really cared to do it except maybe some military groups. Now it’s starting to become more and more essential because we’ve built our entire civilization on a lot of very complex systems that are unfortunately very insecure, very unstable, and so on. Most of the time we get around it by making backup copies and whenever a laptop crashes, well, we reboot it, swear a bit and hopefully we haven’t lost too much work.

That’s not always a bad solution — a lot of biology is like that too. Cells in our bodies are failing all the time but they’re just getting removed and replaced and then we try again. But this, of course, is not enough for certain sensitive applications. If we ever want to have brain-to-computer interfaces, we certainly want to have good security so we don’t get hacked. If we want to have very powerful AI systems, we want to make sure that their motivations are constrained in such a way that they’re helpful. We also want to make sure that they don’t get hacked or develop weird motivations or behave badly because their owners told them to behave badly. Those are very complex problems: It’s not just like engineering something that’s simply safe. You’re going to need entirely new forms of engineering for that kind of learning system.

This is something we’re learning. We haven’t been building things like software for very long and when you think about the sheer complexity of a normal operating system, even a small one running on a phone, it’s kind of astonishing that it works at all.

Allison: I think that Eliezer Yudkowsky once said that the problem of our complex civilization is its complexity. It does seem that technology is outpacing our ability to make sense of it. But I think we have to remind ourselves again of why we developed those technologies in the first place, and of the tremendous promise if we get it right. Of course, on the one hand, solving problems that are created by technologies, such as at least some existential risks, requires some non-technological ingredients, especially human reasoning, sense-making, and coordination.

And I'm not saying that we have to focus on one conception of the good. There are many conceptions of the good: there are transhumanist futures, cosmist futures, extropian futures, and many, many more, and I think that's fine. I don't think we have to agree on a common conception just yet — in fact we really shouldn't. The point is not that we ought to settle soon, but that we have to allow into our lives again the possibility that things can be good, that good things are possible — not guaranteed, but possible. I think to use technologies for good we really need a change of mindset, from pessimism to at least conditional optimism. And we need a plethora of those visions, right? It's not going to be just one of them.

I do think that in order to use technologies for good purposes, we really have to remind ourselves that they can be used for good, and that there are good outcomes in the first place. I genuinely think that often in our research, we put the cart before the horse in focusing solely on how catastrophic human extinction would be. I think this often misses the point that extinction is really only so bad because the potential value that could be lost is so big.

Josh: If we can just make it to this point — Nick Bostrom, whose ideas a lot of The End of the World is based on, calls it technological maturity. It's kind of a play on something that Carl Sagan said about the point we're at now: "technological adolescence" is what Sagan called it, which is this point where we're starting to develop this really intense, amazingly powerful technology that will one day be able to guarantee a wonderful, amazing existence for humanity, if we can survive to the point where we've mastered it safely. That's what stretches out ahead of us over the next hundred or 200 or maybe 300 years. That's the challenge we have in front of us. If we can make it to technological maturity, if we figure out how to make an artificial general intelligence that is friendly to humans, that basically exists to make sure that humanity is well cared for, there's just no telling what we'll be able to come up with and just how vastly improved the life of the average human would be in that situation.

We're talking — honestly, this isn't some crazy far-out, far-future idea. This is conceivably something that we could get done as humans in the next century or two or three. Even if you talk out to 1000 years, that sounds far away. But really, that's not a very long time when you consider just how long a lifespan humanity could have stretching out ahead of it. The stakes almost give me a panic attack when I think of just how close that kind of future is for humankind and just how close to the edge we're walking right now in developing that very same technology.

Max: The way I see it, the future of technology as we go towards artificial general intelligence, and perhaps beyond, could totally make life the master of its own destiny, which makes this a very important time to stop and think: what do we want this destiny to be? The clearer and more positive a vision we can formulate, the more likely it is, I think, that we're going to get that destiny.

Allison: We often seem to think that rather than optimizing for good outcomes, we should aim for maximizing the probability of an okay outcome, but I think for many people it’s more motivational to act on a positive vision, rather than one that is steered by risks only. To be for something rather than against something. To work toward a grand goal, rather than an outcome in which survival is success. I think a good strategy may be to focus on good outcomes.

Ariel: I think it's incredibly important to remember all of the things that we are hopeful for in the future, because all of the ways that the future could be wonderful are precisely why we're trying to prevent existential risks. So let's talk a little bit about existential hope.

Allison: The term existential hope was coined by Owen Cotton-Barratt and Toby Ord to describe the chance of something extremely good happening, as opposed to an existential risk, which is a chance of something extremely terrible occurring. Kind of like describing a eucatastrophe instead of a catastrophe. I personally really agree with this line, because I think for me really it means that you can ask yourself this question of: do you think you can save the future? I think this question may appear at first pretty grandiose, but I think it’s sometimes useful to ask yourself that question, because I think if your answer is yes then you’ll likely spend your whole life trying, and you won’t rest, and that’s a pretty big decision. So I think it’s good to consider the alternative, because if the answer is no then you perhaps may be able to enjoy the little bit of time that you have on Earth rather than trying to spend it on making a difference. But I am not sure if you could actually enjoy every blissful minute right now if you knew that there was just a slight chance that you could make a difference. I mean, could you actually really enjoy this? I don’t think so, right?

I think perhaps we fail — we do our best, but at the final moment something comes along that makes us go extinct anyway. But if we imagine the opposite scenario, in which we have not tried, and it turns out that we could have done something, that an idea we might have had or a skill we might have contributed was missing and now it's too late, I think that's a much worse outcome.

Ariel: Is it fair for me to guess, then, that you think for most people the answer is that yes, there is something that we can do to achieve a more existential hope type future?

Allison: Yeah, I think so. I think that for most people there is at least something that we can be doing if we are not solving the wrong problems. But I do also think that this question is a serious question. If the answer for yourself is no, then I think you can really try to focus on having a life that is as good as it could be right now. But I do think that if the answer is yes, and if you opt in, then I think that there’s no space any more to focus on how terrible everything is. Because we’ve just confessed to how terrible everything is, and we’ve decided that we’re still going to do it. I think that if you opt in, really, then you can take that bottle of existential angst and worries that I think is really pestering us, and put it to the side for a moment. Because that’s an area you’ve dealt with and decided we’re still going to do it.

Ariel: The sentiment that’s been consistent is this idea that the best way to achieve a good future is to actually figure out what we want that future to be like and aim for it.

Max: On one hand, it should be a no-brainer, because that's how we think about life as individuals. Right? I often get students walking into my office at MIT for career advice, and I always ask them about their vision for the future, and they always tell me something positive. They don't walk in there and say, "Well, maybe I'll get murdered. Maybe I'll get cancer. Maybe I'll …" because they know that that's a really ridiculous approach to career planning. Instead, they envision the positive future they aspire to, so that they can constructively think about the challenges, the pitfalls to be avoided, and a good strategy for getting there.

Yet, as a species, we do exactly the opposite. We go to the movies and we watch Terminator, or Blade Runner, or yet another dystopic future vision that just fills us with fear and sometimes paranoia or hypochondria, when what we really need to do, as a species, is the same thing we need to do as individuals: envision a hopeful, inspiring future that we want to rally around. It's a well-known historical fact, right, that the secret to getting more constructive collaboration is to develop a shared positive vision. Why is Silicon Valley in California and not in Uruguay or Mongolia? Well, it's because in the 60s, JFK articulated this really inspiring vision — going to space — which led to massive investments in STEM research and ultimately gave the US the best universities in the world and these amazing high-tech companies. It came from a positive vision.

Similarly, why is Germany now unified into one country instead of fragmented into many? Or Italy? Because of a positive vision. Why are the US states working together instead of having more civil wars against each other? Because of a positive vision of how much greater we'll be if we work together. And if we can develop a more positive vision for the future of our planet, where we collaborate and everybody wins by getting richer and better off, we're again much more likely to get that than if everybody just keeps spending their energy and time thinking about all the ways they can get screwed by their neighbors and all the ways in which things can go wrong — causing some self-fulfilling prophecy basically, where we get a future with war and destruction instead of peace and prosperity.

Anders: One of the things I'm envisioning is that you can make a world where everybody's connected, but connected on their own terms. Right now, we don't have a choice. My smartphone gives me a lot of things, but it also reports my location, and a lot of little apps are sending my personal information to companies and institutions I have no clue about and don't trust. So one important thing might actually be privacy-enhancing technologies. Many of the little near-field microchips we carry around are also indiscriminately reporting what we're doing to nearby antennas. But you could imagine having a little personal firewall that blocks signals you don't approve of. You could have firewalls and ways of controlling the information leaving your smartphone or your personal space. And I think we actually need to develop that, both for security purposes and also to feel that we really are in charge of our private lives.

Some of that privacy is a social convention. We agree on what is private and what is not: this is why we have certain rules about what you are allowed to do with a cell phone in a restaurant. You're not going to carry on a loud phone call at the table — that's rude. And others are not supposed to listen in on the conversations you have with people in the restaurant, even though technically, of course, it's trivial. I think we are going to develop interesting new rules and new technologies to help implement these social rules.

Another area I'm really excited about is the ability to capture energy, for example using solar collectors. Solar collectors are getting exponentially better and are becoming competitive with traditional energy sources in a lot of domains. But the most beautiful thing is that they can be made small and used in a distributed manner. You don't need a big central solar farm, even though it might be very effective. You can have little solar panels on your house or even on gadgets, if they're energy efficient enough. That means you both reduce the risk of a collective failure and get a lot of devices that can now function independently of the grid.

Then I think we are probably going to be able to combine this to fight a lot of emerging biological threats. Right now, we still have the problem that it takes a long time to identify a new pathogen. But I think we're going to see more and more distributed sensors that can help us identify it quickly, global networks that make the medical profession aware that something new has shown up, and hopefully also ways of very quickly brewing up vaccines in an automated manner when something new appears.

My vision is that within one or two decades, if something nasty shows up, the next morning, everybody could essentially have a little home vaccine machine manufacture those antibodies to make you resistant against that pathogen — whether that was a bio weapon or something nature accidentally brewed up.

Ariel: I never even thought about our own personalized vaccine machines. Is that something people are working on?

Anders: Not that much yet.

Ariel: Oh.

Anders: You need to manufacture antibodies cheaply and effectively. This is going to require some fairly advanced biotechnology or nanotechnology, but it's very foreseeable. Basically, you want to have a specialized protein printer. This is something we're moving in the direction of. I don't think anybody's doing it right now, but it's very clearly on the path we're already moving along.

So right now, in order to make a vaccine, you need to go through a very time-consuming process. For example, in the case of flu vaccine, you identify the virus, you multiply the virus, you inject it into chicken eggs to grow it and harvest the antigens, you develop a vaccine, and if you did it all right, you have a vaccine out in a few months, just in time for the winter flu — and hopefully it was for the version of the flu that was actually making the rounds. If you were unlucky, it was a different one.

But what if you could instead take the antigen and sequence it — that's just going to take you a few hours — generate all the proteins, run them through various software and biological screens to remove the ones that don't fit, find the ones that are likely to be good targets for the immune system, automatically generate the antibodies, and automatically screen out the ones that might be bad for patients before testing the rest. Then you might be able to make a vaccine within weeks or days.

Ariel: I really like your vision for the near term future. I’m hoping that all of that comes true. Now, to end, as you look further out into the future — which you’ve clearly done a lot of — what are you most hopeful for?

Anders: I'm currently working on writing a book about what I call "Grand Futures." Assuming humanity survives and gets its act together, however we're supposed to do that, then what? How big could the future possibly be? It turns out that the laws of physics certainly allow us to do fantastic things. We might be able to spread literally over billions of light years. Settling space is definitely physically possible, but so is surviving as a normal biological species on earth for literally hundreds of millions of years — and that's not even stretching it. It might be that if we go post-biological, we can survive up until proton decay, somewhere north of 10^30 years in the future. And as for the amount of intelligence that could be generated, human brains are probably just the start.

We could probably develop ourselves or Artificial Intelligence to think enormously bigger, enormously much more deeply, enormously more profoundly. Again, this is stuff that I can analyze. There are questions about what the meaning of these thoughts would be, how deep the emotions of the future could be, et cetera, that I cannot possibly answer. But it looks like the future could be tremendously grand, enormously much bigger, just like our own current society would strike our stone age ancestors as astonishingly wealthy, astonishingly knowledgeable and interesting.

I’m looking at: what about the stability of civilizations? Historians have been going on a lot about the decline and fall of civilizations. Does that tell us an ultimate limit on what we can plan for? Eventually I got fed up reading historians and did some statistics and got some funny conclusions. But even if our civilization lasts long, it might become something very alien over time, so how do we handle that? How do you even make a backup of your civilization?

And then of course there are questions like “how long can we survive on earth?” And “when the biosphere starts failing in about a billion years, couldn’t we fix that?” What are the environmental ethics issues surrounding that? What about settling the solar system? How do you build and maintain your Dyson sphere? Then of course there’s stellar settlement, intergalactic settlement, and the ultimate limits of physics. What can we say about them, in what ways could physics be really different from what we expect, and what does that do for our chances?

It all leads back to this question: so, what should we be doing tomorrow? What are the near term issues? Some of them are interesting like, okay, so if the future is super grand, we should probably expect that we need to safeguard ourselves against existential risk. But we might also have risks — not just going extinct, but causing suffering and pain. And maybe there are other categories we don’t know about. I’m looking a little bit at all the unknown super important things that we don’t know about yet. How do we search for them? If we discover something that turns out to be super important, how do we coordinate mankind to handle that?

Right now, this sounds totally utopian. Would you expect all humans to get together and agree on something philosophical? That sounds really unlikely. Then again, a few centuries ago the United Nations and the internet would also have sounded totally absurd. The future is big — we have a lot of centuries ahead of us, hopefully.

Max: When I look really far into the future, I also look really far into space and I see this vast cosmos, which is 13.8 billion years old. And most of it, despite what the UFO enthusiasts say, is actually looking pretty dead: wasted opportunities. And if we can help life flourish not just on earth, but ultimately throughout much of this amazing universe, making it come alive and teeming with these fascinating and inspiring developments, that makes me feel really, really inspired.

This is something I hope we can contribute to, we denizens of this planet, right now, here, in our lifetime. Because I think this is the most important time and place probably in cosmic history. After 13.8 billion years, on this particular planet, we’ve actually developed enough technology, almost, to either drive ourselves extinct or to create superintelligence, which can spread out into the cosmos and do either horrible things or fantastic things. More than ever, life has become the master of its own destiny.

Allison: For me, a pretty specific vision would really be a voluntary world, in which different entities, whether they’re AIs or humans, can cooperate freely with each other to realize their interests. I do think that we don’t know where we want to end up. If you look back 100 years, it’s not only that you wouldn’t have wanted to live there, but also that many of the things that were regarded as moral back then are not regarded as moral anymore by most of us, and we can expect the same to hold true 100 years from now. I think rather than locking in any specific types of values, we ought to leave the space of possible values open.

Maybe right now you could try to do something like coherent extrapolated volition, a term coined in AI safety by Eliezer Yudkowsky to describe a goal function of a superintelligence that would execute your goals if you were more the person you wish you were, if we lived closer together, and if we had more time to think and collaborate — so, a kind of perfected version of human morality. I think that perhaps we could do something like that for humans, because we all come from the same evolutionary background. We all share a few evolutionary cornerstones, at least, that make us value family, or make us value a few others of those values, and perhaps we could do something like a coherent extrapolated volition of some basic, very boiled-down values that most humans would agree to. I think that may be possible; I’m not sure.

On the other hand, in a future where we succeed, at least in my version of that, we live not only with humans but with a lot of different mind architectures that don’t share our evolutionary background. For those mind architectures it’s not enough to try to do something like coherent extrapolated volition, because given that they have very different starting conditions, they will also end up valuing very different value sets. In the absence of us knowing what’s in their interests, I think really the only thing we can reasonably do is try to create a framework in which very different mind architectures can cooperate freely with each other, and engage in mutually beneficial relationships.

Ariel: Honestly, I really love that your answer of what you’re looking forward to is that it’s something for everybody. I like that.

Anthony: When you think about what life used to be for most humans, we really have come a long way. I mean, slavery was just fully accepted for a long time. Complete subjugation of women and sexism was just totally accepted for a really long time. Poverty was just the norm. Zero political power was the norm. We are in a place where, although imperfect, many of these things have dramatically changed. Even if they’re not fully implemented, our ideals and our beliefs about human rights and human dignity and equality have completely changed, and we’ve implemented a lot of that in our society.

So what I’m hopeful about is that we can continue that process, and that the way culture and society work 100 years from now is something we would look at from today and say, “Oh my God, they really have their shit together. They have figured out how to deal with differences between people, how to strike the right balance between collective desires and individual autonomy, between freedom and constraint, and how people can feel liberated to follow their own path while not trampling on the rights of others.” These are not in principle impossible things to do, and we fail to do them right now in large part, but I would like to see our technological development be leveraged into a cultural and social development that makes all those things happen. I think that really is what it’s about.

I’m much less excited about more fancy gizmos, more financial wealth for everybody, more power to have more stuff and accomplish more and higher and higher GDP. Those are useful things, but I think they’re means toward an end, and that end is the sort of happiness and fulfillment and enlightenment of the conscious living beings that make up our world. So, when I think of a positive future, it’s very much one filled with a culture that honestly will look back on ours now and say, “Boy, they really were screwed up, and I’m glad we’ve gotten better and we still have a ways to go.” And I hope that our technology will be something that will in various ways make that happen, as technology has made possible the cultural improvements we have now.

Ariel: I think as a woman I do often look back at the way technology enabled feminism to happen. We needed technology to sort of get a lot of household chores accomplished — to a certain extent, I think that helped.

Anthony: There are pieces of cultural progress that don’t require technology, as we were talking about earlier, but are just made so much easier by it. Labor-saving devices helped with feminism; industrialization, I think, helped with serfdom and slavery — we didn’t have to have a huge number of people working in abject poverty and under total control in order for some to have a decent lifestyle; we could spread that around. I think something similar is probably true of animal suffering and meat. It could happen without that — I mean, I fully believe that 100 years from now, or 200 years from now, people will look back at eating meat as just a crazy thing that people used to do. I think that’s just the truth of what’s going to happen.

But it’ll be much, much easier if we have technologies that make that economically viable and easy, rather than pulling teeth through a huge cultural fight, which I think would be hard and long. We should be thinking about, if we had some technological magic wand, what are the social problems that we would want to solve with it, and then let’s look for that wand once we identify those problems. If we could make some social problem much better if we only had such and such technology, that’s a great thing to know, because technologies are something we’re pretty good at inventing. If they don’t violate the laws of physics, and there’s some motivation, we can often generate those things. So let’s think about what they are. For example, what would it take to solve this sort of political and informational mess where nobody knows what’s true and everybody is polarized?

That’s a social problem. It has a social solution. But there might be technologies that would be enormously helpful in making those social solutions easier. So what are those technologies? Let’s think about them. So I don’t think there’s a kind of magic bullet for a lot of these problems. But having that extra boost that makes it easier to solve the social problem I think is something we should be looking for for sure.

And there are lots of technologies that really do help — worth keeping in mind, I guess, as we spend a lot of our time worrying about the ill effects of them, and the dangers and so on. There is a reason we keep pouring all this time and money and energy and creativity into developing new technologies.

Ariel: I’d like to finish with one last question for everyone, and that is: what does existential hope mean for you?

Max: For me, existential hope is hoping for and envisioning a really inspiring future, and then doing everything we can to make it so.

Anthony: It means that we really give ourselves the space and opportunity to continue to progress our human endeavor — our culture, our society — to build a society that really is backstopping everyone’s freedom and actualization, compassion, enlightenment, in a kind of steady, ever-inventive process. I think we don’t often give ourselves as much credit as we should for how much cultural progress we’ve really made in tandem with our technological progress.

Anders: My hope for the future is that we get this enormous open-ended future. It’s going to contain strange and frightening things, but I also believe that most of it is going to be fantastic. It’s going to be roaring onward far, far, far into the long term future of the universe, probably changing a lot of the aspects of the universe.

When I use the term “existential hope,” I contrast that with existential risk. Existential risks are things that threaten to curtail our entire future, to wipe it out, or to make it much smaller than it could be. Existential hope, to me, means that maybe the future is grander than we expect. Maybe we have chances we’ve never seen. And I think we are going to be surprised by many things in the future and some of them are going to be wonderful surprises. That is the real existential hope.

Gaia: When I think about existential hope, I think it’s sort of an unusual phrase. But to me it’s really about the idea of finding meaning, and the potential that each of us has to experience meaning in our lives. And I think that the idea of existential hope, and I should say, the existential part of that, is the concept that that fundamental capability is something that will continue in the very long-term and will not go away. You know, I think it’s the opposite of nihilism, it’s the opposite of the idea that everything is just meaningless and our lives don’t matter and nothing that we do matters.

If I’m questioning that, I like to go and read something like Viktor Frankl’s book Man’s Search for Meaning, which really reconnects me to these incredible, deep truths about the human spirit. That’s a book that tells the story of his time in a concentration camp at Auschwitz. Even in those circumstances, he found within himself, and saw within the people around him, the ability to be kind, to persevere, and to really give of themselves. And there’s just something impossible, I think, to capture in language. Language is a very poor tool, in this case, to try to encapsulate the essence of what that is. I think it’s something that exists on an experiential level.

Allison: For me, existential hope is really about choosing to make a difference, knowing that success is not guaranteed, but making a difference anyway because we simply can’t do it any other way; not trying is really not an option. It’s the first time in history that we’ve created the technologies for our destruction and for our ascent. I think they’re both within our hands, and we have to decide how to use them. So I think existential hope is transcending existential angst, and transcending our current limitations rather than trying to create meaning within them, and I think it’s the right mindset for the time that we’re in.

Ariel: And I still love this idea that existential hope means that we strive toward everyone’s personal ideal, whatever that may be. On that note, I cannot thank my guests enough for joining the show, and I also hope that this episode has left everyone listening feeling a bit more optimistic about our future. I wish you all a happy holiday and a happy new year!

Podcast: Governing Biotechnology, From Avian Flu to Genetically-Modified Babies with Catherine Rhodes

A Chinese researcher recently made international news with claims that he had created the first gene-edited human babies using CRISPR. In doing so, he violated international ethics standards, and he appears to have acted without his funders or his university knowing. But this is only the latest example of biological research triggering ethical concerns. Gain-of-function research a few years ago, which made avian flu transmissible between mammals, also sparked controversy when scientists tried to publish their work. And there’s been extensive debate globally about the ethics of human cloning.

As biotechnology and other emerging technologies become more powerful, the dual-use nature of research — that is, research that can have both beneficial and risky outcomes — is increasingly important to address. How can scientists and policymakers work together to ensure regulations and governance of technological development will enable researchers to do good with their work, while decreasing the threats?

On this month’s podcast, Ariel spoke with Catherine Rhodes about these issues and more. Catherine is a senior research associate and deputy director of the Center for the Study of Existential Risk. Her work has broadly focused on understanding the intersection and combination of risks stemming from technologies and risks stemming from governance. She has particular expertise in international governance of biotechnology, including biosecurity and broader risk management issues.

Topics discussed in this episode include:

  • Gain-of-function research, the H5N1 virus (avian flu), and the risks of publishing dangerous information
  • The roles of scientists, policymakers, and the public to ensure that technology is developed safely and ethically
  • The controversial Chinese researcher who claims to have used CRISPR to edit the genome of twins
  • How scientists can anticipate whether the results of their research could be misused by someone else
  • To what extent does risk stem from technology, and to what extent does it stem from how we govern it?


You can listen to this podcast above, or read the full transcript below. And feel free to check out our previous podcast episodes on SoundCloud, iTunes, Google Play and Stitcher.

 

Ariel: Hello. I’m Ariel Conn with the Future of Life Institute. Now, I’ve been planning to do something about biotechnology this month anyway, since it would go along so nicely with the new resource we just released, which highlights the benefits and risks of biotech. I was very pleased when Catherine Rhodes agreed to be on the show. Catherine is a senior research associate and deputy director of the Center for the Study of Existential Risk. Her work has broadly focused on understanding the intersection and combination of risks stemming from technologies and risks stemming from governance, or a lack of it.

But she has particular expertise in international governance of biotechnology, including biosecurity and broader risk management issues. The timing of Catherine as a guest is also especially fitting given that just this week the science world was shocked to learn that a researcher out of China is claiming to have created the world’s first genetically edited babies.

Now neither she nor I have had much of a chance to look at this case too deeply but I think it provides a very nice jumping-off point to consider regulations, ethics, and risks, as they pertain to biology and all emerging sciences. So Catherine, thank you so much for being here.

Catherine: Thank you.

Ariel: I also want to add that we did have another guest scheduled to join us today who is unfortunately ill, and unable to participate, so Catherine, I am doubly grateful to you for being here today.

Before we get too far into any discussions, I was hoping to just go over some basics to make sure we’re all on the same page. In my readings of your work, you talk a lot about biorisk and biosecurity, and I was hoping you could just quickly define what both of those words mean.

Catherine: Yes, in terms of thinking about both biological risk and biological security, I think about the objects that we’re trying to protect. It’s about the protection of human, animal, and plant life and health, in particular. Some of that extends to protection of the environment. The risks are the risks to those objects and security is securing and protecting those.

Ariel: Okay. I’d like to start this discussion with ethics and policy, looking first at the example of the gain-of-function experiments that caused another stir in the science community a few years ago. That was research done, I believe, on the H5N1 virus, also known as avian flu, and I believe it made the virus more transmissible. First, can you just explain what gain-of-function means? And then I was hoping you could talk a bit about what that research was, and what the scientific community’s reaction to it was.

Catherine: Gain-of-function’s actually quite a controversial term to have selected to describe this work, because a lot of what biologists do is work that would add a function to the organism that they’re working on, without that actually posing any security risk. In this context, it was a gain of a function that would make it perhaps more desirable for use as a biological weapon.

In this case, it was things like an increase in its ability to transmit between mammals; in particular, they were getting it to be transmissible between ferrets in a laboratory, and ferrets are a model for transmission between humans.

Ariel: You actually bring up an interesting point that I hadn’t thought about. To what extent does our choice of terminology affect how we perceive the ethics of some of these projects?

Catherine: I think in this case it was more that the use of that term, which came largely from the security and policy community side, made the conversation with scientists more difficult, as it was felt that this was mislabeling their research, affecting research that shouldn’t really come into this kind of conversation about security. So I think that was where it maybe caused some difficulties.

But I think there’s also understanding that needs to go the other way as well: it’s not necessarily the case that all policymakers are going to have that level of detail about what they mean when they’re talking about science.

Ariel: Right. What was the reaction then that we saw from the scientific community and the policymakers when this research was published?

Catherine: There was firstly a stage of debate about whether those papers should be published or not. There was some guidance given by what’s called the National Science Advisory Board for Biosecurity in the US, that those papers should not be published in full. So, actually, the first part of the debate was about that stage of ‘should you publish this sort of research where it might have a high risk of misuse?’

That was something that the security community had been discussing for at least a decade, that there were certain experiments where they felt that they would meet a threshold of risk, where they shouldn’t be openly published or shouldn’t be published with their methodological details in full. I think for the policy and security community, it was expected that these cases would arise, but this hadn’t perhaps been communicated to the scientific community particularly well, and so I think it came as a shock to some of those researchers, particularly because the research had been approved initially, so they were able to conduct the research, but suddenly they would find that they can’t publish the research that they’ve done. I think that was where this initial point of contention came about.

It then became a broader issue. More generally, how do we handle these sorts of cases? Are there times when we should restrict publication? Or is open publication actually going to be a better way of protecting ourselves, because we’ll all know about the risks as well?

Ariel: Like you said, these scientists had gotten permission to pursue this research, so it’s not like they had reason to think it was too questionable to begin with. And yet, I guess there is that issue of how scientists can think about some of these questions more long term and maybe recognize in advance that the public or policymakers might find their research concerning. Is that something that scientists should be trying to do more of?

Catherine: Yes, and I think that’s part of this point about the communication between the scientific and policy communities, so that these things don’t come as a surprise or a shock. Yes, I think there was something in this. If we’re allowed to do the research, should we not have had more conversation at the earlier stages? I think in general I would say that’s where we need to get to, because if you’re trying to intervene at the stage of publication, it’s probably already too late to really contain the risk of publication, because for example, if you’ve submitted a journal article online, that information’s already out there.

So yes, trying to take it further back in the process, so that these things are considered at the beginning stages of designing research projects, is important. That has been pushed forward by funders, so there are now some clauses asking ‘have you reviewed the potential consequences of your research?’ That is one way of triggering that thinking. But I think there’s been a broader question further back about education and awareness.

It’s all right being asked that question, but do you actually have information that helps you know what would be a security risk? And what elements might you be looking for in your work? So there’s a more general question of how we build awareness amongst the scientific community that these issues might arise, and how we train them to be able to spot some of the security concerns that may be there.

Ariel: Are we taking steps in that direction to try to help educate both budding scientists and also researchers who have been in the field for a while?

Catherine: Yes, there have been quite a lot of efforts in that area over the last decade or so, done by academic groups and civil society. It has been encouraged by states parties to the Biological Weapons Convention, and also by the World Health Organization, which has a document on responsible life sciences research that encourages education and awareness-raising efforts.

I think that those have further to go, and I think some of the barriers to their being taken up are the familiar ones: it’s very hard to find space in a scientific curriculum for that teaching, and more resources are needed in terms of where the materials are that you would go to. That is being built up.

We’re also talking about scientific curricula at the undergraduate and postgraduate level, but how do you extend this throughout scientific careers as well? There needs to be a way of reaching scientists at all levels.

Ariel: We’re talking a lot about the scientists right now, but in your writings, you mention that there are three groups who have responsibility for ensuring that science is safe and ethical. Those are one, obviously the scientists, but then also you mention policymakers, and you mention the public and society. I was hoping you could talk a little bit about how you see the roles for each of those three groups playing out.

Catherine: I think these sorts of issues, they’re never going to be just the responsibility of one group, because there are interactions going on. Some of those interactions are important in terms of maybe incentives. So we talked about publication. Publication is of such importance within the scientific community and within their incentive structures. It’s so important to publish, that again, trying to intervene just at that stage, and suddenly saying, “No, you can’t publish your research” is always going to be a big problem.

It’s to do with the norms and the practices of science, but some of that, again, comes from the outside. Are there ways we can reshape those sorts of structures to be more useful? That’s one way of thinking about it. I think we need clear signals from policymakers as well, about when to take threats seriously or not. If we’re not hearing from policymakers that there are significant security concerns around some forms of research, then why should we expect the scientist to be aware of it?

Yes, policy also has control and governance mechanisms within it, so it can be very useful, for instance in deciding what research can be done; that’s often decided by funders and government bodies, and not by the research community themselves. Then there’s the question of how, more broadly, to bring in the public dimension. I think what I mean there is that it’s about all of us being aware of this. It shouldn’t be about isolating one particular community and saying, “Well, if things go wrong, it was you.”

Socially, we’ve got decisions to make about how we feel about certain risks and benefits and how we want to manage them. In the gain-of-function case, the research that was done had the potential for real benefits for understanding avian influenza, which could produce a human pandemic, and therefore there could be great public health benefits associated with some of this research that also poses great risks.

Again, when we’re dealing with something that for society, could bring both risks and benefits, society should play a role in deciding what balance it wants to achieve.

Ariel: I guess I want to touch on this idea of how we can make sure that policymakers and the public – this comes down to a three-way communication. I guess my question is, how do we get scientists more involved in policy, so that policymakers are informed and there is more of that communication? I guess maybe part of the reason I’m fumbling over this question is that it’s not clear to me how much responsibility we should be putting specifically on scientists for this, versus how much responsibility goes to the other groups.

Catherine: On science becoming more involved in policy: part of thinking about the relationship between science and policy, and science and society, is that we expect policymakers to consider how to have regulation and governance that’s appropriate to scientific practice and to advances in emerging science and technology, and for that they need information from the scientific community. There’s a responsibility of policymakers to seek some of that information, but also for scientists to be willing to engage in the other direction.

I think that’s the main answer to how they could be more informed and how there could be more communication. One of the useful ways that’s done at the moment is by having, say, meetings with a horizon-scanning element, so that scientists can have input on where we might see advances going. But you can also include among the participants policymakers, and maybe people who know more about things like technology transfer, startups, and investment, so they can see what’s going on in terms of where the money’s going. Bringing those groups together to look at where the future might be going is quite a good way of capturing some of those advances.

And it helps inform the whole group, so I think those sorts of processes are good, and there are some examples of those, and there are some examples where the international science academies come together to do some of that sort of work as well, so that they would provide information and reports that can go forward to international policy processes. They do that for meetings at the Biological Weapons Convention, for example.

Ariel: Okay, so I want to come back to this broadly in a little bit, but first I want to touch on biologists and ethics and regulation a little bit more generally. Because I guess I keep thinking of the famous Asilomar meeting, which I believe was in the mid-’70s, in which biologists got together, recognized some of the risks in their field, and chose to pause the work they were doing because there were ethical issues. I tend to credit them with being more ethically aware than a lot of other scientific fields.

But it sounds like maybe that’s not the case. Was that just a special example in which scientists were unusually proactive? I guess, should we be worried about scientists and biosecurity, or is it just a few bad apples like we saw with this recent Chinese researcher?

Catherine: I think in terms of ethical awareness, it’s not that I don’t think biologists are ethically aware, but it is that there can be a lot of different things coming onto their agendas in that, and again, those can be pushed out by other practices within your daily work. So, I think for example, one of the things in biology, often it’s quite close to medicine, and there’s been a lot over the last few decades about how we treat humans and animals in research.

There are ethics and biomedical ethics, and there are practices to do with consent and participation of human subjects, that people are aware of. It’s just that sometimes you’ve got such an overload of all these different issues you’re supposed to be aware of and responding to (sustainable development and environmental protection is another one) that often things will fall off the agenda, or knowing which you should prioritize can be difficult.

I do think there’s a lack of awareness of the past history of biological warfare programs, and of the fact that scientists have always been involved with them; and then, looking forward, of how much easier it may become, because of trends in technology, for more actors to have access to such technologies, and the implications that might have.

I think that picks up on what you were saying about whether we’re just concerned about the bad apples: are there some rogue people out there that we should be worried about? I think there are two parts to that, because there may be some things that are more obvious, where you can spot, “Yeah, that person’s really up to something they shouldn’t be.” I think there are probably mechanisms where people do tend to be aware of what’s going on in their laboratories.

Although, as you mentioned, in the recent Chinese case of potentially CRISPR gene-edited babies, it seems clear that people within that person’s laboratory didn’t know what was going on, the funders didn’t know what was going on, and the government didn’t know what was going on. So yes, there will be some cases where someone is quite obviously doing something bad.

I think that’s probably an easier thing to handle and to conceptualize. But now we’re getting these questions about scientific work and research that has clear benefits and is being done for those beneficial purposes: how do you work out whether the results of that could be misused by someone else? How do you frame whether you have any responsibility for how someone else would use it, when they may well not be anywhere near you in a laboratory? They may be very remote; you probably have no contact with them at all. So how can you judge and assess how your work may be misused, and then try to make some decision about how you should proceed with it? I think that’s a more complex issue.

That probably does, as you say, speak to whether there are things in scientific cultures and working practices that might assist with dealing with that, or might make it problematic. As I’ve mentioned a few times, there’s a lot going on in terms of the incentive structures that scientists are working in, which connect more broadly with global economic incentives. Again, not knowing the full details of the recent Chinese CRISPR case, there can often be almost racing dynamics between countries to have done some of this research and to be ahead in it.

I think that did happen with the gain-of-function experiments, so that when the US had a moratorium on doing them, China ramped up its experiments in the same area. There are all these kinds of incentive structures going on as well, and I think those do affect wider scientific and societal practices.

Ariel: Okay. Quickly touching on some of what you were talking about, in terms of researchers who are doing things right, in most cases I think what happens is this case of dual use, where the research could go either way. I think I’m going to give scientists the benefit of the doubt and say most of them are actually trying to do good with their research. That doesn’t mean that someone else can’t come along later and then do something bad with it.

This is I think especially a threat with biosecurity, and so I guess, I don’t know that I have a specific question that you haven’t really gotten into already, but I am curious if you have ideas for how scientists can deal with the dual use nature of their research. Maybe to what extent does more open communication help them deal with it, or is open communication possibly bad?

Catherine: Yes, I think it’s possibly good and possibly bad. Again, it’s a difficult question without putting the practice into context. It shouldn’t be that just the scientist has to think through these issues of dual use and whether the work can be misused. Is there really any new information coming out about how serious a threat this might be? Do we know that this is being pursued by any terrorist group? Do we know why it might be of particular concern?

I think another interesting thing is that you might get combinations of technology that have developed in different areas, so you might get someone who does something that helps with the dispersal of an agent, that’s entirely disconnected from someone who might be working on an agent, that would be useful to disperse. Knowing about the context of what else is going on in technological development, and not just within your own work is also important.

Ariel: Just to clarify, what are you referring to when you say agent here?

Catherine: In this case, again, thinking of biology, that might be a microorganism. If you were developing a biological weapon, you wouldn’t just need a nasty pathogen; you would need some way of dispersing and disseminating it for it to be weaponized. Work on those components may, for beneficial reasons, be going on in very different places. How would scientists be able to predict where those might combine and come together, and create a bigger risk than just their own work?

Ariel: Okay. And then I really want to ask you about the idea of race dynamics, but I don’t have a specific question, to be honest. It’s a concerning idea, and it’s something that we look at in artificial intelligence, and it’s clearly a problem with nuclear weapons. I guess, what concerns do we have when we look at races in biology?

Catherine: It may not even be specific to races in biology. It’s this thing where, not even thinking of military uses of technology, we have very strong drivers for economic growth, and technology advances are seen as really important to innovation and economic growth.

So, I think this does provide a real barrier to collective state action against some of these threats, because if a country can see an advantage of not regulating an area of technology as strongly, then they’ve got a very strong incentive to go for that. It’s working out how you might maybe overcome some of those economic incentives, and try and slow down some of the development of technology, or application of technology perhaps, to a pace where we can actually start doing these things like working out what’s going on, what the risks might be, how we might manage those risks.

But that is a hugely controversial kind of thing to put forward, because the idea of slowing down technology, which is clearly going to bring us these great benefits and is linked to progress and economic progress is a difficult sell to many states.

Ariel: Yeah, that makes sense. I think I want to turn back to the Chinese case very quickly. I think this is an example of what a lot of people fear, in that you have this scientist who isn’t being open with the university that he’s working with, isn’t being open with his government about the work he’s doing. It sounds like even the people who are working for him in the lab, and possibly even the parents of the babies that are involved may not have been fully aware of what he was doing.

We don’t have all the information, but at the moment, at least what little we have sounds like an example of a scientist gone rogue. How do we deal with that? What policies are in place? What policies should we be considering?

Catherine: I think I share where the concerns in this are coming from, because it looks like there were multiple failures of the layers of systems that should have been able to pick this up and stop it. Yes, we would usually expect that the funder of the research, or the institution the person’s working in, the government through regulation, or the colleagues of a scientist would be able to pick up on what’s happening and have some ability to intervene, and that doesn’t seem to have happened.

Knowing that these multiple things can all fall down is worrying. I think an interesting thing about how we deal with this is that there seems to be a very strong reaction from the scientific community working around those areas of gene editing, to all come together and collectively say, “This was the wrong thing to do, this was irresponsible, this is unethical. You shouldn’t have done this without communicating more openly about what you were doing, what you were thinking of doing.”

It’s really interesting to see that community pushback. If I were a scientist working in a similar area, I’d be really put off by that, thinking, “Okay, I should stay in line with what the community expects me to do.” I think that is important.

It’s also going to kick in from the more top-down regulatory side as well. Whether China will now get some new regulation in place and do some more checks down through the institutional levels, I don’t know. Likewise, I don’t know whether internationally it will bring a further push for coordination on how we want to regulate those experiments.

Ariel: I guess this also brings up the question of international standards. It does look like we’re getting very broad international agreement that this research shouldn’t have happened. But how do we deal with cases where maybe most countries are opposed to some type of research and another country says, “No, we think it could be possibly ethical so we’re going to allow it?”

Catherine: I think this is, again, the challenging situation. It’s interesting to me: this picks up on the international debates about human cloning, maybe 15-20 years ago, over whether there should be a ban on human cloning. There was a declaration made (there is a UN declaration against human cloning), but it fell down in terms of actually being more than a declaration, of having something stronger in terms of international law, basically because of the differences between states’ views of the status of the embryo.

Regulating human reproductive research at the international level is very difficult because of some of those issues where, like you say, there can be quite significant differences in the ethical approaches taken by different countries. Again, in this case, I think what’s been interesting is: okay, if we’re going to come across difficulty in getting an agreement between states at the governmental level, are there things that the scientific community or other groups can do to make sure those debates are happening, and that some common ground is being found on how we should pursue research in these areas, and on when we should decide it’s safe enough to go down some of these lines?

I think another point about this case in China is that it’s just not known whether it’s safe to be doing gene editing on humans yet. That’s actually one of the reasons why people shouldn’t be doing it, regardless. I hope that goes some way toward an answer. I think it is very problematic that we often find we can’t get broad international agreement on things, even when there seems to be some level of consensus.

Ariel: We’ve been talking a lot about all of these issues from the perspective of biological sciences, but I want to step back and also look at some of these questions more broadly. There’s two sides that I want to look at. One is just this question of how do we enable scientists to basically get into policy more? I mean, how can we help scientists understand how policymaking works and help them recognize that their voices in policy can actually be helpful? Or, do you think that we are already at a good level there?

Catherine: I would say we’re certainly not at an ideal level yet of science and policy interaction. It does vary across different areas, of course. The thing that comes to mind is climate change, for example, where the intergovernmental panel does its reports every few years. There’s a good, collaborative, international evidence base and a good science-policy process in that area.

But in other areas there’s a big deficit I would say. I’m most familiar with that internationally, but I think some of this scales down to the national level as well. Part of it is going in the other direction almost. When I spoke earlier about needs perhaps for education and awareness raising among scientists about some of these issues around how their research may be used, I think there’s also a need for people in policy to become more informed about science.

That is important. I’m trying to think what ways there might be for scientists to do that. There are some attempts, when international negotiations are going on, to have what I’ve heard described as mini universities: maybe a week’s worth of quick updates on where the science is at, before a negotiation goes on that’s relevant to that science.

I think one of the key things to say is that there are ways for scientists and the scientific community to have influence both on how policy develops and how it’s implemented, and a lot of this will go through intermediary bodies, in particular the professional associations and academies that represent scientific communities. They will know when, for example, there is a consultation by parliament on how we should address a particular issue (I’m thinking in the UK context, but I think this is similar in the US).

There was one in the UK a couple of years ago on how we should be regulating genetically modified insects. If a consultation like that is going on and they’re asking for advice and evidence, there are often ways of channeling that through the academies. They can present statements that represent broader scientific consensus within their communities and feed that in.

The reason for mentioning them as intermediaries is that, again, it’s a lot of a burden to put on individual scientists to say, “You should all be getting involved in policy and informing policy, as another part of your role.” But realizing that you can do that as a collective, rather than it having to be an individual thing, I think is valuable.

Ariel: Yeah, there is the issue of, “Hey, in your free time, can you also be doing this?” It’s not like scientists have lots of free time. But one of the impressions I get is that scientists are sometimes a little concerned about getting involved with policymaking because they fear overregulation, and that it could harm their research and the good that they’re trying to do with it. Is this fear justified? Are scientists hampered by policies? Are they helped by policies?

Catherine: Yeah, so it’s both. It’s important to know that the mechanisms of policy can play facilitative roles, they can promote science, as well as setting constraints and limits on it. Again, most governments are recognizing that the life sciences and biology and artificial intelligence and other emerging technologies are going to be really key for their economic growth.

They are doing things to facilitate and support that, and fund it, so it isn’t only about the constraints. However, I guess for a lot of scientists, the way you come across regulation, you’re coming across the bits that are the constraints on your work, or there are things that make you fill in a lot of forms, so it can just be perceived as something that’s burdensome.

But I would also say that certainly something I’ve noticed in recent years is that we shouldn’t think that scientists and technology communities aren’t sometimes asking for areas to be regulated, asking for some guidance on how they should be managing risks. Switching back to a biology example, but with gene drive technologies, the communities working on those have been quite proactive in asking for some forms of, “How do we govern the risks? How should we be assessing things?” Saying, “These don’t quite fit with the current regulatory arrangements, we’d like some further guidance on what we should be doing.”

I can understand that there might be this fear about regulation, and, as you said, it could be a source of the reluctance to engage with policy. But an important thing to say there is that if you’re not engaging with policy, it’s more likely that the regulation is going to work in ways that, while not intentional, end up restricting scientific practice. I think that’s really important as well: maybe the regulation is created in a very well-intended way, and it just doesn’t match up with scientific practice.

I think at the moment, internationally this is becoming a discussion around how we might handle the digital nature of biology now, when most regulation is to do with materials. But if we’re going to start regulating the digital versions of biology, so gene sequencing information, that sort of thing, then we need to have a good understanding of what the flows of information are, in which ways they have value within the scientific community, whether it’s fundamentally important to have some of that information open, and we should be very wary of new rules that might enclose it.

I think that’s something again, if you’re not engaging with the processes of regulation and policymaking, things are more likely to go wrong.

Ariel: Okay. We’ve looked a lot at how scientists deal with the risks of their research, how policymakers can help scientists deal with the risks of their research, et cetera, but it’s all been about the risks coming from the research, from the technology, and from the advances. Something that you brought up in a separate conversation before the podcast is: to what extent does risk stem from technology, and to what extent can it stem from how we govern it? I was hoping we could end with that question.

Catherine: That’s a really interesting question to me, and I’m trying to work that out in my own research. One of the interesting and perhaps obvious things to say is it’s never down to the technology. It’s down to how we develop it, use it, implement it. The human is always playing a big role in this anyway.

But yes, I think a lot of the time governance mechanisms are perhaps lagging behind the development of science and technology, and I think some of the risk is coming from the fact that we may just not be governing something properly. I think this comes down to things we’ve been mentioning earlier. We need collectively both in policy, in the science communities, technology communities, and society, just to be able to get a better grasp on what is happening in the directions of emerging technologies that could have both these very beneficial and very destructive potentials, and what is it we might need to do in terms of really rethinking how we govern these things?

Yeah, I don’t have an answer for where the sources of risk are coming from, but I think that intersection between technology development and the development of regulation and governance is an interesting place to look.

Ariel: All right, well yeah, I agree. I think that is a really great question to end on, for the audience to start considering as well. Catherine, thank you so much for joining us today. This has been a really interesting conversation.

Catherine: Thank you.

Ariel: As always, if you’ve been enjoying the show, please take a moment to like it, share it, and follow us on your preferred podcast platform.

[end of recorded material]

Podcast: Can We Avoid the Worst of Climate Change? with Alexander Verbeek and John Moorhead

“There are basically two choices. We’re going to massively change everything we are doing on this planet, the way we work together, the actions we take, the way we run our economy, and the way we behave towards each other and towards the planet and towards everything that lives on this planet. Or we sit back and relax and we just let the whole thing crash. The choice is so easy to make, even if you don’t care at all about nature or the lives of other people. Even if you just look at your own interests and look purely through an economical angle, it is just a good return on investment to take good care of this planet.” – Alexander Verbeek

On this month’s podcast, Ariel spoke with Alexander Verbeek and John Moorhead about what we can do to avoid the worst of climate change. Alexander is a Dutch diplomat and former strategic policy advisor at the Netherlands Ministry of Foreign Affairs. He created the Planetary Security Initiative where representatives from 75 countries meet annually on the climate change-security relationship. John is President of Drawdown Switzerland, an act tank to support Project Drawdown and other science-based climate solutions that reverse global warming. He is a blogger at Thomson Reuters, The Economist, and sciencebasedsolutions.com, and he advises and informs on climate solutions that are economy, society, and environment positive.

Topics discussed in this episode include:

  • Why the difference between 1.5 and 2 degrees C of global warming is so important, and why we can’t exceed 2 degrees C of warming
  • Why the economy needs to fundamentally change to save the planet
  • The inequality of climate change
  • Climate change’s relation to international security problems
  • How we can avoid the most dangerous impacts of climate change: runaway climate change and a “Hothouse Earth”
  • Drawdown’s 80 existing technologies and practices to solve climate change
  • “Trickle up” climate solutions — why individual action is just as important as national and international action
  • What all listeners can start doing today to address climate change


You can listen to this podcast above, or read the full transcript below. And feel free to check out our previous podcast episodes on SoundCloud, iTunes, Google Play and Stitcher.

 

Ariel: Hi everyone, Ariel Conn here with the Future of Life Institute. Now, this month’s podcast is going live on Halloween, so I thought what better way to terrify our listeners than with this month’s IPCC report. If you’ve been keeping up with the news this month, you’re well aware that the report made very dire predictions about what a future warmer world will look like if we don’t keep global temperatures from rising more than 1.5 degrees Celsius. Then of course there were all of the scientists’ warnings that came out after the report about how the report underestimated just how bad things could get.

It was certainly enough to leave me awake at night in a cold sweat. Yet the report wasn’t completely without hope. The authors seem to still think that we can take action in time to keep global warming to 1.5 degrees Celsius. So to consider this report, the current state of our understanding of climate change, and how we can ensure global warming is kept to a minimum, I’m excited to have Alexander Verbeek and John Moorhead join me today.

Alexander is a Dutch environmentalist, diplomat, and former strategic policy advisor at the Netherlands Ministry of Foreign Affairs. Over the past 28 years, he has worked on international security, humanitarian, and geopolitical risk issues, and the linkage to the Earth’s accelerating environmental crisis. He created the Planetary Security Initiative held at The Hague’s Peace Palace where representatives from 75 countries meet annually on the climate change-security relationship. He spends most of his time speaking and advising on planetary change to academia, global NGOs, private firms, and international organizations.

John is President of Drawdown Switzerland, in addition to being a blogger at Thomson Reuters, The Economist, and sciencebasedsolutions.com. He advises and informs on climate solutions that are economy, society, and environment positive. He affects change by engaging on the solutions to global warming with youth, business, policymakers, investors, civil society, government leaders, et cetera. Drawdown Switzerland is an act tank to support Project Drawdown and other science-based climate solutions that reverse global warming, in Switzerland and internationally, by investment at scale in Drawdown solutions. So John and Alexander, thank you both so much for joining me today.

Alexander: It’s a pleasure.

John: Hi Ariel.

Ariel: All right, so before we get too far into any details, I want to just look first at the overall message of the IPCC report. That was essentially: two degrees warming is a lot worse than 1.5 degrees warming. So, I guess my very first question is why did the IPCC look at that distinction as opposed to anything else?

Alexander: Well, I think it’s a direct follow up from the negotiations in the Paris Agreement, where in a very late stage after the talk for all the time about two degrees, at a very late stage the text included the reference to aiming for 1.5 degrees. At that moment, it invited the IPCC to produce a report by 2018 about what the difference actually is between 1.5 and 2 degrees. Another major conclusion is that it is still possible to stay below 1.5 degrees, but then we have to really urgently really do a lot, and that is basically cut in the next 12 years our carbon pollution with 45%. So that means we have no day to lose, and governments, basically everybody, business and people, everybody should get in action. The house is on fire. We need to do something right now.

John: In addition to that, we’re seeing a whole body of scientific study that’s showing just how difficult it would be if we were to get to 2 degrees and what the differences are. That was also very important. Just for your US listeners, I just wanted to clarify because we’re going to be talking in degrees centigrade, so for the sake of argument, if you just multiply by two, every time you hear one, it’s two degrees Fahrenheit. I just wanted to add that.

Ariel: Okay great, thank you. So before we talk about how to address the problem, I want to get more into what the problem actually is. And so first, what is the difference between 1.5 degrees Celsius and 2 degrees Celsius in terms of what impact that will have on the planet?

John: So far we’ve already seen a one degree C increase. The impacts that we’re seeing, they were all predicted by the science, but in many cases we’ve really been quite shocked at just how quickly global warming is happening and the impacts it’s having. I live here in Switzerland, and we’re just now actually experiencing another drought, but in the summer we had the worst drought in eastern Switzerland since 1847. Of course we’ve seen the terrible hurricanes hitting the United States this year and last. That’s one degree. So 1.5 degrees increase, I like to use the analogy of our body temperature: If you’re increasing your body temperature by two degrees Fahrenheit, that’s already quite bad, but if you then increase it by three degrees Fahrenheit, or four, or five, or six, then you’re really ill. That’s really what happens with global warming. It’s not a straight line.

For instance, the difference between 1.5 degrees and two degrees is that heat waves are forecast to increase by over 40%. There was another study that showed that fresh water supply would decrease by 9% in the Mediterranean for 1.5 degrees, but it would decrease by 17% if we got to two degrees. So that’s practically doubling the impact for a change of 1.5 degrees. I can go on. If you look at wheat production, the difference between two and 1.5 degrees is a 70% loss in yield. Sea level rise would be 50 centimeters versus 40 centimeters, and 10 centimeters doesn’t sound like that much, but it’s a huge amount in terms of increase.

Alexander: Just to illustrate that a bit, if you have just a 10 centimeter increase, that means that 10 million extra people will be on the move. Or to put it another way, I remember when Hurricane Sandy hit New York and the subway flooded. At that moment we had had, and that's roughly where we are now, some 20 centimeters of sea level rise since the industrial revolution. If we didn't have those 20 centimeters, the subways would not have flooded. So it sounds like nothing, but it has a lot of impacts. I think another one that I saw that was really striking is the impact on nature, the impact on insects or on coral reefs. So if you have two degrees, there's hardly any coral reef left in the world, whereas if it were 1.5 degrees, we would still lose 70-90%, but there could still be some coral reefs left.

John: That’s a great example I would say, because currently it’s 50% of coral reefs at one degree increase have already died off. So at 1.5, we could reach 90%, and two degrees we will have practically wiped off all coral reefs.

Alexander: And the humanitarian aspects are massive. I mean, John just mentioned water. I think one of the things we will see in the next decade or two is a lot of water-related problems. The number of people that will not have access to water is increasing rapidly. It may double in the next decade. So any indication in the report of how many more problems we will see with water if we have that extra half degree is a very good warning. Think of the impact of not having enough water on people's quality of life, on people going on the move, on increased urbanization, on more tensions in the cities because there they also have trouble getting enough water, and of course water is related to energy and especially food production. So the humanitarian impact of just that extra half degree is massive.

Then one last thing here: we're talking about a global average. In some areas, if globally it gets, let's say, two degrees warmer, it will go much faster, in landlocked countries for instance, and in the Arctic it goes about twice as fast, with enormous impacts and the potential positive feedback loops we might end up with.

Ariel: That was something interesting for me to read. I’ve heard about how the global average will increase 1.5 to two degrees, but I hadn’t heard until I read this particular report that that can mean up to 3.5 degrees Celsius in certain places, that it’s not going to be equally distributed, that some places will get significantly hotter. Have models been able to predict where that’s likely to happen?

John: Yeah, and not only that, it’s already happening. That’s also one of the problems we face when we describe global warming in terms of one number, an average number, is that it doesn’t portray the big differences that we’re seeing in terms of global warming. For instance, in the case of Switzerland we’re already at a two degree centigrade increase, and that’s had huge implications for Switzerland already. We’re a landlocked country. We have beautiful mountains as you know, and beautiful lakes as well, but we’re currently seeing things that we hadn’t seen before, which is some of our lakes are starting to dry out in this current drought period. Lake levels have dropped very significantly. Not the major ones that are fed by glaciers, but the glaciers themselves, out of 80 glaciers that are tracked in Switzerland, 79 are retreating. They’re losing mass.

That’s having impacts, and in terms of extreme weather, just this last summer we saw these incredible – what Al Gore calls water bombs – that happened in Lausanne and Eschenz, two of our cities, where we saw centimeters, months worth of rain, fall in the space of just a few minutes. This is caused all sorts of damages as well.

Just a last point about temperature differences: in northern Europe this last summer, for instance, we saw four or five degrees warmer, which caused so much drying out that we saw forest fires that we hadn't seen before in places like Sweden or Finland and so on. We also saw in February of this year what the scientists call a temperature anomaly of 20 degrees, which meant that for a few days it was warmer at the North Pole than it was in Poland because of this temperature anomaly. Averages help us understand the overall trends, but they also hide differences that are important to consider as well.

Alexander: Maybe the term global warming is, let's say for the general public, not the right term, because it sounds a bit like "a little bit warmer," and if it's now two degrees warmer than yesterday, I don't care so much. Maybe "climate weirding" or "climate chaos" are better, because we will just get more extremes. Let's say you follow, for instance, how the jet stream is moving. It used to move rather quickly around the planet at the height where the jets like to fly, at about 10 kilometers. Now, because there's less temperature difference between the equator and the poles, it's getting slower. It's getting a bit lazy.

That means two things. On the one hand, once you have a certain weather pattern, it sticks around longer. On the other hand, this lazy jet stream, a bit like a river that enters the floodplains and starts to meander, makes bigger waves. Let's say it used to be that the jet stream brought cold air from Iceland to the Netherlands, where I'm from; since it is now wavier, it brings cold weather all the way from Greenland, and the same with warm weather. It comes from further down south and it sticks longer in that pattern, so you get longer droughts, you get longer periods of rain, it all gets more extreme. So a country like the Netherlands, which is a delta where we always deal with too much water, like many other countries in the world now experiences drought, which is something that we're not used to. We have to ask foreign experts how to deal with drought, because we always just tried to pump the water out.

John: Yeah I think the French, as often is the case, have the best term for it. It’s called dérèglement climatique which is this idea of climate disruption.

Ariel: I’d like to come back to some of the humanitarian impacts because one of the things that I see a lot is this idea that it’s the richer, mostly western but not completely western countries that are causing most of the problems, and yet it’s the poorer countries that are going to suffer the most. I was wondering if you guys could touch on that a little bit?

Alexander: Well, I think everything related to climate change comes down to the fact that it is unfair. It is created by countries that are generally less impacted, for now. It started, let's say, in western Europe with the industrial revolution, followed by the US, which took over; historically the US has produced the most. Then you have different groups of countries. Take a country in the Sahel like Burkina Faso, for instance: they contributed practically zero to the whole problem, but the impact falls much more on their side. Then there's a group of countries in between, say a country like China, that for a long time did not contribute much to the problem and is now rapidly catching up. Then you get this difficult "tragedy of the commons" behavior where everybody points at somebody else and at what they have done, and because they either did it in the past or do it now, everybody can use the statistics to their advantage, apart from these really, really poor countries that are getting the worst of it.

I mean, a country like Tuvalu is just disappearing. That's one of those low-lying island states in the Pacific. They contributed absolutely zero and their country is drowning. They can point at everybody else and nobody will point at them. So there is a huge call to recognize that this is an absolutely globalized problem that you can only solve by respecting each other, by cooperating, and by understanding that if you help other countries, it's not only your moral obligation but it's also in your own interest to help the others solve this.

John: Yeah. Your listeners would most likely also be aware of the sustainable development goals, which are the objectives the UN set for 2030. There are 17 of them. They include things like no poverty, zero hunger, health, education, gender equality, et cetera. If you look at who is being impacted by a 2 degree and a 1.5 degree world, then you can see that it’s particularly in the developing and the least developed countries that the impact is felt the most, and that these SDGs are much more difficult if not impossible to reach in a 2 degree world. Which again is why it’s so important for us to stay within 1.5 degrees.

Ariel: And so looking at this from more of a geopolitical perspective, in terms of trying to govern and address… I guess this is going to be a couple questions. In terms of trying to prevent climate change from getting too bad, what do countries broadly need to be doing? I want to get into specifics about that question later, but broadly for now what do they need to be doing? And then, how do we deal with a lot of the humanitarian impacts at a government level if we don’t keep it below 1.5 degrees?

Alexander: A broad answer would be two things: get rid of the carbon pollution that we're producing every day as soon as possible, so phase out fossil fuels. The other broad answer would be a parallel to what John was just talking about. We have the 2030 Agenda; we have those 17 sustainable development goals. If we all really followed that and lived up to it, we'd actually get a much better world, because all of these things are integrated. If you just look at climate change in isolation you are not going to get there. It's highly integrated with all those related problems.

John: Yeah, just in terms of what needs to be done broadly speaking, it’s the adoption of renewable energy, scaling up massively the way we produce electricity using renewables. The IPCC suggested there should be 85% and there are others that say we can even get to 100% renewables by 2050. The other side is everything to do with land use and food, our diet has a huge impact as well. On the one hand as Alexander has said very well, we need to cut down on emissions that are caused by industry and fossil fuel use, but on the other hand what’s really important is to preserve our natural ecosystems that protect us, and add forest, not deforest. We need to naturally scale up the capture of carbon dioxide. Those are the two pieces of the puzzle.

Alexander: Don’t want to go too much into details, but all together it ultimately asks for a different kind of economy. In our latest elections when I looked at the election programs, every party whether left or right or in the middle, they all promise something like, “when we’re in government, they’ll be something like 3% of economic growth every year.” But if you grow 3% every year, that means that every 20 years you double your economy. That means every 40 years you quadruple your economy, which might be nice if it will be only the services industry, but if you talk about production we can not let everything grow in the amount of resources that we use and the amount of waste we produce, when the Earth itself is not growing. So apart from moving to renewables, it is also changing the way how we use everything around and how we consume.

You don’t have to grow when you have it this good already, but it’s so much in the system that we have used the past 200, 250 years. Everything is based on growth. And as the Club of Romes said in the early ’70s, there’s limits to growth unless our planet would be something like a balloon that somebody would blow air in and it would be growing, then you would have different system. But as long as that is not the case and as long as there’s no other planets where we can fly to, that is the question where it’s very hard to find an answer. You can conclude that we can not grow, but how do we change that? That’s probably a completely different podcast debate, but it’s something I wanted to flag here because at the end of today you always end up with this question.

Ariel: This is actually, this is very much something that I wanted to come back to, especially in terms of what individuals can do, I think consuming less is one of the things that we can do to help. So I want to come back to that idea. I want to talk a little bit more though about some of the problems that we face if we don’t address the problem, and then come back to that. So, first going back to the geopolitics of addressing climate change if it happens, I think, again, we’ve talked about some of the problems that can arise as a result of climate change, but climate change is also thought of as a threat multiplier. So it could trigger other problems. I was hoping you could talk a little bit about some of the threats that governments need to be aware of if they don’t address climate change, both in terms of what climate change could directly cause and what it could indirectly cause.

Alexander: There’s so much we can cover here. Let’s start with security, it’s maybe the first one you think of. You’ll read in the paper about climate wars and water wars and those kind of popular words, which of course is too simplified. But, there is a clear correlation between changing climates and security.

We’ve seen it in many places. You see it in the place where we’re seeing more extreme weather now, so let’s say in the Sahel area, or in the Middle East, there’s a lot of examples where you just see that because of rising temperatures and because of less rainfall which is consistently going on now, it’s getting worse now. The combination is worse. You get more periods of drought, so people are going on the move. Where are they going to? Well normally, unlike many populists like to claim in some countries, they’re not immediately going to the western countries. They don’t go too far. People don’t want to move too far so they go to an area not too far away, which is a little bit less hit by this drought, but by the fact that they arrived there, they increased pressures on the little water and food and other resources that they have. That creates, of course, tensions with the people that are already there.

So think, for instance, about the nomadic herdsmen and the more agricultural farmers and the kind of tension between them. They all need a little bit of water, so you see a lot of examples. There's this well-known graph where you see the world's food prices over the past 10 years. There were two big spikes where the food prices, as well as the energy prices, suddenly went up rapidly. The most well known is in late 2010. If you then plot on that graph the revolutions and uprisings and unrest in the world, you see that as soon as the world food price gets above, let's say, 200, there is so much more unrest. The 2010 spike was soon followed by the Arab Spring, which is not an automatic connection: in some countries there was no unrest even though they had the same drought, so it's not a one-to-one connection.

So I think you used the right word of saying a threat multiplier. On top of all the other problems they have with bad governance and fragile economies and all kinds of other development aspects that you find back in those same SDGs that were mentioned, if you add to that the climate change problem, you will get a lot of unrest.

But let me add one last thing here. It's not just about security. There's also, for instance, the example of when Bangkok was flooding and a factory that produced chips was flooded. Chip prices worldwide suddenly rose by something like 10%, and there was a factory in the UK producing cars that were perfectly ready to sell; the only thing they were missing was an electronic chip a few centimeters big that needed to be in the car. So they had to close the factory for something like six weeks because of a flood in Bangkok. That just shows that in this interconnected worldwide economy we have, nowhere in the world are you safe from the impacts of climate change.

Ariel: I’m not sure if it was the same flood, but I think Apple had a similar problem, didn’t they? Where they had a backlog of problems with hard drives or something because the manufacturer, I think in Thailand, I don’t remember, flooded.

But anyway, one more problem that I want to bring up, and that is: at the moment we’re talking about actually taking action. I mean even if we only see global temperatures rise to two degrees Celsius, that will be because we took action. But my understanding is, on our current path we will exceed two degrees Celsius. In fact, the US National Highway Traffic Safety Administration Report that came out recently basically says that a 4 degree increase is inevitable. So I want to talk about what the world looks like at that level, and then also what runaway climate change is and whether you think we’re on a path towards runaway climate change, or if that’s still an extreme that hopefully won’t happen.

John: There’s a very important discussion that’s going on around at what point we will reach that tipping point where because of positive feedback loops, it’s just going to get worse and worse and worse. There’s been some very interesting publications lately that were trying to understand at what level that would happen. It turns out that the assessment is that it’s probably around 2 degrees. At the moment, if you look at the Paris Agreement and what all the countries have committed to and you basically take all those commitments which, you were mentioning the actions that already have been started, and you basically play them out until 2030, we would be on a track that would take us to 3 degrees increase, ultimately.

Ariel: And to clarify, that’s still with us taking some level of action, right? I mean, when you talk about that, that’s still us having done something?

John: Yeah, if you add up all the countries' plans that they committed to and they fully implement them, it's not sufficient. We would get to 3 degrees. But that's just to say how much action is required; we really need to step up the effort dramatically. That's basically what the 1.5 degrees IPCC report tells us. Suppose we were to get to 2 degrees, let's not even talk about 3 degrees for the moment. What could happen is that we would reach this tipping point into what scientists are describing as a "Hothouse Earth." What that means is that you get so much ice melting — now, the ice and snow serve an important protective function. Because they're white, they reflect a lot of the heat back out. If all that melts and is replaced by much darker land mass or ocean, then that heat is gonna be absorbed, not reflected. So that's one positive feedback loop that constantly makes it even warmer, and that melts more ice, et cetera.

Another one is the permafrost, where the permafrost, as its name suggests, is frozen in the northern latitudes. The risk is that it starts to melt. It’s not the permafrost itself, it’s all the methane that it contains, which is a very powerful greenhouse gas which would then get released. That leads to warmer temperatures which melts even more of the permafrost et cetera.

That’s the whole idea of runaway, then we completely lose control, all the natural cooling systems, the trees and so on start to die back as well, and so we get four, five, six … But as I mentioned earlier, 4 could be 7 in some parts of the world and it could be 2 or 3 in others. It would make large parts of the world basically uninhabitable if you take it to the extreme of where it could all go.

Ariel: Do we have ideas of how long that could take? Is that something that we think could happen in the next 100 years or is that something that would still take a couple hundred years?

John: Whenever we talk about the temperature increases, we’re looking at the end of the century, so that’s 2100, but that’s less than 100 years.

Ariel: Okay.

Alexander: The problem with looking at the end of the century, and it always comes back to "the end of the century," is that it sounds so far away, yet it's just 82 years. I mean, if you flip back, you're in 1936. My father was a boy of 10 years old; it's not that far away. My daughter might still live in 2100, but by that time she'll have children and maybe grandchildren that have to live through the next century. It's not that once we reach the year 2100 the problem suddenly stops. We're talking about an accelerating problem. If you stay on the business-as-usual scenario and you mitigate hardly anything, then it's 4 degrees at the end of the century, but the temperatures keep rising.

As we already said, 4 degrees at the end of the century is a kind of average. In the worst-case scenario, it might well be 6. It could also be less. And in the Arctic it could be anywhere between, let's say, 6 and maybe even 11. It's precisely the Arctic where you have this methane that John was just talking about, so we don't want to end up with some kind of Venus, you know. That is exactly the world we do not want. That makes it so extremely important to take measures now, because anything you do now is a fantastic investment in the future.

If you look at how we treat risks in other areas: Dick Cheney said a couple of years ago that if there's only a 1% chance that terrorists will get weapons of mass destruction, we should act as if they have them. Why don't we do that in this case? If there's only a 1% chance that we would get complete destruction of the planet as we know it, we have to take urgent action. So why act on the one risk that, however bad terrorism is, hardly kills people if you look at the big numbers, while for a potential massive killer of millions of people we just say, "Yeah, well, you know, there's only a 50% chance that we end up in this scenario or that scenario"?

What would you do if you were sitting in a plane and at takeoff the pilot says, "Hi guys. Happy to be on board. This is how you buckle and unbuckle your belt. And oh, by the way, we have a 50% chance that we're gonna make it today. Hooray, we're going to take off." Well, you would get out of the plane. But you can't get off this planet. So we have to take action urgently, and I think the report that came out is excellent.

The problem is, if you’re reading it a bit too much and everybody is focusing on it now, you get into this energetic mood like, “Hey. We can do it!” We only talk about corals. We only talk about this because suddenly we’re not talking about the three or four or five degree scenarios, which is good for a change because it gives hope. I know that in talks like this I always try to give as much hope as I can and show the possibilities, but we shouldn’t forget about how serious the thing is that we’re actually talking about. So now we go back to the positive side.

Ariel: Well I am all for switching to the positive side. I find myself getting increasingly cynical about our odds of success, so let’s try to fix that in whatever time we have left.

John: Can I just add briefly, Alex, because I think that's a great comment. It's something I'm also sometimes confronted with by fellow climate change folk: they come up to me, after they've heard me talk about what the solutions are, and they tell me, "Don't make it sound too easy either." But I think it's a question of balance. When we do talk about the solutions, and we'll hear about them, do bear in mind just how much change is involved. I mean, it is really very significant change that we need to embark on to avoid going past 1.5 degrees, or beyond.

Alexander: There’s basically two choices. We’re going to massively change everything we are doing on this planet, the way we work together, the actions we take, the way we run our economy, and the way we behave towards each other and towards the planet and towards everything that lives on this planet. Or we sit back and relax and we just let the whole thing crash. The choice is so easy to make, even if you don’t care at all about nature or the lives of other people. Even if you just look at your own interests and look purely through an economical angle, it is just a good return on investment to take good care of this planet.

The only reason we don't is that those who have so much political power are so closely connected to the big corporations that look for short-term profits; certainly not all of them, but the ones that are really influential, and I'm certainly thinking about the country of our host today. They have so much impact on the policies that are made, and their sole interest is just the next quarterly financial report that comes out. That is not in the interest of the people of this planet.

Ariel: So this is actually a good transition to a couple of questions that I have. I actually did start looking at the book Drawdown, which talks about, what is it, 80 solutions? Is that what they discuss?

John: Yeah, 80 existing solutions or technologies or practices, and then there’s 20 what they call coming attractions which would be in addition to that. But it’s the 80 we’re talking about, yeah.

Ariel: Okay, so I started reading that and I read the introduction and the first chapter and felt very, very hopeful. I started reading about some of the technologies and I still felt hopeful. Then as I continued reading it and began to fully appreciate just how many technologies have to be implemented, I started to feel less hopeful. And so, going back, before we talk too much about the specific technologies, I think as someone who’s in the US, one of the questions that I have is even if our federal government isn’t going to take action, is it still possible for those of us who do believe that climate change is an issue to take enough action that we can counter that?

John: That’s an excellent question and it’s a very apropos question as well. My take on this is I had the privilege of being at the Global Climate Action Summit in San Francisco. You’re living it, but I think it’s two worlds basically in the United States at the moment, at least two worlds. What really impressed me, however, was that you had people of all political persuasions, you had indigenous people, you had the head of the union, you had mayors, city leaders. You also had some country leaders as well who were there, particularly those who are gonna be most impacted by climate change. What really excited me was the number of commitments that were coming at us throughout the days of, one city that’s gonna go completely renewable and so on.

We had so many examples of those. And in particular, if you’re talking about the US, California, which actually if it was its own country would be the fifth economy I believe — they’re committed to achieving 100% renewable energy by 2050. There was also the mayor of Houston, for instance, who explained how quickly he wanted to also achieve 100% renewables. That’s very exciting and that movement I think is very important. It would be of course much much better to have nations’ leaders as well to fully back this, but I think that there’s a trickle-up aspect, and I don’t know if this is the right time to talk about exponential growth that can happen. Maybe when we talk about the specific solutions we can talk about just how quickly they can go, particularly when you have a popular movement around saving the climate.

A couple of weeks ago I was in Geneva. There was a protest there. Geneva is quite a conservative city actually. I mean, you've got some wonderful chocolate as you know, but also a lot of banks and so on. At the march there were, according to the organizers, 7000 people. It was really impressive to see that in Geneva, which is not that big a city. The year before, at the same march, there were 500. So we're increasing the numbers by more than a factor of 10, and I think that there are a lot of communities and citizens being affected who are saying, "I don't care what the federal government's doing. I'm gonna put a solar panel on my roof. I'm going to change my diet," because it's cheaper, it saves them money, it's much healthier, and it gives much more resilience when a hurricane comes around, for instance.

Ariel: I think now is a good time to start talking about what some of the solutions are. I wanna come back to the idea of trickle up, because I’m still gonna ask you guys more questions about individual action as well, but first let’s talk about some of the things that we can be doing now. What are some of the technological developments that exist today that have the most promise that we should be investing more in and using more?

John: What I perhaps wanted to do is just take a little step back, because the IPCC does talk about some very unpleasant things that could happen to our planet, but they also talk about the steps to stay within 1.5 degrees. Then there are some other plans we can discuss that also achieve that. So what does the IPCC tell us? You mentioned it earlier. First of all, we need to cut carbon dioxide and other greenhouse gas emissions in half every decade. That's something called the Carbon Law. It's very convenient, because you can define your objective and say, okay, every 10 years I need to cut emissions in half. That's number one.

Number two is that we need to go dramatically to renewables. There’s no other way, because of the emissions that fossil fuels produce, they will no longer be an option. We have to go renewable as quickly as possible. It can be done by 2050. There’s a professor at Stanford called Mark Jacobson who with an international team has mapped out the way to get to 100% renewables for 139 countries. It’s called The Solutions Project. Number Three has to do with fossil fuels. What the IPCC says is that there should be practically no coal being used in 2050. That’s where there are some differences.

Basically, as I mentioned earlier, on the one hand you have your emissions, and on the other hand you have this capture, the sequestration of carbon by soils and by vegetation. The two have to be in balance: one is putting CO2 into the air, and the other is taking it out. So obviously we need to favor sequestration. It's an area-under-the-curve problem. You have a certain budget that's associated with a given temperature increase. If you emit more, you need to absorb more. There's just no two ways about it.

The IPCC is actually in that respect quite conservative, because they’re saying there still will be coal around. Whereas there are other plans such as Drawdown and the Exponential Climate Action Roadmap, as well as The Solutions Project which I just mentioned, which get us to 100% renewables by 2050, and so zero emissions for sake of argument.

The other difference I would say with the IPCC is that you are faced with this tremendous problem of all the carbon dioxide we need to take out of the atmosphere, which is where the name Drawdown comes from: the term means drawing the carbon dioxide out of the atmosphere. There's this technology that's around, basically called energy crops: you grow crops for energy and then capture the carbon. That gives us a bit of an issue, because it encourages politicians to think that there's a magic wand we'll be able to use in the future to all of a sudden remove the carbon dioxide. I'm not saying that we may not very well have to get there; what I am saying is that we can get there with, for instance, Drawdown's 80 solutions.

Now in terms of the promise, the thing that I think is important is that the thinking has to evolve from the magic bullet syndrome that we all live every day, we always want to find that magic solution that’ll solve everything, to thinking more holistically about the whole of the Earth’s planetary system and how they interact and how we can achieve solutions that way.
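
To make the "Carbon Law" and the carbon-budget "area under the curve" John describes above a bit more concrete, here is a minimal illustrative sketch; the starting figure of roughly 40 GtCO2 per year is an approximate value for current global emissions and is an assumption added here, not a number taken from the conversation or the report:

```python
# Illustrative sketch of the "Carbon Law": halve emissions every decade.
# The ~40 GtCO2/yr starting point is approximate and purely illustrative.

start_year = 2020
annual_emissions = 40.0  # GtCO2 per year (assumed, roughly current levels)

cumulative = 0.0
for decade in range(4):  # the 2020s through the 2050s
    year = start_year + 10 * decade
    # Total emitted over the decade, assuming a flat rate within it
    cumulative += annual_emissions * 10
    print(f"{year}s: ~{annual_emissions:.0f} GtCO2/yr, "
          f"cumulative since {start_year}: ~{cumulative:.0f} GtCO2")
    annual_emissions /= 2  # the Carbon Law step: halve each decade

# The cumulative total is the "area under the curve" that a carbon budget
# constrains: a smaller budget means a lower starting rate, faster cuts,
# or more CO2 removed by sequestration.
```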

Alexander: Can I ask something, John? Is it fair to summarize that Drawdown, with its 80 technologies, relies completely on proven technology, whereas in the recent 1.5 report I have the impression that for practically every solution they come up with, they rely on technologies that are still unproven, still on the drawing board, or maybe tested only on a very small scale? Is there a difference between those two approaches?

John: Not exactly. I think there's actually a lot of overlap. A lot of the same solutions that are in Drawdown are in all climate plans, so we come back to the same set, which is actually very reassuring, because that's the way science works: it empirically tests and models all the different solutions. So what I always find very reassuring is that whenever I read different approaches, I look back at Drawdown and I say, "Okay yes, that's in the 80 solutions." So I think there is actually a lot of overlap. A lot of the IPCC is Drawdown solutions, but the IPCC works a bit differently, because the scientists have to work with governments in coming up with proposals, so there is a process of negotiation about how far we can take this, which scientists such as the Project Drawdown scientists are unfettered by.

They just go out and they look for what's best. They don't care if it's politically sensitive or not; they will say what they need to say. But I think the big area of concern is this famous bio-energy with carbon capture and storage (BECCS): these energy crops that you grow and whose carbon dioxide you then capture, so you actually are capturing carbon dioxide. On the one hand there's moral hazard, because politicians will say, "Okay, I'm just going to wait until BECCS comes round and that will solve all our problems." On the other hand, it poses some serious questions about the competition for land between producing crops for food and producing crops for energy.

Ariel: I actually want to follow up with Alexander’s question really quickly because I’ve gotten a similar impression that some of the stuff in the IPCC report is for technologies that are still in development. But my understanding is that the Drawdown solutions are in theory at least, if not in practice, ready to scale up.

John: They’re existing technologies, yeah.

Ariel: So when you say there’s a lot of overlap, is that me or us misunderstanding the IPCC report or are there solutions in the IPCC report that aren’t ready to be scaled up?

John: The approaches are a bit different. The approach that Drawdown takes is a bottom-up approach. They basically unleashed 65 scientists to go out and look for the best solutions. So they go out and they look at all the literature. And it just so happens that nuclear energy is one of them. It doesn't produce greenhouse gas emissions. It is a way of producing energy that doesn't cause climate change. A lot of people don't like that of course, because of all the other problems we have with nuclear. But let me just reassure you very quickly that there are three scenarios for Drawdown. It goes from so-called "Plausible," which I don't like as a name because it suggests that the other ones might not be plausible, but it's the most conservative one. Then the second one is "Drawdown." Then the third one is "Optimum."

Optimum doesn’t include solutions that are called with regrets, such as nuclear. So when you go optimum, basically it’s 100% renewable. There’s no nuclear energy in there either in the mix. That’s very positive. But in terms of the solutions, what they look at, what IPCC looks at is the trajectory that you could achieve given the existing technologies. So they talk about renewables, they talk about fossil fuels going down to net zero, they talk about natural climate solutions, but perhaps they don’t talk about, for instance, educating girls, which is one of the most important Drawdown solutions because of the approach that Drawdown takes where they look at everything. Sorry, that’s a bit of a long answer to your question.

Alexander: That’s actually part of the beauty of Drawdown, that they look so broadly, that educating girls… So a girl leaving school at 12 got on average like five children and a girl that you educate leaving school at the age of 18 on average has about two children, and they will have a better quality of life. They will put much less pressure on the planet. So this more holistic approach of Drawdown I like very much and I think it’s good to see so much overlap between Drawdown and IPCC. But I was struck by IPCC that it relies so heavily on still unproven technologies. I guess we have to bet on all our horses and treat this a bit as a kind of wartime economy. If you see the creativity and the innovation that we saw during the second World War in the field of technology as well as government by the way, and if you see, let’s say, the race to the moon, the amazing technology that was developed in such a short time.

Once you really dedicate all your knowledge and your creativity and your finances and your political will into solving this, we can solve this. That is what Drawdown is saying and that is also what the IPCC 1.5 is saying. We can do it, but we need the political will and we need to mobilize the strengths that we have. Unfortunately, when I look around worldwide, the trend is in many countries exactly the opposite. I think Brazil might soon be the latest one that we should be worried about.

John: Yeah.

Ariel: So this is, I guess where I’m most interested in what we can do and also possibly the most cynical, and this comes back to this trickle up idea that you were talking about. That is, we don’t have the political will right now. So what do those of us who do have the will do? How do we make that transition of people caring to governments caring? Because I do, maybe this is me being optimistic, but I do think if we can get enough people taking individual action, that will force governments to start taking action.

John: So trickle up, grassroots, I think we're talking about the same sort of idea. I think it's really important to talk a little bit, and then we will get into the solutions, about these not just as solutions to global warming but to a lot of other problems as well, such as air pollution, our health, and the pollution that we see in the environment. And actually, Alexander, you were talking earlier about the huge transformation. But transformation does not necessarily always have to mean sacrifice. It doesn't have to mean only giving things up, although some of that is certainly a good idea: to fly less, for instance, and I think you were going to ask a question about flying, there's no doubt about that; or to perhaps not buy the 15th set of clothes, and so on and so forth.

So there certainly is an element of that, although the positive side of that is the circular economy. In fact, with these solutions, it's not a question of no growth or less growth, but a question of different growth. I think in terms of the discussion on climate change, one mistake that we have made is to emphasize the "don't do this" too much. I think that's also what's really interesting about Drawdown: there are no real judgments in there. They're basically saying, "These are the facts." If you have a plant-based diet, you will have a huge impact on the climate versus if you eat steak every day, right? But it's not making a judgment. Rather than "don't eat meat," it's saying "eat plant-based foods."

Ariel: So instead of saying don’t drive your car, try to make it a competition to see who can bike the furthest each week or bike the most miles?

John: For example, yeah. Or consider buying an electric car if you absolutely have to have a car. I mean in the US it’s more indispensable than in Europe.

Alexander: It means in the US that when you build new cities, try to build them in a more clever way than the US has been doing up until now because if you’re in America and you want to buy whatever, a new toothbrush, you have to get in your car to go there. When I’m in Europe, I just walk out of the door and within 100 meters I can buy a toothbrush somewhere. I walk or I go on a bicycle.

John: That might be a longer-term solution.

Alexander: Well actually it’s not. I mean in the next 30 years, the amount of investment they can place new cities is an amount of 90 trillion dollars. The city patterns that we have in Europe were developed in the Middle Ages in the centers of cities, so although it is urgent and we have to do a lot of things, you should also think about the investments that you make now that will be followed for hundreds of years. We shouldn’t keep repeating the mistakes from the past. These are the kinds of things we should also talk about. But to come back to your question on what we can do individually, I think there is so much that you can do that helps the planet.

Of course, you’re only one out of seven billion people, although if you listen to this podcast it is likely that you are in that elite out of that seven billion that is consuming much more of the planet, let’s say, than your quota that you should be allowed to. But it means, for instance, changing your diet, and then if you go to a plant-based diet, the perks are not only that it is good for the planet, it is good for yourself as well. You live longer. You have less chance of developing cancer or heart disease or all kinds of other things you don’t want to have. You will live longer. You will have for a longer time a healthier life.

It means actually that you discover all kinds of wonderful recipes that you had never heard of before, when you were still eating steak every day, and it is actually a fantastic contribution for the animals that are tortured daily, on an unimaginable scale, all over the world, locked up in small cages. You don't see it when you buy meat at the butcher, but you are responsible, because they do that because you are the consumer. So stop doing that. Better for the planet. Better for the animals. Better for yourself. The same with using your bicycle and walking more. I still have a car. It is 21 years old. It's the only car I ever bought in my life, and I use it a maximum of 20 minutes per month. I'm not even buying an electric vehicle, because I've still got an old one. There's a lot that you can do, and it has more advantages than just those to the planet.

John: Absolutely. Actually, walkable cities is one of the Drawdown solutions. Maybe I can mention very quickly: there was a very interesting study that showed that out of the 80 solutions, there are 30 that we could put into place today, and that those add up to about 40% of the greenhouse gases that we'll be able to remove.

I’ll just list them quickly. The ones at the end, they’re more, if you are in an agricultural setting, which of course is probably not the case for many of your listeners. But: reduced food waste, plant-rich diets, clean cookstoves, composting, electric vehicles we talked about, ride sharing, mass transit, telepresence (basically video conferencing, and there’s a lot of progress being made there which means we perhaps don’t need to take that airplane.) Hybrid cars, bicycle infrastructure, walkable cities, electric bicycles, rooftop solar, solar water (so that’s heating your hot water using solar.) Methane digesters (it’s more in an agricultural setting where you use biomass to produce methane.) Then you have LED lighting, which is a 90% gain compared to incandescent. Household water saving, smart thermostats, household recycling and recyclable paper, micro wind (there are some people that are putting a little wind turbine on their roof.)

Now these next ones have to do with agriculture, so they're things like silvopasture, tropical staple trees, tree intercropping, regenerative agriculture, farmland restoration, managed grazing, farmland irrigation and so on. If you add all those up, it's already 37% of the solution. I suspect that the 20 is probably a good 20%. Those are things you can do tomorrow — today.

Ariel: Those are helpful, and we can find those all at drawdown.org; that’ll also list all 80. So you’ve brought this up a couple times, so let’s talk about flying. This was one of those things that really hit home for me. I’ve done the carbon footprint thing and I have an excellent carbon footprint right up until I fly and then it just explodes. As soon as I start adding the footprint from my flights it’s just awful. I found it frustrating that one, so many scientists especially have … I mean it’s not even that they’re flying, it’s that they have to fly if they want to develop their careers. They have to go to conferences. They have to go speak places. I don’t even know where the responsibility should lie, but it seems like maybe we need to try to be cutting back on all of this in some way, that people need to be trying to do more. I’m curious what you guys think about that.

Alexander: Well, start by paying tax, for instance. Why is it — well, I know why it is — but it's absurd that when you fly in an airplane you don't pay tax. You can fly all across Europe for like 50 euros or 50 dollars. That is crazy. If you did the same by car, you would pay tax on the petrol that you buy. And worse, you are not charged for the pollution that you cause. We know that airplanes are heavily polluting; it's not only the CO2 that they produce, but where and how they produce it, so it works three to four times faster than the same CO2 you would produce driving your car. So we know how bad it is; then make people pay for it. Just make flying more expensive. Pay for the carbon you produce. When I produce waste at home, I pay my municipality, because they pick it up and they have to take care of my garbage, but if I put garbage in the atmosphere, somehow I don't pay for it. Actually, in all sorts of strange ways it's effectively subsidized, because you don't pay tax on it; worldwide there are something like five or six times as many subsidies for fossil fuels as there are for renewables.

We completely have to change the system. Give people a budget, maybe. I don't know, there could be many solutions. You could say that everybody has the right to a certain budget for flying or for carbon, and you can maybe trade that or swap it or whatever. There are some NGOs that do this, I think the World Wildlife Fund, but correct me if I'm wrong: all the people working there get not only a budget for their projects, they also get a carbon budget. You just have to choose: am I going to this conference or to that conference, or should I take the train? You just keep track of what you are doing. That's something we should maybe roll out on a much bigger scale, and make flying more expensive.

John: Yeah, the whole idea of a carbon tax, I think, is key. I think that's really important. Some other thoughts: definitely reduce. Do you really, absolutely need to make that trip? Think about it. Now, with webcasting and video conferencing, we can do a lot more without flying. The other thing I suggest is that when at some point you absolutely do have to travel, try to combine it with as many other things as possible, perhaps things that are not directly professional. If you are already in the climate change field, then at least you're traveling for a reason. Then it's a question of the offsets. Using calculators you can see what the emissions were and pay for what's called an offset. That's another option as well.

Ariel: I’ve heard mixed things about offsets. In some cases I see that yes, you should absolutely buy them, and you should. If you fly, you should get them. But that in a lot of cases they’re a bandaid or they might be making it seem like it’s okay to do this when it’s still not the solution. I’m curious what your thoughts on that are.

John: For me, something like an offset should as much as possible be a last resort. You absolutely have to make the trip, it's really important, and so you offset it. You pay for some trees to be planted in the rainforest, for instance; there are loads of different possibilities for doing so. But it's not a good idea in general. Unfortunately, Switzerland's plan, for instance, includes a lot of getting others to reduce emissions. You can argue that it's cheaper to do it that way, that somebody else might do it more cheaply for you, so to speak: it's cheaper to plant a tree, and it'll have more impact, in the rainforest than in Switzerland. But it's something I think we really have to avoid, also because in the end the green economy is where the future lies and what we need to transform into. If we're constantly getting others to do the decarbonization for us, then we'll be stuck with an industry that will ultimately become very expensive. That's not a good idea either.

Alexander: I think the prices are also absolutely unrealistic. If you fly, let's say, from London to New York, your personal share, just the fact that you were in the plane, not all the other people, is responsible for three square meters of the Arctic melting. You can offset that by paying something like, what is it, 15 or 20 dollars for that flight. That makes ice in the Arctic extremely cheap: a square meter would be worth something like seven dollars. Well, I personally believe that it's worth much more.

Then the thing is, then they’re going to plant a tree that takes a lot of time to grow. By the time it’s big, it’s getting CO2 out of the air, are they going to cut it and make newspapers out of it which you then burn in a fireplace, the carbon is still back to where it was. So you need to really carefully think what you’re doing. I feel it is very much a bit like going to a priest and say like, “I have flown. Oh, I have sinned, but I can now do a few prayers and I pay these $20 and now it’s fine. I can book my next flight.” That is not the way it should be. Punish people up front to pay the tickets. Pay the price for the pollution and for the harm that you are causing to this planet and to your fellow citizens on this planet.

John: Couldn’t agree more. But there are offset providers in the US, look them up. See which one you like the best and perhaps buy more offsets. Economy is half the carbon than Business class, I hate to say.

Alexander: Something you mentioned there: I decided long ago, six, seven years ago, that I would never ever fly business again in my life. Even as somebody who has had a thrombosis and whose doctors advised me to take business, I don't. I still fly. I'm very much like Ariel in that my footprint is okay until the moment I start adding flying, because I do that a lot for my job. Let's say in the next few weeks I have a meeting in the Netherlands, and only 20 days later a meeting in England. I stay in the Netherlands, and in between I do all my travel to Belgium and France and the UK by train. It's only the trip back from London to Stockholm that I do by plane, because I couldn't find any reasonable way to go back. I wonder why we don't have high-speed train connections all the way up to Stockholm here.

Ariel: We talked a lot about taxing carbon. I had an interesting experience last week. I'm doing what I can to try to not drive if I'm in town; I'm trying to either bike or take the bus. What often happens is that works great until I'm running late for something, and then I just drive because it's easier. But the other week, I was giving a little talk on the campus at CU Boulder, and the parking at CU Boulder is just awful. There is absolutely no way, no matter how late I'm running, that it's more convenient for me to take my car. It never even once dawned on me to take the car. I took a bus. It's that much easier. I thought that was really interesting, because I don't care how expensive you make gas or parking, if I'm running late I'm probably gonna pay for it. Whereas if you make it so inconvenient that it just makes me later, I won't do that. I was wondering, how can we do other things like that, where there's also this inconvenience factor?

Alexander: Have a look at Europe. Well coincidentally I know CU Boulder and I know how difficult the parking is. That’s the brilliance of Boulder where I see a lot of brilliant things. It’s what we do in Europe. I mean one of the reasons why I never ever use a car in Stockholm is that I have no clue how or where to park it, nor can I read the signs because my Swedish is so bad. I’m afraid of a ticket. I never use the car here. Also because we have such perfect public transport. The latest thing they have here is the VOI that just came out like last month, which is, I don’t know the word, we call it “step” in Dutch. I don’t know what you call that in English, whether it’s the same word or not, but it’s like these two-wheeled things that kids normally have. You know?

Here they are now electric, so you download an app on your mobile phone, and you see them in the street because they're everywhere now. You type in a code and it unlocks, and then it starts counting your time, so for every minute you pay something like 15 cents. All these little electric things are everywhere, left free-floating: you just ride all around town and drop them wherever you like. When you need one, you look at your app and the app shows you where the nearest one is. It's an amazing way of getting around, and just a month ago you saw only one or two; now they are everywhere. You're on the street, you see one. It's an amazing new mode of transport. It's very popular, and it just runs on electricity. It makes everywhere in the city so much easier to reach, because you go at least twice as fast as walking.

John: There was a really interesting article in The Economist about parking. Do you know how many parking spots The Shard, the brand new building in London, the skyscraper has? Eight. The point that’s being made in terms of what you were just asking about in terms of inconvenience, in Europe it just really, in most cases it really doesn’t make any sense at all to take a car into the city. It’s a nightmare.

Before we talk more about personal solutions, I did want to make some points about the economics of all these solutions, because what’s really interesting about Drawdown as well is that they looked at both what you would save and what it would cost you to achieve those savings over the 30 years that you would have those solutions in place. They came up with figures which at first sight are really quite surprising: you would save 74.4 trillion dollars for an investment, or net cost, of 29.6 trillion.
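
To put those Drawdown figures in context, here is a minimal arithmetic sketch of the net benefit and benefit-to-cost ratio they imply; the two input numbers are the ones John quotes, and everything else is simple subtraction and division.

```python
# Rough arithmetic implied by the Drawdown figures quoted above
# (trillions of US dollars over roughly 30 years; illustrative only).
savings = 74.4    # projected operational savings from the solutions
net_cost = 29.6   # projected net cost of putting the solutions in place

net_benefit = savings - net_cost          # about 44.8 trillion dollars
benefit_cost_ratio = savings / net_cost   # about 2.5 dollars saved per dollar spent

print(f"Net benefit: ~${net_benefit:.1f} trillion")
print(f"Benefit-to-cost ratio: ~{benefit_cost_ratio:.1f}x")
```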

Now, that’s not for all the solutions, so it’s not exactly that, and for some of the solutions it’s very difficult to estimate. For instance, the value of educating girls: I mean, it’s inestimable. But the point that’s also made is that if you look at The Solutions Project, Professor Jacobson, they also looked at savings, but at other savings that I think are much more interesting and much more important as well. You would see a net increase of over 24 million long-term jobs, and an annual decrease of four to seven million air pollution deaths per year.

You would also see the stabilization of energy prices, because think of how the price of oil swings from one day to the next, and annual savings of over 20 trillion in health and climate costs. Which comes back to this: when you’re implementing those solutions, you are saving money, but more importantly you are also saving people’s lives, the tragedy of the commons, right? So I think it’s really important to think about those solutions. We know very well why we are still using fossil fuels: it’s because of the massive subsidies and support that they get, and the fact that vested interests are going to defend their interests.

I think that’s really important to think about in terms of those solutions. They are becoming more and more possible. Which leads me to the other point that I’m always asked about, which is: it’s not going fast enough, we’re not seeing enough renewables, why is that? Because even though we don’t tax fuel, as you mentioned Alexander, we’ve now produced so many solar panels that the cost has come down a lot, and it’ll get cheaper and cheaper. That’s linked to this whole idea of exponential growth or tipping points, where all of a sudden all of us start to have a solar panel on our roof, where more and more of us become vegetarians.

I’ll just tell you a quick anecdote on that. We had some out of town guests who absolutely wanted to go to actually a very good steakhouse in Geneva. So along we went. We didn’t want to offend them and say “No, no, no. We’re certainly not gonna go to a steakhouse.” So we went along. It was a group of seven of us. Imagine the surprise when they came to take our orders and three out of seven of us said, “I’m afraid we’re vegetarians.” It was a bit of a shock. I think those types of things start to make others think as well, “Oh, why are you vegetarian,” and so on and so forth.

That sort of reflection means that certain business models are gonna go out of business, perhaps much faster than we think. On the more positive side, there are gonna be many more vegetarian restaurants, you can be sure, in the future.

Ariel: I want to ask about what we’re all doing individually to address climate change. But Alexander, one of the things that you’ve done that’s probably not what just a normal person would do, is start the Planetary Security Initiative. So before we get into what individuals can do, I was hoping you could talk a little bit about what that is.

Alexander: That was not so much as an individual. I was at Yale University for half a year when I started this, and then, when I came back to the Ministry of Foreign Affairs for one more year, I had some ideas and I got support from the ministers for doing that: bringing together the experts around the world who work on the impact that climate change will have on security. The idea was to create an annual meeting where all these experts come together, because that didn’t exist yet, and to encourage more scientists and researchers to study how this relationship works. But more importantly, the idea was also to connect the knowledge and insights of these experts on how the changing climate, and the impact it has on water and food and our changing planetary conditions, is affecting geopolitics.

I have a background both in security and in environment. Those used to be two completely different tracks that weren’t really interacting, but the more I worked on those two things, the more I saw that the changing environment is directly impacting our security situation. It’s already happening, and you can be pretty sure that the impact is going to be much bigger in the future. So we started with a meeting in the Peace Palace in The Hague. Some 75 countries were present the first time, along with the key experts in the world, and it’s now an annual meeting. For anybody who’s interested, contact me and I will provide you with the right contacts. It is growing now into all kinds of other initiatives, other involvement, and more studies that are taking place.

So the issue is really taking off, and that is mainly because more and more people see the need for better insights into the impact that all of these changes we’ve been discussing will have on security, whether that’s the human security of individuals or geopolitical security. Imagine that when so much is changing, when economies are changing so rapidly, when people’s interests change and when people start going on the move, tensions will rise for a number of reasons, partly related to climate change; very often climate change takes an already fragile situation and makes it worse. So that is the Planetary Security Initiative. The government of the Netherlands has been very strong on this, working closely together with some other governments. Sweden, for instance, where I’m living, has in the past year been focusing very much on strengthening the United Nations, so that you would have experts at the relevant high level in New York who can connect the dots, connect the people and the issues, and not just raise awareness but make sure these issues are taken into account in the policies that are made, because you had better deal with them up front rather than repair the damage afterwards.

It’s a rapidly developing field. There are new approaches, for instance using AI and data. I think the World Resources Institute in Washington is very good at that: they combine, let’s say, the geophysical data, satellite and other data on increasing drought in the world, but also deforestation and other resource issues, and they are now connecting that with the geopolitical impacts, using AI to combine all these completely different databases. You get much better insight into where the risks really are, and I believe that in the years to come WRI, in combination with several other think tanks, can do brilliant work delivering the kind of insights the world is really waiting for. International policies will be so much more effective if you know much better where the problems are really going to hit first.

Ariel: Thank you. All right, so we are starting to get a little bit short on time, and I want to finish the discussion with things that we’ve personally been doing. I’m gonna include myself in this one because I think the more examples the better. So, what have we personally been doing to change our lifestyles for the better, not as a sacrifice but for the better, to address climate change? And also, to keep us all human, where are we failing and wish we were doing better?

I can go ahead and start. I am trying not to use my car in town; I’m trying to stick to biking or taking public transportation. I have dropped the temperature in our house by another degree, so I’m wearing more sweaters. I’m going to try to be stricter about flying: I’ll only fly if I feel I will actually have a good impact on the world, or for a family emergency, things like that.

I’m pretty sure our house is on wind power. I work remotely, so I work from home and don’t have to travel for work. I think those are some of the big things, and as I said, flying is still a problem for me, so that’s something I’m working on. Food is also an issue for me. I have lots of food issues, so cutting out meat isn’t something that I can do, but I am trying to buy most of my food, including meat, from local farms where they’re taking better care of the animals as well. So hopefully that helps a little bit. I’m also just trying to cut back on my consumption in general: trying not to buy as many things, and if I do buy things, trying to get them from companies that are more environmentally conscious. So I think food and flying are where I’m failing a little bit, but I think that’s everything on my end.

Alexander: I think one of the big changes I made is that I became vegetarian years ago, for a number of good reasons, and I am now practically vegan; sometimes when I travel it’s a bit too difficult. I hardly ever use the car. I guess it’s just five or six times a year that I actually use my car; I use bicycles and public transport. The electricity at our home is all wind power. In the Netherlands that’s relatively easy to arrange nowadays, there are a lot of offers for it, so I deliberately buy wind power, and I did so even in the times when wind power was still more expensive than other power. I also think about consumption: when I buy food, I try to buy more local food. There’s the occasional kiwi, which always makes me wonder how it arrives in Europe, but that’s another thing that you can think of. Apart from flying, I really do my best with my footprint. Flying is the difficult thing, because with my work, I need to fly. It is about personal contacts, it is about meeting a lot of people, it’s about teaching.

I do teach online. I use Skype for teaching to classrooms and I do many Skype conferences all the time, but yes, I’m still flying. I refuse to fly business class; I started that some six, seven years ago. Just today a business class ticket was offered to me for a very long flight and I refused it. I said I will fly economy. But yes, the flying is what adds to my footprint. I try to combine trips, I try to stay longer at a certain place and then go by train to all kinds of other places. But when you’re based here in Stockholm, it’s quite difficult to get here by any means other than flying. Once I’m, let’s say, in the Netherlands or Brussels or Paris or London or Geneva, I can do all those things by train, but it gets a bit more difficult out here.

John: Pretty much the same as Alexander, except that I’m very local. I actually travel very little and I keep the travel down. If I do have to travel, I have managed to do seven-hour trips by train. That’s a possibility in Europe, but that only gets you to about the middle of Germany. The other thing is that I’ve become vegetarian recently. I’m pretty close to vegan, although it’s difficult with such good cheese as we have in this country. But the way it came about is interesting as well. It’s not just me; it’s myself, my wife, my daughter, and my son. The third child is never gonna become vegetarian, I don’t think. But that’s not bad, four out of five.

In terms of what I think you can do, this also points to contributions we perhaps don’t think about: being a voice vis-à-vis others in our own communities, and explaining why you do what you do in terms of biking and so on and so forth. I think that really encourages others to do the same, and it can grow a lot like that. In that vein, I teach as much as I can to high school students. I talk to them about Drawdown, I talk to them about solutions and so on. They get it. They are very, very switched on about this, and I really enjoy that. You really see it: it’s their future, it’s their generation. They don’t have very much choice, unfortunately. On a more positive note, I think they can really take it away in terms of a lot of actions which we haven’t done enough of.

Ariel: Well, I wanted to mention this stuff because, going back to your idea of trickle up, I’m still hopeful that if people take action, that will start to force governments to act. One final question on that note: did you guys find yourselves struggling with any of these changes, or did you find them pretty easy to make?

Alexander: I think all of them were easy: switching your energy to wind power, et cetera, buying more consciously. It comes naturally. I was already vegetarian, and for moving to vegan, you just go online and read about it and how to do it. I remember when I was a kid that hardly anybody was vegetarian. I once discussed it with my mother and she said, “Oh, it’s really difficult because then you need to totally balance your food and be in touch with your doctor,” whatever. I’ve never spoken to any doctor. I just stopped eating meat, and years ago I swore off all dairy. I’ve never been ill, I don’t feel ill; actually, I feel better. It is not complicated. The rather complicated thing is flying. There I sometimes have to make difficult choices, like being away from home for a long time; I’ve saved quite a bit of flying that way. That’s sometimes more complicated, or, like soon, I’ll be on a nearly eight-hour train ride for something I could have flown in an hour.

John: I totally agree. I mean I enjoy being in a train, being able to work and not be worried about some truck running into you or the other foibles of driving which I find very very … I’ve got to a point where I’m becoming actually quite a bad driver. I drive so little that, I hope not, but I might have an accident.

Ariel: Well, fingers crossed that doesn’t happen. And good, that’s been my experience so far too. The changes that I’ve been trying to make haven’t been difficult, and I hope that’s an important point for people to realize. Anything else either of you want to add?

Alexander: I think there’s just one thing that we didn’t touch on regarding what you can do individually, and it’s perhaps the most important one for us in democratic countries: vote. Vote for the party that best takes care of our long-term future, a party that aims to take the right climate change measures rapidly, a party that wants to invest in a new economy and sees that if you invest now, you can be a leader later.

In some countries you have a lot of parties and there are all kinds of nuances. In other countries you have to deal with basically two parties, where one party is absolutely denying science, doing exactly the wrong things, and basically aiming to ruin the planet as soon as possible, whereas the other party is actually looking for solutions. Well, if you live in a country like that, and there coincidentally happen to be elections coming up soon, vote for the party that takes the best positions on this, because it is about the future of your children. It is the single most important and influential thing that you can do, certainly if you live in a country whose emissions are still among the highest in the world. Vote. Take people with you to do it.

Ariel: Yeah, so to be more specific about that: as I mentioned at the start of this podcast, it’s coming out on Halloween, which means that in the US, elections are next week. Please vote.

John: Yeah. Perhaps something else is how you invest, where your money is going. That’s one that can have a lot of impact as well. All I can say is, I hate to come back to Drawdown, but go through Drawdown and think about your investments and say: okay, renewables, whether it’s LEDs or whatever technology it is, if it’s in Drawdown, make sure it’s in your investment portfolio. If it’s not in there, you might want to get out of it, particularly the things that we already know are causing the problem in the first place.

Ariel: That’s actually a good reminder. That’s something that has been on my list of things to do. I know I’m guilty of not investing in the proper companies at the moment, and that’s something I’ve been wanting to fix.

Alexander: And tell your pension funds: divest from fossil fuels and invest in renewables and all kinds of good things that we need in the new economy.

John: But not necessarily because you’re doing it as a charitable cause; really, these are the businesses of the future. We talked earlier about the growth that these different businesses can see. Another factor that’s really important is efficiency. For instance, I’m sure you have heard of The Impossible Burger, a plant-based burger. Now, what do you think is the difference in the amount of cropland required to produce a beef burger versus an Impossible Burger?

Alexander: I would say one in 25 or one in 35, somewhere in that range.

John: Yeah, it’s about one in 20. The thing is that when you look at that kind of gain in efficiency, it’s just a question of time: a cow simply can’t compete. You have to cut down the trees to grow the animal feed that you ship to the cow, that the cow then eats, and then you have to wait a number of years; that’s where the factor-of-20 difference in efficiency comes from. Our capitalist economic system doesn’t like inefficient systems. You can try to make that cow as efficient as possible, but you’re never going to be able to compete with a plant-based burger. Anybody who thinks that the plant-based burger isn’t going to displace the meat burger should really think again.

Ariel: All right, I think we’re ending on a nice hopeful note. So I want to thank you both for coming on today and talking about all of these issues.

Alexander: Thanks Ariel. It was nice to talk.

John: Thank you very much.

Ariel: If you enjoyed this podcast, please take a moment to like it and share it, and maybe even leave a positive review. And of course, if you haven’t already, please follow us. You can find the FLI podcast on iTunes, Google Play, SoundCloud, and Stitcher.

[end of recorded material]

Podcast: Martin Rees on the Prospects for Humanity: AI, Biotech, Climate Change, Overpopulation, Cryogenics, and More

How can humanity survive the next century of climate change, a growing population, and emerging technological threats? Where do we stand now, and what steps can we take to cooperate and address our greatest existential risks?

In this special podcast episode, Ariel speaks with Martin Rees about his new book, On the Future: Prospects for Humanity, which discusses humanity’s existential risks and the role that technology plays in determining our collective future. Martin is a cosmologist and space scientist based at the University of Cambridge. He has been director of The Institute of Astronomy and Master of Trinity College, and he was president of The Royal Society, which is the UK’s Academy of Science, from 2005 to 2010. In 2005 he was also appointed to the UK’s House of Lords.

Topics discussed in this episode include:

  • Why Martin remains a technical optimist even as he focuses on existential risks
  • The economics and ethics of climate change
  • How AI and automation will make it harder for Africa and the Middle East to economically develop
  • How high expectations for health care and quality of life also put society at risk
  • Why growing inequality could be our most underappreciated global risk
  • Martin’s view that biotechnology poses greater risk than AI
  • Earth’s carrying capacity and the dangers of overpopulation
  • Space travel and why Martin is skeptical of Elon Musk’s plan to colonize Mars
  • The ethics of artificial meat, life extension, and cryogenics
  • How intelligent life could expand into the galaxy
  • Why humans might be unable to answer fundamental questions about the universe

Books and resources discussed in this episode include:

  • On the Future: Prospects for Humanity, by Martin Rees
  • Collapse, by Jared Diamond

You can listen to the podcast above and read the full transcript below. Check out our previous podcast episodes on SoundCloud, iTunes, Google Play, and Stitcher.

Ariel: Hello, I am Ariel Conn with The Future of Life Institute. Now, our podcasts lately have dealt with artificial intelligence in some way or another, and with a few focusing on nuclear weapons, but FLI is really an organization about existential risks, and especially x-risks that are the result of human action. These cover a much broader field than just artificial intelligence.

I’m excited to be hosting a special segment of the FLI podcast with Martin Rees, who has just come out with a book that looks at the ways technology and science could impact our future both for good and bad. Martin is a cosmologist and space scientist. His research interests include galaxy formation, active galactic nuclei, black holes, gamma ray bursts, and more speculative aspects of cosmology. He’s based in Cambridge where he has been director of The Institute of Astronomy, and Master of Trinity College. He was president of The Royal Society, which is the UK’s Academy of Science, from 2005 to 2010. In 2005 he was also appointed to the UK’s House of Lords. He holds the honorary title of Astronomer Royal. He has received many international awards for his research and belongs to numerous academies, including The National Academy of Sciences, the Russian Academy, the Japan Academy, and the Pontifical Academy.

He’s on the board of The Princeton Institute for Advanced Study and has served on many bodies connected with international collaboration in science, especially on threats stemming from humanity’s ever heavier footprint on the planet and the runaway consequences of ever more powerful technologies. He’s written seven books for the general public, and his most recent book is about these threats. It’s the reason that I’ve asked him to join us today. First, Martin, thank you so much for talking with me today.

Martin: Good to be in touch.

Ariel: Your new book is called On the Future: Prospects for Humanity. In his endorsement of the book Neil deGrasse Tyson says, “From climate change, to biotech, to artificial intelligence, science sits at the center of nearly all decisions that civilization confronts to assure its own survival.”

I really liked this quote, because I felt like it sums up what your book is about. Basically science and the future are too intertwined to really look at one without the other. And whether the future turns out well, or whether it turns out to be the destruction of humanity, science and technology will likely have had some role to play. First, do you agree with that sentiment? Am I accurate in that description?

Martin: No, I certainly agree, and that’s truer of this century than ever before, because of the greater scientific knowledge we have and the greater power to use it for good or ill; these tremendously advanced technologies could be misused by a small number of people.

Ariel: You’ve written in the past about how you think we have essentially a 50/50 chance of some sort of existential risk. One of the things that I noticed about this most recent book is that you talk a lot about the threats, but to me it still felt like an optimistic book. This might be jumping ahead a bit, but I was wondering if you could talk a little about the overall message you’re hoping people take away?

Martin: Well, I describe myself as a technical optimist, but political pessimist because it is clear that we couldn’t be living such good lives today with seven and a half billion people on the planet if we didn’t have the technology which has been developed in the last 100 years, and clearly there’s a tremendous prospect of better technology in the future. But on the other hand what is depressing is the very big gap between the way the world could be, and the way the world actually is. In particular, even though we have the power to give everyone a decent life, the lot of the bottom billion people in the world is pretty miserable and could be alleviated a lot simply by the money owned by the 1,000 richest people in the world.

We have a very unjust society, and the politics is not optimizing the way technology is used for human benefit. My view is that it’s the politics which is an impediment to the best use of technology, and the reason this is important is that as time goes on we’re going to have a growing population which is ever more demanding of energy and resources, putting more pressure on the planet and its environment and its climate, but we are also going to have to deal with this if we are to allow people to survive and avoid some serious tipping points being crossed.

That’s the problem of the collective effect of us on the planet, but there’s another effect, which is that these new technologies, especially bio, cyber, and AI allow small groups of even individuals to have an effect by error or by design, which could cascade very broadly, even globally. This, I think, makes our society very brittle. We’re very interdependent, and on the other hand it’s easy for there to be a breakdown. That’s what depresses me, the gap between the way things could be, and the downsides if we collectively overreach ourselves, or if individuals cause disruption.

Ariel: You mentioned actually quite a few things that I’m hoping to touch on as we continue to talk. I’m almost inclined, before we get too far into some of the specific topics, to bring up an issue that I personally have. It’s connected to a comment that you make in the book. I think you were talking about climate change at the time, and you say that if we heard that there was a 10% chance that an asteroid would strike in 2100, people would do something about it.

We wouldn’t say, “Oh, technology will be better in the future so let’s not worry about it now.” Apparently I’m very cynical, because I think that’s exactly what we would do. And I’m curious, what makes you feel more hopeful that even with something really specific like that, we would actually do something and not just constantly postpone the problem to some future generation?

Martin: Well, I agree. We might not even in that case, but the reason I gave that as a contrast to our response to climate change is that there you could imagine a really sudden catastrophe happening if the asteroid does hit, whereas with climate change, first of all, the effect is mainly going to be several decades in the future. It’s started to happen, but the really severe consequences are decades away. But also there’s an uncertainty, and it’s not a sort of sudden event we can easily visualize. It’s not at all clear, therefore, how we are actually going to do something about it.

In the case of the asteroid, it would be clear what the strategy would be to try and deal with it, whereas in the case of climate there are lots of ways, and the problem is that the consequences are decades away, and they’re global. Most of the political focus obviously is on short-term worry, short-term problems, and on national or more local problems. Anything we do about climate change will have an effect which is mainly for the benefit of people in quite different parts of the world 50 years from now, and it’s hard to keep those issues up the agenda when there are so many urgent things to worry about.

I think you’re maybe right that even if there was a threat of an asteroid, there may be the same sort of torpor, and we’d fail to deal with it, but I thought that’s an example of something where it would be easier to appreciate that it would really be a disaster. In the case of the climate it’s not so obviously going to be a catastrophe that people are motivated now to start thinking about it.

Ariel: I’ve heard it go both ways: either climate change is, yes, obviously going to be bad, but it’s not an existential risk, so those of us who are worried about existential risk don’t need to worry about it; or I’ve also heard people say, “No, this could absolutely be an existential risk if we don’t prevent runaway climate change.” I was wondering if you could talk a bit about what worries you most regarding climate.

Martin: First of all, I don’t think it is an existential risk, but it’s something we should worry about. One point I make in my book is that the debate which makes it hard to have an agreed policy on climate change stems not so much from differences about the science — although of course there are some who completely deny the science — as from differences about ethics and economics. Most people accept that CO2 is warming the planet, and most people accept that there’s quite a big uncertainty, in fact a real uncertainty, about how much warmer it gets for a given increase in CO2.

But even among those who accept the IPCC projections of climate change, and the uncertainties therein, I think there’s a big debate, and the debate is really between people who apply a standard economic discount rate, where you discount the future at a rate of, say, 5%, and those who think we shouldn’t do that in this context. If you apply a 5% discount rate, as you would if you were deciding whether it’s worth putting up an office building or something like that, then of course you don’t give any weight to what happens after about, say, 2050.

As Bjorn Lomborg, the well-known environmentalist argues, we should therefore give a lower priority to dealing with climate change than to helping the world’s poor in other more immediate ways. He is consistent given his assumptions about the discount rate. But many of us would say that in this context we should not discount the future so heavily. We should care about the life chances of a baby born today as much as we should care about the life chances of those of us who are now middle aged and won’t be alive at the end of the century. We should also be prepared to pay an insurance premium now in order to remove or reduce the risk of the worst case climate scenarios.
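
To illustrate why the choice of discount rate dominates this debate, here is a minimal sketch, not part of Martin’s remarks: the 5% and 1.4% rates below are assumptions chosen for comparison, the latter roughly the sort of low rate used in Stern-style analyses.

```python
# Present value today of $1 of climate benefit realized in 2100,
# under different constant annual discount rates (illustrative only).
def present_value(amount, rate, years):
    """Discount a future amount back to the present at a constant annual rate."""
    return amount / (1 + rate) ** years

years_until_2100 = 2100 - 2018  # counting from roughly when this episode aired

for rate in (0.05, 0.014, 0.0):  # ~5% standard, ~1.4% Stern-style, no discounting
    value_today = present_value(1.0, rate, years_until_2100)
    print(f"discount rate {rate:.1%}: $1 in 2100 is worth about ${value_today:.2f} today")
```

At 5% a dollar of avoided damage in 2100 is worth only about two cents today, so the far future effectively vanishes from the ledger, which is exactly the point of contention Martin describes.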

I think the debates about what to do about climate change are essentially about ethics. Do we want to discriminate on grounds of date of birth and not care about the life chances of those who are now babies, or are we prepared to make some sacrifices now in order to reduce a risk which they might encounter in later life?

Ariel: Do you think the risks are only going to show up that much later? We are already seeing these really heavy storms striking. We’ve got Florence in North Carolina right now. A super typhoon has hit southern China and the Philippines. We had Maria, and I’m losing track of all the hurricanes that we’ve had. We’ve had these huge hurricanes over the last couple of years. We saw California and much of the west coast of the US in flames this year. Do you think we really need to wait that long?

Martin: I think it’s generally agreed that extreme weather is now happening more often as a consequence of climate change and the warming of the ocean, and that this will become a more serious trend, but by the end of the century of course it could be very serious indeed. And the main threat is of course to people in the disadvantaged parts of the world. If you take these recent events, it’s been far worse in the Philippines than in the United States because they’re not prepared for it. Their houses are more fragile, etc.

Ariel: I don’t suppose you have any thoughts on how we get people to care more about others? Because it does seem to be, in general, this issue of worrying about myself versus worrying about other people. The richer countries are the ones causing more of the climate change, and it’s the poorer countries who seem to be suffering more. Then of course there’s the issue of the people who are alive now versus the people in the future.

Martin: That’s right, yes. Well, I think most people do care about their children and grandchildren, and so to that extent they do care about what things will be like at the end of the century, but as you say, the extra political problem is that the cause of the CO2 emissions is mainly what’s happened in the advanced countries, and the downside is going to be more seriously felt by those in remote parts of the world. It’s easy to overlook them, and hard to persuade people that we ought to make a sacrifice which will be mainly for their benefit.

Incidentally, I think one of the other things we have to ensure happens is a narrowing of the gap between the lifestyles and the economic advantages in the advanced and the less advanced parts of the world. I think that’s going to be in everyone’s interest, because if there continues to be great inequality, not only will the poorer people be more subject to threats like climate change, but I think there’s going to be massive and well-justified discontent, because unlike in earlier generations, they’re aware of what they’re missing. They all have mobile phones, they all know what it’s like, and I think there’s going to be embitterment leading to conflict if we don’t narrow this gap, and this requires, I think, a sacrifice on the part of the wealthy nations to subsidize developments in these poorer countries, especially in Africa.

Ariel: That sort of ties into another question that I had for you, and that is, what do you think is the most underappreciated threat that maybe isn’t quite as obvious? You mentioned the fact that we have these people in poorer countries who are able to more easily see what they’re missing out on. Inequality is a problem in and of itself, but also just that people are more aware of the inequality seems like a threat that we might not be as aware of. Are there others that you think are underappreciated?

Martin: Yes. Just to go back, that threat is of course very serious because by the end of the century there might be 10 times as many people in Africa as in Europe, and of course they would then have every justification in migrating towards Europe with the result of huge disruption. We do have to care about those sorts of issues. I think there are all kinds of reasons apart from straight ethics why we should ensure that the less developed countries, especially in Africa, do have a chance to close the gap.

Incidentally, one thing which is a handicap for them is that they won’t have the route to prosperity followed by the so called “Asian tigers,” which were able to have high economic growth by undercutting the labor cost in the west. Now what’s happening is that with robotics it’s possible to, as it were, re-shore lots of manufacturing industry back to wealthy countries, and so Africa and the Middle East won’t have the same opportunity the far eastern countries did to catch up by undercutting the cost of production in the west.

This is another reason why it’s going to be a big challenge. That’s something which I think we don’t worry about enough, and need to worry about, because if the inequalities persist when everyone is able to move easily and knows exactly what they’re missing, then that’s a recipe for a very dangerous and disruptive world. I would say that is an underappreciated threat.

Another thing I would count as important is that we are, as a society, very brittle and very unstable because of high expectations. I’d like to give you an example. Suppose there were to be a pandemic, not necessarily a genetically engineered terrorist one, but a natural one. It would be in contrast to what happened in the 14th century, when the Bubonic Plague, the Black Death, occurred and killed nearly half the people in certain towns and the rest went on fatalistically. If we had some sort of plague which affected even 1% of the population of the United States, there’d be complete social breakdown, because it would overwhelm the capacity of hospitals, and people, unless they were wealthy, would feel they weren’t getting their entitlement of healthcare. If that was a matter of life and death, it’s a recipe for social breakdown. I think given the high expectations of people in the developed world, we are far more vulnerable to the consequences of these breakdowns, and pandemics, and failures of electricity grids, et cetera, than in the past, when people were more robust and more fatalistic.

Ariel: That’s really interesting. Is it essentially because we expect to be leading these better lifestyles, just that expectation could be our downfall if something goes wrong?

Martin: That’s right. And of course, if we know that there are cures available to some disease and there’s not the hospital capacity to offer it to all the people who are afflicted with the disease, then naturally that’s a matter of life and death, and that is going to promote social breakdown. This is a new threat which is of course a downside of the fact that we can at least cure some people.

Ariel: There’s two directions that I want to go with this. I’m going to start with just transitioning now to biotechnology. I want to come back to issues of overpopulation and improving healthcare in a little bit, but first I want to touch on biotech threats.

One of the things that’s been a little bit interesting for me is that when I first started at FLI three years ago we were very concerned about biotechnology. CRISPR was really big. It had just sort of exploded onto the scene. Now, three years later I’m not hearing quite as much about the biotech threats, and I’m not sure if that’s because something has actually changed, or if it’s just because at FLI I’ve become more focused on AI and therefore stuff is happening but I’m not keeping up with it. I was wondering if you could talk a bit about what some of the risks you see today are with respect to biotech?

Martin: Well, let me say I think we should worry far more about bio threats than about AI in my opinion. I think as far as the bio threats are concerned, then there are these new techniques. CRISPR, of course, is a very benign technique if it’s used to remove a single damaging gene that gives you a particular disease, and also it’s less objectionable than traditional GM because it doesn’t cross the species barrier in the same way, but it does allow things like a gene drive where you make a species extinct by making it sterile.

That’s good if you’re wiping out a mosquito that carries a deadly virus, but there’s a risk of some effect which distorts the ecology and has a cascading consequence. There are risks of that kind, but more importantly, I think there is a risk of the misuse of these techniques, and not just CRISPR, but for instance the gain-of-function techniques that were used in 2011 in Wisconsin and in Holland to make influenza virus both more virulent and more transmissible, things like that, which can be done in a more advanced way now, I’m sure.

These are clearly potentially dangerous; even if experimenters have a good motive, the viruses might escape, and of course they are the kinds of things which could be misused. There have, of course, been lots of meetings, you have been at some, to discuss among scientists what the guidelines should be. How can we ensure responsible innovation in these technologies? These are modeled on the famous conference at Asilomar in the 1970s, when recombinant DNA was first being discussed, and the academics who worked in that area agreed on a sort of cautious stance and a moratorium on some kinds of experiments.

But now they’re trying to do the same thing, and there’s a big difference. One is that these scientists are now more global: it’s not just a few people in North America and Europe. They’re global, there are strong commercial pressures, and the techniques are far more widely understood. Bio-hacking is almost a student recreation. This means, in my view, that there’s a big danger, because even if we have regulations saying certain things can’t be done because they’re dangerous, enforcing those regulations globally is going to be as hopeless as it is now to enforce the drug laws or the tax laws globally. Something which can be done will be done by someone somewhere, whatever the regulations say, and I think this is very scary. The consequences could cascade globally.

Ariel: Do you think that the threat is more likely to come from something happening accidentally, or intentionally?

Martin: I don’t know. I think it could be either. Certainly it could be something accidental from gene drive, or releasing some dangerous virus, but I think if we can imagine it happening intentionally, then we’ve got to ask what sort of people might do it? Governments don’t use biological weapons because you can’t predict how they will spread and who they’d actually kill, and that would be an inhibiting factor for any terrorist group that had well-defined aims.

But my worst nightmare is some person, and there are some, who think that there are too many human beings on the planet, and if they combine that view with the mindset of extreme animal rights people, etc, they might think it would be a good thing for Gaia, for Mother Earth, to get rid of a lot of human beings. They’re the kind of people who, with access to this technology, might have no compunction in releasing a dangerous pathogen. This is the kind of thing that worries me.

Ariel: I find that interesting because it ties into the other question that I wanted to ask you about, and that is the idea of overpopulation. I’ve read it both ways, that overpopulation is in and of itself something of an existential risk, or a catastrophic risk, because we just don’t have enough resources on the planet. You actually made an interesting point, I thought, in your book where you point out that we’ve been thinking that there aren’t enough resources for a long time, and yet we keep getting more people and we still have plenty of resources. I thought that was sort of interesting and reassuring.

But I do think at some point that does become an issue. And then at the same time we’re seeing this huge push, understandably, for improved healthcare, and expanding life spans, and trying to save as many lives as possible, and making those lives last as long as possible. How do you resolve those two sides of the issue?

Martin: It’s true, of course, as you imply, that the population has doubled in the last 50 years, and there were doomsters in the 1960s and ’70s who predicted mass starvation by now, and there hasn’t been, because food production has more than kept pace. If there are famines today, as of course there are, it’s not because of overall food shortages; it’s because of wars, or maldistribution of the money to buy food. Up until now things have gone fairly well, but clearly there are limits to the food that can be produced on the earth.

All I would say is that we can’t really say what the carrying capacity of the earth is, because it depends so much on the lifestyle of people. As I say in the book, the world couldn’t sustainably have 2 billion people if they all lived like present day Americans, using as much energy, and burning as much fossil fuels, and eating as much beef. On the other hand you could imagine lifestyles which are very sort of austere, where the earth could carry 10, or even 20 billion people. We can’t set an upper limit, but all we can say is that given that it’s fairly clear that the population is going to rise to about 9 billion by 2050, and it may go on rising still more after that, we’ve got to ensure that the way in which the average person lives is less profligate in terms of energy and resources, otherwise there will be problems.

I think we should also do what we can to ensure that after 2050 the population turns around and goes down. The base scenario is that it goes on rising, as it may if people choose to have large families even when they have the choice. That could happen, and of course, as you say, life extension is going to have an effect on society generally, but obviously on the overall population too. I think it would be more benign if the population of 9 billion in 2050 were a peak and it started going down after that.

And it’s not hopeless, because the actual number of births per year has already started going down. The reason the population is still going up is that more babies survive, and most of the people in the developing world are still young; if they live as long as people in advanced countries do, then of course that’s going to increase the population even for a steady birth rate. That’s why, unless there’s a real disaster, we can’t avoid the population rising to about 9 billion.
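
A toy illustration of that point about steady births, offered here as our own sketch rather than Martin’s: the 140 million births per year is an approximate current figure, and assuming everyone lives to exactly the average life expectancy is a deliberate simplification. If births stay flat but people live longer, more yearly cohorts are alive at once, so the population still grows.

```python
# Toy demographic-momentum model: constant annual births, rising life expectancy.
# Simplification: everyone lives to exactly `life_expectancy` years.
BIRTHS_PER_YEAR_MILLIONS = 140  # held constant in this toy scenario

def long_run_population(births_per_year, life_expectancy):
    """With constant births and a fixed lifespan, the population settles at
    births_per_year * life_expectancy: that many yearly cohorts alive at once."""
    return births_per_year * life_expectancy

for life_expectancy in (50, 60, 70, 80):
    billions = long_run_population(BIRTHS_PER_YEAR_MILLIONS, life_expectancy) / 1000
    print(f"life expectancy {life_expectancy} years -> about {billions:.1f} billion people")
```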

But I think policies can have an effect on what happens after that. I think we do have to try to make people realize that having large numbers of children has negative externalities, as it were, in economic jargon; it is going to put extra pressure on the world and affect our environment in a detrimental way.

Ariel: As I was reading this, especially the section about space travel, I wanted to ask for your take on whether we can just start sending people to Mars, or something like that, to address issues of overpopulation. As I was reading that section, news came out that Elon Musk and SpaceX had their first passenger for a trip around the moon, which is now scheduled for 2023, and the timing was just entertaining to me, because, like I said, you have a section in your book about why you don’t actually agree with Elon Musk’s plan for some of this stuff.

Martin: That’s right.

Ariel: I was hoping you could talk a little bit about why you’re not as big a fan of space tourism, and what you think of humanity expanding into the rest of the solar system and universe?

Martin: Well, let me say that I think it’s a dangerous delusion to think we can solve the earth’s problems by escaping to Mars or elsewhere. Mass emigration is not feasible. There’s nowhere in the solar system which is as comfortable to live in as the top of Everest or the South Pole. The idea of mass emigration, which was promulgated by Elon Musk and Stephen Hawking, is, I think, a dangerous delusion. The world’s problems have to be solved here; dealing with climate change is a doddle compared to terraforming Mars. So I don’t think that’s true.

Now, two other things about space. The first is that the practical need for sending people into space is getting less as robots get more advanced. Everyone has seen pictures of the Curiosity Probe trundling across the surface of Mars, and maybe missing things that a geologist would notice, but future robots will be able to do much of what a human will do, and to manufacture large structures in space, et cetera, so the practical need to send people to space is going down.

On the other hand, some people may want to go simply as an adventure. It’s not really tourism, because tourism implies it’s safe and routine. It’ll be an adventure like Steve Fossett’s, or the guy who fell supersonically from a high-altitude balloon. It’d be crazy people like that, and maybe this Japanese tourist is in the same style, people who want to have a thrill, and I think we should cheer them on.

I think it would be good to imagine that there are a few people living on Mars, but it’s never going to be as comfortable as our Earth, and we should just cheer on people like this.

And I personally think it should be left to private money. If I were an American, I would not support the NASA space program. It’s very expensive, and it can be undercut by private companies, which can afford to take higher risks than NASA could inflict on publicly funded civilians. I don’t think NASA should be doing manned space flight at all. Of course, some people would say, “Well, it’s a national aspiration, a national goal, to show superpower pre-eminence with a massive space project.” That was, of course, what drove the Apollo program, and the Apollo program cost about 4% of the US federal budget. Now NASA has 0.6% or thereabouts. I’m old enough to remember the Apollo moon landings, and of course if you had asked me back then, I would have expected that there might be people on Mars within 10 or 15 years.

There would have been, had the program been funded, but of course there was no motive, because the Apollo program was driven by superpower rivalry. And having beaten the Russians, it wasn’t pursued with the same intensity. It could be that the Chinese will, for prestige reasons, want to have a big national space program, and leapfrog what the Americans did by going to Mars. That could happen. Otherwise I think the only manned space flight will, and indeed should, be privately funded by adventurers prepared to go on cut price and very risky missions.

But we should cheer them on. The reason we should cheer them on is that if in fact a few of them do provide some sort of settlement on Mars, then they will be important for life’s long-term future, because whereas we are, as humans, fairly well adapted to the earth, they will be in a place, Mars, or an asteroid, or somewhere, for which they are badly adapted. Therefore they would have every incentive to use all the techniques of genetic modification, and cyber technology to adapt to this hostile environment.

A new species, perhaps quite different from humans, may emerge as progeny of those pioneers within two or three centuries. I think this is quite possible. They, of course, may download themselves to be electronic. We don’t know how it’ll happen. We all know about the possibilities of advanced intelligence in electronic form. But I think this’ll happen on Mars, or in space, and of course if we think about going further and exploring beyond our solar system, then of course that’s not really a human enterprise because of human life times being limited, but it is a goal that would be feasible if you were a near immortal electronic entity. That’s a way in which our remote descendants will perhaps penetrate beyond our solar system.

Ariel: As you’re looking towards these longer term futures, what are you hopeful that we’ll be able to achieve?

Martin: You say we, I think we humans will mainly want to stay on the earth, but I think intelligent life, even if it’s not out there already in space, could spread through the galaxy as a consequence of what happens when a few people who go into space and are away from the regulators adapt themselves to that environment. Of course, one thing which is very important is to be aware of different time scales.

Sometimes you hear people talk about humans watching the death of the sun in five billion years. That’s nonsense, because the timescale for biological evolution by Darwinian selection is about a million years, thousands of times shorter than the lifetime of the sun, but more importantly the time scale for this new kind of intelligent design, when we can redesign humans and make new species, that time scale is a technological time scale. It could be only a century.

It would only take one, two, or three centuries before we have entities which are very different from human beings, if they are created by genetic modification or by downloading into electronic form. They won’t be normal humans. I think this will happen, and this of course will be a very important stage in the evolution of complexity in our universe, because we will go from the kind of complexity which has emerged by Darwinian selection to something quite new. This century is very special: it’s a century where we might be triggering or jump-starting a new kind of technological evolution which could spread from our solar system far beyond, on a timescale very short compared to the timescale of Darwinian evolution and the timescale of astronomical evolution.

Ariel: All right. In the book you spend a lot of time also talking about current physics theories and how those could evolve. You spend a little bit of time talking about multiverses. I was hoping you could talk a little bit about why you think understanding that is important for ensuring this hopefully better future?

Martin: Well, it’s only peripherally linked to it. I put that in the book because I was thinking about, what are the challenges, not just challenges of a practical kind, but intellectual challenges? One point I make is that there are some scientific challenges which we are now confronting which may be beyond human capacity to solve, because there’s no particular reason to think that the capacity of our brains is matched to understanding all aspects of reality any more than a monkey can understand quantum theory.

It’s possible that there are some fundamental aspects of nature that humans will never understand, and that they will be a challenge for post-humans. I think those challenges are perhaps more likely to be in the realm of complexity, understanding the brain for instance, than in the context of cosmology, although there are challenges in cosmology, such as understanding the very early universe, where we may need a new theory like string theory with extra dimensions, et cetera. We need a theory like that in order to decide whether our big bang was the only one, or whether there were other big bangs and a kind of multiverse.

It’s possible that in 50 years from now we will have such a theory, we’ll know the answers to those questions. But it could be that there is such a theory and it’s just too hard for anyone to actually understand and make predictions from. I think these issues are relevant to the intellectual constraints on humans.

Ariel: Is that something where you think, or hope, that more advanced artificial intelligence, or however we evolve in the future, will allow “us” to understand some of these more complex ideas?

Martin: Well, I think it’s certainly possible that machines could actually, in a sense, create entities based on physics which we can’t understand. This is perfectly possible, because obviously we know they can vastly out-compute us at the moment, so it could very well be, for instance, that there is a variant of string theory which is correct, and it’s just too difficult for any human mathematician to work out. But it could be that computers could work it out, so we get some answers.

But of course, you then come up against a more philosophical question about whether competence implies comprehension, whether a computer with superhuman capabilities is necessarily going to be self-aware and conscious, or whether it is going to be just a zombie. That’s a separate question which may not affect what it can actually do, but I think it does affect how we react to the possibility that the far future will be dominated by such things.

I remember when I wrote an article in a newspaper about these possibilities, the reaction was bimodal. Some people thought, “Isn’t it great there’ll be these even deeper intellects than human beings out there,” but others who thought these might just be zombies thought it was very sad if there was no entity which could actually appreciate the beauties and wonders of nature in the way we can. It does matter, in a sense, to our perception of this far future, if we think that these entities which may be electronic rather than organic, will be conscious and will have the kind of awareness that we have and which makes us wonder at the beauty of the environment in which we’ve emerged. I think that’s a very important question.

Ariel: I want to pull things back to a little bit more shorter term I guess, but still considering this idea of how technology will evolve. You mentioned that you don’t think it’s a good idea to count on going to Mars as a solution to our problems on Earth because all of our problems on Earth are still going to be easier to solve here than it is to populate Mars. I think in general we have this tendency to say, “Oh, well in the future we’ll have technology that can fix whatever issue we’re dealing with now, so we don’t need to worry about it.”

I was wondering if you could sort of comment on that approach. To what extent can we say, “Well, most likely technology will have improved and can help us solve these problems,” and to what extent is that a dangerous approach to take?

Martin: Well, clearly technology has allowed us to live much better, more complex lives than we could in the past, and on the whole the net benefits outweigh the downsides, but of course there are downsides, and they stem from the fact that we have some people who are disruptive, and some people who can’t be trusted. If we had a world where everyone could trust everyone else, we could get rid of about a third of the economy I would guess, but I think the main point is that we are very vulnerable.

We have huge advances, clearly, in networking via the Internet, and computers, et cetera, and we may have the Internet of Things within a decade, but of course people worry that this opens up a new kind of even more catastrophic potential for cyber terrorism. That’s just one example, and ditto for biotech which may allow the development of pathogens which kill people of particular races, or have other effects.

There are these technologies which are developing fast, and they can be used to great benefit, but they can be misused in ways that will provide new kinds of horrors that were not available in the past. It’s by no means obvious which way things will go. Will there be a continued net benefit of technology, as I think we’ve said there has been up ’til now despite nuclear weapons, et cetera, or will at some stage the downside run ahead of the benefits?

I do worry about the latter being a possibility, particularly because of this amplification factor, the fact that it only takes a few people in order to cause disruption that could cascade globally. The world is so interconnected that we can’t really have a disaster in one region without its affecting the whole world. Jared Diamond has this book called Collapse where he discusses five collapses of particular civilizations, whereas other parts of the world were unaffected.

I think if we really had some catastrophe, it would affect the whole world. It wouldn’t just affect parts. That’s something which is a new downside. The stakes are getting higher as technology advances, and my book is really aimed to say that these developments are very exciting, but they pose new challenges, and I think particularly they pose challenges because a few dissidents can cause more trouble, and I think it’ll make the world harder to govern. It’ll make cities and countries harder to govern, and create a stronger tension between three things we want to achieve, which are security, privacy, and liberty. I think that’s going to be a challenge for all future governments.

Ariel: Reading your book I very much got the impression that it was essentially a call to action to address these issues that you just mentioned. I was curious: what do you hope that people will do after reading the book, or learning more about these issues in general?

Martin: Well, first of all I hope that people can be persuaded to think long term. I mentioned that religious groups, for instance, tend to think long term, and the papal encyclical in 2015 I think had a very important effect on opinion in Latin America, Africa, and East Asia in the lead up to the Paris Climate Conference, for instance. That’s an example where someone from outside traditional politics can have an effect.

What’s very important is that politicians will only respond to an issue if it’s prominent in the press, and prominent in their inbox, and so we’ve got to ensure that people are concerned about this. Of course, I ended the book saying, “What are the special responsibilities of scientists,” because scientists clearly have a special responsibility to ensure that their work is safe, and that the public and politicians are made aware of the implications of any discovery they make.

I think that’s important, even though they should be mindful that their expertise doesn’t extend beyond their special area. That’s a reason why scientific understanding, in a general sense, is something which really has to be universal. This is important for education, because if we want to have a proper democracy where debate about these issues rises above the level of tabloid slogans, then, given that the important issues we have to discuss involve health, energy, the environment, climate, et cetera, which have scientific aspects, everyone has to have enough feel for those aspects to participate in a debate, and also enough feel for probabilities and statistics not to be easily bamboozled by political arguments.

I think an educated population is essential for proper democracy. Obviously that’s a platitude. But the education needs to include, to a greater extent, an understanding of the scope and limits of science and technology. I make this point at the end and hope that it will lead to a greater awareness of these issues, and of course for people in universities, we have a responsibility because we can influence the younger generation. It’s certainly the case that students and people under 30, who may be alive towards the end of the century, are more mindful of these concerns than the middle-aged and old.

It’s very important that these activities like the Effective Altruism movement, 80,000 Hours, and these other movements among students should be encouraged, because they are going to be important in spreading an awareness of long-term concerns. Public opinion can be changed. We can see the change in attitudes to drunk driving and things like that, which have happened over a few decades, and I think perhaps we can develop a greater environmental sensitivity, so that it comes to be regarded as rather naff or tacky to waste energy and to be extravagant in consumption.

I’m hopeful that attitudes will change in a positive way, but I’m concerned simply because the politics is getting very difficult, because with social media, panic and rumor can spread at the speed of light, and small groups can have a global effect. This makes it very, very hard to ensure that we can keep things stable given that only a few people are needed to cause massive disruption. That’s something which is new, and I think is becoming more and more serious.

Ariel: We’ve been talking a lot about things that we should be worrying about. Do you think there are things that we are currently worrying about that we probably can just let go of, that aren’t as big of risks?

Martin: Well, I think we need to ensure responsible innovation in all new technologies. We’ve talked a lot about bio, and we are very concerned about the misuse of cyber technology. As regards AI, of course there are a whole lot of concerns to be had. I personally think that a takeover by AI would be rather slower than many of the evangelists suspect, but of course we do have to ensure that humans are not victimized by some algorithm which they can’t have explained to them.

I think there is an awareness of this, and I think that what’s being done by your colleagues at MIT has been very important in raising awareness of the need for responsible innovation and ethical application of AI, and also what your group has recognized is that the order in which things happen is very important. If some computer is developed and goes rogue, that’s bad news, whereas if we have a powerful computer which is under our control, then it may help us to deal with these other problems, the problems of the misuse of biotech, et cetera.

The order in which things happen is going to be very important, but I must say I don’t completely share these concerns about machines running away and taking over, ’cause I think there’s a difference in that, for biological evolution, there’s been a drive toward intelligence being favored, but so is aggression. In the case of computers, they may drive towards greater intelligence, but it’s not obvious that that is going to be combined with aggression, because they are going to be evolving by intelligent design, not the survival of the fittest, which is the way that we evolved.

Ariel: What about concerns regarding AI just in terms of being mis-programmed, and AI just being extremely competent? Poor design on our part, poor intelligent design?

Martin: Well, I think in the short term obviously there are concerns about AI making decisions that affect people, and I think most of us would say that we shouldn’t be deprived of our credit rating, or put in prison on the basis of some AI algorithm which can’t be explained to us. We are entitled to have an explanation if something is done to us against our will. That is why it is worrying if too much is going to be delegated to AI.

I also think that the development of self-driving cars, and things of that kind, is going to be constrained by the fact that they become vulnerable to hacking of various kinds. I think it’ll be a long time before we will accept a driverless car on an ordinary road. Controlled environments, yes. In particular lanes on highways, yes. On an ordinary road in a traditional city, it’s not clear that we will ever accept a driverless car. I think I’m frankly less bullish than maybe some of your colleagues about the speed at which the machines will really take over and be accepted, and at which we can trust ourselves to them.

Ariel: As I mentioned at the start, and as you mentioned at the start, you are a techno-optimist. For as much as the book is about things that could go wrong, it did feel to me like it was also sort of an optimistic look at the future. What are you most optimistic about? What are you most hopeful for, looking at both short term and long term, however you feel like answering that?

Martin: I’m hopeful that biotech will have huge benefits for health, will perhaps extend human life spans a bit, but that’s something about which we should feel a bit ambivalent. So, I think health, and also food. If you asked me, what is one of the most benign technologies, it’s to make artificial meat, for instance. It’s clear that we can more easily feed a population of 9 billion on a vegetarian diet than on a traditional diet like Americans consume today.

To take one benign technology, I would say artificial meat is one, and more intensive farming so that we can feed people without encroaching too much on the natural part of the world. I’m optimistic about that. If we think about very long-term trends, then life extension is something which obviously, if it happens too quickly, is going to be hugely disruptive: multi-generation families, et cetera.

Also, even though we will have the capability within a century to change human beings, I think we should constrain that on earth and just let that be done by the few crazy pioneers who go away into space. But if this does happen, then as I say in the introduction to my book, it will be a real game changer in a sense. I make the point that one thing that hasn’t changed over most of human history is human character. Evidence for this is that we can read the literature written by the Greeks and Romans more than 2,000 years ago and resonate with the people, and their characters, and their attitudes and emotions.

It’s not at all clear that on some scenarios, people 200 years from now will resonate in anything other than an algorithmic sense with the attitudes we have as humans today. That will be a fundamental, and very fast, change in the nature of humanity. The question is, can we do something to at least constrain the rate at which that happens, or at least constrain the way in which it happens? But it is going to be almost certainly possible to completely change human mentality, and maybe even human physique, over that time scale. One has only to listen to people like George Church to realize that it’s not crazy to imagine this happening.

Ariel: You mentioned in the book that there’s lots of people who are interested in cryogenics, but you also talked briefly about how there are some negative effects of cryogenics, and the burden that it puts on the future. I was wondering if you could talk really quickly about that?

Martin: There are some people, I know some, who have a medallion around their neck which is an injunction that, if they drop dead, they should be immediately frozen, their blood drained and replaced by liquid nitrogen, and that they should then be stored (there’s a company called Alcor in Arizona that does this) and allegedly revived at some stage when technology has advanced. I find it hard to take this seriously, but they say that, well, the chance may be small, but if they don’t invest this way then the chance is zero that they have a resurrection.

But I actually think that even if it worked, even if the company didn’t go bust, and sincerely maintained them for centuries and they could then be revived, I still think that what they’re doing is selfish, because they’d be revived into a world that was very different. They’d be refugees from the past, and they’d therefore be imposing an obligation on the future.

We obviously feel an obligation to look after some asylum seeker or refugee, and we might feel the same if someone had been driven out of their home in the Amazonian forest for instance, and had to find a new home, but these refugees from the past, as it were, they’re imposing a burden on future generations. I’m not sure that what they’re doing is ethical. I think it’s rather selfish.

Ariel: I hadn’t thought of that aspect of it. I’m a little bit skeptical of our ability to come back.

Martin: I agree. I think the chances are almost zero. Even if they were stored, et cetera, one would like to see this technology tried on some animal first, to see if you could freeze animals at liquid nitrogen temperatures and then revive them. I think it’s pretty crazy. Then of course, the number of people doing it is fairly small, and some of the companies doing it (there’s one in Russia) are real ripoffs I think, and won’t survive. But as I say, even if these companies did keep going for a couple of centuries, or however long is necessary, then it’s not clear to me that it’s doing good. I also quoted this nice statement: “What happens if we clone and create a Neanderthal? Do we put him in a zoo or send him to Harvard?” said the professor from Stanford.

Ariel: Those are ethical considerations that I don’t see very often. We’re so focused on what we can do that sometimes we forget. “Okay, once we’ve done this, what happens next?”

I appreciate you being here today. Those were my questions. Was there anything else that you wanted to mention that we didn’t get into?

Martin: One thing we didn’t discuss, which is a serious issue, is the limits of medical treatment, because you can make extraordinary efforts to keep people alive long beyond when they’d have died naturally, and to keep alive babies that will never live a normal life, et cetera. Well, I certainly feel that that’s gone too far at both ends of life.

One should not devote so much effort to extremely premature babies, and one should allow people to die more naturally. Actually, if you asked me about predictions I’d make about the next 30 or 40 years: first, more vegetarianism; secondly, more euthanasia.

Ariel: I support both, vegetarianism, and I think euthanasia should be allowed. I think it’s a little bit barbaric that it’s not.

Martin: Yes. I think we’ve covered quite a lot, haven’t we?

Ariel: I tried to.

Martin: I’d just like to mention that my book does touch a lot of bases in a fairly short book. I hope it will be read not just by scientists. It’s not really a science book, although it emphasizes how scientific ideas are what’s going to determine how our civilization evolves. I’d also like to say that, for those of us in universities, we know students are only with us for an interim period, but universities like MIT and my University of Cambridge have convening power to gather people together to address these questions.

I think the value of the centers which we have in Cambridge, and you have at MIT, is that they are groups which are trying to address these very, very big issues, these threats and opportunities. The stakes are so high that if our efforts can really reduce the risk of a disaster by one part in 10,000, we’ve more than earned our keep. I’m very supportive of our Centre for Existential Risk in Cambridge, and also the Future of Life Institute which you have at MIT.

Given the huge numbers of people who are thinking about small risks like which foods are carcinogenic, and the threats of low radiation doses, et cetera, it’s not at all inappropriate that there should be some groups who are focusing on the more extreme, albeit perhaps rather improbable threats which could affect the whole future of humanity. I think it’s very important that these groups should be encouraged and fostered, and I’m privileged to be part of them.

Ariel: All right. Again, the book is On the Future: Prospects for Humanity by Martin Rees. I do want to add, I agree with what you just said. I think this is a really nice introduction to a lot of the risks that we face. I started taking notes about the different topics that you covered, and I don’t think I got all of them, but there’s climate change, nuclear war, nuclear winter, biodiversity loss, overpopulation, synthetic biology, genome editing, bioterrorism, biological errors, artificial intelligence, cyber technology, cryogenics, and the various topics in physics, and as you mentioned the role that scientists need to play in ensuring a safe future.

I highly recommend the book as a really great introduction to the potential risks, and the hopefully much greater potential benefits, that science and technology can offer for the future. Martin, thank you again for joining me today.

Martin: Thank you, Ariel, for talking to me.

[end of recorded material]

Podcast: AI and Nuclear Weapons – Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz

In 1983, Soviet military officer Stanislav Petrov prevented what could have been a devastating nuclear war by trusting his gut instinct that the algorithm in his early-warning system wrongly sensed incoming missiles. In this case, we praise Petrov for choosing human judgment over the automated system in front of him. But what will happen as the AI algorithms deployed in the nuclear sphere become much more advanced, accurate, and difficult to understand? Will the next officer in Petrov’s position be more likely to trust the “smart” machine in front of him?

On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official, and the author of Army of None: Autonomous Weapons in the Future of War. Mike Horowitz is professor of political science at the University of Pennsylvania, and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.

Topics discussed in this episode include:

  • The sophisticated military robots developed by Soviets during the Cold War
  • How technology shapes human decision-making in war
  • “Automation bias” and why having a “human in the loop” is much trickier than it sounds
  • The United States’ stance on automation with nuclear weapons
  • Why weaker countries might have more incentive to build AI into warfare
  • How the US and Russia perceive first-strike capabilities
  • “Deep fakes” and other ways AI could sow instability and provoke crisis
  • The multipolar nuclear world of US, Russia, China, India, Pakistan, and North Korea
  • The perceived obstacles to reducing nuclear arsenals

You can listen to the podcast above and read the full transcript below. Check out our previous podcast episodes on SoundCloud, iTunes, GooglePlay, and Stitcher.

Ariel: Hello, I am Ariel Conn with the Future of Life Institute. I am just getting over a minor cold and while I feel okay, my voice may still be a little off so please bear with any crackling or cracking on my end. I’m going to try to let my guests Paul Scharre and Mike Horowitz do most of the talking today. But before I pass the mic over to them, I do want to give a bit of background as to why I have them on with me today.

September 26th was Petrov Day. This year marked the 35th anniversary of the day that basically World War III didn’t happen. On September 26th in 1983, Petrov, who was part of the Soviet military, got notification from the automated early warning system he was monitoring that there was an incoming nuclear attack from the US. But Petrov thought something seemed off.

From what he knew, if the US were going to launch a surprise attack, it would be an all-out strike and not just the five weapons that the system was reporting. Without being able to confirm whether the threat was real or not, Petrov followed his gut and reported to his commanders that this was a false alarm. He later became known as “the man who saved the world” because there’s a very good chance that the incident could have escalated into a full-scale nuclear war had he not reported it as a false alarm.

Now this 35th anniversary comes at an interesting time as well because last month in August, the United Nations Convention on Conventional Weapons convened a meeting of a Group of Governmental Experts to discuss the future of lethal autonomous weapons. Meanwhile, also on September 26th, governments at the United Nations held a signing ceremony to add more signatures and ratifications to last year’s treaty, which bans nuclear weapons.

It does feel like we’re at a bit of a turning point in military and weapons history. On one hand, we’ve seen rapid advances in artificial intelligence in recent years and the combination of AI weaponry has been referred to as the third revolution in warfare after gunpowder and nuclear weapons. On the other hand, despite the recent ban on nuclear weapons, the nuclear powers which have not signed the treaty are taking steps to modernize their nuclear arsenals.

This raises the question: what happens if artificial intelligence is added to nuclear weapons? Can we trust automated and autonomous systems to make the right decision as Petrov did 35 years ago? To consider these questions and many others, I have Paul Scharre and Mike Horowitz with me today. Paul is the author of Army of None: Autonomous Weapons in the Future of War. He is a former Army Ranger and Pentagon policy official, currently working as Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security.

Mike Horowitz is professor of political science and the Associate Director of Perry World House at the University of Pennsylvania. He’s the author of The Diffusion of Military Power: Causes and Consequences for International Politics, and he’s an adjunct Senior Fellow at the Center for a New American Security.

Paul and Mike, first, thank you so much for joining me today.

Paul: Thank you, thanks for having us.

Mike: Yeah, excited for the conversation.

Ariel: Excellent. So before we get too far into this, I was hoping you could talk a little bit about what the current status is of artificial intelligence in weapons, and in nuclear weapons specifically. Is AI being used in nuclear weapon systems today? In 2015, Russia announced a nuclear submarine drone called Status 6; I’m curious what the status of that is. Are other countries doing anything with AI in nuclear weapons? That’s a lot of questions, so I’ll turn that over to you guys now.

Paul: Okay, all right, let me jump in first and then Mike can jump right in and correct me. You know, I think if there’s anything that we’ve learned from science fiction from War Games to Terminator, it’s that combining AI and nuclear weapons is a bad idea. That seems to be the recurring lesson that we get from science fiction shows. Like many things, the sort of truth here is less dramatic but far more interesting actually, because there is a lot of automation that already exists in nuclear weapons and nuclear operations today and I think that is a very good starting point when we think about going forward, what has already been in place today?

The Petrov incident is a really good example of this. On the one hand, if the Petrov incident captures one simple point, it’s the benefit of human judgment. One of the things that Petrov talks about is that when evaluating what to do in this situation, there was a lot of extra contextual information that he could bring to bear that was outside of what the computer system itself knew. The computer system knew that there had been some flashes that the Soviet satellite early warning system had picked up, that it interpreted as missile launches, and that was it.

But when he was looking at this, he was also thinking about the fact that it’s a brand new system, they just deployed this Oko, the Soviet early warning satellite system, and it might be buggy as all technology is, as particularly Soviet technology was at the time. He knew that there could be lots of problems. But also, he was thinking about what would the Americans do, and from his perspective, he said later, we know because he did report a false alarm, he was able to say that he didn’t think it made sense for the Americans to only launch five missiles. Why would they do that?

If you were going to launch a first strike, it would be overwhelming. From his standpoint, sort of this didn’t add up. That contributed to what he said ultimately was sort of 50/50 and he went with his gut feeling that it didn’t seem right to him. Of course, when you look at this, you can ask well, what would a computer do? The answer is, whatever it was programmed to do, which is alarming in that kind of instance. But when you look at automation today, there are lots of ways that automation is used and the Petrov incident illuminates some of this.

For example, automation is used in early warning systems, both radars and satellite, infrared and other systems to identify objects of interest, label them, and then cue them to human operators. That’s what the computer automated system was doing when it told Petrov there were missile launches; that was an automated process.

We also see in the Petrov incident the importance of the human-automation interface. He talks about there being a flashing red screen, it saying “missile launch” and all of these things being, I think, important factors. We think about how this information is actually conveyed to the human, and that changes the human decision-making as part of the process. So there were partial components of automation there.

In the Soviet system, there have been components of automation in the way the launch orders are conveyed, in terms of rockets that would be launched and then fly over the Soviet Union, now Russia, to beam down launch codes. This is of course contested, but reportedly it came out after the end of the Cold War that there was even some talk of, and according to some sources there was actually deployment of, a semi-automated Dead Hand system called Perimeter. It could be activated by the Soviet leadership in a crisis, and then if the leadership in Moscow was taken out, if after a certain period of time they did not check in and show that they were communicating, launch codes would be passed down to a bunker with a Soviet officer in it, a human who would make the final call to then convey automated launch orders. So there was still a human in the loop, but it was one human instead of the Soviet leadership, to launch a retaliatory strike if the leadership had been taken out.

Then there are certainly, when you look at some of the actual delivery vehicles, things like bombers, there’s a lot of automation involved in bombers, particularly for stealth bombers, there’s a lot of automation required just to be able to fly the aircraft. Although, the weapons release is controlled by people.

You’re in a place today where all of the weapons decision-making is controlled by people, but they may be making decisions that are based on information that’s been given to them through automated processes and filtered through automated processes. Then once humans have made these decisions, those orders may be conveyed and passed along to other people or through other automated processes as well.

Mike: Yeah, I think that that’s a great overview, and I would add two things to give some additional context. The first is that, in some ways, the nuclear weapons enterprise is already among the most automated areas for the use of force, because the stakes are so high. When countries are thinking about using nuclear weapons, whether it’s the United States or Russia or other countries, it’s usually because they perceive an existential threat. Countries have already attempted to build in significant automation and redundancy to try to make their threats more credible.

The second thing is, I think Paul is absolutely right about the Petrov incident, but the other thing that it demonstrates to me, which I think we forget sometimes, is that we’re fond of talking about technological change and the way that technology can shape how militaries act and can shape the nuclear weapons complex, but it’s organizations and people that make choices about how to use technology. They’re not just passive actors, and different organizations make different kinds of choices about how to integrate technology depending on their standard operating procedures, depending on their institutional history, depending on bureaucratic priorities. It’s important I think not to just look at something like AI in a vacuum but to try to understand the way that different nuclear powers, say, might think about it.

Ariel: I don’t know if this is fair to ask but how might the different nuclear powers think about it?

Mike: From my perspective, I think an interesting thing you’re seeing now is the difference in how the United States has talked about autonomy in the nuclear weapons enterprise compared with some other countries. US military leaders have been very clear that they have no interest in autonomous systems, for example, armed with nuclear weapons. Of all the things one might use autonomous systems for, it’s an area where US military leaders have actually been very explicit.

I think in some ways, that’s because the United States is generally very confident in its second strike deterrent, and its ability to retaliate even if somebody else goes first. Because the United States feels very confident in its second strike capabilities, that makes the, I think, temptation of full automation a little bit lower. In some ways, the more a country fears that its nuclear arsenal could be placed at risk by a first strike, the stronger its incentives to operate faster and to operate even if humans aren’t available to make those choices. Those are the kinds of situations in which autonomy would potentially be more attractive.

In comparisons of nuclear states, it’s generally the weaker one from a nuclear weapons perspective that I think will, all other things being equal, be more inclined to use automation, because they fear the risk of being disarmed through a first strike.

Paul: This is such a key thing, which is that when you look at what is still a small number of countries that have nuclear weapons, they have very different strategic positions, different sizes of arsenals, different threats that they face, different degrees of survivability, and very different risk tolerances. I think it’s important to note that, certainly within American thinking about nuclear stability, there’s a clear strain of thought about what stability means. Many countries may see this very, very differently, and you can see this even during the Cold War, where you had approximate parity in the kinds of arsenals between the US and the Soviet Union, but they still thought about stability very differently.

The semi-automated Dead Hand system perimeter is a great example of this, where when this would come out afterwards, from sort of a US standpoint thinking about risk, people were just aghast at this and it’s a bit terrifying to think about something that is even semi-automated, it just might have sort of one human involved. But from the Soviet standpoint, this made an incredible amount of strategic sense. And not for sort of the Dr. Strangelove reason of you want to tell the enemy to deter them, which is how I think Americans might tend to think about this, because they didn’t actually tell the Americans.

The real rationale on the Soviet side was to reduce the pressure on their leaders to make a use-or-lose decision with their arsenal, so that rather than in something like a Petrov incident, where there was some indication of a launch, maybe some ambiguity about whether there was a genuine American first strike, but concern that their leadership in Moscow might be taken out, they could activate this system and trust that if there was in fact an American first strike that took out the leadership, there would still be a sufficient retaliation, instead of feeling like they had to rush to retaliate.

Countries are going to see this very differently, and that’s of course one of the challenges in thinking about stability: not to fall into the trap of mirror-imaging.

Ariel: This brings up actually two points that I have questions about. I want to get back to the stability concept in a minute but first, one of the things I’ve been reading a bit about is just this idea of perception and how one country’s perception of another country’s arsenal can impact how their own military development happens. I was curious if you could talk a little bit about how the US perceives Russia or China developing their weapons and how that impacts us and the same for those other two countries as well as other countries around the world. What impact is perception having on how we’re developing our military arsenals and especially our nuclear weapons? Especially if that perception is incorrect.

Paul: Yeah, I think the origins of the idea of nuclear stability really speak to this. The idea came out in the 1950s among American strategists when they were looking at the US nuclear arsenal in Europe, and they realized that it was vulnerable to a first strike by the Soviets, that American airplanes sitting on the tarmac could be attacked by a Soviet first strike and that might wipe out the US arsenal, and that knowing this, they might in a crisis feel compelled to launch their aircraft sooner, and that might actually incentivize them to use or lose, right? Use the aircraft, launch them, versus have them wiped out.

If the Soviets knew this, then that perception alone, that the Americans might launch their aircraft if things started to get heated, might incentivize the Soviets to strike first. Schelling has a quote about them striking us to prevent us from striking them to prevent them from striking us. This sort of gunslinger potential of everyone reaching for their guns to draw them first because someone else might do so: that’s not just a technical problem, it’s also one of perception, and so I think it’s baked right into this whole idea. It happens on slower time scales, when you look at arms race stability and arms race dynamics in countries, what they invest in, building more missiles, more bombers, because of the concern about the threat from someone else. But also, in a more immediate sense of crisis stability, the actions that leaders might take immediately in a crisis to maybe anticipate and prepare for what they fear others might do as well.

Mike: I would add on to that, that I think it depends a little bit on how accurate you think the information that countries have is. If you imagine your evaluation of a country is based classically on their capabilities and then their intentions. Generally, we think that you have a decent sense of a country’s capabilities and intentions are hard to measure. Countries assume the worst, and that’s what leads to the kind of dynamics that Paul is talking about.

I think the perception of other countries’ capabilities, I mean there’s sometimes a tendency to exaggerate the capabilities of other countries, people get concerned about threat inflation, but I think that’s usually not the most important programmatic driver. There’s been significant research now on the correlates of nuclear weapons development, and it tends to be security threats that are generally pretty reasonable in that you have neighbors or enduring rivals that actually have nuclear weapons, and that you’ve been in disputes with and so you decide you want nuclear weapons because nuclear weapons essentially function as invasion insurance, and that having them makes you a lot less likely to be invaded.

And that’s a lesson the United States by the way has taught the world over and over, over the last few decades you look at Iraq, Libya, et cetera. And so I think the perception of other countries’ capabilities can be important for your actual launch posture. That’s where I think issues like speed can come in, and where automation could come in maybe in the launch process potentially. But I think that in general, it’s sort of deeper issues that are generally real security challenges or legitimately perceived security challenges that tend to drive countries’ weapons development programs.

Paul: This issue of perception of intention in a crisis, is just absolutely critical because there is so much uncertainty and of course, there’s something that usually precipitates a crisis and so leaders don’t want to back down, there’s usually something at stake other than avoiding nuclear war, that they’re fighting over. You see many aspects of this coming up during the much-analyzed Cuban Missile Crisis, where you see Kennedy and his advisors both trying to ascertain what different actions that the Cubans or Soviets take, what they mean for their intentions and their willingness to go to war, but then conversely, you see a lot of concern by Kennedy’s advisors about actions that the US military takes that may not be directed by the president, that are accidents, that are slippages in the system, or friction in the system and then worrying that the Soviets over-interpret these as deliberate moves.

I think right there you see a couple of components where you could see automation and AI being potentially useful. One is reducing some of the uncertainty and information asymmetry: if you could find ways to use the technology to get a better handle on what your adversary was doing, their capabilities, the location and disposition of their forces and their intentions, peeling back some of the fog of war. The other is increasing command and control within your own forces. If you could tighten command and control, have forces that were more directly connected to the national leadership, and less opportunity for freelancing on the ground, there could be some advantages there, in that there’d be less opportunity for misunderstanding and miscommunication.

Ariel: Okay, so again, I have multiple questions that I want to follow up with and they’re all in completely different directions. I’m going to come back to perception because I have another question about that but first, I want to touch on the issue of accidents. Especially because during the Cuban Missile Crisis, we saw an increase in close calls and accidents that could have escalated. Fortunately, they didn’t, but a lot of them seemed like they could very reasonably have escalated.

I think it’s ideal to think that we can develop technology that can help us minimize these risks, but I kind of wonder how realistic that is. Something else that you mentioned earlier with tech being buggy, it does seem as though we have a bad habit of implementing technology while it is still buggy. Can we prevent that? How do you see AI being used or misused with regards to accidents and close calls and nuclear weapons?

Mike: Let me jump in here, I would take accidents and split it into two categories. The first are cases like the Cuban Missile Crisis where what you’re really talking about is miscalculation or escalation. Essentially, a conflict that people didn’t mean to have in the first place. That’s different I think than the notion of a technical accident, like a part in a physical sense, you know a part breaks and something happens.

Both of those are potentially important and both of those are potentially influenced by… AI interacts with both of those. If you think about challenges surrounding the robustness of algorithms, the risk of hacking, the lack of explainability, Paul’s written a lot about this, and that I think functions not exclusively, but in many ways on the technical accident side.

The miscalculation side, the piece of AI I actually worry about the most are not uses of AI in the nuclear context, it’s conventional deployments of AI, whether autonomous weapons or not, that speed up warfare and thus cause countries to fear that they’re going to lose faster because it’s that situation where you fear you’re going to lose faster that leads to more dangerous launch postures, more dangerous use of nuclear weapons, decision-making, pre-delegation, all of those things that we worried about in the Cold War and beyond.

I think the biggest risk from an escalation perspective, at least for my money, is actually the way that the conventional uses of AI could cause crisis instability, especially for countries that don’t feel very secure, that don’t think that their second strike capabilities are very secure.

Paul: I think that your question about accidents gets to really the heart of what do we mean by stability? I’m going to paraphrase from my colleague Elbridge Colby, who does a lot of work on nuclear issues and  nuclear stability. What you really want in a stable situation is a situation where war only occurs if one side truly seeks it. You don’t get an escalation to war or escalation of crises because of technical accidents or miscalculation or misunderstanding.

There could be multiple different kinds of causes that might lead you to war, and one of those might even be perverse incentives: a deployment posture, for example, that might lead you to say, “Well, I need to strike first because of a fear that they might strike me,” and you want to avoid that kind of situation. I think that there’s lots to be said for human involvement in all of these things, and I want to say right off the bat, humans bring to bear judgment and an ability to understand context that AI systems today simply do not have. At least we don’t see that in development based on the state of the technology today. Maybe it’s five years away, 50 years away, I have no idea, but we don’t see that today. I think that’s really important to say up front. Having said that, when we’re thinking about the way that these nuclear arsenals are designed in their entirety, the early warning systems, the way that data is conveyed throughout the system and presented to humans, the way the decisions are made, the way that those orders are then conveyed to launch delivery vehicles, it’s worth looking at new technologies and processes and saying, could we make it safer?

We have had a terrifying number of near misses over the years. No actual nuclear use because of accidents or miscalculation, but it’s hard to say how close we’ve been and this is I think a really contested proposition. There are some people that can look at the history of near misses and say, “Wow, we are playing Russian roulette with nuclear weapons as a civilization and we need to find a way to make this safer or disarm or find a way to step back from the brink.” Others can look at the same data set and say, “Look, the system works. Every single time, we didn’t shoot these weapons.”

I will just observe that we don’t have a lot of data points or a long history here, so I think there should be huge error bars on whatever we suggest about the future, and we have very little data at all about how people actually make decisions about false alarms in a crisis. We’ve had some instances where there have been false alarms, like the Petrov incident. There have been a few others, but we don’t really have a good understanding of how people would respond to that in the midst of a heated crisis like the Cuban Missile Crisis.

When you think about using automation, there are ways that we might try to make this entire socio-technical architecture of responding to nuclear crises and making a decision about reacting, safer and more stable. If we could use AI systems to better understand the enemy’s decision-making or the factual nature of their delivery platforms, that’s a great thing. If you could use it to better convey correct information to humans, that’s a good thing.

Mike: Paul, I would add, if you can use AI to buy decision-makers time, if essentially the speed of processing means that humans then feel like they have more time, which you know decreases their cognitive stress somehow, psychology would suggest, that could in theory be a relevant benefit.

Paul: That’s a really good point, and Thomas Schelling again talks about the real key role that time plays here, which is a driver of potentially rash actions in a crisis. Because, you know, if you have a false alert of your adversary launching a missile at you, which has happened a couple of times on both sides, at least two instances each on the American and Soviet sides, during the Cold War and immediately afterwards.

If you have sort of this false alarm but you have time to get more information, to call them on a hotline, to make a decision, then that takes the pressure off of making a bad decision. In essence, you want to sort of find ways to change your processes or technology to buy down the rate of false alarms and ensure that in the instance of some kind of false alarm, that you get kind of the right decision.

But you also would conversely want to increase the likelihood that if policymakers did make a rational decision to use nuclear weapons, that decision is actually conveyed, because that is of course part of the essence of deterrence: knowing that if you were to use these weapons, the enemy would respond in kind, and that’s what in theory deters use.

Mike: Right, what you want is no one to use nuclear weapons unless they genuinely mean to, but if they genuinely mean to, we want that to occur.

Paul: Right, because that’s what’s going to prevent the other side from doing it. There’s this paradox, what Scott Sagan refers to in his book on nuclear accidents as the “always/never dilemma”: that the weapons are always used when it’s intentional but never used by accident or miscalculation.

Ariel: Well, I’ve got to say I’m hoping they’re never used intentionally either. I’m not a fan, personally. I want to touch on this a little bit more. You’re talking about all these ways that the technology could be developed so that it is useful and does hopefully help us make smarter decisions. Is that what you see playing out right now? Is that how you see this technology being used and developed in militaries or are there signs that it’s being developed faster and possibly used before it’s ready?

Mike: I think in the nuclear realm, countries are going to be very cautious about using algorithms, autonomous systems, whatever terminology you want to use, to make fundamental choices or decisions about use. To the extent that there’s risk in what you’re suggesting, I think that those risks are probably, for my money, higher outside the nuclear enterprise simply because that’s an area where militaries I think are inherently a little more cautious, which is why if you had an accident, I think it would probably be because you had automated perhaps some element of the warning process and your future Petrovs essentially have automation bias. They trust the algorithms too much. That’s a question, they don’t use judgment as Paul was suggesting, and that’s a question of training and doctrine.

For me, it goes back to what I suggested before about how technology doesn’t exist in a vacuum. The risks to me depend on training and doctrine, in some ways, as much as on the technology itself. But actually, the nuclear weapons enterprise is an area where militaries in general will be a little more cautious than outside of the nuclear context, simply because the stakes are so high. I could be wrong though.

Paul: I don’t really worry too much that you’re going to see countries set up a process that would automate entirely the decision to use nuclear weapons. That’s just very hard to imagine. This is the most conservative area where countries will think about using this kind of technology.

Having said that, I would agree that there are lots more risks outside of the nuclear launch decision that could pertain to nuclear operations, or could be in the conventional space but have spillover to nuclear issues. Some of them could involve the use of AI in early warning systems, and then the automation bias risk: how that information is conveyed to people in a way that doesn’t capture the nuance of what the system is actually detecting, and the potential for accidents when people over-trust the automation. There are plenty of examples of humans over-trusting automation in a variety of settings.

But some of these could be just far afield, in things that are not military at all, right? Look at a technology like AI-generated deep fakes and imagine a world where now, in a crisis, someone releases a video or an audio clip of a national political leader making some statement, and that further inflames the crisis and perhaps introduces uncertainty about what someone might do. That’s actually really frightening; that could be a catalyst for instability, and it could be outside of the military domain entirely. Hats off to Phil Reiner, who works on these issues in California and who has sort of raised this one about deep fakes.

But I think that there’s a host of ways that you could see this technology raising concerns about instability that might be outside of nuclear operations.

Mike: I agree with that. I think the biggest risks here are from the way that a crisis, the use of AI outside the nuclear context, could create or escalate a crisis involving one or more nuclear weapons states. It’s less AI in the nuclear context, it’s more whether it’s the speed of war, whether it’s deep fakes, whether it’s an accident from some conventional autonomous system.

Ariel: That sort of comes back to a perception question that I didn’t get a chance to ask earlier and that is, something else I read is that there’s risks that if a country’s consumer industry or the tech industry is designing AI capabilities, other countries can perceive that as automatically being used in weaponry or more specifically, nuclear weapons. Do you see that as being an issue?

Paul: If you’re in general concerned about militaries importing commercially driven technology like AI into the military space and using it, I think it’s reasonable to think that militaries are going to look for technology to get advantages. The one thing that I would say might help calm some of those fears is that the best friend for someone who’s concerned about that is the slowness of the military acquisition process, which moves at a glacial pace and is actually a huge hindrance to a lot of technology adoption.

I think it’s valid to ask for any technology how its use would affect, positively or negatively, global peace and security, and if something looks particularly dangerous, to have a conversation about that. I think it’s great that there are a number of researchers in different organizations thinking about this. I think it’s great that FLI is, that you’ve raised this, but there are good people at RAND: Ed Geist and Andrew Lohn have written a report on AI and nuclear stability; Laura Saalman and Vincent Boulanin at SIPRI work on this, funded by the Carnegie Corporation; and Phil Reiner, who I mentioned a second ago, I blanked on his organization, it’s Technology for Global Security, is thinking about a lot of these challenges. I wouldn’t leap to assume that just because something is out there, that means that militaries are always going to adopt it. Militaries have their own strategic and bureaucratic interests at stake that are going to influence what technologies they adopt and how.

Mike: I would add to that, if the concern is that countries see US consumer and commercial advances and then presume there’s more going on than there actually is, maybe, but I think it’s more likely that countries like Russia and China and others think about AI as an area where they can generate potential advantages. These are countries that have trailed the American military for decades and have been looking for ways to potentially leap ahead or even just catch up. There are also more autocratic countries that don’t trust their people in the first place and so I think to the extent you see incentives for development in places like Russia and China, I think those incentives are less about what’s going on in the US commercial space and more about their desire to leverage AI to compete with the United States.

Ariel: Okay, so I want to shift slightly but also still continuing with some of this stuff. We talked about the slowness of the military to take on new acquisitions and transform, I think, essentially. One of the things that to me, it seems like we still sort of see and I think this is changing, I hope it’s changing, is treating a lot of military issues as though we’re still in the Cold War. When I say I’ve been reading stuff, a lot of what I’ve been reading has been coming from the RAND report on AI and nuclear weapons. And they talk a lot about bipolarism versus multipolarism.

If I understand this correctly, bipolarism is a bit more like what we saw with the Cold War where you have the US and allies versus Russia and whoever. Basically, you have that sort of axis between those two powers. Whereas today, we’re seeing more multipolarism where you have Russia and the US and China and then there’s also things happening with India and Pakistan. North Korea has been putting itself on the map with nuclear weapons.

I was wondering if you can talk a bit about how you see that impacting how we continue to develop nuclear weapons, how that changes strategy and what role AI can play, and correct me if I’m wrong in my definitions of multipolarism and bipolarism.

Mike: Sure. I mean, I think during the Cold War, when you talk about a bipolar nuclear situation, essentially what that reflects is that the United States and the then-Soviet Union had the only two nuclear arsenals that mattered: either the United States or the Soviet Union could essentially destroy any other country in the world, even after absorbing a hit from that country’s nuclear arsenal. Whereas since the end of the Cold War, you’ve had several other countries, including China, as well as India, Pakistan, and to some extent now North Korea, who have not just developed nuclear arsenals but developed more sophisticated nuclear arsenals.

That’s part of the ongoing debate in the United States (whether it’s even a debate is, I think, a question): whether the United States now is vulnerable to China’s nuclear arsenal, meaning the United States could no longer launch a first strike against China. In general, you’ve ended up in a more multipolar nuclear world, in part because I think the United States and Russia, for their own reasons, spent a few decades not really investing in their underlying nuclear weapons complexes, and I think the fear of a developing multipolar nuclear structure is one reason why the United States, under the Obama administration and then continuing in the Trump administration, has ramped up its efforts at nuclear modernization.

I think AI could play in here in some of the ways that we’ve talked about, but I think AI in some ways is not the star of the show. The star of the show remains the desire by countries to have secure retaliatory capabilities and on the part of the United States, to have the biggest advantage possible when it comes to the sophistication of its nuclear arsenal. I don’t know what do you think, Paul?

Paul: I think to me the way that the international system and the polarity, if you will, impact this issue mostly is that cooperation gets much harder when the number of actors that are needed to cooperate increases, when the “n” goes from 2 to 6 or 10 or more. AI is a relatively diffuse technology; while there are only a handful of actors internationally at the leading edge, this technology proliferates fairly rapidly, and so it will be widely available to many different actors to use.

To the extent that there are maybe some types of applications of AI that might be seen as problematic in the nuclear context, either in nuclear operations or related or incidental to them, it’s much harder to try to control that when you have to get more people on board and in agreement. For example, I’ll make this up: hypothetically, let’s say that there are only two global actors who could make deep fake high-resolution videos. You might say, “Listen, let’s agree not to do this in a crisis, or let’s agree not to do this for manipulative purposes to try to stoke a crisis.” When anybody could do it on a laptop, then forget about it, right? That’s a world we’ve got to live with.

You certainly see this historically when you look at different arms control regimes. There was a flurry of arms control during the Cold War, both bilateral agreements between the US and USSR and also multilateral ones that those two countries led, because you had a bipolar system. You saw attempts earlier in the 20th century to do arms control that collapsed because of some of these dynamics.

During the 1920s, the naval treaties governing the number and tonnage of battleships that countries could build collapsed because there was one defector, initially Japan, which thought it had gotten a raw deal in the treaty, and then others followed suit. We've seen this since the end of the Cold War with the collapse of the Anti-Ballistic Missile (ABM) Treaty, and now with the degradation of the INF Treaty, with Russia cheating on it and the treaty under threat, because both the United States and Russia are reacting to what other countries are doing. In the case of the ABM Treaty, the US was concerned about ballistic missile threats from North Korea and Iran and wanted to deploy limited missile defense systems; Russia was concerned that those defenses either were secretly aimed at them or might erode their retaliatory posture; and the US withdrew entirely from the ABM Treaty to be able to deploy them. That's one unraveling.

In the case of the INF Treaty, Russia is looking at what China, which is not a signatory to the INF Treaty, is building, and is now building missiles of its own that violate the treaty. That's a much harder dynamic when you have multiple countries at play and countries having to respond to security threats that may be diverse and asymmetric, coming from different actors.

Ariel: You’ve touched on this a bit already but especially with what you were just talking about and getting various countries involved and how that makes things a bit more challenging what specifically do you worry about if you’re thinking about destabilization? What does that look like?

Mike: I would say destabilization for 'whom' is the operative question, in that there's been a lot of empirical research now suggesting that the United States never really fully bought into mutually assured destruction. The United States gave lip service to the idea while still pursuing avenues for nuclear superiority, even during the Cold War, and in some ways, a United States that felt its nuclear deterrent was inadequate would probably be a United States that invested a lot more in capabilities one might view as destabilizing, if it perceived challenges from multiple different actors.

But I would tend to think about this in the context of individual pairs of states or small groups of states: essentially, China worries about America's nuclear arsenal, India worries about China's nuclear arsenal, and Pakistan worries about India's nuclear arsenal, and all of them would be terribly offended that I just said that. These relationships are complicated, and in some ways what generates instability is, I think, a combination of deteriorating political relations and a decreased feeling of security as the technological sophistication of potential adversaries' arsenals grows.

Paul: I think I’m less concerned about countries improving their arsenals or military forces over time to try to gain an edge on adversaries. I think that’s sort of a normal process that militaries and countries do. I don’t think it’s particularly problematic to be honest with you, unless you get to a place where the amount of expenditure is so outrageous that it creates a strain on the economy or that you see them pursuing some race for technology that once they got there, there’s sort of like a winner-take-all mentality, right, of, “Oh, and then I need to use it.” Whoever gets to nuclear weapons first, then uses nuclear weapons and then gains an upper hand.

That creates incentives, once you achieve the technology, for launching a preventive war, which I think would be very problematic. Otherwise, upgrading and improving your arsenal is, I think, a normal kind of behavior. I'm more concerned about how you either use technology beneficially or avoid certain kinds of applications of technology that might create risks of accidents and miscalculation in a crisis.

For example, as we’re seeing countries acquire more drones and deploy them in military settings, I would love to see an international norm against putting nuclear weapons on a drone, on an uninhabited vehicle. I think that it is more problematic from a technical risk standpoint, and a technical accident standpoint, than certainly using them on an aircraft that has a human on board or on a missile, which doesn’t have a person on board but is a one-way vehicle. It wouldn’t be sent on patrol.

While I think it’s highly unlikely that, say, the United States would do this, in fact, they’re not even making their next generation B-21 Bomber uninhabited-

Mike: Right, the US has actively moved to not do this, basically.

Paul: Right, US Air Force generals have spoken out repeatedly saying they want no part of such a thing. But we haven't seen the US voice this concern publicly in any formal way, and I actually think it could be beneficial to say it more concretely, for example in a speech by the Secretary of Defense, to signal to other countries, "Hey, we actually think this is a dangerous thing." I could imagine other countries having a different calculus, or seeing more capability advantages in using drones in this fashion, but I think that could be dangerous and harmful. That's just one example.

Automation bias is something I'm actually deeply concerned about. As we use AI in tools to gather information, and as the way these tools function becomes more complicated and more opaque to humans, you could run into a situation where people get a false alarm but have come to over-trust the automation. I think that's a huge risk, in part because you might not see it coming: people would say, "Oh, humans are in the loop. Humans are in charge, it's no problem." But in fact, we're conveying information to people in a way that leads them to surrender judgment to the machines, even if the automation is only being used in information collection and has nothing to do with nuclear decision-making.

Mike: I think that those are both right, though I think I may be skeptical in some ways about our ability to generate norms around not putting nuclear weapons on drones.

Paul: I knew you were going to say that.

Mike: Not because I think it's a good idea; it's clearly a bad idea. But the country it's the worst idea for is the United States.

Paul: Right.

Mike: If a North Korea, or an India, or a China thinks that they need that to generate stability, and that having that option makes them feel more secure, I think it will be hard to talk them out of it if their alternative would be, say, land-based silos that they think would be more vulnerable to a first strike.

Paul: Well, I think it depends on the country, right? I mean countries are sensitive at different levels to some of these perceptions of global norms of responsible behavior. Like certainly North Korea is not going to care. You might see a country like India being more concerned about sort of what is seen as appropriate responsible behavior for a great power. I don’t know. It would depend upon sort of how this was conveyed.

Mike: That’s totally fair.

Ariel: Man, I have to say, all of this is not making it clear to me why nuclear weapons are that beneficial in the first place. We don’t have a ton of time so I don’t know that we need to get into that but a lot of these threats seem obviously avoidable if we don’t have the nukes to begin with.

Paul: Let’s just respond to that briefly, so I think there’s two schools of thought here in terms of why nukes are valuable. One is that nuclear weapons reduce the risk of conventional war and so you’re going to get less state-on-state warfare, that if you had a world with no nuclear weapons at all, obviously the risk of nuclear armageddon would go to zero, which would be great. That’s not a good risk for us to be running.

Mike: Now the world is safe for major conventional war.

Paul: Right, but then you'd have more conventional war like we saw in World War I and World War II, and that led to tremendous devastation. So that's one school of thought. There's another one that basically says the only thing nuclear weapons are good for is to deter others from using nuclear weapons. That's what former Secretary of Defense Robert McNamara has said, and he's certainly by no means a radical leftist. There's certainly a strong school of thought among former defense and security professionals that getting to global zero would be good. But even if people agreed that's definitely where we want to go, and that it's maybe worth a trade-off of greater conventional war to take away the threat of armageddon, how you get there in a safe way is certainly not at all clear.

Mike: The challenge is that when you go down to lower numbers (we talked before about how the United States and Russia have had the most significant nuclear arsenals, both in numbers and in sophistication), the lower the numbers go, the more small differences matter, and so the arsenal of every nuclear power becomes important. And because countries don't trust each other, that could increase the risk that somebody essentially tries to gun for the number one spot as you get closer to zero.
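To make that point concrete, here is a rough, purely illustrative calculation (the numbers below are invented, not from the conversation): the same absolute gap in warheads that is marginal between arsenals in the thousands becomes decisive between arsenals in the low hundreds.

```python
# Illustrative arithmetic only: invented numbers showing why the same
# absolute advantage in warheads matters more as total arsenals shrink.

def relative_edge(gap: int, arsenal: int) -> float:
    """Express a fixed warhead gap as a fraction of an opponent's arsenal."""
    return gap / arsenal

GAP = 100  # hypothetical fixed advantage of 100 warheads

for arsenal in (6_000, 1_500, 300, 150):
    print(f"Opponent arsenal of {arsenal:>5,}: a {GAP}-warhead edge is "
          f"{relative_edge(GAP, arsenal):.0%} of that force")
```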

Paul: Right.

Ariel: I guess one of the things that isn't obvious to me: even if we're not aiming for zero, let's say we're aiming to decrease the number of nuclear weapons globally to the hundreds, rather than the roughly 15,000 we have at the moment. I worry that a lot of the advancing technology we're seeing with AI and automation (though maybe this would be happening anyway) is also driving the push for modernization, and so we're seeing modernization happening rather than a decrease in weapons.

Mike: On the drive for modernization, I think you're right to point that out as a trend. Part of it is simply the age of the arsenals and their components for some countries, including the United States. You have components designed to have a lifespan of, say, 30 years that have been used for 60 years, and the people who built some of those components in the first place have now mostly passed away. It's even hard to build some of them again.

I think it’s totally fair to say that emerging technologies including AI could play a role in shaping modernization programs. Part of the incentive for it I think has simply to do with a desire for countries, including but not limited to the United States, to feel like their arsenals are reliable, which gets back to perception, what you raised before, though that’s self-perception in some ways more than anything else.

Paul: I think Mike’s right that reliability is what’s motivating modernization, primarily, right? It’s a concern that these things are aging, they might not work. If you’re in a situation where it’s unclear if they might work, then that could actually reduce deterrents and create incentives for others to attack you and so you want your nuclear arsenal to be reliable.

There’s probably a component of that too, that as people are modernizing, trying to seek advantage over others. I think it’s worth it when you take a step back and look at where we are today, with sort of this legacy of the Cold War and the nuclear arsenals that are in place, how confident are we in mutual deterrence not leading to nuclear war in the future? I’m not super confident, I’m sort of in the camp of when you look at the history of near-miss accidents is pretty terrifying and there’s probably a lot of luck at play.

From my perspective, as we think about going forward, on the one hand there's an argument for "let it all go to rust," and if you could get all countries to do that collectively, maybe there'd be big advantages there. If that's not possible, and countries are modernizing their arsenals for the sake of reliability, then maybe take a step back and think about how to redesign these systems to be more stable, to increase deterrence, and to reduce the risk of false alarms and accidents overall, sort of "soup to nuts" when you're looking at the architecture.

I do worry that that's not a major feature when countries look at modernization. They're thinking about increasing the reliability of their systems working, the "always" component of the "always/never" dilemma, and about getting an advantage over others, but there may not be enough thought going into the "never" component: how do we ensure that we continue to buy down the risk of accidents or miscalculation?

Ariel: The other thing I would add that isn't obvious to me is: if we're modernizing our arsenals so that they are better, why doesn't that also mean smaller? Because we don't need 15,000 nuclear weapons.

Mike: I think there are actually people out there who view effective modernization as something that could enable reductions. Some of that depends on politics and on other international relations issues, but I certainly think it's plausible that the end result of modernization could make countries feel more confident about nuclear reductions, all other things being equal.

Paul: I mean there’s certainly, like the US and Russia have been working slowly to reduce their arsenals with a number of treaties. There was a big push in the Obama Administration to look for ways to continue to do so but countries are going to want these to be mutual reductions, right? Not unilateral.

At a certain level, as the US and Russian arsenals come down, you're going to get tied into what China is doing, with the size of its arsenal becoming relevant, and you're also going to get tied into other strategic concerns for some of these countries when it comes to other technologies like space-based weapons, anti-satellite weapons, or hypersonic weapons. The negotiations become more complicated.

That doesn’t mean that they’re not valuable or worth doing, because while the stability should be the goal, having fewer weapons overall is helpful in the sense of if there is a God forbid, some kind of nuclear exchange, there’s just less destructive capability overall.

Ariel: Okay, and I’m going to end it on that note because we are going a little bit long here. There are quite a few more questions that I wanted to ask. I don’t even think we got into actually defining what AI on nuclear weapons looks like, so I really appreciate you guys joining me today and answering the questions that we were able to get to.

Paul: Thank you.

Mike: Thanks a lot. Happy to do it and happy to come back anytime.

Paul: Yeah, thanks for having us. We really appreciate it.

[end of recorded material]

Podcast: Artificial Intelligence – Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins

Experts predict that artificial intelligence could become the most transformative innovation in history, eclipsing both the development of agriculture and the industrial revolution. And the technology is developing far faster than the average bureaucracy can keep up with. How can local, national, and international governments prepare for such dramatic changes and help steer AI research and use in a more beneficial direction?

On this month’s podcast, Ariel spoke with Allan Dafoe and Jessica Cussins about how different countries are addressing the risks and benefits of AI, and why AI is such a unique and challenging technology to effectively govern. Allan is the Director of the Governance of AI Program at the Future of Humanity Institute, and his research focuses on the international politics of transformative artificial intelligence. Jessica is an AI Policy Specialist with the Future of Life Institute, and she’s also a Research Fellow with the UC Berkeley Center for Long-term Cybersecurity, where she conducts research on the security and strategy implications of AI and digital governance.

Topics discussed in this episode include:

  • Three lenses through which to view AI’s transformative power
  • Emerging international and national AI governance strategies
  • The risks and benefits of regulating artificial intelligence
  • The importance of public trust in AI systems
  • The dangers of an AI race
  • How AI will change the nature of wealth and power

Papers and books discussed in this episode include:

  • Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
  • AI Governance: A Research Agenda by Allan Dafoe, Governance of AI Program, Future of Humanity Institute, University of Oxford

You can listen to the podcast above and read the full transcript below. You can check out previous podcasts on SoundCloud, iTunes, GooglePlay, and Stitcher.


Ariel: Hi there, I'm Ariel Conn with the Future of Life Institute. As we record and publish this podcast, diplomats from around the world are meeting in Geneva to consider whether to negotiate a ban on lethal autonomous weapons. Since this is a technology designed to kill people, it's no surprise that countries would consider regulating or banning these weapons, but what about all other aspects of AI? While most, if not all, AI researchers are designing the technology to improve health, ease strenuous or tedious labor, and generally improve our well-being, most researchers also acknowledge that AI will be transformative, and if we don't plan ahead, those transformations could be more harmful than helpful.

We’re already seeing instances in which bias and discrimination have been enhanced by AI programs. Social media algorithms are being blamed for impacting elections; it’s unclear how society will deal with the mass unemployment that many fear will be a result of AI developments, and that’s just the tip of the iceberg. These are the problems that we already anticipate and will likely arrive with the relatively narrow AI we have today. But what happens as AI becomes even more advanced? How can people, municipalities, states, and countries prepare for the changes ahead?

Joining us to discuss these questions are Allan Dafoe and Jessica Cussins. Allan is the Director of the Governance of AI program at the Future of Humanity Institute, and his research focuses on the international politics of transformative artificial intelligence. His research seeks to understand the causes of world peace, particularly in the age of advanced artificial intelligence.

Jessica is an AI Policy Specialist with the Future of Life Institute, where she explores AI policy considerations for near and far term. She’s also a Research Fellow with the UC Berkeley Center for Long-term Cybersecurity, where she conducts research on the security and strategy implications of AI and digital governance. Jessica and Allan, thank you so much for joining us today.

Allan: Pleasure.

Jessica: Thank you, Ariel.

Ariel: I want to start with a quote, Allan, that’s on your website and also on a paper that you’re working on that we’ll get to later, where it says, “AI will transform the nature of wealth and power.” And I think that’s sort of at the core of a lot of the issues that we’re concerned about in terms of what the future will look like and how we need to think about what impact AI will have on us and how we deal with that. And more specifically, how governments need to deal with it, how corporations need to deal with it. So, I was hoping you could talk a little bit about the quote first and just sort of how it’s influencing your own research.

Allan: I would be happy to. We can think of this as a proposition that may or may not be true, and I think we could easily spend the entire time talking about the reasons why we might think it's true and the character of it. One way to motivate it, as has been the case for many people, is to consider that it's plausible that artificial intelligence will at some point be human-level in a general sense, and to recognize that that would have profound implications. You can start there; for example, if you were to read Superintelligence by Nick Bostrom, you start at some point in the future and reflect on how profound this technology would be. But I think you can also motivate this with a much more near-term perspective, thinking of AI in a more narrow sense.

So, I will offer three lenses for thinking about AI, and then I'm happy to discuss it more. The first lens is that of general purpose technology. Economists and others have looked at AI and seen that it seems to fit the category of general purpose technology: classes of technologies that provide a crucial input to many important processes (economic, political, military, and social) and are likely to generate complementary innovations in other areas. General purpose technologies are also often used as a concept to explain economic growth, so you have things like the railroad or steam power or electricity or the motor vehicle or the airplane or the computer, which seem to change these processes that are important, again, for the economy or for society or for politics in really profound ways. And I think it's very plausible that artificial intelligence not only is a general purpose technology, but is perhaps the quintessential general purpose technology.

In a way that sounds like a mundane statement: general purpose, it will infuse throughout the economy and political systems. But it's also quite profound, because when you think about it, it's like saying it's the core innovation that generates a technological revolution. We could say a lot about that, and maybe, just to give a bit more color, I think Kevin Kelly has a nice quote where he says, "Everything that we formerly electrified, we will now cognitize. There's almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ." We could say a lot more about general purpose technologies and why they're so transformative to wealth and power, but I'll move on to the other two lenses.

The second lens is to think about AI as an information and communication technology. You might think this is a subset of general purpose technologies. So, other technologies in that reference class would include the printing press, the internet, and the telegraph. And these are important because they change, again, sort of all of society and the economy. They make possible new forms of military, new forms of political order, new forms of business enterprise, and so forth. So we could say more about that, and those have important properties related to inequality and some other characteristics that we care about.

But I’ll just move on to the third lens, which is that of intelligence. So, unlike every other general purpose technology, which applied to energy, production, or communication or transportation, AI is a new kind of general purpose technology. It changes the nature of our cognitive processes, it enhances them, it makes them more autonomous, generates new cognitive capabilities. And I think it’s that lens that makes it seem especially transformative. In part because the key role that humans play in the economy is increasingly as cognitive agents, so we are now building powerful complements to us, but also substitutes to us, and so that gives rise to the concerns about labor displacement and so forth. But also innovations in intelligence are hard things to forecast how they will work and what those implications will be for everything, and so that makes it especially hard to sort of see what’s through the mist of the future and what it will bring.

I think there’s a lot of interesting insights that come from those three lenses, but that gives you a sense of why AI could be so transformative.

Ariel: That’s a really nice introduction to what we want to talk about, which is, I guess, okay so then what? If we have this transformative technology that’s already in progress, how does society prepare for that? I’ve brought you both on because you deal with looking at the prospect of AI governance and AI policy, and so first, let’s just look at some definitions, and that is, what is the difference between AI governance and AI policy?

Jessica: So, I think that there are no firm boundaries between these terms. There’s certainly a lot of overlap. AI policy tends to be a little bit more operational, a little bit more finite. We can think of direct government intervention more for the sake of public service. I think governance tends to be a slightly broader term, can relate to industry norms and principles, for example, as well as government-led initiatives or regulations. So, it could be really useful as a kind of multi-stakeholder lens in bringing different groups to the table, but I don’t think there’s firm boundaries between these. I think there’s a lot of interesting work happening under the framework of both, and depending on what the audience is and the goals of the conversation, it’s useful to think about both issues together.

Allan: Yeah, and to that I might just add that governance has a slightly broader meaning. Whereas policy often connotes policies that companies or governments develop intentionally and deploy, governance refers to those but also to unintended policies, institutions, norms, and latent processes that shape how the phenomenon develops: how AI develops and how it's deployed, everything from public opinion to the norms we set up around artificial intelligence and emergent policies or regulatory environments. All of that you can group within governance.

Ariel: One more term that I want to throw in here is the word regulation, because a lot of times, as soon as you start talking about governance or policy, people start to worry that we’re going to be regulating the technology. So, can you talk a little bit about how that’s not necessarily the case? Or maybe it is the case.

Jessica: Yeah, I think what we’re seeing now is a lot of work around norm creation and principles of what ethical and safe development of AI might look like, and that’s a really important step. I don’t think we should be scared of regulation. We’re starting to see examples of policies come into place. A big important example is the GDPR that we saw in Europe that regulates how data can be accessed and used and controlled. We’re seeing increasing examples of these kinds of regulations.

Allan: Another perspective on these terms is that in a way, regulation is a subset, a very small subset, of what governance consists of. So regulation might be especially deliberate attempts by government to shape market behavior or other kinds of behavior, and clearly regulation is sometimes not only needed, but essential for safety and to avoid market failure and to generate growth and other sorts of benefits. But regulation can be very problematic, as you sort of alluded to, for a number of reasons. In general, with technology — and technology’s a really messy phenomenon — it’s often hard to forecast what the next generation of technology will look like, and it’s even harder to forecast what the implications will be for different industries, for society, for political structures.

And so because of that, designing regulation can often fail. It can be misapplied to an older understanding of the technology. Often, the formation of regulation may not be done with a really state-of-the-art understanding of what the technology consists of, and then because technology, and AI in particular, is often moving so quickly, there's a risk that regulation is out of date by the time it comes into play. So, there are real risks of regulation, and I think a lot of policymakers are aware of that. But markets do fail, and new technologies have really profound impacts not only on consumer safety, fairness, and other ethical concerns, but also, as I'm sure we'll get to, on things like the possibility that AI will increase inequality within countries, between people, between countries, and between companies. It could generate oligopolistic or monopolistic market structures. So there are these really big challenges emerging from how AI is changing the market and how society should respond, and regulation is an important tool there, but it needs to be done carefully.

Ariel: So, you’ve just brought up quite a few things that I actually do want to ask about. I think the first one that I want to go to is this idea that AI technology is developing a lot faster than the pace of government, basically. How do we deal with that? How do you deal with the fact that something that is so transformative is moving faster than a bureaucracy can handle it?

Allan: This is a very hard question. We can introduce a useful concept from economics, and that is the externality. An externality arises when a transaction between two market actors (I buy a product from a seller) impacts a third party: maybe we produce pollution, or I produce noise, or I deplete some resource, or something like that. And policy often should focus on externalities; those are the sources of market failure. Negative externalities are the ones, like pollution, that you want to tax or restrict or address, and positive externalities, like innovation, are ones you want to promote, subsidize, and encourage. So one way to think about how policy should respond to AI is to look at the character of the externalities.

If the externalities are local and the relevant stakeholder community is local, then I think a good general policy is to devolve authority to the lowest level that you can, so you want municipalities or even smaller groups to implement different regulatory environments. The purpose of that is not only so that the regulatory environment is adapted to local preferences, but also so that you generate experimentation. Maybe one community uses AI in one way and another employs it in another way, and then over time, we'll start seeing which approaches work better than others. So, as long as the externalities are local, that's, I think, what we should do.

However, many of these externalities are at least national, but most of them actually seem to be international. Then it becomes much more difficult. So, if the externalities are at the country level, then you need country level policy to optimally address them, and then if they’re transnational, international, then you need to negotiate with your neighbors to converge on a policy, and that’s when you get into much greater difficulty because you have to agree across countries and jurisdictions, but also the stakes are so much greater if you get the policy wrong, and you can’t learn from the sort of trial and error of the process of local regulatory experimentation.

Jessica: I just want to push back a little bit on this idea. If we take regulation out of it for a second and think about the speed at which AI research and policy development are happening: AI research is a human endeavor, so there are people making decisions and institutions involved that rely upon existing power structures. Policy is already embedded in it, and there are political and ethical decisions in the way that we're choosing to design and build this technology from the get-go. All of that is to say that thinking about policy and ethics as part of that design process is really useful, rather than always treating them as opposing factors.

One of the things that can really help in this is just improving those communication channels between technologists and policymakers so there isn’t such a wide gulf between these worlds and these conversations that are happening and also bringing in social scientists and others to join in on those conversations.

Allan: I agree.

Ariel: I want to take some of these ideas and look at where we are now. Jessica, you put together a policy resource that covers a lot of efforts being made internationally looking at different countries, within countries, and then also international efforts, where countries are working together to try to figure out how to address some of these AI issues that will especially be cropping up in the very near term. I was wondering if you could talk a little bit about what the current state of AI policy is today.

Jessica: Sure. So this is available publicly. This is futureoflife.org/ai-policy. It’s also available on the Future of Life homepage. And the idea here is that this is a living resource document, so this is being updated regularly and it’s mapping AI policy developments as they’re happening around the world, so it’s more of an empirical exercise in that way, kind of seeing how different groups and institutions, as well as nations, are framing and addressing these challenges. So, in most cases, we don’t have concrete policies on the ground yet, but we do have strategies, we have frameworks for addressing these challenges, and so we’re mapping what’s happening in that space and hoping that it encourages transparency and also collaboration between actors, which we think is important.

There are three complementary components that make up this resource. The first one is a map of national and international strategies, and that includes 27 countries and 6 international initiatives. The second is a compilation of AI policy challenges, broken down into 14 different issues, ranging from economic impacts and technological unemployment to issues like surveillance and privacy or political manipulation and computational propaganda, and if you click on each of these challenges, it links you to relevant policy principles and recommendations. So, the idea is that if you're a policymaker or you're interested in this, you actually have some guidance on what people in the field are thinking about as ways to address these challenges.

And then the third resource there is a set of reading lists. There are dozens of papers, reports, and articles that are relevant to AI policy debates. We have seven different categories here that include things like AI policy overviews or papers that delve into the security and existential risks of AI. So, this is a good starting place if you’re thinking about how to get involved in AI policy discussions.

Ariel: Can you talk a little bit about some of maybe the more interesting programs that you’ve seen developing so far?

Jessica: So, the U.S. is really interesting right now. There have been some recent developments. The 2019 National Defense Authorization Act was just signed last week by President Trump, and this actually made official a new National Security Commission on Artificial Intelligence. So we're seeing the beginnings of a national strategy for AI within the U.S. through these kinds of developments, which don't really resemble what's happening in other countries. This is part of the defense department and much more tailored to national defense and national security, so there are going to be 15 commission members looking at a range of different issues, but particularly how they relate to national defense.

We also have a new Joint AI Center in the DoD that will be looking at an ethical framework, but one for defense technologies using AI. Compare this kind of focus to what we've seen in France, for example: they have a national strategy for AI called AI for Humanity, and there's a lengthy report that goes into numerous different kinds of issues; they're talking about ecology and sustainability, about transparency, with much more of a focus on state-led development, pushing back against the idea that we can just leave this to the private sector to figure out, which is really where the U.S. is going in terms of the consumer uses of AI. Trump's priority is to remove regulatory barriers as they relate to AI technology, so France is markedly different; they want to push back against company control of data and the uses of these technologies. So, that's an interesting difference we're seeing.

Allan: I would like to add that I think Jessica's overview of global AI policy looks like a really useful resource. There are a lot of links to most of the key readings that I think you'd want to direct someone to, so I really recommend people check that out. And then specifically, I just want to respond to this remark Jessica made about the U.S. approach of giving companies more of a free rein in developing AI versus the French approach, especially well articulated by Macron in his Wired interview. The insight there is that you're unlikely to be able to develop AI successfully if you don't have the trust of important stakeholders, and that mostly means the citizens of your country.

And I think Facebook has realized that and is working really hard to regain the trust of citizens and users. In general, if AI products are being deployed in an ecosystem where people don't trust them, that's going to handicap the deployment of those AI services. There will be barriers to their use, and there will be opposing regulation that is not necessarily the most efficient way of generating AI that's fair or safe or respects privacy. So this conversation among governmental authorities, the public, NGOs, researchers, and companies, about what good AI is, what norms we should expect from AI, and how we communicate that between the public and the developers of AI, is really important, and it's against U.S. national interests not to have that conversation and not develop that trust.

Ariel: I’d actually like to stick with this subject for a minute because trust is something that I find rather fascinating, actually. How big a risk is it, do you think, that the public could decide, “We just don’t trust this technology and we want it to stop,” and if they did decide that, do you think it would actually stop? Or do you think there’s enough government and financial incentive to continue promoting AI that the public trust may not be as big a deal as it has been for some other technologies?

Jessica: I certainly don’t think that there’s gonna be a complete stop from the companies that are developing this technology, but certainly responses from the public and from their employees can shift behavior. At Google, we’re seeing at Amazon that protests from the employees can lead to changes. So in the case of Google, the employees were upset about the involvement with the U.S. military on Project Maven and didn’t want their technology to be used in that kind of weaponized way, and that led Google to publish their own AI ethics principles, which included specifically that they would not renew that contract and that they would not pursue autonomous weapons. There is certainly a back and forth that happens between the public, between employees of companies and where the technology is going. I think we should feel empowered to be part of that conversation.

Allan: Yeah, I would just second that. Investments in AI research and development will not stop, certainly not globally, but there are still a lot of interests that could be substantially harmed by a breakdown in trust, including the public interest in the development of valuable AI services and growth. AI services really depend on trust. You see this with the big AI companies that rely on having a large user base and generating a lot of data: the algorithms often depend on lots of user interaction and a large user base to do well, and that only works if users are willing to share their data, if they trust that their data is protected and being used appropriately, and if there are not political movements that, inefficiently or against the public interest, prevent the accumulation and use of data.

So, that’s one of the big areas, but I think there are a lot of other ways in which a breakdown in trust would harm the development of AI. It will make it harder for start ups to get going. Also, as Jessica mentioned, I think AI researchers are, they’re not just in it for the money. A lot of them have real political convictions, and if they don’t feel like their work is doing good or if they have ethical concerns with how their work is being used, they are likely to switch companies or express their concerns internally as we saw at Google. I think this is really crucial for a country from the national interest perspective. If you want to have a healthy AI ecosystem, you need to develop a regulatory environment that works but also have relationships with key companies and the public that are informed and sort of stays within the bounds of the public interest in terms of all of the range of ethical and other concerns they would have.

Jessica: Two quick additional points on this issue of trust. The first is that policymakers should not assume that the public will necessarily trust their reaction and their approach to dealing with this, and there are differences in public policy processes that can enable greater trust. For example, I think there's a lot to learn from the way that France went about developing their strategy. It took place over the course of a year with hundreds of interviews, extremely consultative with members of the public, and that really encourages buy-in from a range of stakeholders, which I think is important if we're going to establish policies that stick around: having that buy-in not only from industry but also from the publics that are implicated and impacted by these technologies.

A second point is the importance of norms in creating cultures of trust. I don't want to overstate this: norms are a first step, and we also need monitoring, we need accountability, we need ways to actually check that these norms aren't just disappearing into the ether but are being upheld in some way. That being said, they are an important first step. Things like the Asilomar AI Principles came out of a very consultative process: they were developed by a large number of people and iterated upon, and only those that had quite a lot of consensus made it into the final principles. We've seen thousands of people sign on to those, and we've seen them referenced around the world, so those kinds of initiatives are important in helping to establish frameworks of trust.

Ariel: While we’re on this topic, you’ve both been sort of getting into roles of different stakeholders in developing policy and governance, and I’d like to touch on that more explicitly. We have, obviously governments, we have corporations, academia, NGOs, individuals. What are the different roles that these different stakeholders play and do you have tips for how these different stakeholders can try to help implement better and more useful policy?

Allan: Maybe I’ll start and then turn it over to Jessica for the comprehensive answer. I think there’s lots of things that can be said here, and really most actors should be involved in multiple ways. The one I want to highlight is I think the leading AI companies are in a good position to be leaders in shaping norms and best practice and technical understanding and recommendations for policies and regulation. We’re actually quite fortunate that many of them are doing an excellent job with this, so I’ll just call out one that I think is commendable in the extent to which it’s being a good corporate citizen and that’s Alphabet. I think they’ve developed their self-driving car technology in the right way, which is to say, carefully. Their policies towards patents is, I think, more in the public interest and that is that they oppose offensive patent litigation and have really sort of invested in opposing that. You can also tell a business case story for why they would do that. I think they’ve supported really valuable AI research that otherwise groups like FLI or other sort of public interest funding sources would want to support. To example, I’ll offer Chris Olah, in Google Brain, who has done work on transparency and legibility of neural networks. This is highly technical but also extremely important for safety in the near and long-term. This is the kind of thing that we’ll need to figure out to have confidence that really advanced AI is safe and working in our interest, but also in the near-term for understanding things like, “Is this algorithm fair or what was it doing and can we audit it?”

And then one other researcher I would flag, also at Google Brain, is Moritz Hardt, who has done some excellent work on fairness. So here you have Alphabet supporting AI researchers who are doing what I think is frontier work on the ethics of AI and developing technical solutions. And then of course, Alphabet's been very good with user data, and in particular, DeepMind, I think, has been a real leader in safety, ethics, and AI for good. The reason I'm saying this is because I think we should develop a strong norm that says, "Companies who are the leading beneficiaries of AI services in terms of profit have a social responsibility to exemplify best practice." We should call out the ones who are doing a good job as well as the ones that are not, and encourage those that are falling short to do better, first through norms and then later through other instruments.

Jessica: I absolutely agree with that. I think that we are seeing a lot of leadership from companies and small groups as well, not just the major players. Just a couple of days ago, an AI marketing company released an AI ethics policy and said, "Actually, we think every AI company should do this, and we're going to start by saying that we won't use negative emotions to exploit people, for example, and that we're going to take action to avoid prejudice and bias." I think these are really important ways to establish best practices, exactly as you said.

The only other thing I would say is that more than other technologies in the past, AI is really being led by a small handful of companies at the moment in terms of the major advances. So I think that we will need some external checks on some of the processes that are happening. If we kind of analyze the topics that come up, for example, in the AI ethics principles coming from companies, not every issue is being talked about. I think there certainly is an important role for governments and academia and NGOs to get involved and point out those gaps and help kind of hold them accountable.

Ariel: I want to transition now a little bit to talk about, Allan, some of the work that you are doing at the Governance of AI Program. You also have a paper that I believe will be live when this podcast goes live. I'd like you to talk a little bit about what you're doing there and also maybe look at this transition of how we go from governance of the narrow AI that we have today to how we deal with more advanced AI in the future.

Allan: Great. So the Governance of AI Program is a unit within the Future of Humanity Institute at the University of Oxford. The Future of Humanity Institute was founded by Nick Bostrom, who is the Director and also the author of Superintelligence, so you can see a little bit from that why we're situated there. The Future of Humanity Institute is full of really excellent scholars thinking about big issues, as the title would suggest, and many of them have converged on AI as an important phenomenon to think through for the highest-stakes considerations. Almost no matter what is important to you, over the time scale of, say, four decades, and certainly further into the future, AI seems like it will be really important for realizing or failing to realize those things.

So, we are primarily focused on the highest-stakes governance challenges arising from AI, and that's often what we're indicating when we talk about transformative AI: we're really trying to focus on the kinds of AI developments, maybe several decades in the future, that will radically transform wealth and power and safety and world order and other values. However, I think you can motivate a lot of this work by looking at near-term AI, so we could talk about a lot of developments in near-term AI and how they suggest the possibilities for really transformative impacts. I'll talk through a few of those, or just mention a few.

One that we’ve touched on a little bit is labor displacement and inequality. This is not science fiction to talk about the impact of automation and AI on inequality. Economists are now treating this as a very serious hypothesis, and I would say the bulk of belief within the economics community is that AI will at least pose displacement challenges to labor, if not more serious challenges in terms of persistent unemployment.

Second is the issue of inequality: there are a number of features of AI that seem like they could increase inequality. The main one that I'll talk about is that digital services in general, but AI in particular, have what seems like a natural global monopoly structure. And this is because the provision of an AI service, like a digital service, often has a very low marginal cost; it's effectively free for Netflix to give me a movie. In a market like that, for Netflix or for Google Search or for Amazon e-commerce, the competition is all in the fixed cost of developing the really good AI "engine," and then whoever develops the best one can outcompete and capture the whole market. And then the size of the market really depends on whether there's cultural or consumer heterogeneity.
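To make the cost structure Allan is describing concrete, here is a minimal sketch with invented numbers (purely illustrative, not figures from the episode): with a large fixed development cost and a near-zero marginal cost per user, the average cost per user falls steeply with scale, which is why the provider with the best "engine" and the largest user base can undercut any smaller rival.

```python
# Illustrative only: invented numbers showing why a high fixed cost plus a
# near-zero marginal cost favors a single dominant provider.

def average_cost_per_user(fixed_cost: float, marginal_cost: float, users: int) -> float:
    """Average cost per user = (fixed development cost + marginal cost * users) / users."""
    return (fixed_cost + marginal_cost * users) / users

FIXED_COST = 1_000_000_000  # hypothetical cost of building the AI "engine"
MARGINAL_COST = 0.01        # hypothetical cost of serving one more user

for users in (1_000_000, 10_000_000, 100_000_000, 1_000_000_000):
    cost = average_cost_per_user(FIXED_COST, MARGINAL_COST, users)
    print(f"{users:>13,} users -> ${cost:,.2f} average cost per user")

# Whoever serves the most users has the lowest average cost, so the best
# engine tends to capture the whole market, as described in the discussion.
```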

All of this is to say that we see these AI giants: the three in China and the handful in the U.S. Europe, for example, is really concerned that it doesn't have an AI giant, and is wondering how it can produce an AI champion. And it's plausible that a combination of factors means it's actually going to be very hard for Europe to generate the next AI champion. So this has important geopolitical implications, economic implications, implications for the welfare of citizens in these countries, and implications for tax.

Everything I’m saying right now is really, I think, motivated by near-term and quite credible possibilities. We can then look to other possibilities, which seem more like science fiction but are happening today. For example, the possibilities around surveillance and control from AI and from autonomous weapons, I think, are profound. So, if you have a country or any authority, that could be a company as well, that is able to deploy surveillance systems that can be surveilling your online behavior, for example your behavior on Facebook or your behavior at the workplace. When I leave my chair, if there’s a camera in my office, it can watch if I’m working and what I’m doing, and then of course my behavior in public spaces and elsewhere, then the authority can really get a lot of information on the person who’s being surveilled. And that could have profound implications for the power relations between governments and publics or companies and publics.

And this is the fundamental problem of politics: how do you build this leviathan, this powerful organization, so that it doesn't abuse its power? In many countries we've done pretty well at developing institutions to discipline the leviathan so that it doesn't abuse its power, but AI is now providing dramatically more powerful surveillance and coercion tools. At the least, that could enable leaders of totalitarian regimes to reinforce their control over their countries. More worryingly, it could lead to authoritarian sliding in countries that are less robustly democratic, and even countries that are pretty democratic might worry about how it will shift power between different groups. That's another issue area where the stakes are tremendous, but we're not invoking radical advances in AI to get there.

And there’s actually some more that we could talk about, such as strategic stability, but I’ll skip it. Those are sort of all the challenges from near-term AI — AI as we see it today or likely it’s going to be coming in five years. But AI’s developing quickly, and we really don’t know how far it could go, how quickly. And so it’s important to also think about surprises. Where might we be in 10, 15, 20 years? And this is obviously very difficult, but I think, as you’ve mentioned, because it’s moving so quickly, it’s important that some people, scholars and policymakers, are looking down the tree a little bit farther to try to anticipate what might be coming and what we could do today to steer in a better direction.

So, at the Governance of AI Program, we work on every aspect of the development, deployment, regulation, and norms around AI that we see as bearing on the highest-stakes issues. And this document that you mentioned, entitled AI Governance: A Research Agenda, is an attempt to articulate the space of issues that people could be working on that we see as potentially touching on these high-stakes issues.

Ariel: One area that I don’t think you mentioned that I would like to ask about is the idea of an AI race. Why is that a problem, and what can we do to try to prevent an AI race from happening?

Allan: There’s this phenomenon that we might call the AI race, which has many layers and many actors, and this is the phenomenon where actors (those could be an AI researcher, they could be a lab, they could be a firm, they could be a country or even a region like Europe) perceive that they need to work really hard, invest resources, and move quickly to gain an advantage in AI — in AI capabilities, in AI innovations, deploying AI systems, entering a market — because if they don’t, they will lose out on something important to them. So, that could be, for the researchers, it could be prestige, right? “I won’t get the publication.” For firms it could be both prestige and maybe financial support. It could be a market. You might capture or fail to capture a really important market.

And then for countries, there's a whole host of motivations, everything from making sure there are industries in the country for their workers to having companies that pay tax revenue. The idea is that if we have an AI champion, then we will have more taxable revenue but also other advantages: there'll be more employment, and maybe we can have a good relationship with that champion that will help us in other policy domains. And then, of course, there are the military considerations: if AI becomes an important complement to other military technologies, or even a crucial technology in itself, then countries worry about falling behind and being inferior, and are always looking towards what might be the next source of advantage. So, that's another driver of this sense that countries want to not fall behind and to get ahead.

Jessica: We’re seeing competing interests at the moment. There are nationalistic kinds of tendencies coming up. We’re seeing national strategies emerging from all over the world, and there’s really strong economic and military motivations for countries to take this kind of stance. We’ve got Russian President Vladimir Putin telling students that whoever leads artificial intelligence will be the ruler of the world. We’ve got China declaring a national policy that they intend to be the global leader in AI by 2030, and other countries as well. Trump has said that he intends for the U.S. to be the global leader. The U.K. has said similar things.

So, there’s a lot of that kind of rhetoric coming from nations at the moment, and they do have economic and military motivations to say that. They’re competing for a relatively small number of AI researchers and a restricted talent pool, and everybody’s searching for that competitive advantage. That being said, as we see AI develop, particularly from more narrow applications to potential more generalized ones, the need for international cooperation, as well as more robust safety and reliability controls, are really going to increase, and so I think there are some emerging signs of international efforts that are really important to look to, and hopefully we’ll see that outweigh some of the competitive race dynamics that we’re seeing now.

Allan: The crux of the problem is that if everyone’s driving to achieve the next performance milestone, to have the next most powerful system, then any other value that they might care about or society might care about, anything that’s in the way or involves a trade-off, they have an incentive to trade away to gain a performance lead. We see this today with things like privacy: countries that have stricter privacy policies may have trouble generating an AI champion. Some look to China and see that maybe China has an AI advantage because it has such a cohesive national culture and a close relationship between government and the private sector, as compared with, say, the United States, where you can see a real conflict at times between, say, Alphabet and parts of the U.S. government, which I think the petition around Project Maven really illustrates.

So, values you might lose include privacy, or maybe not developing autonomous weapons according to the ethical guidelines you would want. There are other concerns that put people’s lives at stake: if you’re rushing to market with a self-driving car that isn’t sufficiently safe, then people can die. In small numbers, those are independent risks, but if, say, the risk you’re deploying is that the self-driving car system itself is hackable at scale, then you might be generating a new weapon of mass destruction. So there are these accident risks and malicious use risks that are pretty serious. And then when you really start looking towards AI systems that are very intelligent, hard for us to understand because they’re opaque, complex, and fast-moving when they’re plugged into financial systems, energy grids, cyber systems, and cyber defense, there’s an increasing risk that we won’t even know what risks we’re exposing ourselves to because of these highly complex, interdependent, fast-moving systems.

And so if we could sort of all take a breath and reflect a little bit, that might be more optimal from everyone’s perspective. But because there’s this perception of a prize to be had, it seems likely that we are going to be moving more quickly than is optimal. It’s a very big challenge. It won’t be easily solved, but in my view, it is the most important issue for us to be thinking about and working towards over the coming decades, and if we solve it, I think we’re much more likely to develop beneficial advanced AI, which will help us solve all our other problems. So I really see this as the global issue of our era to work on.

Ariel: We sort of got into this a little bit earlier, but what are some of the other countries that have policies that you think maybe more countries should be implementing? And maybe more specifically, if you could speak about some of the international efforts that have been going on.

Jessica: Yeah, so an interesting thing we’re seeing from the U.K. is that they’ve established a Centre for Data Ethics and Innovation, and they’re really making an effort to prioritize ethical considerations of AI. So I think it remains to be seen exactly what that looks like, but that’s an important element to keep in mind. Another interesting thing to watch: Estonia is working on an AI law at the moment. They’re trying to make very clear guidelines so that when companies come in and they want to work on AI technology in that country, they know exactly what the framework they’re working in will be like, and they actually see that as something that can help encourage innovation. I think that’ll be a really important one to watch, as well.

But there’s a lot of great work happening. There are task forces emerging, and not just at the federal level, at the local level, too. New York now has an algorithm monitoring task force that is actually trying to see where algorithms are being used in public services and to encourage accountability about where those exist, so that’s a really important thing that potentially could spread to other states or other countries.

And then you mentioned international developments, as well. So, there are important things happening here. The E.U. is certainly a great example of this right now. 25 European countries signed a Declaration of Cooperation on AI. This is a plan, a strategy, to actually work together to improve research and to work collectively on the kinds of social, security, and legal issues that come up around AI. There’s also the Charlevoix Common Vision for the Future of AI, signed at the G7 meeting. Again, it’s not regulatory, but it sets out a vision that includes things like promoting human-centric AI and fostering public trust, supporting lifelong learning and training, as well as supporting women and underrepresented populations in AI development. So, those kinds of things, I think, are really encouraging.

Ariel: Excellent. And was there anything else that you think is important to add that we didn’t get a chance to discuss today?

Jessica: Just a couple things. There are important ways that government can shape the trajectory of AI that aren’t just about regulation. For example, deciding how to leverage government investment really changes the trajectory of what AI is developed, what kinds of systems people prioritize. That’s a really important policy lever that is different from regulation that we should keep in mind. Another one is around procurement standards. So, when governments want to bring AI technologies into government services, what are they going to be looking for? What are the best practices that they require for that? So, those are important levers.

Another issue is somewhat taken for granted in this conversation, but just to state it: shaping AI for a safe and beneficial future can’t rely on technical fixes alone. These systems are really built by people, and we’re making choices about how and where they’re deployed and for what purposes, so these are social and political choices. This has to be a multidisciplinary process, involving governments along with industry and civil society, so it’s really encouraging to see these kinds of conversations take place.

Ariel: Awesome. I think that’s a really nice note to end on. Well, so Jessica and Allan, thank you so much for joining us today.

Allan: Thank you, Ariel, it was a real pleasure. And Jessica, it was a pleasure to chat with you. And thank you to all the good work coming out of FLI promoting beneficial AI.

Jessica: Yeah, thank you so much, Ariel, and thank you Allan, it’s really an honor to be part of this conversation.

Allan: Likewise.

Ariel: If you’ve been enjoying the podcasts, please take a moment to like them, share them, follow us on whatever platform you’re listening to us on. And, I will be back again next month, with a new pair of experts.

[end of recorded material]

Podcast: Six Experts Explain the Killer Robots Debate

Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it’s complicated.

In this month’s podcast, Ariel spoke with experts from a variety of perspectives on the current status of lethal autonomous weapons systems (LAWS), where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre (3:40), artificial intelligence professor Toby Walsh (40:51), Article 36 founder Richard Moyes (53:30), Campaign to Stop Killer Robots founder Mary Wareham and Bonnie Docherty of Human Rights Watch (1:03:38), and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro (1:32:39).

Topics discussed in this episode include:

  • the history of semi-autonomous weaponry in World War II and the Cold War (including the Tomahawk Anti-Ship Missile)
  • how major military powers like China, Russia, and the US are imbuing AI in weapons today
  • why it’s so difficult to define LAWS and draw a line in the sand
  • the relationship between LAWS proliferation and war crimes
  • FLI’s recent pledge, where over 200 organizations and over 2800 individuals pledged not to assist in developing or using LAWS
  • comparing LAWS to blinding lasers and chemical weapons
  • why there is hope for the UN to address this issue

You can listen to the podcast above, and read the full transcript below. You can check out previous podcasts on SoundCloud, iTunes, GooglePlay, and Stitcher.

If you work with artificial intelligence in any way, and if you believe that the final decision to take a life should remain a human responsibility rather than falling to a machine, then please consider signing this pledge, either as an individual or on behalf of your organization.

Ariel: Hello. I’m Ariel Conn with the Future of Life Institute. As you may have seen, this month we announced a pledge against lethal autonomous weapons. The pledge calls upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. But in the meantime, signatories agree that they will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. At the time of this recording, over 220 AI-related organizations and over 2800 individuals have signed. Signatories include Google DeepMind and its founders, University College London, the XPRIZE Foundation, Clearpath Robotics, Silicon Valley Robotics, the European Association for Artificial Intelligence — and many other AI societies and organizations from around the world. Additionally, people who signed include Elon Musk, Google’s head of research and machine learning Jeff Dean, many other prominent AI researchers, such as Stuart Russell, Toby Walsh, Meredith Whitaker, Anca Dragan, Yoshua Bengio, and even politicians, like British MP Alex Sobel.

But why? We’ve all seen the movies and read the books about AI gone wrong, and yet most of the signatories agree that the last thing they’re worried about is malicious AI. No one thinks the Terminator is in our future. So why are so many people in the world of AI so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it’s complicated. For the longer answer, we have this podcast.

For this podcast, I spoke with six of the leading experts in autonomous weapons. You’ll hear from defense expert Paul Scharre, who recently released the book Army of None: Autonomous Weapons and the Future of War. We discuss the history of autonomous and semi-autonomous weaponry, which dates back to WWII, as well as some of the more nuanced issues today that often come up for debate. AI researcher Toby Walsh looks at lethal autonomous weapons from a more technical perspective, considering the impact of autonomous weapons on society, and also the negative effects they could have for AI researchers if AI technology is used to kill people. Richard Moyes, with Article 36, coined the phrase meaningful human control, which is what much of the lethal autonomous weapons debate at the United Nations now focuses on. He describes what that means and why it’s important. Mary Wareham and Bonnie Docherty joined from Human Rights Watch, and they’re also with the Campaign to Stop Killer Robots. They talk about the humanitarian impact of lethal autonomous weapons and they explain the process going on at the United Nations today as efforts move toward a ban. Finally, my interviews end with Peter Asaro with the International Committee for Robot Arms Control and also the Campaign to Stop Killer Robots. Peter considers the issue of lethal autonomous weapons from an ethical and legal standpoint, looking at the impact killer robots could have on everything from human dignity to war crimes.

But I’ll let each of them introduce themselves better when their interviews begin. And because this podcast is so long, in the description, we’ve included the times that each interview starts, so that you can more easily jump around or listen to sections as you have time.

One quick, final point to mention is that everyone was kind enough to join at the last minute, which means not all of the audio is perfect. Most of it is fine, but please bear with us if you can hear people chattering in the background or any other similar imperfections.

And now for the first interview with Paul Scharre.

Paul: I’m Paul Scharre. I’m a senior fellow and director of the Technology and National Security Program at the Center for a New American Security. We’re a Washington, D.C.-based national security think tank that’s an independent bipartisan research organization.

Ariel: You have a background in weaponry. You were in the military, correct?

Paul: Yeah. I served about five and a half years in the US Army as a Ranger and a civil affairs team leader. I did multiple tours to Iraq and Afghanistan, and then I worked for several years after that in the Pentagon in the Office of the Secretary of Defense, where I actually worked on policy issues for emerging weapons technologies, including autonomous weapons.

Ariel: Okay. One of the very first questions that I want to start with is, how do you define an autonomous weapon?

Paul: That’s sort of the million-dollar question in a lot of ways. I don’t want to imply that all of the debate around autonomous weapons is a misunderstanding of semantics. That’s not true at all. There are clearly people who have very different views on what to do about the technology, but it is a big complicating factor because I have certainly seen, especially at the United Nations, very heated disagreements where it’s clear that people are just talking past each other in terms of what they’re envisioning.

When you say the term “autonomous weapon,” it conjures all sorts of different ideas in people’s minds, some people envisioning super advanced intelligent machines that have human-like or superhuman intelligence, something like a Terminator or Cylon from science fiction. The other people are envisioning something that might be very simple and doable today, like a Roomba with a gun on it.

Both of those things are probably really bad ideas but for very different kinds of reasons. And I think that that’s a complicating factor. So one of the dimensions of autonomy that people tend to get fixated on is how smart the weapon system is. I actually don’t think that that’s a useful way to define an autonomous weapon. Sometimes I’ll hear people say things like, “Well, this is not an autonomous weapon. This is an automated weapon because of the level of sophistication.” I don’t think that’s very helpful.

I think it’s much better, actually, to focus on the functions that the weapon is performing on its own. This is similar to the approach that the International Committee of the Red Cross has, which focuses on critical functions in weapons systems. The way that I define it in my book is I basically define an autonomous weapon as one that can complete an entire engagement cycle on its own. That is to say, it has all of the functionality needed to search for targets, to identify them, to make a decision about whether or not to attack them, and then to start the engagement and carry through the engagement all by itself.

So there’s no human in this loop, this cognitive loop, of sensing and deciding and acting out on the battlefield all by itself. That defines it in such a way that there are some things — and this is where it gets into some of the tricky definitional issues — there are weapons that have been around since World War II that I would call semi-autonomous weapons that have some degree of autonomy, that have some sensors on board. They can detect the enemy, and they can make some rudimentary kinds of actions, like maneuvering towards the enemy.

Militaries generally call these “homing munitions.” They’re torpedoes or air-to-air missiles or surface-to-air, air-to-ground missiles. They have sensors on them that might use sonar or radar or acoustic signatures. They can sense that the enemy is there, and then they use those sensors to maneuver towards the enemy to strike the target. These are generally launched by people at targets where the human knows there’s a target there.

These were originally invented by the Germans in World War II to hit Allied ships in the submarine wars in the Atlantic. You can imagine there’s a technical challenge in trying to hit a moving target like a ship. From a submarine, you’re trying to fire a torpedo at it and you might miss. So the first versions of these had microphones that could listen to the sound of the propellers from Allied ships and then steer towards where the sound was greatest so they could hit the ship.

In those cases — and this is still the case in the ones that are used today — humans see the target or have some indication of the target, maybe from a radar or sonar signature. And humans say, “There’s something out there. I want to launch this weapon to go attack it.” Those have been around for 70 years or so. I bring them up because there are some people who sometimes say, “Well, look. These autonomous weapons already exist. This is all a bunch of hullaballoo about nothing.”

I don’t think that’s really true. I think that a lot of the weapons systems that you see concern about going forward, would be things that will be quite qualitatively different, things that are going out over a wide area and searching for targets on their own, where humans don’t necessarily know where the enemy is. They might have some suspicion that the enemy might be in this area at this point in time, but they don’t know, and they launch the weapon to then find the enemy. And then, without radioing back to a human for approval, that weapon is delegated the authority to attack on its own.

By and large, we don’t see weapons like this in existence today. There are some exceptions. The Israeli Harpy drone or loitering munition is an exception. There were a couple experimental US systems in the ’80s and ’90s that are no longer in service. But this isn’t something that is in widespread use. So I do think that the debate about where we’re going in the future is at least a very valid one, and we are on the cusp of, potentially, things that will be quite different than anything we’ve seen before in warfare.

Ariel: I want to ask a quick question about the Harpy and any other type of weapon similar to that. Have those actually been used to kill anyone yet, to actually identify a target and kill some enemy? Or are they still just being used for identifying and potentially targeting people, but it’s still a human who is making the final decision?

Paul: That’s a great question. To the best of my knowledge, the Israeli Harpy has not been used in its fully autonomous mode in combat. So a couple things about how the Harpy functions. First of all, it doesn’t target people per se; it targets radars. Now, having said that, if a person is standing next to a radar that it targets, you’re probably going to be killed. But it’s not looking for individual persons. It’s looking for radar signatures and then zeroing in on them.

I mention that as important for two reasons. One, sometimes in some of the concerns that people raise about autonomous weapons, it can sometimes be unclear, at least to a listener, whether they are concerned about specifically weapons that would target humans or any weapon that might target anything on the battlefield. So that’s one consideration.

But, also, from sort of a practicality standpoint, it is easier to identify radar signatures more accurately than people who, of course, in many modern conflicts are not wearing uniforms or insignia or the things that might clearly identify them as a combatant. So a lot of the issues around distinction and accurately discriminating between combatants and noncombatants are harder for weapons that would target people.

But the answer to the question is a little bit tricky because there was an incident a couple years ago where a second-generation version of the Harpy called the Harop, or Harpy II, was used in the Nagorno-Karabakh region in the conflict there between Azerbaijan and Armenia. I think it was used by Azerbaijan and used to attack what looked like — I believe it was a bus full of fighters.

Now, by all accounts, the incident was one of actual militants being targeted — combatants — not civilians. But here was a case where it was clearly not a radar. It was a bus that would not have been emitting radar signatures. Based on my understanding of how the technology works, the Harop, the Harpy II, has a human-in-the-loop mode. The first-generation Harpy, as far as I understand, is all autonomous. The second-generation version definitely has a human-in-the-loop mode. It looks like it’s not clear whether it also has an autonomous version.

In writing the book, I reached out to the manufacturer for more details on this, and they were not particularly forthcoming. But in that instance, it looks like it was probably directed by a human, that attack, because as far as we know, the weapon does not have the ability to autonomously target something like a bus.

Ariel: Okay.

Paul: That’s a really long-winded answer. This is what actually makes this issue super hard sometimes because they depend a lot on the technical specifications of the weapon, which a) are complicated and b) are not always very transparent. Companies are not always very transparent publicly about how their weapons systems function.

One can understand why that is. They don’t want adversaries to come up with methods of fooling them and countermeasures. On the other hand, for people who are interested in understanding how companies are pushing the bounds of autonomy, that can be very frustrating.

Ariel: One of the things that I really like about the way you think is that it is very nuanced and takes into account a lot of these different issues. I think it’s tempting and easy, and I don’t want to make it sound like I’m being lazy, because I personally support banning lethal autonomous weapons. But I think it’s a really complicated issue, and so I’d like to know more. What are your thoughts on a ban?

Paul: There are two areas on this topic where I think it gets really complicated and really tricky. Say you start with a broad principle like, “Humans should be making decisions about lethal force,” or, “Only humans should be deciding to take human life.” There are two areas where, when you try to put that into practice, you really run into some serious challenges.

And I’m not saying that makes it impossible, but with difficult questions you have to really roll up your sleeves and get into some of the details of the issue. One is, how do you translate a broad concept like that into technical specifications of a weapon? If you start with an idea and say, “Well, only humans should be responsible for taking human life,” that seems like a reasonable idea.

How do you translate that into technical guidance that you give weapons developers over what they can and cannot build? That’s actually really hard, and I say that as having done this when I worked at the Pentagon and we tried to write guidance that was really designed to be internal to the US Defense Department and to give guidance to defense companies and to military researchers on what they could build.

It was hard to translate some of these abstract concepts like, “Humans should decide the targets,” to technical ideas. Well, what does that mean for how long the weapon can loiter over a target area or how big its sensor field should be or how long it can search for? You have to try to figure out how to put those technical characteristics into practice.

Let me give you two examples of a weapon to illustrate how this can be challenging. You might imagine a weapon today where a human says, “Ah, here’s an enemy target. I want to take that target out.” They launch a missile, and the missile flies towards the target. Let’s say it’s a tank. The missile uses a millimeter-wave seeker on the tank. It’s an active seeker: it sends out millimeter-wave radar signals to see the tank, isolate it, sort of highlight it against the background, and then zero in on it, because the tank’s moving and they need the sensor to hit a moving target.

If the weapon and the sensor can only search for a very limited space in time and geography, then you’ve constrained the autonomy enough that the human is still in control of what it’s targeting. But as you start to open that aperture up, and maybe it’s no longer that it’s searching for one minute in a one-kilometer area, it’s now searching for eight hours over 1,000 kilometers, now you have a completely different kind of weapon system. Now it’s one that’s much more like … I make the analogy in the book of the difference between a police dog that might be set loose to go chase down a suspect, where the human says, “There’s the suspect. Dog, go get them,” versus a mad dog roaming the streets attacking anyone at will.

You have two different paradigms, but where do you draw the line in between? And where do you say, “Well, is 1 minute of loiter time, is it 2 minutes, is it 10 minutes, is it 20 minutes? What’s the geographic area?” It’s going to depend a lot on the target, the environment, what kind of clutter is in the environment. What might be an appropriate answer for tanks in an urban combat setting might be very different than naval ships on the high seas or submarines underwater or some other target in a different environment.

So that’s one challenge, and then the other challenge, of course, which is even more contested, is just sort of, “What’s the feasibility of a ban and getting countries to come together to actually agree to things?” because, ultimately, countries have militaries because they don’t trust each other. They don’t trust international law to constrain other countries from aggressive action. So regardless of whether you favor one country or another, you consider yourself an American or a Russian or a Chinese or a French or Israeli or Guinean or someone else, countries in general, they have militaries because they don’t trust others.

That makes things difficult. Even if you get countries to sign up to a ban, getting them to actually adhere to it is a major challenge, because countries are always fearful about others breaking the rules, cheating, and getting the upper hand.

Ariel: We have had other bans. We’ve banned biological weapons, chemical weapons, landmines, space weapons. Do you see this as different somehow?

Paul: Yeah. So one of the things I go through in my book is, as comprehensive as I can come up with, a list of all of the attempts to regulate and control emerging technologies dating back to antiquity, dating back to ancient Indian prohibitions in the Hindu Laws of Manu and the Mahabharata on poisoned and barbed arrows and fire-tipped weapons.

It’s really a mixed bag. I like to say that there are enough examples of both successes and failures for people to pick whichever examples they want for whatever side they’re arguing, because there are many examples of successful bans. And I would say they’re largely successful. There are some examples of isolated incidents of people not adhering to them. Very few bans are universally adhered to. We certainly have Bashar al-Assad using chemical weapons in Syria today.

But bans that have been largely successful, or at least have had a major effect in reducing these weapons, include landmines, cluster munitions, blinding lasers, biological weapons, chemical weapons, using the environment as a weapon, placing nuclear weapons on the seabed or in orbit, placing any weapons of any kind on the moon or in Antarctica, and various Cold War regulations on anti-ballistic missile systems, intermediate-range nuclear ground-launched missiles, and then, of course, the numbers of nuclear weapons.

So there are a lot of successful examples. Now, on the other side of the coin, there are failed attempts to ban, famously, the crossbow, and that’s often brought up in these conversations. But in more recent memory, attempts of the 20th century to ban and regulate aircraft and air-delivered weapons, submarine warfare, of course the failure of attempts to ban poison gas in World War I. So there are examples on other sides of the ledger as well.

One of the things that I try to do in my book is get beyond sort of just picking examples that people like, and say, “Well, is there a pattern here? Are there some common conditions that make certain bans more likely to succeed or fail?” There’s been great scholarship done by some others before me that I was able to build on. Rebecca Crootof and Sean Welsh have done work on this trying to identify some common patterns.

I think that’s the fruitful place to start if you want to look at this analytically: asking why some bans succeed and some fail. And then, when you’re looking at any new technology, whether it’s autonomous weapons or something else, where does it fall on this spectrum, and what does that suggest about the feasibility of certain attempts at regulation versus others?

Ariel: Can you expand on that a little bit? What have you found, or what have they found in terms of patterns for success versus failure for a ban?

Paul: I think there’s a couple criteria that seem to matter. One is the clarity of a ban is really crucial. Everyone needs to have a clear agreement on what is in and what is out. The simpler and clearer the definition is, the better. In some cases, this principle is actually baked into the way that certain treaties are written. I think the ban on cluster munitions is a great example of this, where the Cluster Munition Convention has a very, very simple principle in the treaty. It says, “Cluster munitions are banned,” full stop.

Now, if you go into the definition, there’s all sorts of nuance about what constitutes a cluster munition or not. That’s where they get into some of the horse trading with countries ahead of time. But the principle is no cluster munitions. The archetype of the importance of clarity comes in the successful restraint among European powers from using chemical weapons against each other in World War II. All sides had them. They didn’t use them on the battlefield against each other. Of course, Germany used them in the Holocaust, and there were some other isolated incidents in World War II of use against others who didn’t have them.

But the European powers all had tens of thousands of tons of mustard gas stockpiled, and they didn’t use it against each other. At the outset of World War II, there were also attempts to restrain aerial bombing of cities. It was widely viewed as reprehensible. It was also illegal under international law at the time, and there were attempts on all sides to refrain from it. At the outset of the war, in fact, they did refrain, and Hitler actually issued a directive to the Luftwaffe. I talk about this a little bit in the book, although unfortunately a lot of the detail on some of this stuff got cut for space, which I was disappointed by.

Hitler issued a directive to the Luftwaffe saying that they were not to engage in bombing civilian targets in Britain, no terror bombing; they were only to bomb military targets. Not because he was a humanitarian, but because he was concerned about Britain retaliating. This attempt at restraint failed when, in the middle of the night, a German bomber strayed off course and bombed central London by mistake. In retaliation, Churchill ordered the bombing of Berlin. Hitler was incensed and gave a speech the following day announcing the launch of the London Blitz.

So here’s an example where there was some slippage in the principle of what was allowed and what was not, and so you had a little bit of accidental crossing of the line in conflict. So the sharper and clearer this line is, the better. You could extrapolate from that and say it’s likely that if, for example, the World War II powers had agreed that they could only use poison gas against military targets but not against civilian targets, it would have quickly escalated to civilian targets as well.

In the context of autonomous weapons, that’s one of the arguments why you’ve seen some advocates of a ban say that they don’t support what is sometimes called a partition treaty, which is something that would create a geographic partition that would say you could only use autonomous weapons outside of populated areas. What some advocates of a ban have said is, “Look, that’s never going to hold in combat.” That sounds good. I’ve heard some international humanitarian lawyers say that, “Oh, well, this is how we solve this problem.” But in practice, I agree that’s not likely to be very feasible.

So clarity’s important. Another factor is the relative military value of a weapon versus its perceived horribleness. I think, again, a good case in point here is the difference between the international community’s success in largely getting most countries to give up chemical weapons and the lack of success on nuclear weapons. Nuclear weapons by any reasonable measure are far more terrible in terms of their immediate and long-lasting effects on human life and the environment, but they have much more military value, at least perceived military value. So countries are much more reluctant to give them up.

So that’s another factor, and then there are some other ones that I think are fairly straightforward but also matter, things like access to the weapon and the number of actors that are needed to get agreement. If only two countries have the technology, it’s easier to get them on board than if it’s widely available and everyone needs to agree. But I think those are some of the really important factors.

One of the things that actually doesn’t matter that much is the legality of a weapons treaty. I’m not saying it doesn’t matter at all, but you see plenty of examples of legally binding treaties that are violated in wartime, and you see some examples, not a ton, but some examples of mutual restraint among countries when there is no legally binding agreement or sometimes no agreement at all, no written agreement. It’s sort of a tacit agreement to refrain from certain types of competition or uses of weapons.

All of those, I think, are really important factors when you think about the likelihood of a ban actually succeeding on any weapons — not just autonomous weapons, any weapons — but the likelihood of a ban actually succeeding in wartime.

Ariel: I’m probably going to want to come back to this, but you mentioned something that reminded me of another question that I had for you. And that is, in your book, you mentioned … I don’t remember what the weapon was, but it was essentially an autonomous weapon that the military chose not to use and then ended up giving up because it was so costly, and ultimately they didn’t trust it to make the right decisions.

I’m interested in this idea of the extent to which we trust the weapons to do whatever it is that they’re tasked with if they’re in some sort of autonomous mode, and I guess where we stand today with various weapons and whether military will have increasing trust in their weapons in the future.

Paul: The case study I think you’re referring to was the Tomahawk Anti-Ship Missile, or TASM, which was in service with the US Navy in the 1980s. That is one I would classify as an autonomous weapon. It was designed to go over the horizon to attack Soviet ships, and it could fly a search pattern. I think, actually, in the book I included the graphic of the search pattern that it would fly to look for Soviet ships.

The concept was that the way this would work in anti-surface warfare is the navy would send out patrol aircraft, because they’re much faster and have much longer range than ships, and they would scout for enemy ships. The principle in a wartime environment is that a patrol aircraft would find a Soviet ship and then radio its location back to a destroyer, and the destroyer would launch a missile.

Now, the problem was, by the time the missile got there, the ship would have moved. So the ship would now have what the military would call an area of uncertainty that the ship might be in. They wouldn’t have the ability to continuously track the ship, and so what they basically would do was the missile would fly a search pattern over this area of uncertainty, and when it found the ship, it would attack it.

Now, at the time in the 1980s, the technology was not particularly advanced and it wasn’t very good at discriminating between different kinds of ships. So one of the concerns was that if there happened to be another kind of ship in the area that was not an enemy combatant, it still might attack it if it was within this search pattern area. Again, it’s originally cued by a human who had some indication of something there, but there was enough uncertainty that it flies this pattern on its own. It’s only for that reason that I call it an autonomous weapon: there was a great amount of uncertainty about what it might hit and whether it would do so accurately, and once launched, it would find and attack all on its own.

So it was never used, and there was great hesitance about using it. I interview a retired US Navy officer who was familiar with it at the time, and he talks about how they didn’t trust that its targeting was good enough that, once they let it loose, it would hit the right target. Moreover, there was a secondary problem: it might hit the wrong target, sort of a false positive, if you will, but it also might miss the Soviet ship entirely, in which case they would have simply wasted a weapons system.

That’s another problem that militaries have, which is missiles are costly. They don’t have very many of them in their inventory. Particularly if it’s something like a ship or an aircraft, there’s only so many that they can carry physically on board. So they don’t want to waste them for no good reason, which is another practical, operational consideration. So eventually it was taken out of service for what I understand to be all of these reasons, and that’s a little bit of guesswork, I should say, as to why it was taken out of service. I don’t have any official documentation saying that, but that’s at least, I think, a reasonable assumption about some of the motivating factors based on talking to people who were familiar with it at the time.

One important dynamic that I talk about in the book is that the wasted-weapon problem is really acute for missiles, which are not recoverable. You launch it, you’re not going to get it back. If the enemy’s not there, then you’ve just wasted this thing. That changes dramatically if you have a drone that can return. Now, all of the concerns about it hitting the wrong target and causing civilian casualties still exist, and those are very much on the minds of at least Western military professionals who are concerned about civilian casualties, and of countries that care about the rule of law more broadly.

But this issue of wasting the weapon is less of an issue when you have something that’s recoverable and you can send it out on patrol. So I think it’s possible, and this is a hypothesis, but it’s possible that as we see more drones and combat drones in particular being put into service and intended to be used in contested areas where they may have jammed communications, that we start to see that dynamic change.

To your question about trust, I guess I’d say that, at least among the military professionals that I talk to in the United States and in other allied countries, NATO countries or Australia or Japan, there is a lot of concern about trust in these systems. In fact, I see much more confidence … I’m going to make a broad generalization here, okay? So forgive me, but in general I would say that I see much more confidence in the technology coming from the engineers who are building these systems at military research labs or at defense companies than from the military professionals in uniform who have to push the button and use them; they’re a little bit more skeptical of wanting to actually trust these systems and delegate what they see as their responsibility to a machine.

Ariel: What do you envision, sort of if we go down current trajectories, as the future of weaponry, specifically as it relates to autonomous weaponry and potentially lethal autonomous weaponry? And to what extent do you think that international agreements could change that trajectory? And maybe even, to what extent do you think countries might appreciate having guidelines to work within?

Paul: I’ll answer that, but let me first make an observation about most of the dialogue in the space. There’s sort of two different questions wrapped up in there. What is the likely outcome of a future of autonomous weapons? Is it a good future or a bad future? And then another one is, what is the feasibility of some kind of international attention to control or regulate or limit these weapons? Is that possible or unlikely to succeed?

What I tend to hear is that people on all sides of this issue tend to cluster into two camps. They tend to either say, “Look, autonomous weapons are horrible and they’re going to cause all these terrible effects. But if we just all get together, we can ban them. All we need to do is just … I don’t know what’s wrong with countries. We need to sit down. We need to sign a treaty and we’ll get rid of these things and our problems will be solved.”

Other people in the opposite camp say, “Bans don’t work, and anyways, autonomous weapons would be great. Wouldn’t they be wonderful? They could make war so great, and humans wouldn’t make mistakes anymore, and no innocent people would be killed, and war would be safe and humane and pristine.” Those things don’t necessarily go together. It’s entirely possible that they don’t: if you imagine a two-by-two matrix, it’s really convenient that everybody’s views fit so harmoniously into those two boxes, but reality may not be so convenient.

I suspect that, on the whole, autonomous weapons that have no human control over targeting are not likely to make war better. It’s hard for me to say that would be a better thing. I can see why militaries might want them in some instances. I think some of the claims about their military value might be overblown, but there are certainly some situations where you can imagine they’d be valuable. It remains to be seen how valuable and in what contexts, but you can imagine that.

But in general, I think that humans add a lot of value to making decisions about lethal force, and we should be very hesitant to take humans out of them. I’m also somewhat skeptical of the feasibility of actually achieving restraint on these topics. I think it’s very unlikely the way the current international dynamics are unfolding, which are largely focused on humanitarian concerns, on berating countries and telling them that they are not going to be able to build weapons that comply with international humanitarian law.

I just don’t think that’s a winning argument. I don’t think that resonates with most of the major military powers. So I think that when you look at, actually, historical attempts to ban weapons, that right now what we’re seeing is a continuation of the most recent historical playbook, which is that elements of civil society have kind of put pressure on countries to ban certain weapons for humanitarian reasons. I think it’s actually unusual when you look at the broader historical arc. Most attempts to ban weapons were driven by great powers and not by outsiders, and most of them centered on strategic concerns, concerns about someone getting an unfair military advantage, or weapons making war more challenging for militaries themselves or making life more challenging for combatants themselves.

Ariel: When you say that it was driven by powers, do you mean you’d have, say, two powerful countries and they’re each worried that the other will get an advantage, and so they agree to just ban something in advance to avoid that?

Paul: Yeah. There’s a couple time periods that kind of seem most relevant here. One would be a flurry of attempts to control weapons that came out of the Industrial Revolution around the dawn of the 20th century. These included air balloons, or basically air-delivered weapons from balloons or airplanes, submarines, poison gas, what was called fulminating projectiles. You could think of projectiles or bullets that have fire in them or are burning, or exploding bullets, sawback bayonets. There was some restraint on their use in World War I, although it wasn’t ever written down, but there seems to be a historical record of some constraint there.

That was one time period, and that was all driven by the great powers of the time. These were generally driven by the major European powers, and then by Japan as it rose on the international stage and became involved as a naval power in the naval treaties. The Washington Naval Treaty is another example of this, an attempt to control a naval arms race.

And then, of course, there was a flurry of arms control treaties during the Cold War driven by the US and the USSR. Some of them were bilateral. Many of them were multilateral but driven principally by those two powers. So that’s not to say there’s anything wrong with the current model of NGOs in civil society pushing for bans, because it has worked; it worked for landmines and cluster munitions. But I’m not sure that the same conditions apply in this instance, in large part because in those cases there was real, demonstrated humanitarian harm.

So you could really, I think, fairly criticize countries for not taking action because people were being literally maimed and killed every day by landmines and cluster munitions, whereas here it’s more hypothetical, and so you see people sort of extrapolating to all sorts of possible futures and some people saying, “Well, this going to be terrible,” but other people saying, “Oh, wouldn’t it be great,” and some say it’d be wonderful.

I’m just not sure that the current playbook that some people are using, which is to sort of generate public pressure, will work when the weapons are still hypothetical. And, frankly, they sound like science fiction. There was this recent open letter that FLI was involved in, and I was sitting in the break room at CNN before doing a short bit on this and talking to someone about this. They said, “Well, what are you going on about?” I said, “Well, some AI scientists wrote a letter saying they weren’t going to build killer robots.”

I think to many people it just doesn’t sound like a near-term problem. That’s not to say that it’s not a good thing that people are leaning into the issue. I think it’s great that we’re seeing people pay attention to the issue and anticipate it and not wait until it happens. But I’m also just not sure that the public sentiment to put pressure on countries will manifest. Maybe it will. It’s hard to say, but I don’t think we’ve seen it yet.

Ariel: Do you think in terms of considering this to be more near term or farther away, are military personnel also in that camp of thinking that it’s still farther away, or within militaries is it considered a more feasible technology in the near term?

Paul: I think it depends a little bit on how someone defines the problem. If they define an autonomous weapon as requiring human-level intelligence, then I think there’s wide agreement that it’s not a near-term problem. Well, at least within military circles; I can’t say wide agreement generally. There are probably a lot of people listening to this podcast who have varying views on where they think that might be.

But in military circles, I think there’s a perception that that’s just not a problem in the near term at all. If what you mean is something that is relatively simple but can go over a wide area and identify targets and attack them, I think many military professionals would say that the technology is very doable today.

Ariel: Have you seen militaries striving to create that type of weaponry? Are we moving in that direction, or do you see this as something that militaries are still hesitating to move towards?

Paul: That’s a tricky question. I’ll give you my best shot at understanding the answer to that because I think it’s a really important one, and part of it is I just don’t know because there’s not great transparency in what a lot of countries are doing. I have a fairly reasonable understanding of what’s going on in the United States but much less so in other places, and certainly in countries like authoritarian regimes like Russia and China, it’s very hard to glean from the outside what they’re doing or how they’re thinking about some of these issues.

I’d say that almost all major military powers are racing forward to invest in more robotics and autonomous artificial intelligence. I think for many of them, they have not yet made a decision whether they will cross the line to weapons that actually choose their own targets, to what I would call an autonomous weapon. I think for a lot of Western countries, they would agree that there’s a meaningful line there. They might parse it in different ways.

The only two countries that have really put any public guidance out on this are the United States and the United Kingdom, and they actually define autonomous weapon in quite different ways. So it’s not clear from that to interpret sort of how they will treat that going forward. US defense leaders have said publicly on numerous occasions that their intention is to keep a human in the loop, but then they also will often caveat that and say, “Well, look. If other countries don’t, we might be forced to follow suit.”

So it’s sort of in the loop for now, but it’s not clear how long “for now” might be. It’s not clear to me whether countries like Russia and China even see the issue in the same light, whether they even see a line in the same place. And at least some of the public statements out of Russia, for example, talking about fully roboticized units, or some Russian defense contractors claiming to have built autonomous weapons that can do targeting on their own, would suggest that they may not even see the line in the same way.

In fairness, that is a view that I hear among some military professionals and technologists. I don’t want to say that’s the majority view, but it is at least a significant viewpoint where people will say, “Look, there’s no difference between that weapon, an autonomous weapon that can choose its own targets, and a missile today. It’s the same thing, and we’re already there.” Again, I don’t totally agree, but that is a viewpoint that’s out there.

Ariel: Do you think that the fact that countries have these differing viewpoints is a good reason to put more international pressure on developing some sort of regulations to try to bring countries in line, bring everyone onto the same page?

Paul: Yeah. I’m a huge supporter of the process that’s been going on with the United Nations. I’m frustrated, as many are, about the slowness of the progress. Part of this is a function of diplomacy, but part of this is just that they haven’t been meeting very often. When you add up all of the times over the last five years, it’s maybe five or six weeks of meetings. It’s just not very much time they spend together.

Part of it is, of course … Let’s be honest. It’s deliberate obstinacy on the part of many nations who want to slow the progress of talks. But I do think it would be beneficial if countries could come to some sort of agreement about rules of the road, about what they would see as appropriate in terms of where to go forward.

My view is that we’ve gotten the whole conversation off on the wrong foot by focusing on this question of whether or not to have a legally binding treaty, whether or not to have a ban. If this was me, that’s not how I would have framed the discussion from the get-go, because what happens is that many countries dig in their heels because they don’t want to sign to a treaty. So they’re just like they start off on a position of, “I’m opposed.” They don’t even know what they’re opposed to. They’re just opposed because they don’t want to sign a ban.

I think a better conversation to have would be to say, “Let’s talk about the role of autonomy and machines and humans in lethal decision-making in war going forward. Let’s talk about the technology. Let’s talk about what it can do, what it can’t do. Let’s talk about what humans are good at and what they’re not good at. Let’s think about the role that we want humans to play in these kinds of decisions on the battlefield. Let’s come up with a view of what we think ‘right’ looks like, and then we can figure out what kind of piece of paper we write it down on, whether it’s a piece of paper that’s legally binding or not.”

Ariel: Talking about what the technology actually is and what it can do is incredibly important, and in my next interview with Toby Walsh, we try to do just that.

Toby: I’m Toby Walsh, I’m a Scientia Professor of Artificial Intelligence at the University of New South Wales, which is in Sydney, Australia. I’m a bit of an accidental activist, in the sense that I’ve been drawn in, as a responsible scientist, to the conversation about the challenges, the opportunities, and the risks that artificial intelligence poses in fighting war. And there are many good things that AI’s going to do in terms of reducing casualties and saving lives, but equally, I’m very concerned, like many of my colleagues are, about the risks that it poses, especially when we hand over full control to computers and remove humans from the loop.

Ariel: So that will segue nicely into the first question I had for you, and that was what first got you thinking about lethal autonomous weapons? What first gave you reason for concern?

Toby: What gave me concern about the development of lethal autonomous weapons was seeing prototype weapons being developed, knowing the challenges that AI poses (we’re still a long way away from having machines that are as intelligent as humans), knowing the limitations, and being very concerned that we were handing over control to machines that weren’t technically capable, and certainly weren’t morally capable, of making the right choices. And therefore, too, I felt a responsibility, as any scientist does, to make sure that AI is used for good and not for bad purposes. Unfortunately, like many technologies, it’s completely dual use. Pretty much the same algorithms that are going to go into your autonomous car, to identify, track, and avoid pedestrians and cyclists, are going to go into autonomous drones to identify combatants, track them, and kill them. It’s a very small change to turn one algorithm into the other. And we’re going to want autonomous cars; they’re going to bring great benefits to our lives, save lots of lives, give mobility to the elderly, to the young, to the disabled. So there can be great benefits from those algorithms, but equally, the same algorithms can be repurposed to make warfare much more terrible and much more terrifying.

Ariel: And with AI, we’ve seen some breakthroughs in recent years, just generally speaking. Do any of those give you reason to worry that lethal autonomous weapons are closer than maybe we thought they might have been five or ten years ago? Or has the trajectory been consistent?

Toby: The recent breakthroughs have to be put into context: they’ve been in things like games, like the game of Go, very narrow-focus tasks without uncertainty. The real world doesn’t interfere when you’re playing a game of Go; it has very precise rules and very constrained actions that you need to take and things that you need to think about. So it’s good to see progress in these narrow domains, but we’re still not making much progress overall; there’s still a huge amount to be done to build machines that are as intelligent as us. But it’s not machines as intelligent as us that I’m most worried about; when we do have them, in 50 or 100 years’ time, that will be something we’ll have to think about then.

It's actually stupid AI that worries me: the fact that we're already thinking about giving responsibility to quite stupid algorithms that really cannot make the right distinctions. That's true in a technical sense, in terms of being able to distinguish combatants from civilians as required by international humanitarian law. And it's also true on moral grounds: they really can't judge things like proportionality, and they can't make the moral distinctions that humans can. They don't have any of the things like empathy and consciousness that allow us to make those difficult decisions that are made on the battlefield.

Ariel: If we do continue on our current path and we aren’t able to get a ban on these weapons, what concerns do you have? What do you fear will happen? Or what do you anticipate? What type of weapons?

Toby: The problem with the debate, I think, is that people try to conflate the concerns that we have into just one concern. And there are different concerns at different points in time and at different stages in the development of the technology.

The concerns I have for the next 10 years or so are different from the concerns I would have in 50 years' time. The concerns I have for the next 10 years or so are largely around incompetence: the machines would not be capable of making the right distinctions. And later on, as the machines become more competent, different concerns come in. They would actually change the speed, the duration, and the accuracy of war, and they would be very terrible weapons, because any ethical safeguards that we could, at that point, build in might be removed by bad actors. Sadly, there are plenty of bad actors out there who would be willing to remove any of the ethical safeguards that we might build in. So there's not one concern. Unfortunately, when you hear the discussion, people often try to distill it down to just a single concern at a single point in time, but depending on the state of the technology, there are different concerns as the technology gets more sophisticated and more mature. To begin with, I would be very concerned that we will introduce rather stupid algorithms onto the battlefield that can't make the right moral and technical distinctions that are required under international humanitarian law.

Ariel: Have you been keeping track at all of what sorts of developments have been coming out of different countries?

Toby: If you just go onto YouTube, you can see there are prototype weapons in pretty much every theater of battle. In the air, there are autonomous drones that have been under development for a number of years. On the sea, the US Navy launched, more than a year ago now, its first fully autonomous ship. And interestingly, when it was launched, they said it would just have defensive uses, hunting for mines, hunting for submarines; now they're talking about putting weapons on it. Under the sea, there's an autonomous submarine the size of a bus that's believed to be able to travel halfway across the Pacific fully autonomously. And on land there are a number of different autonomous weapons: certainly there are prototypes of autonomous tanks, autonomous sentry robots, and the like. So there is a bit of an arms race happening, and it's certainly very worrying to see that we're sort of locked into one of these bad equilibria, where everyone is racing to develop these weapons, in part just because the other side is.

China is definitely one of the countries to be worried about. It has made very clear its ambition to seek economic and military dominance through the use, in large part, of technologies like artificial intelligence, and it's investing very heavily to do that. Its military and commercial companies are very closely tied together, which gives it quite a unique position, perhaps even some technical advantages, in the development of AI, especially for the battlefield. So all of us at the UN meeting in April were pretty surprised when China came out and called for a ban on the deployment of autonomous weapons. It didn't say anything about the development of autonomous weapons, so that's probably not as far as I would like countries to go, because if they're developed, then you still run the risk that they will be used, accidentally or otherwise. The world is still not as safe as if they're not actually out there with their triggers waiting to go. But it's interesting to see that they made that call. It's hard to know whether they're just being disruptive or whether they really do see the serious concern we have.

I've talked to my colleagues, academic researchers in China, and they've been, certainly in private, sympathetic to the cause of regulating autonomous weapons. Of course, unfortunately, China is a country in which it's not possible, in many respects, to talk freely. And so they've made it very clear that it could be a career-killing move for them to speak publicly about these issues the way scientists in the West have done. Nevertheless, we have had signatures from Hong Kong, where it is possible to speak a bit more freely, which I think demonstrates that, within the scientific community internationally, across nations, there is actually broad support for these sorts of actions. But the local politics may prevent scientists from speaking out in their home country.

Ariel: A lot of the discussion around lethal autonomous weapons focuses on the humanitarian impact, but I was wondering if you could speak at all to the potential destabilizing effect that they could have for countries?

Toby: One of the aspects of autonomous weapons that I don't think is discussed enough is quite how destabilizing they will be as a technology. They will be relatively easy, and certainly cheap, to get your hands on. As I was saying to the Koreans when I was in Korea most recently, the presence of autonomous weapons would make South Korea even less safe than it is today. A country like North Korea has demonstrated it's willing to go to great lengths to obtain atomic weapons, and it would be much easier for them to obtain autonomous weapons. That would put South Korea in a very difficult situation, because if they were attacked by autonomous weapons and weren't able to defend themselves adequately, then that could escalate, and we might well find ourselves in a nuclear conflict, one that, of course, none of us would like to see. So these will be rather destabilizing weapons. They will fall into the wrong hands; they'll be used not just by the superpowers, but by smaller nations, even rogue states. Potentially, they might even be used by terrorist organizations.

And a final aspect that makes them very destabilizing is attribution. If someone attacks you with autonomous weapons, it's going to be very hard to know who attacked you. Even if you bring one of the weapons down and open it up and look inside, it's not going to tell you who launched it. There's no radio signal you can follow back to a base to find out who's actually controlling it. So it's going to be very hard to work out who's attacking you, and countries will deny, vehemently, that it's them, even if they did attack you. So these will be perfect weapons of terror, perfect weapons for troublemaking nations to do their troublemaking with.

One other concern that I have as a scientist is the risk of the field getting a bad reputation through the misuse of the technology. We've seen this in areas like genetically modified crops. The great benefits that we might have had from that technology, making crops more disease-resistant and more climate-resistant, benefits that we need, in fact, to deal with the pressing problems that climate change and a growing population put on our planet, have been negated by the fact that people were distrustful of the technology. And we run a similar sort of risk, I think, with artificial intelligence: if people see AI being used to fight terrible wars and being used against civilians and other people, the technology will have a stain on it. And all the many good uses and the great potential of the technology might be at risk, because people will turn against all sorts of developments of artificial intelligence. So that's another risk, and another reason many of my colleagues feel that we have to speak out very vocally to ensure that we get the benefits and that the public doesn't turn against the whole idea of AI being used to improve the planet.

Ariel: Can you talk about the difference between an AI weapon and an autonomous weapon?

Toby: Sure. There are plenty of good things that the military can use artificial intelligence for. In fact, the U.S. military has historically been one of the greatest funders of AI research. There are lots of good things you can use artificial intelligence for, in the battlefield and elsewhere. No one should risk life or limb clearing a minefield; that's a perfect job for a robot, because if something goes wrong and the robot gets blown up, you can replace the robot easily. Equally, filtering through all the information coming at you, and making sure that you can work out who are combatants and who are civilians, is a perfect job for AI to help with, one that will actually save lives and stop some of the mistakes that inevitably happen in the fog of war. And in lots of other areas, in logistics and so on, there are lots of good things, including in humanitarian aid, that AI will be used for.

So I'm not against the use of AI in militaries; I can see great potential for it to save lives, to make war a little less dangerous. But there is a complete difference when we look at removing humans entirely from the decision loop in a weapon and ending up with a fully autonomous weapon, where it is the machine that is making the final decision as to who lives and who dies. As I said before, that raises so many technical, moral, and legal questions that we shouldn't go down that line. And ultimately, I think there's a very big moral argument, which is that we shouldn't hand over those sorts of decisions; that would be taking us into completely new moral territory that we've never seen before. Warfare is a terrible thing, and we sanction it in part because we're risking our own lives; it should be a matter of last resort, not something that we hand over easily to machines.

Ariel: Is there anything else that you think we should talk about?

Toby: I think we'd want to talk about whether regulating autonomous weapons, regulating AI, would hinder the benefits for peaceful or non-military uses. I'm quite unconcerned, as are many of my colleagues, that regulating autonomous weapons would hinder the development, in any way at all, of the peaceful and good uses of AI. In fact, as I mentioned earlier, I'm actually much more fearful that if we don't regulate, there will be a backlash against the technology as a whole, and that will actually hinder the good uses of AI. The bans on chemical weapons have not held back chemistry, the bans on biological weapons have not held back biology, and the bans on nuclear weapons have not held back the development of peaceful uses of nuclear power. So I'm completely unconcerned, as many of my colleagues are, that regulating autonomous weapons will hold back the field in any way at all; in fact, quite the opposite.

Ariel: Regulations for lethal autonomous weapons will be more effective if the debate is framed in a more meaningful way, so I’m happy Richard Moyes could talk about how the concept of meaningful human control has helped move the debate in a more focused direction.

Richard: I’m Richard Moyes, and I am Managing Director of Article 36, which is a non-governmental organization which focuses on issues of weapons policy and weapons law internationally.

Ariel: To start, you have done a lot of work, I think you’re credited with coining the phrase “meaningful human control.” So I was hoping you could talk a little bit about first, what are some of the complications around defining whether or not a human is involved and in control, and maybe if you could explain some of the human in the loop and on the loop ideas a little bit.

Richard: We developed and started using the term meaningful human control really as an effort to try and get the debate on autonomous weapons focused on the human element, the form and nature of human engagement that we want to retain as autonomy develops in different aspects of weapons function. First of all, that’s a term that’s designed to try and structure the debate towards thinking about that human element.

I suppose the simplest question that we raised early on when proposing this term was really a recognition that, I think, everybody realizes some form of human control would be needed over new weapon technologies. Nobody is really proposing weapon systems that operate without any human control whatsoever. At the same time, I think people could also recognize that simply having a human pressing a button when they're told to do so by a computer screen, without really having any understanding of the situation they're responding to or the context, doesn't really involve human control. So even though in that latter situation you might have a human in the loop, as that phrase goes, unless that human has some substantial understanding of what the context is and what the implications of their actions are, a merely pro forma human engagement doesn't seem sufficient either.

So, in a way, the term meaningful human control was put forward as a way of shifting the debate onto that human element, but also putting on the table this question of, well, what’s the quality of human engagement that we really need to see in these interactions in order to feel that our humanity is being retained in the use of force.

Ariel: Has that been successful in helping to frame the debate?

Richard: With this sort of terminology, of course, different actors use different terms. Some people talk about necessary human control, or sufficient human control, or necessary human judgment. There are different word choices there, and I think there are pros and cons to those different choices, but we don't tend to get too hung up on the specific wording that's chosen. The key thing is that these are seen, bundled together, as a critical area now for discussion among states and other actors in the multilateral diplomatic conversation about where the limits of autonomy in weapon systems lie.

Coming out of the Group of Governmental Experts meeting of the Convention on Conventional Weapons that took place earlier this year, I think the conclusion was more or less that this human element really does now need to be the focus of discussion and negotiation. So one way or another, I think the debate has shifted quite effectively onto this issue of the human element.

Ariel: What are you hoping for in this upcoming meeting?

Richard: Perhaps what I’m hoping for and what we’re going to get, or what we’re likely to get, might be rather different things. I would say I’d be hoping for states to start to put forward more substantial elaborations of what they consider the necessary human control, human element in the use of force to be. More substance on that policy side would be a helpful start, to give us material where we can start to see the differences and the similarities in states’ positions.

However, I suspect that the meeting in August is going to focus mainly on procedural issues around the adoption of the chair’s report, and the framing of what’s called the mandate for future work of the Group of Governmental Experts. That probably means that, rather than so much focus on the substance, we’re going to hear a lot of procedural talk in the room.

That said, in the margins, I think there's still a very good opportunity for us to start to build confidence and a sense of partnership amongst states, non-governmental organizations, and other actors who are keen to work towards the negotiation of an instrument on autonomous weapon systems. Building that partnership between progressive states, civil society actors, and perhaps others from the corporate sector is going to be critical to developing a political dynamic for the period ahead.

Ariel: I'd like to go back, quickly, to this idea of human control. A while back, I talked with Heather Roff, and she gave this example, I think it was called the empty hangar problem. Essentially, no one expects some military leader to walk down to the airplane hangar and discover that the planes have all gone off to war without anyone saying something.

I think that gets at some of the confusion as to what human control looks like. You’d mentioned briefly the idea that a computer tells a human to push a button, and the human does that, but even in fully autonomous weapon systems, I think there would still be humans somewhere in the picture. So I was wondering if you could elaborate a little bit more on maybe some specifics of what it looks like for a human to have control or maybe where it starts to get fuzzy.

Richard: I think that we recognize that in the development of weapon technologies, already we see significant levels of automation, and a degree of handing over certain functions to sensors and to assistance from algorithms and the like. There are a number of areas that I think are of particular concern to us. I think, in a way, this is to recognize that a commander needs to have a sufficient contextual understanding of where it is that actual applications of force are likely to occur.

Already, we have weapon systems that might be projected over a relatively small area, and within that area, they will identify the heat shape of an armored fighting vehicle for example, and they may direct force against that object. That’s relatively accepted in current practice, but I think it’s accepted so long as we recognize that the area over which any application of force may occur is actually relatively bounded, and it’s occurring relatively shortly after a commander has initiated that mission.

Where I think my concerns, our concerns, lie is that that model of operation could be expanded over a greater area of space on the ground, and over a longer period of time. As that period of time and that area of space on the ground increase, then the ability of a commander to actually make an informed assessment about the likely implications of the specific applications of force that take place within that envelope becomes significantly diluted, to the point of being more or less meaningless.

Richard: For us, this is linked also to the concept of the attack as a term in international law. Legal obligations bear on human commanders at the unit of the attack, so there are certain legal obligations that a human has to fulfill for each attack. Now, an attack doesn't mean firing one bullet. An attack could contain a number of applications of actual force. But it seems to us that if you simply expand the space and the time over which an individual weapon system can identify target objects for itself, ultimately you're eroding that notion of an attack, which is actually a fundamental building block of the structure of the law. You're diluting that legal framework to the point of it arguably being meaningless.

We want to see a reasonably constrained period of, let's call it, independence of operation for a system. It may not be fully independent, but a commander should have the ability to sufficiently understand the contextual parameters within which that operation is occurring.

Ariel: Can you speak at all, since you live in the UK, on what the UK stance is on autonomous weapons right now?

Richard: I would say the UK has, so far, been a somewhat reluctant dance partner on the issue of autonomous weapons. I do see some, I think, positive signs of movement in the UK’s policy articulations recently. One of the main problems they’ve had in the past is that they adopted a definition of lethal autonomous weapon systems, which is the terminology used in the CCW. It’s undetermined what this term lethal autonomous weapon systems means. That’s a sort of moving target in the debate, which makes the discussion quite complicated.

But the UK adopted a definition of that term which was somewhat in the realm of science fiction as far as we're concerned. They described lethal autonomous weapon systems as having the ability to understand a commander's intent. I think, in doing so, they were suggesting an almost human-like intelligence within the system, which is a long way away, if even possible. It's certainly a long way away from where we are now, where developments of autonomy in weapon systems are already causing legal and practical management problems. By adopting that sort of futuristic definition, they somewhat ruled themselves out of being able to make constructive contributions to the actual debate about how much human control there should be in the use of force.

Now recently, in certain publications, the UK has slightly opened up some space to recognize that that definition might actually not be so helpful, and that maybe this focus on the human control element that needs to be retained is actually the most productive way forward. How positive the UK will be, from my perspective, in that discussion, and in talking about the level of human control that needs to be retained, remains to be seen. But at least they're engaging with some recognition that that's the area where there needs to be more policy substance. So fingers crossed.

Ariel: I’d asked Richard about the UK’s stance on autonomous weapons, but this is a global issue. I turned to Mary Wareham and Bonnie Docherty for more in-depth information about international efforts at the United Nations to ban lethal autonomous weapons.

Bonnie: My name’s Bonnie Docherty. I’m a senior researcher at Human Rights Watch, and also the director of Armed Conflict and Civilian Protection at Harvard Law School’s International Human Rights Clinic. I’ve been working on fully autonomous weapons since the beginning of the campaign doing most of the research and writing regarding the issue for Human Rights Watch and Harvard.

Mary: This is Mary Wareham. I’m the advocacy director of the Arms Division at Human Rights Watch. I serve as the global coordinator of the Campaign to Stop Killer Robots. This is the coalition of non-governmental organizations that we co-founded towards the end of 2012 and launched in April 2013.

Ariel: What prompted the formation of the Campaign to Stop Killer Robots?

Bonnie: Well, Human Rights Watch picked up this issue and we published our first report in 2012. Our concern was that the development of this new technology raised a host of concerns: legal concerns about compliance with international humanitarian law and human rights law, moral concerns, accountability concerns, scientific concerns, and so forth. We launched a report that was an initial foray into the issues, trying to preempt the development of these weapons before they came into existence, because once the genie's out of the bottle, it's hard to put it back in; it's hard to get countries to give up a new technology.

Mary: Maybe I can follow up there on how we established the Campaign to Stop Killer Robots. I did a lot of leg work in 2011 and 2012 talking to a lot of the people that Bonnie was talking to for the preparation of the report. My questions were more along the lines of: what should we do once we launch this report? Do you share the same concerns that we have at Human Rights Watch, and, if so, is there a need for a coordinated international civil society coalition to organize us going forward and to present a united voice and position to the governments that we want to take action on this? For us, working that way, in a coalition with other non-governmental organizations, is what we do. We've been doing it for the last two decades on other humanitarian disarmament issues: the International Campaign to Ban Landmines, the Cluster Munition Coalition. We find it's more effective when we all try to work together and provide a coordinated civil society voice. There was strong interest, and therefore we co-founded the Campaign to Stop Killer Robots.

Ariel: What prompted you to consider a ban versus trying to … I guess I don't know what other options there might have been.

Bonnie: We felt from the beginning that what was needed to address fully autonomous weapons is a preemptive ban on development, production and use. Some people have argued that existing law is adequate. Some people have argued you only need to regulate it, to limit it to certain circumstances, but in our mind a ban is essential, and that draws on past work on other conventional weapons such as landmines and cluster munitions, and more recently nuclear weapons.

The reason for a ban is that if you allow these weapons to exist, even to come into being, to be in countries’ arsenals, they will inevitably get in the hands of dictators or rogue actors that will use them against the law and against the rules of morality. They will harm combatants as well as civilians. It’s impossible once a weapon exists to restrict it to a certain circumstance. I think those who favor regulation assume the user will follow all the rules, and that’s just not the way it happens. We believe it should be preemptive because once they come into existence it’s too late. They will be harder to control, and so if you prevent them from even happening that will be the most effective solution.

The last point I'd make is that it also increases the stigma against the weapons, which can influence even countries that aren't party to a treaty banning them. This has been proven in past weapons treaties; there was even a preemptive ban on blinding lasers in the 1990s, and that's been very effective. There is legal precedent for this, and many arguments for why a ban is the best solution.

Mary: Yeah, there’s two ways of framing that call, which is not just the call of Human Rights Watch, but the call of the Campaign to Stop Killer Robots. We seek a preemptive ban on the development, production and use of fully autonomous weapons. That’s a kind of negative way of framing it. The positive way is that we want to retain meaningful human control over the use of force and over weapons systems going forward. There’s a lot of interest, and I’d say convergence on those two points.

We’re five years on since the launch of the campaign, 26 countries are now supporting the call for a ban and actively trying to get us there, and an even larger number of countries, actually, virtually all of the ones who’ve spoken to-date on this topic, acknowledge the need for some form of human control over the use of force and over weapons systems going forward. It’s been interesting to see in the five diplomatic meetings that governments have held on this topic since May 2014, the discussions keep returning to the notion of human control and the role of the human and how we can retain that going forward because autonomy and artificial intelligence are going to be used by militaries. What we want to do, though, is draw a normative line and provide some guidance and a framework going forward that we can work with.

Ariel: You just referred to them as fully autonomous weapons. At FLI we usually talk about lethal autonomous weapons versus non-lethal fully autonomous weapons, and so that sort of drives me to the question of, to what extent do definitions matter?

Then, this is probably a completely different question, how are lethal autonomous weapons different from conventional weapons? The reason I’m combining these two questions is because I’m guessing definition does play a little bit of a role there, but I’m not sure.

Bonnie: Well, for countries to make international law, it's important that they have a general, common understanding of what we're talking about. Generally, in a legal treaty, the last thing to be articulated is the actual definition. It's premature to get a detailed, technical definition, but we feel that, although a variety of names have been used (lethal autonomous weapon systems, fully autonomous weapons, killer robots), in essence they're all talking about the same thing. They're all talking about a system that can select a target and choose to fire on that target without meaningful human control. There's already convergence around this definition, even if it hasn't been defined in detail. In terms of conventional munitions: these systems are, in essence, conventional weapons if they deploy conventional weapons. It depends on what the payload is. If a fully autonomous system were launching nuclear weapons, it would not be a conventional weapon; if it's launching cluster munitions, it would be a conventional one. So it's not right to say they're not conventional weapons.

Mary: The talks are being held at the Convention on Conventional Weapons in Geneva. This is where governments decided to house this topic. I think it’s natural for people to want to talk about definitions. From the beginning that’s what you do with a new topic, right? You try and figure out the boundaries of what you’re discussing here. Those talks in Geneva and the reporting that has been done to date and all of the discourse, I think it’s been pretty clear that this campaign and this focus on fully autonomous weapons is about kinetic weapons. It’s not about cyber, per se, it’s about actual things that can kill people physically.

I think the ICRC, the Red Cross, has made it an important contribution with its suggestion to focus on the critical functions of weapons systems, which is what we were doing in the campaign, we just weren’t calling it that. That’s this action of identifying and selecting a target, and then firing on it, using force, lethal or otherwise. Those are the two functions that we want to ensure remain under human control, under meaningful human control.

Some other states like to draw what we call a very wide definition of meaningful human control. For some of them it means good programming, nice design, a weapons review, a kind of legal review of whether the weapon system will be legal and whether they can proceed to develop it. You could cast a very wide net when you're talking about meaningful human control, but for us the crux of the whole thing is this notion of selecting targets and firing on them.

Ariel: What are the concerns that you have about this idea of non-human control? What worries you about that?

Mary: Of autonomy in weapon systems?

Ariel: Yeah, essentially, yes.

Mary: We’ve articulated legal concerns here at Human Rights Watch just because that’s where we always start, and that’s Bonnie’s area of expertise, but there are much broader concerns here that we’re also worried about, too. This notion of crossing a moral line and permitting a machine to take human life on the battlefield or in policing or in border control and other circumstances, that’s abhorrent, and that’s something that the Nobel Peace Laureates, the faith leaders and the others involved in the Campaign to Stop Killer Robots want to prevent. For them that’s a step too far.

They also worry about outsourcing killing to machines. Where’s the ethics in that? Then, what impact is this going to have on the system that we have in place globally? How will it be destabilizing in various regions, and, as a whole, what will happen when dictators and one-party states and military regimes get ahold of fully autonomous weapons? How will they use them? How will non-state armed groups use them?

Bonnie: I would just add, building on what Mary said, another reason human control is so important is that humans bring judgment. They bring legal and ethical judgment based on their innate characteristics, on their understanding of another human being, of the mores of a culture, things that a robot cannot bring and that cannot be programmed. For example, when humans are weighing whether the military advantage will justify an attack that causes civilian harm, they apply that judgment, which is both legal and ethical. A robot won't have that; that's a human thing. Losing humanity in the use of force could potentially violate the law, as well as raise the serious moral concerns that Mary discussed.

Ariel: I want to go back to the process to get these weapons banned. It’s been going on for quite a few years now. I was curious, is that slow, or is that just sort of the normal speed for banning a weapon?

Mary: Look at nuclear weapons, Ariel.

Ariel: Yeah, that’s a good point. That took a while.

Mary: That took so many years, you know? That’s the example that we’re trying to avoid here. We don’t want to be negotiating a non-proliferation treaty in 20 years time with the small number of countries who’ve got these and the other states who don’t. We’re at a crossroads here. Sorry to interrupt you.

Ariel: No, that was a good point.

Mary: There have been five meetings on this topic to date at the United Nations in Geneva, but each of those meetings has only been up to a week long, so, really, it's only five weeks of talks that have happened in the last four years. That's not much time to make a lot of progress and to get everybody around the same table with a shared understanding, but I think there's definitely been some progress in those talks to delineate the parameters of this issue, to explore it, and to begin to pull apart the notion of human control and how you can ensure that it's retained in weapons systems, in the selection of targets and the use of force. There's a wide range of different levels of knowledge on this issue, not just in civil society and academia and among the public, but also within governments.

There's a lot of leg work to be done there to increase the awareness, but also the confidence of governments to feel like they can deal with this. What's happened, especially I think in the past year, has been increased calls to move from exploring the issue and talking about the parameters of the challenge to asking, "What are we going to do about it?" That's going to be the big debate at the next meeting, which is coming up at the end of August: what will the recommendation be for future work? Are the governments going to keep talking about this, which we hope they do, but, more importantly, what are they going to do about it?

We're seeing, I think, a groundswell of support now for moving towards an outcome. States realize that they do not have the time or the money to waste on inconclusive deliberations, and so they need to be exploring options for pathways forward, but there really are not that many options. As has been mentioned, states can talk about international law and the existing rules, how they can apply them, and how to have more transparency there, but I think we've moved beyond that.

There are a couple of possibilities which will be debated. One is political measures: a political, non-binding declaration. Can we get agreement on some form of principles over human control? That sounds good, but it doesn't go nearly far enough. Or we could create new international law. How do we do that in this particular treaty, the Convention on Conventional Weapons? You move to a negotiating mandate, and you set the objective of negotiating a new protocol under the Convention on Conventional Weapons. At the moment, there has been no agreement to move to negotiate new international law, but we're expecting that to be the main topic of debate at the next meeting, because they have to decide now what they're going to do next year.

For us, the biggest developments, I think, are happening outside of the room right now rather than in Geneva itself. There's a lot of activity now starting to happen in national capitals, by governments trying to figure out what their position is on this, what their policy is on this, and there's more prodding and questioning and debate starting to happen in national parliaments, which has to happen in order to determine what the government position is and what's going to happen on it. Then we have the examples of the open letters, the sign-on letters, the ethical principles; there are all sorts of new things coming out in recent weeks that I think will be relevant to what the governments are discussing, and that we hope will provide them with impetus to move forward with focus and purpose here.

We can't put a timeline on when they might create a new international treaty, but we're saying you can do this quickly if you put your mind to it and you say that this is what you want to try to achieve. We believe that if they move to a negotiating mandate at the end of this year, they could negotiate the treaty next year. Negotiating the treaty is not the part that takes a long time. It's about getting everybody into the position where they want to create new international law. The actual process of negotiating that law should be relatively swift. If it takes longer than a year or two, then it runs the risk of turning into another set of inconclusive deliberations that don't produce anything. For us, it's absolutely crucial to get the goal in there at the beginning. At the moment, the talks have gone from informal to formal, but still with no agreed option or outcome.

Ariel: What is some of the resistance that you’re facing to moving towards a ban? Are governments worried that they’re going to miss out on a great technology, or is there some other reason that they’re resisting?

Mary: Just to say, 85 countries have spoken out on this topic to date, most of them not at any great length, but just to say, "This is important. We're concerned. We support the international talks." We have a majority of countries now who want to move towards negotiating new international law. Where are the blockages at the moment? At the last round of talks and at the previous ones, it was basically Israel, Russia, and the United States who were saying it's premature to decide where these talks should lead: we need to further explore and discuss the issues before we can make any progress. Others are now less patient with that position, and it will be interesting to see if those three countries in particular change their minds here.

At the particular treaty that we're at, the Convention on Conventional Weapons, the states take their decisions by consensus, which means they can't vote; there are no voting procedures there. They have to strive for consensus, where everybody in the room agrees, or at least does not object to moving forward. That threat of a kind of blocking of consensus is always there, especially from Russia, but we'll see. There's no kind of pro-killer robot state saying, "We want these things. We need these things," right now, at least not in the diplomatic talks. The only countries who have wanted to talk about the potential advantages or benefits are Israel and the United States. All of the other countries who speak about this are more concerned about understanding and coming to grips with all of the challenges that are raised, and then figuring out what the regulatory framework should be.

Ariel: Bonnie, was there anything you wanted to add to that?

Bonnie: I think Mary summarized the key points. I was just going to say that there are some people who would argue that we should wait and see what the technology will bring, that we don't know where it'll go. Our counter-argument is something called the precautionary principle: even if there's scientific uncertainty about where a technology will go, if there's a significant risk of public harm, which there is in this case, that scientific uncertainty should not stand in the way of action. I think the growing number of states that have expressed concern about these weapons, and the near-consensus emerging around the need for human control, show that there is willingness to act at this point. As Mary said, this is not a situation where people are advocating for these weapons, and I think that in the long run the agreement that there should be human control over the use of force will outweigh any hesitation based on the wait-and-see approach.

Mary: We had a good proposal, or not a proposal, but an offer from the United Nations Secretary-General in his big Agenda for Disarmament framework that he launched a couple of months ago, saying that he stands ready to support the efforts of UN member states to elaborate new measures on lethal autonomous weapon systems, including legally binding arrangements. For him, he wants states to ensure that humans remain at all times in control over the use of force. To have that kind of offer of support from the highest level at the United Nations, I think, is very important.

The other recent pledges and commitments, the one by the 200 technology companies and more than 2,600 scientists, AI experts, and other individuals committing not to develop lethal autonomous weapons systems, send a very powerful message, I think, to the states: these groups and individuals are not going to wait for the regulation. They're committing not to do it, and this is what they expect the governments to do as well. We also saw the ethical principles issued by Google in recent weeks and the pledge by the company not to design or develop artificial intelligence for use in weapons. All of these efforts and initiatives are very relevant to what states need to do going forward. This is why we in the Campaign to Stop Killer Robots welcome and encourage them, and want to ensure that we have as broad-based an appeal as possible to support the government action that we need taken.

Ariel: Can you talk a little bit about what’s happening with China? Because they’ve sort of supported a ban. They’re listed as supporting a ban, but it’s complicated.

Mary: It’s funny because so many other countries that have come forward and endorsed the call for a ban have not elicited the same amount of attention. I guess it’s obviously interesting, though, for China to do this because everybody knows about the investments that China is making into military applications of artificial intelligence and autonomy. We see the weapons systems that are in development at the moment, including swarms of very small miniature drones, and where will that head?

What China thinks about this issue matters. At the last meeting, China basically endorsed the call for a ban, but said, and there's always a but, that its support was limited to prohibiting use only, and did not address development or production. For us that's a partial ban, but we put them on the list that the campaign maintains, and they're the first state to have an asterisk by its entry saying, "Look, China is on the ban list, but it's not fully committed here." We needed to acknowledge that, because it wasn't really the first time China had hinted it would support creating new international law. It has been hinting at this in previous papers, including one in which China's review of existing international law raised so many questions and doubts that it does see a need to create international law specific to fully autonomous weapons systems. China gave the example of the blinding lasers protocol at the CCW, which prohibits laser weapons that permanently blind human soldiers.

I think the real news on China is that its position, saying that existing law is insufficient and we need to create new international rules, splits the P5, the permanent five members of the United Nations Security Council. You have Russia and the United States arguing that it's too early to determine what the outcome should be, and the UK (Richard can explain better exactly what the UK wants) seems to be satisfied with the status quo. Then France is pursuing a political declaration, but not legally binding measures. There's no unity anymore in that group of five permanent members of the Security Council, and those states do matter, because they are some of the ones best placed to be developing and investing in increasingly autonomous weapons systems.

Ariel: Okay. I wanted to also ask, unrelated: right now what you're trying to do, what we're trying to do, is get a preemptive ban on a weapon that doesn't exist. What are some examples in the past of that having succeeded, as opposed to having to prove some humanitarian disaster as the result of a weapon?

Bonnie: Well, the main precedent for that is the preemptive ban on blinding lasers, which is a protocol to the Convention on Conventional Weapons. We did some research a few years ago into the motives behind the preemptive ban on blinding lasers, and many of them are the same. They raised concerns about the ethics of permanently blinding someone, whether a combatant or a civilian. They raised concerns about the threat of an arms race. They raised concerns that there be a ban, but that it not impede peaceful development in that area. That ban has been very successful. It has not impeded the peaceful use of lasers for many civilian purposes, but it has created a stigma against, and a legally binding prohibition of, the use of blinding lasers. We think that that's an excellent model for fully autonomous weapons, and it also appears in the same treaty under which these fully autonomous weapons, or lethal autonomous weapon systems, are being discussed right now. It's a good model to look at.

Mary: Bonnie, I really like that paper you did on the other precedents for retaining human control over weapons systems: the notion that, looking at past weapons that have been prohibited, in many instances it's because of the uncontrollable effects that the weapons create, from chemical weapons and biological and toxin weapons to antipersonnel landmines, which, once deployed, you cannot control anymore. It's this notion of being able to control the weapon system once it's activated that has driven those previous negotiations, right?

Bonnie: Correct. There's precedent for a preemptive ban, but there's also precedent for a desire to maintain human control over weapons. As Mary said, several treaties, on chemical weapons, biological weapons, and landmines, have banned weapons in large part because people in governments were concerned about losing control over the weapons system. In essence, it's the same model here: by launching fully autonomous weapons, you'd be losing control over the use of force. I think there's a precedent for a ban, and there's a precedent for a preemptive ban, both of which are applicable in this situation.

Ariel: I talked to Paul Scharre a little bit earlier, and one of the things he talked about were treaties that developed as a result of the powers that be recognizing that a weapon would be too big a risk for them, and so agreeing to ban it. The other sort of driving force for treaties was usually civil society, based on the general public saying, "This is not okay." What role do you see for both of those situations here?

Bonnie: There's a multitude of reasons why these weapons should be banned, and I think both of the ones you mentioned are valid in this case. From our point of view, the main concern is a humanitarian one, and that's civil society's focus. We're concerned about the risk to civilians. We're concerned about moral issues and matters like that. That builds on past treaties, what they call humanitarian disarmament treaties, which are designed to protect humanity through legal norms, and, traditionally, often through bans: bans of landmines, cluster munitions, and nuclear weapons.

There have been other treaties, and sometimes they overlap, that have been driven more by security reasons: countries are concerned about other nations getting their hands on these weapons, and they feel in the long run it's better for no one to have them than for others to have them. Certainly, chemical weapons were an example of that. This does not mean that a treaty can't be motivated by both reasons. That often happens, and I think both reasons are applicable here; they've just come from slightly different trajectories.

Mary: It's pretty amazing, at some of the diplomatic talks on killer robots that we've been at, to hear governments debating the ethics of whether or not a specific weapon system, such as fully autonomous weapons, should be permitted, should be allowed. It's rare that that happens. Normally, we are dealing with the aftermath, the consequences of proliferation and of widespread use, production, and stockpiling. This is an opportunity to do something in advance, and it does lead to a little bit of, I'd say, a North-South divide between the military powers who have the resources at their disposal to invest in increasingly autonomous technology and to push the boundaries, and the vast majority of countries who are asking, "What's the point of all of this? Where is the relevance of the UN Charter, which talks about general and complete disarmament as being the ultimate objective?" They ask, "Have we lost that goal here? Is the ultimate objective to create more and better and more sophisticated weapons systems, or is it to end war and deal with the consequences of warfare through disarmament?"

Those are kind of really big-picture questions that are raised in this debate, and ones that we leave to those governments to make, but I think it is indicative of why there is so much interest in this particular concern, and that’s demonstrated by just the sheer number of governments who are participating in the international talks. The international talks, they’re in the setting called a Group of Governmental Experts, but this is not about a dozen guys sitting around the table in a small room. This is a big plenary meeting with more than 80 countries following, engaging, and avidly trying to figure out what to do.

Ariel: In terms of just helping people understand how the UN works, what role does a group like the Campaign to Stop Killer Robots play in the upcoming meeting? If, ultimately, the decision is made by the states and the nations, what is your role?

Mary: Our role is 24/7, all year round. These international meetings only happen a couple of times a year; this will be the second week this year. Most of our work this year has been happening in capitals and in places outside of the diplomatic meetings, because that's where you really make progress: through the parliamentary initiatives, through reaching the high-level political leadership, through engaging the public, through talking to the media and raising awareness about the challenges here and the need for action. All of those things are what makes things move inside the room with the diplomacy, because the diplomats need instructions from capitals in order to really progress.

At the meeting itself, we seek to provide a diverse delegation, not just people from Europe and North America but from around the world, because this is a multilateral meeting. We need to ensure that we can reach out and engage with all of the delegates in the room, because every country matters on this issue, and every country has questions. Can we answer all those questions? Probably not, but we can talk through them with those states, try to address the concerns, and try to be a valued partner in the deliberations that are happening. The normal way of working for us here at Human Rights Watch is to work alongside other organizations through coordinated civil society initiatives, so that you don't go to the meeting and have 50 statements from different NGOs. You have just a few, or just one, so that you can be absolutely clear in guiding where you want the deliberations to go and the outcome that you want.

We’ll be holding side events and other efforts to engage with the delegates in different ways, as well as presenting new research and reports. I think you’ve got something coming out, Bonnie, right?

Bonnie: We'll be releasing a new report on the Martens Clause, which is a provision of international law, found in the Geneva Conventions and other treaties, that brings ethics into law. It basically has two prongs, which we'll elaborate on in the report: countries must comply with the principles of humanity and with the dictates of public conscience. In short, we believe fully autonomous weapons raise concerns over both. We believe losing human control will violate basic principles of humanity, and the groundswell of opposition that's growing among not only governments but also faith leaders, scientists, tech companies, academics, civil society, et cetera, shows that the public conscience is coming out against fully autonomous weapons and for maintaining human control over the use of force.

Ariel: To continue with this idea of the ethical issues surrounding lethal autonomous weapons, we’re joined now by Peter Asaro.

Peter: I’m Peter Asaro. I’m an Associate Professor in the School of Media Studies at the New School University in New York City, and I’m also the co-founder and vice chair of the International Committee for Robot Arms Control, which is part of the leadership steering committee of the Campaign to Stop Killer Robots, which is a coalition of NGOs that’s working at the UN to ban fully autonomous weapons.

Ariel: Could you tell us a little bit about how you got involved with this and what first gave you cause for concern?

Peter: My background is in philosophy and computer science, and I did a lot of work in artificial intelligence and in the philosophy of artificial intelligence, as well as the history of science, early computing, and the development of neural networks and the mathematical and computational theories behind all of that in the 1930s, '40s, '50s, and '60s. That was my graduate work, and as part of it, I got really interested in modern or contemporary applications of both artificial intelligence and robotics, specifically the embodied forms of artificial intelligence, which are robotic in various ways, and got really interested in not just intelligence, but social interaction.

That sort of snowballed into thinking about robot ethics, and what seemed the most pressing issue within robot ethics was the use of violence, the use of force, and whether we would allow robots to kill people, and of course the first place that was going to happen would be the military. So I'd been thinking a lot about the ethics of military robotics from the perspective of just war theory, but also from a broad range of philosophical and legal perspectives as well.

That got me involved with Noel Sharkey and some other people who were interested in this from a policy perspective and we launched the International Committee for Robot Arms Control back in 2009, and then in 2012, we got together with Human Rights Watch and a number of other NGOs to form the Campaign to Stop Killer Robots.

Ariel: That leads into the next question I have for you, and it’s very broad. Can you talk a little bit about what some of the ethical issues are surrounding robots and more specifically autonomous weapons in warfare?

Peter: Of course there's a whole host of ethical issues around robotics in general: privacy and safety are sort of the big ones, but there are all sorts of more complicated ones as well, such as job displacement, how we treat robots, and the impacts on society and things like that. Within the military context, I think the issues are clearer in some sense, because it's mostly around the use of autonomous systems to apply lethal force.

So the primary question is: should we allow autonomous weapons systems to make lethal decisions independently of human control or human judgment, however you frame that? And then, subsidiary to that, some would argue: does the programming within a system constitute that kind of human control or decision making? From my perspective, pre-programming doesn't really do that, and that's because I come from a philosophical background, so we look at just war theory, and you look at ethics, especially Kantian ethics, and the requirements for the morality of killing. Killing is generally speaking immoral, but there are certain exceptions, and those are generally self-defense, or collective self-defense in the case of war. But in order to justify that killing, you need reasons and justifications. And machines and computational reasoning, at least at this stage of development, are not the type of systems that have reasons. They follow rules: if certain conditions are met, a rule is applied and a result is obtained. But making a reasoned judgment about whether to use lethal force or whether to take a human life depends on a deeper understanding of reasons, and I think that's a sort of moral agency, a moral decision making and moral judgment, that requires capacities that automated decision-making systems just don't have.

Maybe down the road in the future, machines will become conscious, machines will understand the meaning of life, machines will understand what it means to take a life, machines will be able to recognize human beings as humans who deserve rights that need to be respected, and systems may understand what it means to have a duty to respect the rights of others. But simply programming rules into machines doesn’t really do that. So, from a legal perspective as well, there’s no real accountability for these sorts of systems because they’re not legal agents, they’re not moral agents, you cannot sue a computer or a robot. You cannot charge them with crimes and put them in jail and things like that.

So, we have an entire legal system as well as a moral framework that assumes that humans are the responsible agents and the ones making decisions, and as soon as you start replacing that decision making with automated systems, you start to create significant problems for the regulation of these systems and for accountability and for justice. And then that leads directly to problems of safety and control, and what kinds of systems are gonna be fielded, what are gonna be the implications of that for international stability, who’s gonna have access to that, what are the implications for civilians and civilian infrastructures that might be targeted by these systems.

Ariel: I had wanted to go into some of this legality and liability stuff that you’ve brought up, and you’ve sort of given a nice overview of it as it is, but I was hoping you could expand a little bit on how this becomes a liability issue, and also … This is probably sort of an obvious question, but could you touch a little on just how complicated it is to change the laws so that they would apply to autonomous systems as opposed to humans?

Peter: A lot of the work I’ve been doing under a grant from the Future of Life Institute looks at liability in increasingly autonomous systems. Within civilian domestic applications, of course, the big application that everybody’s looking at at the moment is the self-driving car, so you can ask this question: who’s responsible when a self-driving car causes an accident? And the way that liability law works, of course, somebody somewhere is always going to wind up being responsible. The law will find a way to hold somebody responsible. The question is whether existing precedent and the ways of doing things under current legal frameworks are really just, or really the best way forward, as we have these kinds of increasingly autonomous systems.

So, in terms of holding persons responsible and liable: under tort law, if you have an accident, then you can sue somebody. This isn’t criminal law, this is the law of torts, and under that you receive monetary compensation for damages done. Ideally, the person, or agent, or company, or what have you that causes the harm is the one that should pay. Of course, that’s not always true, and liability does things like joint and several liability, in which, even though one party only had a small hand in causing a harm, they may have lots of money, like a government or a state, or a city, or something like that, and so they may actually wind up paying far more as a share of damages than they actually contributed to the problem.

You also have situations of strict liability, such that even if your agency in causing a problem was very limited, you can still be held fully responsible for the implications. There are some interesting parallels here with the keeping of animals, which are kind of autonomous systems in a sense. They have minds of their own; they do things. On the other hand, we expect them to be well behaved and well trained, at least for domestic animals. So generally speaking, you have liability for harms caused by your dog or your horse and so forth as a domesticated animal, but you don’t have strict liability. It actually has to be shown that maybe you’ve trained your dog to attack, or that you’ve failed to properly train your horse or keep it in a stable or what have you, whereas if you keep a tiger or something like that and it gets out and causes harm, then you’re strictly liable.

So the question is for a robot, should you be strictly liable for the robots that you create or the robots that you own? Should corporations that manufacture these systems be strictly liable for all of the accidents of self-driving cars? And while that seems like a good policy from the perspective of the public, because all the harms that are caused by these systems will be compensated, that could also stifle innovation. In the car sector, that doesn’t seem to be a problem. As it turns out, the president of Volvo said that they will accept strict liability for all of their self-driving cars. Tesla Motors has released a number of autopilot systems for their cars and more or less accepted the liability for that, although there’s only been a few accidents, so the actual jurisprudence or case law is still really emerging around that.

But cars are, I think, a technology where the vehicles are very expensive, there’s a lot of money to be made in self-driving cars, and so the expectation of the car companies is that there will be very few accidents and that they can really afford to pay the damages for all those accidents. Now, is that gonna be true for personal robots? If you have a personal assistant, a sort of butler robot who maybe goes on shopping errands and things like that for you, there’s a potential for them to cause significant economic damage. They’re probably not gonna be nearly as expensive as cars, hopefully, and it’s not clear that the market for them is going to be as big, and it’s not clear that companies would be able to absorb the cost of strict liability. So, there’s a question of whether that’s really the best policy for those kinds of systems.

Then there are also questions about the ability of people to modify their systems. If you’re holding companies strictly responsible for their products, then those companies are not going to allow consumers to modify those products in any way, because that would affect their ability to control them. If you want a kind of DIY culture around autonomous systems and robotics, then you’re gonna see a lot of people modifying these systems, reprogramming these systems. So you also want, I think, a kind of strict liability around anybody who does those kinds of modifications, rather than the manufacturer: once you break the seal, you accept all the responsibility for what happens.

And I think that’s sort of one side of it now and the military side of it, you don’t really have torts in the same way. There’s of course a couple of extreme issues around torts in war, but generally speaking, militaries do not pay monetary damages when they make mistakes. If they accidentally blow up the wrong building, they don’t pay to build a new building. That’s just considered a casualty of war and an accident, and it’s not even necessarily a war crime or anything else, because you don’t have these kind of mechanisms where you can sue an invading army for dropping a bomb in the wrong place.

The idea that liability is going to act as an accountability measure on autonomous systems is just silly, I think, in warfare, because you just can’t sue people in war, basically. There are a few exceptions: the governments that purchase weapons systems can sue the manufacturers, and that’s the sense in which there is an ability to do that, but even most of those cases have been largely unsuccessful. Generally, those kinds of lawsuits are based on contracts and not the actual performance or damages caused by an actual system. So, you don’t really have that entire regulatory mechanism. If you have a government that’s concerned about not harming civilians and not bombing the wrong buildings and things like that, of course, then they’re incentivized to put pressure on manufacturers to build systems that perform well, and that’s one of the drivers of that technology.

But it’s a much weaker force if you think about what the engineers in a car company are thinking about in terms of safety and the bottom line for their company if they make a product that causes accidents, versus how that’s thought about in a defense company, where certainly they’re trying to protect civilians and ensure that systems work correctly, but they don’t have that enormously powerful economic concern about lawsuits in the future. The idea that the technology is going to be driven by similar forces doesn’t really apply. So that’s a big concern, I think, for the development of autonomous systems in the military sphere.

Ariel: Is there a worry or a risk that this sort of — I don’t know if it’s lack of liability, maybe it’s just whether or not we can trust the systems that are being built — but is there an increased risk of war crimes as a result of autonomous weapons, either intentionally or accidentally?

Peter: Yeah, I mean, the idea that there’s an increased risk of war crimes is kind of an interesting question, because the answer is simultaneously yes and no. What these autonomous systems actually do is diminish or remove, or put a distance between accountability of humans and their actions, or the consequences of their actions. So if you think of the autonomous system as a sort of intermediary between humans and the effects of their actions, there’s this sort of accountability gap that gets created. A system could go and do some horrendous act, like devastate a village and all the civilians in the village, and then we say, “Ah, is this a war crime?” And under international law as it stands, you’d have to prove intention, which is usually the most difficult part of war crimes tribunals, being able to actually demonstrate in court that a commander had the intention of committing some genocidal act or some war crime.

And you can build various forms of evidence for that. Now, if you send out an autonomous system, you may not even know what that system is really gonna do, and you don’t need to know exactly what it’s going to do when you give it its orders, so it becomes very easy to distance yourself legally from what that system does in the field. Maybe you suspect it might do something terrible, and that’s what you really want, but it would be very easy then to cover up your true intentions using these kinds of systems.

On the one hand, it would be much easier to commit war crimes. On the other hand, it’ll be much more difficult to prosecute or hold anybody accountable for war crimes that would be committed by autonomous weapons.

Ariel: You’ve also been producing some open letters this summer. There was one for academics calling on Google to stop work on Project Maven and … I’m sorry, you had another one… what was that one about?

Peter: The Amazon face recognition.

Ariel: Right. Right. Yeah. I was hoping you could talk a little bit about what you see as the role of academics and corporations and civil society in general in this debate about lethal autonomous weapons.

Peter: I think in terms of the debate over lethal autonomous weapons, civil society has a crucial role to play, as it does in a broad range of humanitarian disarmament issues. In the case of autonomous weapons, it’s a technology that’s moving very quickly, and militaries are still a little bit unsure of exactly how they’re going to use it, but they’re very excited about it, and they’re putting lots of research investment into new applications and trying to find new ways of using it. And I think that’s exciting from a research perspective, but it’s very concerning from a humanitarian and human rights perspective, because again, it’s not clear what kind of legal accountability will be around these systems. It’s not clear what kind of safety, control, and testing might be imposed on these systems, and it also seems quite clear that these systems are ready made for arms races and global and regional military destabilization, where competitors acquiring these systems has the potential to lead to conflict because of that destabilization itself. And then of course there’s the rapid proliferation.

So, in terms of civil society’s role, I think what we’ve been doing primarily is voicing the general concern of the broad public: globally, and within the specific countries that we’ve surveyed, people are largely opposed to these systems. Of course, the proponents say that’s just because they’ve seen too many sci-fi movies and these things are gonna be just fine, but I don’t think that’s really the case. I think there are some genuine fears and concerns that need to be addressed. So, we’ve also seen the involvement of a number of tech companies that are developing artificial intelligence, machine learning, robotics, and things like that.

And I think their interest and concern in this issue is twofold. We have companies like Clearpath Robotics, which is the largest robotics company in Canada, and also the largest supplier of robots to the Canadian military, whose engineers organized together to say that they do not want their systems to be used as autonomous weapons platforms and they will not build them, but also that they want to support the international campaign to ensure that governments don’t acquire their robots and then weaponize them. And they’re doing search and rescue robots and bomb disposal robots. There’s a similar movement amongst academics in artificial intelligence and robotics who have spent really their life’s work developing these fundamental technologies, and who are deeply concerned that the first, and perhaps last, application of this is going to be autonomous weapons, that the public will turn against artificial intelligence and robotics because of that, and that these systems are genuinely scary and that we shouldn’t really be entrusting human lives, or the decision to take human lives, to these automated systems.

These technologies have all kinds of great practical social applications, and we should be pursuing those while prohibiting the use of these systems for autonomous targeting in the military context. And now I think we’re seeing more movement from the big companies, particularly with this open letter that we were a part of regarding Google and their Project Maven. Project Maven is a Pentagon project that aims at analyzing the many thousands of hours of drone footage that US military drones are collecting over Afghanistan and Iraq and various places where they’re operating, and at trying to automate, using machine learning, the identification of objects of interest, to save time for the human sensor analysts who have to pore through these images and then try to determine what they are.

And that in and of itself doesn’t seem too terrible, right? You’re just scanning through this imagery. But of course, this is really the first step toward an automated target recognition system for drones. If you wanted to fully automate drones, which currently require human operators to interpret the imagery, to decide that this is something that should be targeted with a weapon, and then to actually target and fire a weapon, that whole process is still controlled by humans. But if you wanted to automate it, the first thing you’d have to do is automate that visual analysis piece. So, Project Maven is trying to do exactly that, and to do that on a really big scale.
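To make the “automate the visual analysis piece” step more concrete, here is a minimal sketch, assuming an off-the-shelf, pretrained object detector applied frame by frame to ordinary video. It is not Project Maven’s actual system; the model choice, the video file name, and the confidence threshold are all illustrative assumptions.

```python
# Minimal sketch (not Project Maven): scan video frames with a generic,
# pretrained COCO object detector and flag confident detections for a
# human analyst to review. File name and threshold are placeholders.
import cv2                                   # OpenCV, for reading video frames
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def flag_frames(video_path, score_threshold=0.8):
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Convert the BGR uint8 frame to the RGB float tensor the model expects.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            detections = model([tensor])[0]   # dict of boxes, labels, scores
        for label, score in zip(detections["labels"], detections["scores"]):
            if score.item() >= score_threshold:
                # Only flagging for review: the judgment about what, if
                # anything, to do with a detection stays with a human.
                print(f"frame {frame_index}: class {int(label)} ({score.item():.2f})")
        frame_index += 1
    capture.release()

flag_frames("example_footage.mp4")  # hypothetical file
```

Everything downstream of a sketch like this, deciding whether a flagged object is a legitimate target and whether to act on it, is exactly the part of the process that remains under human control today and that full autonomy would remove.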

The other kind of issue, from a labor and research perspective, is that the Pentagon really has trouble, I think, attracting talent. There’s a really strong demand for artificial intelligence researchers and developers right now, because there are so many applications and so much business opportunity around it. It actually turns out the military opportunities are not nearly as lucrative as a lot of the other business applications. Google, and Amazon, and Facebook, and Microsoft can offer enormous salaries to people with PhDs in machine learning, or even just master’s degrees or some experience in systems development. And the Pentagon can’t compete with that on government salaries, and I think they’re even having trouble getting certain contracts with these companies. But when they get a contract with a company like Google, then they’re able to get access to really the top talent in artificial intelligence and their cloud research and engineering groups, and also the enormous computational capacity of Google, which has these massive data centers and processing capabilities.

And then, in some ways, Google is a company that collects data about people all over the world every day, all the time. Think of every Google search that you do; there are millions of Google searches per second or something in the world, so they also have the potential of applying the data that’s collected on the public in all these complicated ways. It’s really kind of a unique company in these respects. I think as a company that collects that kind of private data, they also have a certain obligation to society to ensure that that data isn’t used in detrimental ways, and siding with a single military in the world, and using data that might be coming from users in countries where that military is operating, I think that’s deeply problematic.

We as academics kind of lined up with the engineers and researchers at Google who were already protesting Google’s involvement in this project. They were concerned about their involvement in the drone program. They were concerned about how this could be applied to autonomous weapons systems in the future. And they were just generally concerned with Google’s attempts to become a major military contractor and not just selling a simple service, like a word processor or a search, which they do anyway, but actually developing customized systems to do military operations, analyze these systems and apply their engineering skills and resources to that.

So, we really joined together as academics to support those workers. The workers passed around an open letter and then we passed around our letter, so the Google employees letter received over 4000 signatures and our letter from academics received almost 1200, a few shy. So, we really got a lot of mobilization and awareness, and then Google agreed to not renew that contract. So, they’re not dropping it, they’re gonna continue it till the end of the year, but they have said that they will not renew it in the future.

Ariel: Is there anything else that you think is important to mention?

Peter: I wrote a piece last night for a report on human dignity, so I can give you a little blurb about that. I think the other interesting ethical question around autonomous systems is the right to human dignity and whether autonomous weapons, or allowing robots to kill people, would violate human dignity. I think some people have a very simplistic notion of human dignity, that it’s just some sort of aura or property that hangs around people and can be violated, but in fact I believe human dignity is a relation between people. This is a more Kantian view: human dignity means that you’re respected by others as a human. Others respect your rights, which doesn’t mean they can never violate them, but they have to have reasons and justifications that are sound in order to override your rights.

And in the case of human dignity, of course you can die in many terrible ways on a battlefield, but the question is whether the decision to kill you is justified, and if it’s not, then it’s a sort of arbitrary killing. That means there are no reasons for it, and if you look at the writings of the UN Special Rapporteur on extrajudicial, summary or arbitrary executions (he’s written some interesting papers on this), the argument is essentially that all killing by autonomous weapons would be arbitrary in this kind of legal sense, because these systems don’t have access to reasons for killing you, to know that it’s actually justified to use lethal force in a given situation.

And that’s because they’re not reasoning in the same way that we are, but it’s also because they’re not human moral agents, and it’s important in a sense that they be human, because human dignity is something that we all lose when it’s violated. So, if you look at slavery or you look at torture, it’s not simply the person who’s being tortured or enslaved who is suffering, though of course they are, but it is in fact all of us who lose a certain value of human life and human dignity by the very existence of slavery or torture, and the acceptance of that.

In a similar way, if we accept the killing of humans by machines, then we’re really diminishing the nature of human dignity and the value of human life, in a broad sense that affects everybody, and I think that’s really true, and I think we really have to think about what it means to have human control over these systems to ensure that we’re not violating the rights and dignity of people when we’re engaged in armed conflict.

Ariel: Excellent. I think that was a nice addition. Thank you so much for taking the time to do this today.

We covered a lot of ground in these interviews, and yet we still only scratched the surface of what’s going on in the debate on lethal autonomous weapons. If you want to learn more, please visit autonomousweapons.org and visit the research and reports page. On the FLI site, we’ve also addressed some of the common arguments we hear in favor of lethal autonomous weapons, and we explain why we don’t find those arguments convincing. And if you want to learn even more, of course there’s the Campaign to Stop Killer Robots website, ICRAC has a lot of useful information on their site, and Article 36 has good information, including their report on meaningful human control. And if you’re also concerned about a future with lethal autonomous weapons, please take a moment to sign the pledge. You can find links to the pledge and everything else we’ve talked about on the FLI page for this podcast.

I want to again thank Paul, Toby, Richard, Mary, Bonnie and Peter for taking the time to talk about their work with LAWS.

If you enjoyed this show, please take a moment to like it, share it and maybe even give it a good review. I’ll be back again at the end of next month discussing global AI policy. And don’t forget that Lucas Perry has a new podcast on AI value alignment, and a new episode from him will go live in the middle of the month.

[end of recorded material]

Podcast: Mission AI – Giving a Global Voice to the AI Discussion with Charlie Oliver and Randi Williams

How are emerging technologies like artificial intelligence shaping our world and how we interact with one another? What do different demographics think about AI risk and a robot-filled future? And how can the average citizen contribute not only to the AI discussion, but AI’s development?

On this month’s podcast, Ariel spoke with Charlie Oliver and Randi Williams about how technology is reshaping our world, and how their new project, Mission AI, aims to broaden the conversation and include everyone’s voice.

Charlie is the founder and CEO of the digital media strategy company Served Fresh Media, and she’s also the founder of Tech 2025, which is a platform and community for people to learn about emerging technologies and discuss the implications of emerging tech on society. Randi is a doctoral student in the Personal Robotics Group at the MIT Media Lab. She wants to understand children’s interactions with AI, and she wants to develop educational platforms that empower non-experts to develop their own AI systems. 

Topics discussed in this episode include:

  • How to inject diversity into the AI discussion
  • The launch of Mission AI and bringing technologists and the general public together
  • How children relate to AI systems, like Alexa
  • Why the Internet and AI can seem like “great equalizers,” but might not be
  • How we can bridge gaps between the generations and between people with varying technical skills

Papers discussed in this episode include:

  • Druga, S., Williams, R., Resnick, M., & Breazeal, C. (2017). “Hey Google, is it OK if I Eat You?”: Initial Explorations in Child-Agent Interaction. Proceedings of the 16th ACM SIGCHI Interaction Design and Children (IDC) Conference, ACM. [PDF]
  • Stefania Druga, Randi Williams, Hae Won Park, and Cynthia Breazeal. 2018. How smart are the smart toys?: children and parents’ agent interaction and intelligence attribution. In Proceedings of the 17th ACM Conference on Interaction Design and Children (IDC ’18). ACM, New York, NY, USA, 231-240. DOI: https://doi.org/10.1145/3202185.3202741 [PDF]
  • Randi Williams, Christian Vazquez, Stefania Druga, Pattie Maes, Cynthia Breazeal. “My Doll Says It’s OK: Voice-Enabled Toy Influences Children’s Moral Decisions.” IDC. 2018

You can listen to this episode above or read the transcript below. And don’t forget to check out previous episodes of FLI’s monthly podcast on SoundCloud, iTunes, Google Play and Stitcher.

 

Ariel: Hi, I am Ariel Conn with The Future of Life Institute. As a reminder, if you’ve been enjoying our podcasts, please remember to take a minute to like them, and share them, and follow us on whatever platform you listen on.

And now we’ll get on with our podcast. So, FLI is concerned with broadening the conversation about AI, how it’s developed, and its future impact on society. We want to see more voices in this conversation, and not just AI researchers. In fact, this was one of the goals that Max Tegmark had when he wrote his book, Life 3.0, and when we set up our online survey about what you want the future to look like.

And that goal of broadening the conversation is behind many of our initiatives. But this is a monumental task, that we need a lot more people working on. And there is definitely still a huge communications gap when it comes to AI.

I am really excited to have Charlie Oliver, and Randi Williams with me today, to talk about a new initiative they’re working on, called Mission AI, which is a program specifically designed to broaden this conversation.

Charlie Oliver is a New York based entrepreneur. She is the founder and CEO of Served Fresh Media, which is a digital media strategy company. And, she’s also the founder of Tech 2025, which is a platform and community for people to learn about emerging technologies, and to discuss the implications of emerging tech on our society. The mission of Tech 2025 is to help humanity prepare for, and define what that next technological era will be. And so it was a perfect starting point for her to launch Mission AI.

Randi Williams is a doctoral student in the personal robotics group at the MIT Media Lab. Her research bridges psychology, education, engineering, and robotics, to accomplish two major goals. She wants to understand children’s interactions with AI, and she wants to develop educational platforms that empower non-experts to develop their own AI systems. And she’s also on the board of Mission AI.

Randi and Charlie, thank you both so much for being here today.

Charlie: Thank you. Thank you for having us.

Randi: Yeah, thanks.

Ariel: Randi, we’ll be getting into your work here a little bit later, because I think the work that you’re doing on the impact of AI on childhood development is absolutely fascinating. And I think you’re looking into some of the ethical issues that we’re concerned about at FLI.

But first, naturally we wanna start with some questions about Mission AI. And so for example, my very first question is, Charlie can you tell us what Mission AI is?

Charlie: Well, I hope I can, right? Mission AI is a program that we launched at Tech 2025. And Tech 2025 was launched back in January of 2017. So we’ve been around for a year and a half now, engaging with the general public about emerging technologies, like AI, blockchain, machine learning, VR/AR. And, we’ve been bringing in experts to engage with them — researchers, technologists, anyone who has a stake in this. Which pretty much tends to be everyone, right?

So we’ve spent the last year listening to both the public and our guest speakers, and we’ve learned so much. We’ve been so shocked by the feedback that we’ve been getting. And to your initial point, we learned, as I suspected early on, that there is a big, huge gap between how the general public is interpreting this, and what they expect, and how researchers are interpreting this. And how corporate America, the big companies, are interpreting this, and hope to implement these technologies.

Equally, those three separate entities also have their fears, their concerns, and their expectations. We have seen the collision of all three of those things at all of our events. So, I decided to launch Mission AI to be part of the answer to that. I mean, because as you mentioned, it is a very complicated, huge problem, monumental. And what we will do with Mission AI, is to address the fact that the general public really doesn’t know anything about the AI, machine learning research that’s happening. And there’s, as you know, a lot of money, globally, being tossed — I don’t wanna say toss — but AI research is heavily funded. And with good reason.

So, we want to do three things with this program. Number one, we want to educate the general public on the AI and machine learning research ecosystem. We happen to believe that it’s crucial, in order for the general public to participate — and to be clear about what I mean by the general public, I should say that it includes technologists. Like 30 to 35 percent of our audience are engineers, and software developers, and people in tech companies, or in companies working in tech. They also include business people, entrepreneurs, students, baby boomers; we have a very diverse audience. And we designed it so that we can have a diverse conversation.

So we want to give people an understanding of what AI research is, and that they can actually participate in it. So we define the ecosystem for them to keep them up to date on what research is happening, and we give them a platform to share their ideas about it, and to have conversations in a way that’s not intimidating. I think research is intimidating for a lot of people, especially academic research. We however, will be focusing more on applied research, obviously.

The second thing that we want to do is produce original research on public sentiment, which is a huge thing to take on. But the more we have grown this community — and we have several thousand people in our community now; we’ve done events here and in Toronto, over 40 events across different topics — the more we are learning that people are expressing ideas and concerns that, as I’ve been told by researchers who come in to speak at our events, surprise them. So, it’s all the more important that we get the public sentiment and their ideas out. Our goal here is to do research on what the public thinks about these technologies, about how they should be implemented, and on the research that is being presented. So a lot of our research will be derivative of already existing research that’s out there.

And then number three, we want to connect the research community, the AI research community, with our community, or with the broader public, which I think is something that’s really, very much missing. And we have done this at several events, and the results are not only absolutely inspiring, everyone involved learns so much. So, it’s important, I think, for the research community to share their work with the general public, and I think it’s important for the general public to know who these people are. There’s a lot of work being done, and we respect the work that’s being done, and we respect the researchers, and we want to begin to show the face of AI and machine learning, which I think is crucial for people to connect with it. And then also, that extends to Corporate America. So the research will also be available to companies, and we’ll be presenting what we learn with them as well. So that’s a start.

Ariel: Nice. So to follow up on that a little bit, what impact do you hope this will have? And Randi, I’d like to get your input on some of this as well in terms of, as an AI researcher, why do you personally find value in trying to communicate more with the general public? So it’s sort of, two questions for both of you.

Randi: Sure, I can hop in. So, a lot of what Charlie is saying from the researcher’s side, is a big question. It’s a big unknown. So actually a piece of my research with children is about, well when you teach a child what AI is, and how it works, how does that change their interaction with it?

So, if you were to extend that to something that’s maybe more applicable to the audience — if you were to teach your great, great grandma about how all of the algorithms in Facebook work, how does that change the way that she posts things? And how does that change the way that she feels about the system? Because we very much want to build things that are meaningful for people, and that help people reach their goals and live a better life. But it’s often very difficult to collect that data, because we’re not huge corporations; we can’t do thousand-person user studies.

So, as we’re developing the technology and thinking about what directions to go in, it’s incredibly important that we’re hearing from the baby boomers, and from very young people, from the scientists and engineers who are maybe in similar spaces, but not thinking about the same things, as well as from parents, teachers, all of the people who are part of the conversation.

And so, I think what’s great about Mission AI is that it’s about access, on both ends.

Charlie: So true. And you know, to Randi’s point, the very first event that we did was January the 11th, 2017, and it was on chatbots. And I don’t know if you guys remember, but that doesn’t seem like a long time ago, but people really didn’t know anything about chatbots back then.

When we had the event, which was at NYU, it sold out in record time, like in two days. And when we got everybody in the room, it was a very diverse audience. I mean we’re talking baby boomers, college students, and the first question I asked was, “How many people in here are involved in some way with building, or developing chatbots, in whatever way you might be?” And literally I would say about, 20 to 25 percent of the hands went up.

For everyone else, I said, “Well, what do you know about chatbots? What do you know about them?” And most said, “Absolutely nothing.” They said, “I don’t know anything about chatbots, I just came because it looked like a cool event, and I wanna learn more about it.”

But by the end of the event, we helped people have these group discussions and solve problems about the technologies together. That’s why it’s called a think tank. At the end of the event there were these two guys who were like 25; they had a startup that works with agencies that develop chatbots for brands, so they were very much immersed in the space. After the event, I would say a week later, one of them emailed me and said, “Charlie, oh my God, that event that you did totally blew our minds. Because we sat in a group with five other people, and one of those people was John. He’s 75 years old. And he talked to us.” Part of the exercise that they had to do was to create a Valentine’s Day chatbot and to write the conversational flow of that chatbot. And he said that after talking to John, who’s 75 years old, about what the conversation would be, and what it should be, and how it could resonate with real people and different types of people, they realized they had been building chatbots incorrectly all along. He realized that they were narrowing their conversational flows in a way that restricted their technology from being appealing to someone like him. And they said that they went back and redid a lot of their work to accommodate that.

So I thought that was great. I think that’s a big thing in terms of expectations. We want to build these technologies so that they connect with everyone. Right?

Ariel: I’d like to follow up with that. So there’s basically two sides of the conversation. We have one side, which is about educating the public about the current state, and future of artificial intelligence. And then, I think the other side is helping researchers better understand the impact of their work by talking to these people who are outside of their bubbles.

It sounds to me like you’re trying to do both. I’m curious whether you think both are equally challenging, or equally easy to address, or whether one side is harder? How do you address both sides and effect change?

Charlie: That is a great, great question. And I have to tell you that on both sides, we have learned so much, about both researchers, and the general public. One of the things that we learned is that we are all taking for granted what we think we know about people. All of us. We think we’ve got it down. “I know what that student is thinking. I know what that black woman is thinking. I know how researchers think.” The fact of the matter is, we are all changing so much, just in the past two to three years, think about who you were three years ago. We have changed how we think about ourselves and the world so much in the past two years, that it’s pretty shocking, actually. And even within the year and a half that we have been up and going, my staff and I, we sit around and talk about it, because it kind of blows our minds. Even our community has changed how they think about technologies, from January of last year, to today. So, it’s actually extremely, extremely difficult. I thought it would get easier.

But here’s the problem. Number one, again, we all make assumptions about what the public is thinking. And I’m gonna go out on a limb here and say that we’re all wrong. Because they are changing the way that they think, just as quickly as the technologies are changing. And if we don’t address that, and meet that head on, we are always going to be behind, or out of sync, with what the general public is thinking about these technologies. And I don’t think that we can survive. I don’t think that we can actually move into the next era of innovation unless we fix that.

I will give you a perfect example of that. Dr. James Phan co-created the IBM Watson Q&A system. And he’s one of our speakers. He’s come to our events maybe two or three times to speak.

And he actually said to me, as I hear a lot from our researchers who come in, he says, “My God, Charlie, every time I come to speak at your event, I’m blown away by what I hear from people.” He said, “It seems like they are thinking about this very differently.” He says, “If you ask me, I think that they’re thinking far more in advance than we think that they are.”

And I said, “Well, that shocks me.” So, to give you a perfect example of that, we did an event with Ohio State regarding their Opioid Technology Challenge. And we had people in New York join the challenge, to figure out AI technologies that could help them in their battle against opioid addiction in their state. And I had him come in, as well as several other people, to talk about the technologies that could be used in this type of initiative. And James is very excited. This is what I love about researchers, right? He’s very excited about what he does. And when he talks about AI, he lights up. I mean you’ve just never seen a man so happy to talk about it. So he’s talking to a room full of people who are on the front lines of working with people who are addicted to opioids, or have some sort of personal connection to it. Because we invited people like emergency responders, we invited people who are in drug treatment facilities, we invited doctors. So these are people who are living this.

And the more he talked about algorithms, and machine learning, and how they could help us to understand things, and make decisions, and they can make decisions for us, the angrier people got. They became so visibly angry, that they actually started standing up. This was in December. They started standing up and shouting out to him, “No way, no way can algorithms make decisions for us. This is about addiction. This is emotional.” And they really, it shocked us.

I had to pull him off the stage. I mean, I didn’t expect that. And he didn’t see it, because he just kept talking, and I think he felt like the more he talked about it, the more excited they would become, like him, but it was quite the contrary, they became angrier. That is the priceless example, perfect example, of how the conversations that we have, that we initiate between researchers and the public, are going to continue to surprise us. And they’re going to continue to be shocking, and in some cases, very uncomfortable. But we need to have them.

So, no it is not easy. But yes we need to have them. And in the end, I think we’re all better for it. And we can really build technologies that people will embrace, and not protest.

Ariel: So Randi, I’d like to have you jump in now, because you’ve actually done, from the researcher side, you’ve done an event with Tech 2025, or maybe more than one, I’m not sure. So I was hoping you could talk about your experience with that, and what you gained out of it.

Randi: Yeah, so that event I was talking about a piece of research I had done, where I had children talk about their perceptions of smart toys. And so this is a huge, also, like Charlie was saying, inflammatory topic because, I don’t know, parents are extremely freaked out. And I think, no offense to the media, but there’s a bit of fear mongering going on around AI and that conversation. And so, as far as what’s easier, I think the first step, what makes it really difficult for researchers to talk to the public right now, is that we have been so far out of the conversation, that the education has gotten skewed. And so it’s difficult for us to come in and talk about algorithms, and machines making decisions, without first dealing with, you know, and this is okay, and it’s not a terminator kind of thing. At the end of the day, humans are still in control of the machines.

So what was really interesting about my experience, talking with Tech 2025, is that, I had all of these different people in the room, a huge variety of perspectives. And the biggest thing to hear, was what people already knew. And, as I was talking and explaining my research, hearing their questions, understanding what they understood already, what they knew, and what wasn’t so clear. So one of the biggest things is, when you see an AI system teach itself to play chess, and you’re like, “Oh my God, now it’s gonna teach itself to like, take over a system, and hack into the government, and this is that.” And it’s like, no, no, it’s just chess. And it’s a huge step to get any further than that.

And so it was really great practice for me to try to take people who are in that place and say, “Well no, actually this is how the technology works, and these are the limitations,” and try to explain, you know, when could this happen, in what particular universe could this happen? Well maybe, like in 20 years, if we find a general AI, then yeah, it could teach itself to solve any problem. But right now, every single problem requires years of work.

And then seeing what metaphors work. What metaphors make sense for an AI scientist who wants to relate to the public. What things click, which things don’t click? And I think, another thing that happened, that I really loved was, just thinking about the application space. I’m asking research questions that I think are intellectually interesting for my work. But, there was a person from a company, who was talking about implementing a skill in Alexa, and how they didn’t know if using one of their characters on Alexa, would be weird for a child. Because, I was talking about how children look at an Alexa, and they think Alexa’s like a person. So Alexa is an Alexa, and if you talk to another Alexa, that’s a new Alexa. Yeah they have the same name, but completely different people, right?

So what happens when Alexa has multiple personality disorder? How does a child deal with that? And that was a question that never would have come up, because I’m not writing skills with different characters for children. So, that’s just an example of learning, as an AI scientist, how to listen to what people are trying to understand and how to give them the education they need. But then also taking, okay, so when you’re at home and your child is doing xyz with Alexa, where are the questions there that you have that researchers should be trying to answer? So, I don’t know which one is harder.

Charlie: I specifically went after Randi for this event. And I invited her because, I had been thinking in my mind for a while, that we are not talking about children in AI, not nearly enough. Considering that they’re gonna be the ones in ten to 15 years who are gonna be developing these things, and this technology and everything. So I said, “You know, I am willing to bet that children are thinking very differently about this. Why aren’t we talking about it?” So, I get online, I’m doing all my, as anyone would, I do all my little research to try to figure it out, and when I came across Randi’s research, I was blown away.

And also, I had her in mind with regards to this because I felt like this would be the perfect test of seeing how the general public would receive research, from a research assistant who is not someone who necessarily has — obviously she’s not someone who has like 20 years of experience behind her, she’s new, she’s a fresh voice. How would she be received? How would the research be received?

And on top of that, to be honest with you, she’s a young black woman. Okay? And in terms of diversity of voices within the research community, and within the AI discussion as a whole, this is something I want to address, aggressively.

So we reached out to the toy companies, we reached out to child psychologists, teachers, students, children’s museums, toy stores, I can’t tell you how many people we reached out to in the greater New York City area.

Randi was received so well, that I had people coming up to me, and high fiving me, saying, “Where did you get her? Where did you find her?” And I’m like, “Well you know, she didn’t drop out of the sky. She’s from MIT.”

But Randi’s feedback was crucial for me too because, I don’t know what she’s getting from it. And we cannot be effective at this if we are not, all of us, learning from each other. So if my researchers who come in and speak aren’t learning, I’m not doing my job. Same with the audience.

Ariel: So, Randi, I’m gonna want to start talking about your research here in a minute, ’cause we’ve just gotten a really great preview of the work you’re doing. But before we get to that, one, not final question, but for a little bit, a final question about Mission AI, and that is this idea of diversity.

AI is not a field that’s known for being diverse. And I read the press release about this, and the very first thing, in the very first bullet point, about what Mission AI is going to do, was about injecting diversity. And so my question to both of you is, how can we do that better? How can the AI community do that better? And in terms of the dialogue for who you’re reaching out to, as well, how can we get more voices?

Randi: You know in some ways, it’s like, there’s nothing you can do, to not do better. I think what Mission AI is really about, is thinking about who’s coming to the table to hear these things, very critically. And being on the board, as Charlie said, a black woman, the people who I talk to in AI are people of color, and women, right? So, I hope that as being a main part of this, and having Charlie also be a main part of that, we have a network that’s both powerful, in terms of having the main players in AI come to the table, but you know, main players that are also not, I guess the stereotypical AI scientist that you would think of.

So, what makes this different is who’s leading it, and the fact that we’re thinking about this from the very beginning. Like, “Okay, we’re gonna reach out. We want to recruit research scientists,” so I’m thinking of my peers who are in schools all across the country, and what they’re doing, and how this can be meaningful for them, and how they can, I guess, get an experience in communicating their research with the public.

Charlie: Yeah, I totally agree.

In addition to that, bringing in people who are from different backgrounds, and bringing diversity to the speakers, is very important. But it’s equally as important to have a diverse room. The first thing that I decided when I launched Tech 2025, and the reason that I’ve decided to do it this way, is because, I did not want to have a room full of the hoodie crowd. Which is, you know, white guys in their 20’s with hoodies on. Right? That’s the crowd that usually gets the attention with regards to AI and machine learning. And no offense to them, or to what they’re doing, everyone’s contributing in their own way.

But I go to tech events, as I know you guys do too. I go to tech events here, and in San Francisco, and across the country, and different parts of the world. And, I see that for the most part a lot of these rooms are filled, especially if you talk about blockchain, and cryptocurrency, which we do as well, they’re filled with primarily white guys.

So, I intentionally, and aggressively, made it a point to include as many people from various backgrounds as possible. And it is a very deliberate thing that you have to do, starting with the content. I don’t think a lot of people realize that, because people say to me, “How do you get such diverse people in the room?”

Well number one, I don’t exclude anyone, but also, the content itself asks people from various backgrounds to come in. So, a lot of times, especially in our earlier events, I would make a point of saying, it doesn’t matter who you are, where you’re from, we don’t care if you’re a technologist, or if you are a baby boomer who’s just curious about this stuff, come on in. And I have actually had people in their 60s come to me, I had a woman come to me last year, and she says, “My God Charlie, I feel like I really can participate in these discussions at your event. I don’t feel like I’m the odd woman out, because I’m older.”

So I think that’s a very important thing, is that, when researchers look at the audience that they’re talking to, they need to see diversity in that audience too. Otherwise, you can reinforce the biases that we have. So if you’re a white guy and you’re talking to an audience full of nothing but white guys, you’re reinforcing that bias that you have about what you are, and the importance of your voice in this conversation.

But when my guests come in to speak, I tell them first and foremost, “You are amazing. I love the work that you do, but you’re not the … The star of the show is the audience. So when you look at them, just know that they are, it’s very important that we get all of their feedback. Right? That we allow them to have a voice.” And it turns out that that’s what happens, and I’m really, I’m happy that we’re creating a dialogue between the two. It’s not easy. I think it’s definitely what needs to happen. And with going back to what Randi says, it does need to be deliberate.

Ariel: I’m going to want to come back to this, because I want to talk more about how Mission AI will actually work. But I wanna take a brief pause, because we’ve sort of brought up some of Randi’s work, and I think her work is really interesting. So I wanted to talk, just a little bit about that, since the whole idea of Mission AI is to give a researcher a platform to talk about their work too.

So, one of my favorite quotes ever is the Douglas Adams quote about age and technology, and he says, “I’ve come up with a set of rules that describe our reactions to technologies. One, anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Two, anything that’s invented between when you’re 15 and 35 is new and exciting and revolutionary, and you can probably get a career in it. Three, anything invented after you’re 35 is against the natural order of things.”

Now, personally, I’m a little bit worried that I’m finding that to be the case. And so, one of the things that I’ve found really interesting is that we watch these debates about what the impact of AI will be on future generations. There are technologies that can be harmful, period. And when it comes to distinguishing between a technology that can be harmful and a technology where you just don’t really know what the future will be like with it, I’m really curious what your take is on how AI will impact children as they develop. You have publications with at least a couple of great titles. One is “Hey Google, is it OK if I Eat You?” And another is “My Doll Says It’s OK: Voice-Enabled Toy Influences Children’s Moral Decisions.”

So, my very first question for you is, what are you discovering so far with the way kids interact with technology? Is there a reason for us to be worried? Is there also reason for us to be hopeful?

Randi: So, now that I’m hearing you say that, I’m like, “Man I should edit the titles of my things.”

First, let me label myself as a huge optimist of AI. Obviously I work as an AI scientist. I don’t just study ethics, but I also build systems that use AI to help people reach their goals. So, yeah, take this with a grain of salt, because obviously I love this, I’m all in it, I’m doing a PhD on it, and that makes my opinion slightly biased.

But here’s what I think, here’s the metaphor that I like to use when I talk about AI, it’s kind of like the internet. When the internet was first starting, people were like, “Oh, the Internet’s amazing. It’s gonna be the great equalizer, ’cause everyone will be able to have the same education, ’cause we’ll all have access to the same information. And we’re gonna fix poverty. We’re gonna fix, everything’s gonna go away, because the internet.” And in 2018, the Internet’s kind of like, yeah, it’s the internet, everyone has it.

But it wasn’t a great equalizer. It was the opposite. It’s actually creating larger gaps in some ways, in terms of people who have access to the internet, and can do things, and people who don’t have access. As well as, what you know about on the internet makes a huge difference in your experience on it. It also in some ways, promotes, very negative things, if you think about like, the dark web, modern day slavery, all of these things, right? So it’s like, it’s supposed to be great, it’s supposed to be amazing. It went horribly wrong. AI is kind of like that. But maybe a little bit different in that, people are already afraid of it before it’s even had a chance.

In my opinion, AI is the next technology that has the potential to be a great equalizer. The reason for that is, because it’s able to extend the reach that each person has in terms of their intellectual ability, in terms of their physical ability. Even, in terms of how they deal with things emotionally and spiritually. There’s so many places that it can touch, if the right people are doing it, and if it’s being used right.

So what’s happening right now, is this conversation with children in AI. The toy makers, and the toy companies are like, “We can create a future where every child grows up, and someone is reading to them, and we’re solving all the problems. It’s gonna be great.” And then they say to the parents, “I’m gonna put this thing in your home, and it’s gonna record everything your child says, and then it’s gonna come back to our company, and we’re gonna use it to make your life better. And you’re gonna pay us for it.” And parents are like, “I have many problems with this. I have many, many problems with everything that you’re saying.”

And so, there’s this disconnect between the potential that AI has, and the way that it’s being seen as the public, because, people are recognizing the dangers of it. They’re recognizing that the amount of access that it has, is like, astronomical and crazy. So for a second, I’ll talk about the personal robots group. In the MIT Media Lab, the personal robots group, we specifically build AI systems that are humanistic. Meaning that we’re looking at the way that people interact with their computers, and with cellphones, and it’s very, cagey. It’s very transactional, and in many ways it doesn’t help people live their lives better, even though it gives them more access. It doesn’t help them achieve all of their goals. Because you know, in some ways it’s time consuming. You see a group of teenagers, they’re all together, but they’re all texting on phones. It’s like, “Who are you talking to? Talk to your friends, they’re right there.” But that’s not happening, so we built systems specifically, that try to help people achieve their goals. One great example of that, is we found educational research that says that your vocabulary at the age of five, is a direct predictor of your PSAT score in the 11th grade. And as we all know, your PSAT score is a predictor of your SAT score. Your SAT score is a predictor of your future income, and potential in life, and all these great things.

So we’re like, “Okay, we wanna build a robot that helps children, who may not have access for any number of reasons, be able to increase their vocabulary size.” And we were gonna use AI that can personalize to each child, because every child’s different. Some children want the competitive robot that’s gonna push them, some children want the friendly robot that’s gonna work with them, and ask them questions, and put them in the perspective of being a teacher. And in a world where classroom sizes are getting bigger, where parents can’t necessarily spend as much time at home, those are the spaces where we’re like, AI can help. And so we build systems that do that.

We don’t just think about teaching this child vocabulary words. We think about how the personality of the robot is shaping the child as a learner. So how is the robot teaching the child to have a growth mindset, and teaching them to persevere, to continue learning better. So those are the kinds of things that we want to instill, and AI can do that.

So, when people say, “AI is bad, it’s evil.” We’re like, “Well, we’re using a robot that teaches children that working hard is more important than just being magically smart.” ‘Cause having a non-growth mindset, like, “I’m a genius,” can actually be very limiting ’cause when you mess up, then you’re like, “I’m not a genius. I’m stupid.” It’s like, no, work hard, you can figure things out.

So, personally, I think, that kind of AI is extremely impactful, but the conversation that we need to have now, is how do we get that into the public space, in an appropriate way. So maybe, huge toy companies shouldn’t be the ones to build it, because they obviously have a bottom line that they’re trying to fill. Maybe, researchers are the ones who wanna build it. My personal research is about helping the public build their own AI systems to reach these goals. I want a parent to be able to build a robot for their child, that helps the child better reach their goals. And not to replace the parent, but you know, there are just places where a parent can’t be there all the time. Play time, how can play time, how can the parent, in some ways, engineer their child’s play time, so that they’re helping the child reinforce having a growth mindset, and persevering, and working hard, and maybe cleaning up after yourself, there are all these things.

So if children are gonna be interacting with it anyways, how can we make sure that they’re getting the right things out of that?

Ariel: I’d like to interject with a question real quick. You’d mentioned earlier that parents aren’t psyched about having all of their kids’ information going back to toy companies.

Randi: Yeah.

Ariel: And so, I was gonna ask if you see ways in which AI can interact with children that don’t have to become basically massive data dumps for the AI companies? Is what you’re describing a way in which parents can keep their children’s data private? Or would all that data still end up going someplace?

Randi: The way that the AI works depends heavily on the algorithm. And what’s really popular right now, are deep learning algorithms. And deep learning algorithms, they’re basically, instead of figuring out every single rule, like instead of hard programming every single possible rule and situation that someone could run into, we’re just gonna throw a lot of data at it, and the computer will figure out what we want at the end. So you tell it, what you have at the beginning, you tell it what you want at the end, and then the computer figures out everything.

That means you have to have like massive amounts of data, like, Google amounts of data, to be able to do that really well. So, right now, that’s the approach that companies are taking. Like, collect all the data, you can do AI with it, and we’re off to the races.
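To make the “tell it what you have at the beginning, tell it what you want at the end” idea concrete, here is a minimal sketch in Python using scikit-learn and its bundled handwritten-digits dataset. This is a generic illustration, not the lab’s actual system; any labeled dataset would work the same way.

```python
# Minimal sketch of data-driven learning: no hand-coded rules, just
# (input, desired output) pairs that the model fits a mapping between.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Inputs: 8x8 pixel images of handwritten digits; outputs: the digit labels.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network; "the computer figures out everything" happens in fit().
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))
```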

The systems that we’re building are different because, they rely on different algorithms than ones that require huge amounts of data. So we’re thinking about, how can we empower people so that … You know, it’s a little bit harder, you have to spend some time, you can’t just throw data at it, but it allows people to have control over their own system.

I think that’s hugely important. Like, what if Alexa wasn’t just Alexa; Alexa was your Alexa? You could rename her, and train her, and things like that.

Charlie: So, to Randi’s point, I mean I really totally agree with everything that she’s saying. And it’s why I think it’s so important to bring researchers, and the general public, together. Literally everything that she just said, it’s what I’m hearing from people at these events. And the first thing that we’re hearing is that people, obviously they’re very curious, but they are also very much afraid. And I’m sometimes surprised at the level of fear that comes into the room. But then again, I’m not, because the reason, I think anyway, that people feel so much fear about AI, is that they aren’t talking about it enough, in a substantive way.

So they may talk about it in passing, they may hear about it, or read about it online. But when they come into our events, we force them to have these conversations with each other, looking each other in the eye, and to problem solve about this stuff. And at the end of the evening, what we always hear, from so many people, is that, number one, they realize it wasn’t as bad as they thought it was.

So there’s this realization that once they begin to have the conversations, and begin to feel as if they can participate in the discussion, then they’re like, “Wow, this is actually pretty cool.” Because part of our goal is to help them to understand, to Randi’s point, that they can participate in developing these technologies. You don’t have to have an advanced degree in engineering, and everything. They’re shocked when I tell them that, or when they learn it for themselves.

And the second thing, to Randi’s point, is that people are genuinely excited about the technologies, after they talk about it enough to allow their fears to dissipate. So, there’s this immediate emotional reaction to AI, and to the fear of data, and it’s a substantive fear, because they’re being told by the media that they should be afraid. And to some degree, obviously, there is a big concern about this. But once they are able to talk about this stuff, and to do the exercises, and to think through these things, and to ask questions of the guest speakers and researchers, they then start asking us, and emailing us, saying “What more can I do? I wanna do more. Where can I go to learn more about this?”

I mean we’ve had people literally up-skill, just go take courses in algorithms and everything. And so one of the things that we’ve done, which is a part of Mission AI, is we now have an online learning series called Ask the Experts, where we will have AI researchers answer questions about things that people are hearing and seeing in the news. So we’ll pick a hot topic that everyone is talking about, or that’s getting a lot of play, and we will talk about that from the perspective of the researcher. And we’ll present the research that either supports the topic, or the particular angle that the reporter is taking, or refutes it.

So we actually have one coming up on algorithms, and on YouTube’s algorithm, it’s called, Reverse Engineering YouTube’s Algorithms, and it talks about how the algorithms are causing the YouTube creators a lot of anxiety, because they feel like the algorithm is being unfair to them, as they say it. And that’s a great entry point for people, for the general public, to have these discussions. So researchers will be answering questions that I think we all have.

Ariel: So, I’m hesitant to ask this next question, because I do, I like the idea of remaining hopeful about technology, and about AI. But, I am curious as to whether or not, you have found ethical issues regarding children’s interactions with artificial intelligence, or with Alexa, or any of the other AIs that they might be playing with?

Randi: Of course there are ethical issues. So, I guess to talk specifically about the research. I think there are ethical issues, but they raise more questions than answers. So, in the first study that we did, the Hey Google, is it Okay if I Eat You? We would see things like, some of the older children thought that Alexa was smarter than them, because it could answer all of their questions. But then conversely, the younger children would say, “Well it’s not smarter than me, because it doesn’t know what my favorite song is,” or it doesn’t know about, some TV show that they watch. And so, that led us to ask the question, well what does it mean when a child says that something is more intelligent than them?

And so we followed up with a study that was also recently published. So we had children compare the intelligence of a mouse, to the intelligence of a robot, to their own intelligence. And the way that we did this was, all three of them solved a maze. And then we listened to the way that children talked about each of the different things as they were solving the maze. So first of all, the children would say immediately, “The robot solved it the best. It’s the smartest.” But what we came to realize, was that, they just thought robots were smart in general. Like that was just the perception that they had, and it wasn’t actually based on the robot’s performance, because we had the mouse and the robot do the exact same performance. So they would say, “Well the mouse just smells the cheese, so that’s not smart. But the robot, was figuring it out, it had programming, so it’s very smart.”

And then when they looked at their own intelligence, they would be able to think about, and analyze their strategy. So they’re like, “Well I would just run over all the walls until I found the cheese,” or, “I would just, try not to look at places that I had been to before.” But they couldn’t talk about the robot in the same way. Like, they didn’t intellectually understand the programming, or the algorithm that was behind it, so they just sort of saw it as some mystical intelligence, and it just knew where the cheese was, and that’s why it was so fast. And they would be forgiving of the robot when it made mistakes.

And so, what I’m trying to say, is that, when children even say, “Oh that thing is so smart,” or when they say, “Oh I love my talking doll,” or, “Oh I love Alexa, she’s my best friend.” Even when they are mean to Alexa, and do rude things, a lot of parents look at that and they say, “My child is being brainwashed by the robots, and they’re gonna grow up and not be able to socialize, ’cause they’re so emotionally dependent on Alexa.”

But, our research, that one, and the one that we just did with the children’s conformity, what we’re finding is that, children behave very differently when they interact with humans, than when they interact with these toys. And, it’s like, even if they are so young, ’cause we work with children from four to ten years old. Even if they’re four years old, and they can’t verbalize how the robot is different, their behavior is different. So, at some subconscious level, they’re acknowledging that this thing is not a human, and therefore, there are different rules. The same way that they would if they were interacting with their doll, or if they were interacting with a puppy, or a piece of food.

So, people are very freaked out, because they’re like “Oh these things are so lifelike, and children don’t know the difference, and they’re gonna turn into robots themselves.” But, mostly what I’ve seen in my research is that we need to give children more credit, because they do know the differences between these things, and they’re very curious and explorative with them. Like, we asked a six year old girl, “What do you want to build a robot for, if you were to build one?” And she was like, “Well I want one to go to countries where there are poor people, and teach them all how to read and be their friend, because some people don’t have friends.” And I was just like, “That’s so beautiful. Why don’t you grow up and start working in our lab now?”

And it’s very different from the kind of conversation that we would have with an adult. The adult would be like, “I want a robot that can do all my work for me, or that can fetch me coffee or beer, or drive my car.” Children are on a very different level, and that’s because they’re like native to this technology. They’re growing up with it. They see it for what it is.

So, I would say, yes there are ethical issues around privacy, and yes we should keep monitoring the situation, but, it’s not what it looks like. That’s why it’s so important that we’re observing behavior, and asking questions, and studying it, and doing research that concretely can sort of say, “Yeah, you should probably be worried,” or, “No, there’s something more that’s going on here.”

Ariel: Awesome, thank you. I like the six year old’s response. I think everyone always thinks of children as being selfish too, and that’s a very non-selfish answer.

Randi: Yeah. Well some of them also wanted robots to go to school for them. So you know, they aren’t all angels, they’re very practical sometimes.

Ariel: I want to get back to one question that I didn’t get a chance to ask about Mission AI that I wanted to. And that’s sort of the idea of, what audiences you’re going to reach with it, how you’re choosing the locations, what your goals specifically are for these initial projects?

Charlie: That’s a question, by the way, that I have struggled with for quite some time. How do we go about doing this? It is herculean, I can’t reach everyone. You have to have some sort of focus, right? It actually took several months to come to the conclusion that we came to. And that only happened after, ironically, research was published last month on how AI and automation are going to impact specific jobs, or specific sectors, in three states that are aggressively trying to address this now and to educate their public about what this stuff is.

And from what I’ve read, I think these three states, in their legislation, they feel like they’re not getting the support maybe, that they need or want, from their federal government. And so they figured, “Let’s figure this out now, before things get worse, for all we know. Before people’s concerns reach a boiling point, and we can’t then address it calmly, the way we should.” So those states are Arizona, Indiana, and northeast Ohio. And all three, this past month, released these reports. And I thought to myself, “Well, where’s the need the most?” Because there’s so many topics here that we can cover with regards to research in AI, and everything. And this is a constant dialogue that I’m having also with my advisors, and our advisors, and people in the industries. So the idea of AI and jobs, and the possibility of AI sort of decimating millions of jobs, we’ve heard numbers all over the place; realistically, yes, jobs will go away, and then new jobs will be created. Right? It’s what happens in between that is of concern to everyone. And so one of the things in making this decision that I’ve had to look at, is what I am hearing from the community? What are we hearing that is of the greatest concern from both the general public, from the executives, and just from in general, even in the press? What is the press covering exhaustively? What’s contributing to people’s fears?

And so we’ve found that it is without a doubt, the impact of AI on jobs. But to go into these communities, where number one, they don’t get these events the way we get them in New York and San Francisco. We were never meant to be a New York organization. It was always meant to launch here, and then go where the conversation is needed. I mean, we can say it’s needed everywhere, but there are communities across this country where they really need to have this information, and this community, and in their own way. I’m in no way thinking that we can take what we do here in New York, and retrofit for every other community, and every other state. So this will be very much a learning process for us.

As we go into these different states, we’ll take the research that they have done on what they think the impact of AI and automation will be on specific jobs. We will be doing events in their communities, and gathering our own research, and trying to figure out the questions that we should be asking of people at these events that will offer insight for them, for the researchers, and for the legislators.

The other thing that I would say, is that we want to begin to give people actionable feedback on what they can do. Because people are right now, very, very much feeling like, “There’s gotta be something else that I can do.” And understand that there’s a lot of pressure.

As you know, we’re at an all-time low with regards to unemployment. And the concern of the executive today isn’t, “Oh my God, we’re going to lose jobs.” It’s, “Oh my God, how do I fill these jobs?” And so, they have a completely different mindset about this. And their goal is, “How do we upskill people? How do we prepare them for the jobs that are there now, and the ones that are to come?”

So, the research will also hopefully touch on that as well, because that is huge. And I don’t think that people are seeing the opportunities that are available to them in these spaces, and in adjacent spaces to develop the technologies. Or to help define what they might be, or to contribute to the legislative discussion. That’s another huge thing that we are seeing as a need.                    

Again, we want this to fill a need. I don’t want to in any way, dictate something that’s not going to be of use to people. And to that end, I welcome feedback. This is an open dialogue that we’re having with the community, and with businesses, and with of course, our awesome advisors, and the researchers. This is all the more of the reason too, why it’s important to hear from the young researchers. I am adamant on bringing in young researchers. I think they are chomping at the bit, to sort of share their ideas, and to get out there some of the things that they may not be able to share.

That’s pretty much the crux of it, is to meet the demand, and to help people to see how they can participate in this, and why the research is important. We want to emphasize that.

Ariel: A quick follow up for Randi, and that is, as an AI researcher what do you hope to get out of these outreach efforts?

Randi: As an AI researcher, we often do things that are public facing. So whether it be blog posts, or videos, or actually recruiting the public to do studies. Like recently we had a big study that happened in the lab, not in my group, but it was around the ethics of self driving cars. So, for me, it’s just going out and making sure that there are more people a part of the conversation than typically would be. Because, at the end of the day, I am based in MIT. So the people who I am studying are a select group of people. And I very much want to use this as a way to get out of that bubble, and to reach more people, hear their comments, hear their feedback, and design for them.

One of the big things I’ve been doing is trying to go, literally out of this country, to places where everyone doesn’t have a computer in their home, and think about, you know “Okay, so where does AI education, how does it make sense in this context?” And that’s what I think a lot of researchers want. ‘Cause this is a huge problem, and we can only see little bits of it as research assistants. So we want to be able to see more and more.

Charlie: I know you guys at the Future of Life Institute have your annual conference on AI, and you produced the document a year ago, with 100 researchers or scientists, on the Asilomar Principles.

Ariel: Yup.

Charlie: We took that document, that was one of the documents that I looked at, and I thought, “Wow this is fascinating.” So these are 23 principles, that some of the most brilliant minds in AI are saying that we should consider, when developing these technologies. Now, I know it wasn’t perfect, but I was also taken aback by the fact that the media was not covering it. And they did cover it, of course they announced it, it’s big. But there wasn’t any real critical discussion about it, and I was alarmed at that. ‘Cause I said, “This should be discussed exhaustively, or at least it should be sort of the impetus for a discussion, and there was none.”

So I decided to bring that discussion into the Tech 2025 community, and we had Dr. Seth Baum, who is the executive director at the Global Catastrophic Risk Institute, come in and present what these 23 principles are and his feedback on them, and he did a quick presentation. It was great. And then we turned two problems over to the audience: one was, what is the one thing in this document that you think is so problematic that it should not be there? And number two, what should be there in its place?

It turned out to be a very contentious, really emotional discussion. And then when they came up with their answers, we were shocked at the ideas that they came up with, and where they felt the document was the most problematic. Sometimes we give out prizes depending on what it is, or we’ll ask the guest speaker to pick the solution that resonated the most with him, and the solution that won the evening, the one that resonated the most with Seth, was a solution that Seth had never even considered, and he does this for a living, right?

So we hear that a lot from researchers, to Randi’s point. We actually hear from researchers who say, “My God, there are people coming up with ideas that I haven’t even considered.” And then on top of that, when we asked people, well, what do you think about this document? Now this is no offense to the people who came up with this document, but they were not happy about it. And they all expressed that they were really concerned about the idea that anyone would be dictating what the morals or ethics of AI, or algorithms, should be. Because the logical question is, whose morals, whose ethics, who dictates it, who polices it? That’s a problem.

And we don’t look at that as bad. I think that’s great, because that dialogue between researchers, and the community, and the general public, that’s where, to me, it becomes a beautiful thing.

Ariel: It does seem a little bit unfortunate since the goal of the document was in part, to acknowledge that you can’t just have one group of people saying, “These are what morals should be.” I’m concerned that people didn’t like it because, it was, sounds like it was misinterpreted, I guess. But that happens. So I’m gonna ask one last round up question to both of you. As you look towards a future with artificial intelligence, what are you most worried about, and what are you most excited about?

Randi: So, I’m most worried that a lot of people won’t have access to the benefits of AI until, like 30 years from now. And I think, we’re getting to the point, especially in business where AI can make a huge difference, like a huge difference, in terms of what you’re able to accomplish. And I’m afraid for that inequality to propagate in the wrong ways.

I’m most excited about the fact that, you know, at the same time as progress towards technologies that may broaden inequalities, there’s this huge push right now, for AI education. So literally, I’m in conversations with people in China, because China just made a mandate that everyone has AI education. Which is amazing. And in the United States, I think all 50 states just passed a CS requirement, and as a result, IEEE decided to start an AI K-12 initiative.

So, you know, as one of the first people in this space about AI education, I’m excited that it’s gaining traction, and I’m excited to see, you know, what we’re gonna do in the next five, ten years, that could really change what the landscape looks like right now.

Charlie: My concerns are pretty much the same with regards to who will be leveraging the technologies the most, who will have control over them, and whether the algorithms will actually be biased or not. But right now, it’s unfortunate, but we have every reason to believe that the course we’re on, especially when we look at what’s happening now and people realizing what’s happening with their data, is heading toward a brick wall. My concern is that if we don’t reverse course, meaning become far more conscientious of what we’re doing with our own data, of how to engage companies, and of how to help consumers engage companies in discussions on what they’re doing and how they’re doing it, we may not be able to avoid hitting that wall. And I see it as a brick wall. Because if we get to the point where only a few companies control all the algorithms of the world, or whatever you wanna say, I just think there’s no coming back from that. And that’s really a real fear that I have.

In terms of the hope, I think the thing that gives me hope, what keeps me going, and keeps me investing in this, and growing the community, is that, I talk to people and I see that they actually are hopeful. That they actually see that there is a possibility, a very real possibility, even though they are afraid… When people take time out of busy schedules to come and sit in a room, and listen to each other, and talk to each other about this stuff, that is the best indication that those people are hopeful about the future, and about their ability to participate in it. And so based on what I’m hearing from them, I am extremely hopeful, and I believe that there is a very huge opportunity here to do some incredible things, including helping people to see how they can reinvent the world.

We are being asked to redefine our reality, and I think some people will get that, some people won’t. But the fact that that’s being presented to us through these technologies, among other things, is to me, just exciting. It keeps me going.

Ariel: All right. Well, thank you both so much for joining us today.

Charlie: Thank you.

Randi: Thank you for having us.

Ariel: As I mentioned at the beginning, if you’ve been enjoying the podcasts, please take a moment to like them, share them, follow us on whatever platform you’re listening to us on. And, I will be back again next month, with a new pair of experts.

[end of recorded material]


Podcast: Nuclear Dilemmas, From North Korea to Iran

With the U.S. pulling out of the Iran deal and canceling (and potentially un-canceling) the summit with North Korea, nuclear weapons have been front and center in the news this month. But will these disagreements lead to a world with even more nuclear weapons? And how did the recent nuclear situations with North Korea and Iran get so tense? (Update: The North Korea summit happened! But to understand what the future might look like with North Korea and Iran, it’s still helpful to understand the past.)

To learn more about the geopolitical issues surrounding North Korea’s and Iran’s nuclear situations, as well as to learn how nuclear programs in these countries are monitored, Ariel spoke with Melissa Hanham and Dave Schmerler on this month’s podcast. Melissa and Dave are both nuclear weapons experts with the Center for Nonproliferation Studies at Middlebury Institute of International Studies, where they research weapons of mass destruction with a focus on North Korea. Topics discussed in this episode include:

  • the progression of North Korea’s quest for nukes,
  • what happened and what’s next regarding the Iran deal,
  • how to use open-source data to monitor nuclear weapons testing, and
  • how younger generations can tackle nuclear risk.

In light of the on-again/off-again situation regarding the North Korea Summit, Melissa sent us a quote after the podcast was recorded, saying:

“Regardless of whether the summit in Singapore takes place, we all need to set expectations appropriately for disarmament. North Korea is not agreeing to give up nuclear weapons anytime soon. They are interested in a phased approach that will take more than a decade, multiple parties, new legal instruments, and new technical verification tools.”

You can listen to the podcast above or read the transcript below.

 

Ariel: Hello. I am Ariel Conn with the Future of Life Institute. This last month has been a rather big month concerning nuclear weapons, with the US pulling out of the Iran deal and the on again off again summit with North Korea.

I have personally been doing my best to keep up with the news but I wanted to learn more about what’s actually going on with these countries, some of the history behind the nuclear weapons issues related to these countries, and just how big a risk nuclear programs in these countries could become.

Today I have with me Melissa Hanham and Dave Schmerler, who are nuclear weapons experts with the Center for Nonproliferation Studies at Middlebury Institute of International Studies. They both research weapons of mass destruction with a focus on North Korea. Melissa and Dave, thank you so much for joining us today.

Dave: Thanks for having us on.

Melissa: Yeah, thanks for having us.

Ariel: I just said that you guys are both experts in North Korea, so naturally what I want to do is start with Iran. That has been the bigger news story of the two countries this month because the US did just pull out of the Iran deal. Before we get any further, can you just, if it’s possible, briefly explain what was the Iran deal first? Then we’ll get into other questions about it.

Melissa: Sure. The Iran deal was an agreement made between the … It’s formally known as the JCPOA and it was an agreement made between Iran and several countries around the world including the European Union as well. The goal was to freeze Iran’s nuclear program before they achieved nuclear weapons while still allowing them civilian access to medical isotopes, and power, and so on.

At the same time, the agreement would be that the US and others would roll back sanctions on Iran. The way that they verified that agreement was through a procurement channel, if-needed onsite inspections, and regular reporting from Iran. As you mentioned, the US has withdrawn from the Iran deal, which is really just, they have violated the terms of the Iran deal, and Iran and European Union and others have said that they wish to continue in the JCPOA.

Ariel: If I’ve been reading correctly, the argument on the US side is that Iran wasn’t holding up their side of the bargain. Was there actually any evidence for that?

Dave: I think the American side for pulling out was more based on them lying about having a nuclear weapons program at one point in time, leading up to the deal, which is strange, because that was the motivation for the deal in the first place, was to stop them from continuing their nuclear weapons, their research and investment. So, I’m not quite sure how else to frame it outside of that.

Melissa: Yeah, Israeli Prime Minister Netanyahu made this presentation where he revealed all these different archived documents from Iran, and mostly what they indicated was that Iran had an ongoing nuclear weapons program before the JCPOA, which is what we knew, and that they were planning on executing that program. For people like me, I felt like that was the justification for the JCPOA in the first place.

Ariel: And so, you both deal a lot with, at least Melissa I know you deal a lot with monitoring. Dave, I believe you do, too. With something like the Iran deal, if we had continued with it, what is the process involved in making sure the weapons aren’t being created? How do we monitor that?

Melissa: It’s a really difficult multilayered technical and legal proposition. You have to get the parties involved to agree to the terms, and then you have to be able to technically and logistically implement the terms. In the Iran deal, there were some things that were included and some things that were not included. Not because it was not technically possible, but because Iran or the other parties would not agree to it.

It’s kind of a strange marriage between diplomacy and technology, in order to execute these agreements. One of the criticisms of the Iran deal was that missiles weren’t included, so sure enough, Dave was monitoring many, many missile launches, and our colleague, Shea Cotton, even made a database of North Korean missile launches, and Americans really hated that Iran was launching these missiles, and we could see that they were happening. But the bottom line was that they were not part of the JCPOA agreement. That agreement focused only on nuclear, and the reason it did was because Iran refused to include missiles or human rights and these other kinds of things.

Dave: That’s right. Negotiating Iran’s missile program is a bit of another issue entirely. Iran’s missile program began before their nuclear program did. Its accelerated development has corresponded to their own security concerns within the region, and they have, at the moment, a conventional ballistic missile force. The Iranians look at that program as being a completely different issue.

Ariel: Just quickly, how do you monitor a missile test? What’s involved in that? What do you look for? How can you tell they’re happening? Is it really obvious, or is there some sort of secret data you access?

Dave: A lot of the work that we do — Melissa and I, Shea Cotton, Jeffrey Lewis, and some other colleagues — is entirely based on information from the public. It’s all open source research, so if you know what you’re looking for, you can pull all the same information that we do from various sources of free information. The Iranians will often put propaganda or promo videos of their missile tests and launches as a way to demonstrate that they’re becoming a more sophisticated, technologically modern, ballistic missile producing nation.

We also get reports from the US government that are published in news sources. Whether from the US government themselves, or from reporters who have connections or access to the inside, and we take all this information, and Melissa will probably speak to this a bit further, but we fuse it together with satellite imagery of known missile test locations. We’ll reconstruct a much larger, more detailed chain of events as to what happened when Iran does missile testing.

Melissa: I have to admit, there’s just more open source information available about missile tests, because they’re so spread out over large areas and they have very large physical attributes to the sites, and of course, something lights up and ignites, and it takes off into the air where everyone can see it. So, monitoring a missile launch is easier than monitoring a specific facility in a larger network of facilities, for a nuclear program.

Ariel: So now that Trump has pulled out of the Iran deal, what happens next with them?

Melissa: Well, I think it’s probably a pretty bad sign. What I’ve heard from colleagues who work in or around the Trump administration is that confidence was extremely high on progress with North Korea, and so they felt that they didn’t need the Iran deal anymore. And in part, the reason that they violated it was because they felt that they had so much already going in North Korea, and those hopes were really false. There was a huge gap between reality and those hopes. It can be frustrating as an open source analyst who says these things all the time on Twitter, or in reports, that clearly nobody reads them. But no, things are not going well in North Korea. North Korea is not unilaterally giving over their nuclear weapons, and if anything, violating the Iran deal has made North Korea more suspicious of the US.

Ariel: I’m going to use that to transition to North Korea here in just a minute, but I guess I hadn’t realized that there was a connection between things seeming to go well in North Korea and the US pulling out of the Iran deal. You talk about hopes that the Iran deal was no longer necessary because of North Korea, but what is the connection there? How does that work?

Melissa: Well, so the Iran deal represented diplomatic negotiation with an outcome among many parties that came to a concrete result. It happened under the Obama administration, which I think is why there is some distaste for it under the Trump administration. That doesn’t matter to North Korea. That doesn’t matter to other states. What matters is whether the United States appears to be able to follow through on a promise that may pass one administration to another.

The US has, in a way, violated some norms about diplomatic behavior by withdrawing from this agreement. That’s not to say that the US hasn’t done it before. I remember Clinton signing the Rome Statute, for the International Criminal Court, then Bush unsigning it, and it never got ratified. But it’s bad for our reputation. It makes us look like we’re not using international law the way other countries expect us to.

Ariel: All right. So before we move officially to North Korea, is there anything else, Melissa and Dave, that either of you want to mention about Iran that you think is either important for people to know about, that they don’t already, or that is important to reiterate?

Melissa: No. I guess let’s go to North Korea. That’s our bread and butter.

Ariel: All right. Okay, so yeah, North Korea’s been in the news for a while now. Before we get to what’s going on right now, I was hoping you could both talk a little bit about some of the background with North Korea, and how we got to this point. North Korea was once part of the Non-Proliferation Treaty, and they pulled out. Why were they in it in the first place? What prompted them to pull out? We’ll go from there.

Melissa: Okay, I’ll jump in, although Dave should really tell me if I keep talking over him. North Korea withdrew from the NPT, or so it said. It’s actually diplomatically very complex what they did, but North Korea either was or is a member of the Nuclear Non-Proliferation Treaty, the NPT, depending on who you ask. That is in large part because they were, and then they announced their withdrawal in 2003, and eventually we no longer think of them as officially being a member of the NPT, but of course, there were some small gaps over the notification period that they gave in order to withdraw, so I think my understanding is that some of the organizations involved actually keep a little North Korean nameplate for them.

But no, we don’t really think of them as being a member of the NPT, or the IAEA. Sadly, while that may not be legally settled, they’re out; they’re not abiding by traditional regimes or norms on this issue.

Ariel: And can you talk a little bit about, or do we know what prompted them to withdraw?

Melissa: Yeah. I think they really, really wanted nuclear weapons. I mean, I’m sorry to be glib about it, but … Yeah, they were seeking nuclear weapons since the ’50s. Kim Il-sung said he wanted nuclear weapons, he saw the power of the US’ weapons that were dropped on Japan. The US threatened North Korea during the Korean War with use of nuclear weapons, so yeah, they had physicists working on this issue for a long time.

They joined the NPT, they wanted access to the peaceful uses of nuclear power, they were very duplicitous in their work, but no, they kept working towards nuclear weapons. I think they reached a point where they probably thought that they had the technical capability, and they were dissatisfied with the norms and status as a pariah state, so yeah, they announced they were withdrawing, and then they exploded something three years later.

Ariel: Now that they’ve had a program in place then I guess for, what? Roughly 15 years then?

Melissa: Oh, my gosh. Math. Yeah. No, so I was sitting in Seoul. Dave, do you remember where you were when they had their first nuclear test?

Dave: This was-

Melissa: 2006.

Dave: A long time ago. I think I was still in high school.

Melissa: I mean, this is a challenge to our whole field, right? Is that there are generations passing through, so there are people who remember 1945. I don’t. But I’m not going to reveal my age. I was fresh out of grad school, and working in Seoul when North Korea tested its first nuclear device.

It was like cognitive dissonance around the world. I remember the just shock of the response out of pretty much every country. I think China had a few minutes notice ahead of everybody else, but not much. So yes, we did see the reactor getting built, yes, we did see activity happening at Yongbyon, no we deeply misunderstood and underestimated North Korea’s capabilities.

So, when that explosion happened, it was surprising, to people in the open source anyways. People scrambled. I mean, that was my first major gig; it’s why I still do this today. We had an office at the International Crisis Group of about six people, and all our Korean speakers were immediately sucked into other responsibilities, and so it was up to me to try to take all these little puzzle pieces, about the seismic information, about the radionuclides that were actually leaked in that first explosion, and figure out what a Constant Phoenix was, and who was collecting what, and put it all together to try to understand what kind of warhead they may or may not have exploded, if it was even a warhead at that point.

Ariel: I’m hoping that you can explain how monitoring works. I’m an ex-seismologist, so I actually do know a little bit about the seismic side of monitoring nuclear weapons testing, but I’m assuming a lot of listeners do not. I’m not as familiar with things like the radionuclide testing, or the Phoenix that you mentioned was a new phrase for me as well. I was hoping you could explain what you go through to monitor and confirm whether or not a nuclear weapon has been tested, and before you do that real quick — so did you actually see that first … Could you see the explosion?

Melissa: No. I was in Seoul, so I was a long ways away, and I didn’t really … Of course, I did not see or feel anything. I was in an office in downtown Seoul, so I remember actually how casual the citizens of Seoul were that day. I remember feeling kind of nervous about the whole thing. I was registered with the Canadian embassy in Seoul, and we actually had, when you registered with the embassy, we had instructions of what to do in case of an emergency.

I remember thinking, “Gosh, I wonder if this is an emergency,” because I was young and fresh out of school. But no, I mean, as I looked down out of our office windows, sure enough at noon, the doors opened up and all my Korean colleagues streamed out to lunch together, and really behaved pretty traditionally, the way everyone normally does.

South Koreans have always been very stoic about these tests, and I think they’re taken more anxiously by foreigners like me. But I do also remember there were these aerial sirens going off that day, and I actually never got an explanation of why there were sirens going off that day. I remember they tested them when I lived there, but I’m not sure why the sirens were going off that day.

Ariel: Okay. Let’s go back to how the monitoring works, and Dave, I don’t know if this is something that you can also jump in on?

Dave: Yeah, sure. I think I’ll let Melissa start and I’ll try to fill in any gaps, if there are any.

Melissa: So, the Comprehensive Test Ban Treaty Organization is an organization based in Vienna, but they have stations all over the world, and they’re continually monitoring for nuclear explosions. The Constant Phoenix is a WC-135. It’s a US Air Force vehicle, and so the information coming out of it is not open source and I don’t get to see it, but what I can do, or what investigative journalists sometimes do, is see when it’s taking off from Guam, or an Air Force base, and then I know at least that the US Air Force is thinking it’s going to be sensing something. So this is like a specialty vehicle. I mean, it’s basically an airplane, but it has many, many interesting sensor arrays all over it that sniff the air. What they’re trying to detect are xenon isotopes, and these are isotopes that are possibly released from an underground nuclear test, depending on how well the tunnel was sealed.

In that very first nuclear explosion in 2006, some noble gases were released and I think that they were detected by the WC-135. I also remember back then, although this was a long time ago, that there were a few sensing stations in South Korea that detected them as well. What I remember from that time is that the ratio of xenon isotopes was definitely telling us that this was a nuclear weapon. This wasn’t like a big hoax that they’d exploded a bunch of dynamite or something like that, which actually would be a really big hoax, and hard to pull off. But we could see that it was a nuclear test, it was probably a fission device. The challenge with detecting these gases is that they decay very quickly, so we have, 1) not always sensed radionuclides after North Korea’s nuclear tests, and, 2) if we do sense them, sometimes they’re decayed enough that we can’t get anything more than it was a nuclear test, and not a chemical explosion test.
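As a back-of-the-envelope illustration of why that detection window closes so quickly, here is a short decay calculation. It assumes approximate half-lives of about 5.25 days for xenon-133 and about 9 hours for xenon-135; within a couple of weeks the short-lived isotope, and therefore the telltale ratio between the two, is essentially gone.

```python
# Simple exponential decay: N(t) = N0 * exp(-ln(2) * t / half_life).
import math

def remaining_fraction(hours_elapsed, half_life_hours):
    """Fraction of the original isotope still present after hours_elapsed."""
    return math.exp(-math.log(2) * hours_elapsed / half_life_hours)

XE133_HALF_LIFE_H = 5.25 * 24   # ~5.25 days (approximate)
XE135_HALF_LIFE_H = 9.1         # ~9 hours (approximate)

for days in (1, 3, 7, 14):
    t = days * 24.0
    print(f"after {days:2d} days: "
          f"Xe-133 {remaining_fraction(t, XE133_HALF_LIFE_H):6.1%} left, "
          f"Xe-135 {remaining_fraction(t, XE135_HALF_LIFE_H):.2e} of original left")
```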

Dave: Yeah, so I might be able to offer, because Melissa did a great job of explaining how the process works, is maybe a bit more of a recent mechanism and how we interact with these tests as they occur. Usually most of the people in our field follow a set number of seismic-linked Twitter accounts that will give you updates on when some part of the world is shaking for some reason or another.

They’ll put a tweet or maybe you’ll get an email update saying, “There was an earthquake in California,” because we get earthquakes all the time, or in Japan. Then, all of a sudden you hear there’s an earthquake in North Korea and everyone pauses. You look at this little tweet, I guess, or email, you can also get them sent to your phone via text message, if you sign up for whichever region of the world you’re interested in, and you look for what province was this earthquake in?

If it registers in the right province, you’re like, “Okay.” What’s next is we’ll look at the data that comes out immediately. CTBTO will come out with information, usually within a couple of days, if not immediately after, and we’ll look at the seismic waves. While I don’t study these waves, the type of seismic signature you get from a nuclear explosion is like a fingerprint. It’s very unique and different from the type of seismic signature you get from an earthquake of varying degrees.

We’ll take that and compare those to previous tests, which the United States and Russia have done infinitely more of than any other country in the world. And we’ll see if those match. And as North Korea has tested more nuclear devices, the signatures have started becoming more consistent. If that matches up, we’ll have a soft confirmation that they did it, and then we’ll wait for government news and press releases to give us the final nail confirming that there was a nuclear test.
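For anyone who wants to look at the waveforms themselves, the openly available data Dave describes can be pulled with a few lines of Python using the ObsPy library. The station, channel, location code, and event time below are illustrative assumptions (roughly the time of the September 2017 test, seen from a regional station) and may need adjusting:

```python
# Fetch and plot a vertical-component seismogram around a reported event time
# from IRIS's public FDSN web service, then bandpass filter it to emphasize
# the regional P arrival.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("2017-09-03T03:30:00")  # approximate origin time (assumption)

st = client.get_waveforms(network="IC", station="MDJ", location="00",
                          channel="BHZ", starttime=t0, endtime=t0 + 10 * 60)
st.filter("bandpass", freqmin=0.8, freqmax=4.5)
st.plot()
```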

Melissa: Yeah, so as Dave said, as a citizen scientist, I love just setting up the USGS alert, and then if there’s an earthquake near the village of Punggye-ri, I’m like, “Ah-hah, I got you” because it’s not a very seismically active area. When the earthquakes happen that are related to an underground nuclear test, they’re shallow. They’re not deep, geological events.

Yeah, there’s some giveaways like, people like to do them on the hour, or the half hour, and mother nature doesn’t care. But some resources for your listeners, if they want to get involved and see, is you can go to the USGS website and set up your own alert. The CTBTO has not just seismic stations, but the radionuclide stations I mentioned, as well as infrasound and hydroacoustic, and other types of facilities all over the world. There’s a really cool map on their website where they show the over… I think it’s nearly 300 stations all around the world now, that are devoted exclusively to monitoring nuclear tests.
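As a sketch of what a do-it-yourself alert can look like, the USGS earthquake catalog has a public FDSN web service that can be queried for events near a point of interest. The coordinates below are only approximate, and the search radius and magnitude threshold are arbitrary choices for illustration:

```python
# Query the USGS earthquake catalog (GeoJSON) for events within ~100 km of the
# approximate Punggye-ri coordinates and print time, magnitude, depth, and place.
import requests

params = {
    "format": "geojson",
    "starttime": "2017-01-01",
    "latitude": 41.3,       # approximate
    "longitude": 129.1,     # approximate
    "maxradiuskm": 100,
    "minmagnitude": 3.0,
}
resp = requests.get("https://earthquake.usgs.gov/fdsnws/event/1/query",
                    params=params, timeout=30)
resp.raise_for_status()

for feature in resp.json()["features"]:
    props = feature["properties"]
    lon, lat, depth_km = feature["geometry"]["coordinates"]
    # "time" is milliseconds since the Unix epoch
    print(props["time"], props["mag"], f"{depth_km} km deep", props["place"])
```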

They get their information out, I think in seven minutes, and I don’t get that information necessarily in the first seven minutes, because I’m not a state member, a state party. But they will give out information very soon afterwards, and actually based on the seismic data, our colleagues, Jeffrey Lewis and some other young, smart people of the world, actually threw together a map, not using CTBTO data, but using the seismic stations of I think Iran, China, Japan, South Korea, and so if you go to their website, it’s called SleuthingFromTheInternet.com, you can set up little alerts there too, or scale for all the activities that are happening.

That was really just intended I think to be a little bit transparent with the seismic data and try to see data from different country stations, and in part, it was conceived because I think the USGS was deleting some of their explosions from the database and someone noticed. So now the idea is that you take a little bit of data from all these different countries, and that you can compare it to each other.

The last place I would suggest is to go to the IRIS seismic monitoring station, because just as Dave was mentioning, each seismic event has a different P wave, and so it shows up differently, like a fingerprint. And so, when IRIS puts out information, you can very quickly see how the different explosions in North Korea compare to each other, relatively, and so that can be really useful, too.

Dave: I will say, though, that sometimes you might get a false alarm. I believe it was with the last nuclear test: there was one reporting station, an automatic alert system run out of the UK, that didn’t report it. No one caught that it didn’t, and then it did report it like a week later. So, for all of half an hour until we figured it out, there was a bit of a pause, because there was some concern they might have done another test again, which would have been the seventh, but it turned out to just be a delayed report.

Most of the time these things work out really well, but you always have to look for secondary and third sources of confirmation when these types of events happen.

Ariel: So a quick aside, we will have links to everything that you both just brought up in the transcript, so anyone interested in following up with any of these options, will be able to. I’m also going to share a fun fact that I learned, and that was, we originally had a global seismic network in order to monitor nuclear weapons testing. That’s why it was set up. And it’s only because we set that up that we actually were able to prove the plate tectonics theory.

Melissa: Oh, cool.

Dave: That’s really cool.

Melissa: Yeah. No, the CTBTO is really interesting, because even though the treaty isn’t in force yet, they have these amazing scientific resources, and they’ve done all kinds of things. Like, they can hear whales moving around with their hydroacoustic technology, and when Iran had an explosion, a major explosion at their solid motor missile facility, they detected that as well.

Ariel: Yeah. It’s fun. Like I said, I did seismology a while ago so I’m signed up for lots of fun alerts. It’s always fun to learn about where things are blowing up in the earth’s surface.

Melissa: Well, that’s really the magic of open source to me. I mean, it used to be that a government came out and said, “Okay, this is what happened, and this is what we’re going to do about it.” But the idea that me, like a regular person in the world, can actually look up this primary information in the moments that it happens, and make a determination for myself, is really empowering. It makes me feel like I have the agency I want to have in understanding the world, and so I have to admit, that day in South Korea, when I was sitting there in the office tower and it was like, “Okay, all hands on deck, everyone’s got to write a report” and I was trying to figure it out, I was like, “I can’t believe I’m doing this. I can’t believe I can do this.” It’s such a different world already.

Ariel: Yeah. That is really amazing. I like your description. It’s really empowering to know that we have access to this information. So, I do want to move on and with access to this information, what do we know about what’s going on in North Korea right now? What can you tell us about what their plans are? Do we think the summit will happen? I guess I haven’t kept up with whatever the most recent news is. Do we think that they will actually do anything to get rid of their nuclear weapons?

Dave: I think at this point, the North Koreans feel really comfortable with the amount of information and progress they’ve made in their nuclear weapons program. That’s why they’re willing to talk. This program was primarily as a means to create a security assurance for the North Koreans because the Americans and South Koreans and whatnot have always been interested in regime change, removing North Korea from the equation, trying to end the thing that started in the 1950s, the Korean War, right? So there’d just be one Korea, we wouldn’t have to worry about North Korea, or this mysterious Hermit Kingdom, above the 38th parallel.

With that said, there’s been a lot of speculation as to why the North Koreans are willing to talk to us now. Some people have been floating around the idea that maximum pressure, I think that was the word used, with sanctions and whatnot, has brought the North Koreans to their knees, and now they’re willing to give up their nukes, as we’ve been hearing about.

But the way the North Koreans use denuclearization is very important. Because on one hand, that could mean that they’re willing to give up their nuclear weapons, and to denuclearize the state itself, but the way the North Koreans use it is much broader. It’s more used in the way of denuclearizing the peninsula. It’s not specifically reflective onto them.

Now that they’ve finally achieved some type of reasonable success with their nuclear weapons program, they’re more in a position where they think they can talk to the United States as equals, and denuclearization falls into the terminology that it’s used by other nuclear weapons states, where it’s a, “In a better world we won’t need these types of horrible weapons, but we don’t live in that world today, so we will stand behind the effort to denuclearize, but not right now.”

Melissa: Yeah, I think we can say that if we look at North Korea’s capabilities first, and then why they’re talking now, we can see that in the time when Dave and I were cutting our teeth, they were really ramping up their nuclear and missile capabilities. It wasn’t immediately obvious, because a lot of what was happening was inside a laboratory or inside a building, but then eventually they started doing nuclear tests and then they did more and more missile tests.

It used to be that a missile test was just a short range missile off the coast, sometimes it was a political grandstanding. But if you look, our colleague, Shea Cotton, made a missile database that shows every North Korean missile test, and you can see that in the time under Kim Jong-un, those tests really started to ramp up. I think Dave, you started at CNS in like 2014?

Dave: Right around then.

Melissa: Right around then, so they jumped up to like 19 missile tests that year. I can say this because I’m looking at the database right now, and they started doing really more interesting things than ever before, too. Even though diplomatically and politically we were still thinking of them as being backwards, as not having a very good capability, if we looked at it quantitatively, we could say, “Well, they’re really working on something.”

So Dave actually was really excellent at geolocating. When they did engine tests, we could measure the bell of the engine and get a sense of what those engines were about. We could see solid fuel motors being tested, and so this went all the way up until ICBM launched last fall, and then they were satisfied.

Ariel: So when you say engine testing, what does that mean? What engine?

Dave: The North Korean ballistic missile fleet used to be entirely tied to this really old Soviet missile called the Scud. If anyone’s played video games in the late ’90s or early 2000s, that was the small missile that you always had to take out or something along that line, and it was fairly primitive. It was a design that the North Koreans hadn’t demonstrated they were able to move beyond. That’s why, when the last three years started to kick in and the North Koreans started to field more complicated missiles, and started showing that they were doing engine tests with more experimental, more advanced designs that we had seen in other parts of the world previously, some people were a bit skeptical or doubting that the North Koreans were actually making serious progress. Then last year, they tested their first intermediate range ballistic missile, which can hit Guam, which is something that they’d been trying to do for a while, but it hadn’t worked out. Then they made that missile larger, and they made their first ICBM.

Then they made that missile even larger, came up with a much more ambitious engine design using two engines instead of one. They had a much more advanced steering system, and they came up with the Hwasong-15 which is their longest range ICBM. It’s a huge shift from the way we were having this conversation 5 to 10 years ago, where we were looking at their space launch vehicles, which were, again, modified Scuds that were stretched out and essentially tied together, to an actual functioning ICBM fleet.

This technological shift, paired with their nuclear weapons developments, has really demonstrated that the North Koreans are no longer a threat that's 10 to 20 years around the corner; they actually possess the ability to launch nuclear weapons at the United States.

Melissa: And back when they had their first nuclear test in 2006, people were like, “It’s a device.” I think for years, we still call it a device. But back then, the US and others kept moving the goalposts. They were saying, “Well, all right. They had a nuclear device explode. We don’t know how big it was, they have no way of delivering it. We don’t know what the yield was. It probably fizzled.” It was dismissive.

So, from that period, 2006 to today, it's been a really remarkable trajectory. Almost every criticism that North Korea has faced, right down to the heat shield on their ICBM, has been addressed vociferously with propaganda, photos and videos that we in turn can analyze. And yeah, I think they have demonstrated essentially that they can explode something, and they can launch a missile that can carry something that can explode.

The only thing they haven’t done, and Dave can chime in here, is explode a nuclear weapon on the tip of a missile. Other countries have done this, and it’s terrifying, and because Dave is such a geographically visual person, I’ll let him describe what that might look like. But if we keep goading them, if we keep telling them they’re backwards, eventually they’re going to want to prove it.

Dave: Yeah, so off of Melissa's point, this is something that I believe Jeffrey might have coined. It's called the Juche Bird, which is a play on Frigate Bird, a live nuclear warhead test that the Americans conducted. To prove that the system in its entirety — the nuclear device, the missile, the reentry shield — all works, and that it's not just small random successes in different parts of a much larger program, the North Koreans would take a live nuclear weapon, put it on the end of a long range missile, launch it, and detonate it at a specific location to show that they have the ability to actually use the purported weapon system.

Melissa: So if you're sitting in Japan or South Korea, but especially Japan, and you imagine North Korea launching an intermediate range or intercontinental ballistic missile over your country, with a nuclear weapon on it, in order to execute an atmospheric test, that makes you extremely nervous. Extremely nervous, and we all should be a little bit nervous, because it's really hard for anyone in the open-source community, and I would argue in the intelligence community, to know, "Well, this is just an atmospheric test. This isn't the beginning of a war."

We would have to trust that they pick up the trajectory of that missile really fast and determine that it's not heading anywhere. That's the challenge with all of these missile tests: no one can tell whether there's a warhead on it or not, and then we start playing games with ballistic missile defense, and that is a whole new can of worms.

Ariel: What do you guys think is the risk that North Korea or any other country for that matter, would intentionally launch a nuclear weapon at another country?

Melissa: For me, it's accidents, and an accident can unfold a couple of different ways. One way would be, perhaps, the US performing joint exercises. North Korea has some sensing equipment up on mountain peaks, and Dave has probably found every single one, but it's not perfect. It's not great, and if the picture that comes back to them is a little fuzzy, maybe they conclude this is no longer a joint exercise, this is the beginning of an attack, and they decide to engage.

They've long said that they believe a war will start on the pretext of a joint exercise. In the reverse scenario, what if North Korea does launch an ICBM with a nuclear warhead in order to perform a test, and the US or Japan or South Korea think, "Well, this is it. This is the war"? So it's those accidental scenarios that I worry about, or even what happens if a test goes badly, or someone is harmed in some way.

I worry that these states would have a hard time politically rolling back where they feel they have to be, based on these high stakes.

Dave: I agree with Melissa. I think the highest risk we have, depending also on our nuclear posture, is an accident. There have been accidents in the past where someone at a monitoring base picks up a bunch of blips on a radar and people start initiating the response protocols, and luckily we've been able to avoid seeing that through to completion.

Now, with the North Koreans, this could work in their direction as well. I can't imagine that their sensing technology is up to par with what the United States has, or had, back when these accidents were a real thing and they happened. So if the North Koreans see a military exercise that they don't feel comfortable with, or they have some type of technical glitch on their side, they might launch something, and that would be the start of a conflict.

Ariel: One of the final questions that I have for both of you. I’ve read that while nuclear weapons are scary, the greater threat with North Korea could actually be their conventional weapons. Could either of you speak to that?

Dave: Yeah, sure. North Korea has a very large conventional army. Some people might try to make jokes about how modern that army is, but military force only needs to be so modern with the type of geographical game that’s in play on the Korean Peninsula. Seoul is really not that far from the DMZ, and it’s a widely known fact that North Korea has tons of artillery pointed at Seoul. They’ve had these things pointed there since the end of the Korean War, and they’re all entrenched.

You might be able to hit some of them, but you're not going to hit all of them. This artillery, in combination with their conventional ballistic missile force (we're talking about missiles that aren't carrying a WMD), is a really big threat for some type of conventional action.

Seoul is a huge city. The metropolitan area, at least, has a population of over 20 million people. I'm not sure if you've ever been to Seoul; it's a great, beautiful city, but traffic is horrible, and if everyone's trying to leave the city when something happens, everyone north of the river is screwed, and with congestion on the south side it would just be a total disaster. Outside of the whole nuclear aspect of this dangerous relationship, North Korea's conventional forces are equally terrifying.

Melissa: I think Dave’s bang on, but the only thing I would add is that one of the things that’s concerning about having both nuclear and conventional forces is how you use your conventional forces with that extra nuclear guarantee. This is something that our boss, Jeffrey Lewis, has written about extensively. But do you use that extra measure of security and just preserve it, save it? Does Kim Jong-un go home at night to his family and say, “Yes, I feel extra safe today because I have my nuclear security?”

Or do you use that extra nuclear security to increase the number of provocations that you carry out conventionally? Because we've had these crises break out over the sinking of the Cheonan naval vessel, or the shelling of Yeonpyeong, near the border. In both cases South Koreans died, but the question is: will North Korea feel emboldened by its nuclear security, and will it carry out more conventional provocations?

Ariel: Okay, and so for the last question that I want to ask: we've talked about all these things that could go wrong, and there's really just never anything that positive about a nuclear weapons discussion, but I still want to end by asking, is there anything that gives you hope about this situation?

Dave: That's a tough question. I mean, on one side, we have a nuclear armed North Korea, and this is something that we knew was coming for quite some time. If anything, one thing that I know I have been advocating, and I believe Melissa has as well, is conversation and dialogue between North Korea and all the other associated parties, including the United States, as a way to open some line of communication, hopefully so that accidents don't happen.

‘Cause North Korea’s not going to be giving up their nukes anytime soon. Even though the talks that you may be having aren’t going to be as productive as you would want them to be, I believe conversation is critical at this moment, because the other alternatives are pretty bad.

Melissa: I guess I’ll add on that we have Dave now, and I know it sounds like I’m teasing my colleague, but it’s true. Things are bad, things are bad, but we’re turning out generation after generation of young, brilliant, enthusiastic people. Before 2014, we didn’t have a Dave, and now we have a Dave, and Dave is making more Daves, and every year we’re matriculating students who care about this issue, who are finding new ways to engage with this issue, that are disrupting entrenched thinking on this issue.

Nuclear weapons are old. They are scary, they are the biggest explosions that humans have ever made, but they are physical and finite, and the technology is aging, and I do think that with new creative, engaging approaches, the next generation's going to come along and they're going to be able to address this issue with new hacks. These can be technical hacks, they can be on the side of verification and trust building. These can be diplomatic hacks.

The grassroots movements we see all around the world, that are taking place to ban nuclear weapons, those are largely motivated by young people. I’m on this bridge where I get to see… I remember the Berlin Wall coming down, I also get to see the students who don’t remember 9/11, and it’s a nice vantage point to be able to see how history’s changing, and while it feels very scary and dark in this moment, in this administration, we’ve been in dark administrations before. We’ve faced much more terrifying adversaries than North Korea, and I think it’s going to be generations ahead who are going to help crack this problem.

Ariel: Excellent. That was a really wonderful answer. Thank you. Well, thank you both so much for being here today. I’ve really enjoyed talking with you.

Melissa: Thanks for having us.

Dave: Yeah, thanks for having us on.

Ariel: For listeners, as I mentioned earlier, we will have links to anything we discussed on the podcast in the transcript of the podcast, which you can find from the homepage of FutureOfLife.org. So, thanks again for listening, like the podcast if you enjoyed it, subscribe to hear more, and we will be back again next month.

[end of recorded material]


Podcast: What Are the Odds of Nuclear War? A Conversation With Seth Baum and Robert de Neufville

What are the odds of a nuclear war happening this century? And how close have we been to nuclear war in the past? Few academics focus on the probability of nuclear war, but many leading voices, like former US Secretary of Defense William Perry, argue that the threat of nuclear conflict is growing.

On this month's podcast, Ariel spoke with Seth Baum and Robert de Neufville from the Global Catastrophic Risk Institute (GCRI), who recently coauthored a report titled A Model for the Probability of Nuclear War. The report examines 60 historical incidents that could have escalated to nuclear war and presents a model for determining the odds that we could have some type of nuclear war in the future.

Topics discussed in this episode include:

  • The most hair-raising nuclear close calls in history
  • Whether we face a greater risk from accidental or intentional nuclear war
  • China's secrecy vs the United States' transparency about nuclear weapons
  • Robert's first-hand experience with the false missile alert in Hawaii
  • How researchers can help us understand nuclear war and craft better policy


You can listen to this podcast above or read the transcript below.


Ariel: Hello, I’m Ariel Conn with the Future of Life Institute. If you’ve been listening to our previous podcasts, welcome back. If this is new for you, also welcome, but in any case, please take a moment to follow us, like the podcast, and maybe even share the podcast.

Today, I am excited to present Seth Baum and Robert de Neufville with the Global Catastrophic Risk Institute (GCRI). Seth is the Executive Director and Robert is the Director of Communications; Robert is also a superforecaster. They have recently written a report called A Model for the Probability of Nuclear War. This was a really interesting paper that looks at 60 historical incidents that could have escalated to nuclear war, and it basically presents a model for how we can determine the odds that we could have some type of nuclear war in the future. So, Seth and Robert, thank you so much for joining us today.

Seth: Thanks for having me.

Robert: Thanks, Ariel.

Ariel: Okay, so before we get too far into this, I was hoping that one or both of you could just talk a little bit about what the paper is and what prompted you to do this research, and then we’ll go into more specifics about the paper itself.

Seth: Sure, I can talk about that a little bit. So the paper is a broad overview of the probability of nuclear war, and it has three main parts. One is a detailed background on how to think about the probability, explaining the difference between the concept of probability and the concept of frequency, and related background in probability theory that's relevant for thinking about nuclear war. Then there is a model that scans across a wide range, maybe the entire range, but at least a very wide range of scenarios that could end up in nuclear war. And then finally there is a data set of historical incidents that at least had some potential to lead to nuclear war, with those incidents organized in terms of the scenarios in the model. The historical incidents give us at least some indication of how likely each of those scenario types is.

Ariel: Okay. At the very, very start of the paper, you guys say that nuclear war doesn’t get enough scholarly attention, and so I was wondering if you could explain why that’s the case and what role this type of risk analysis can play in nuclear weapons policy.

Seth: Sure, I can talk to that. The paper, I believe, specifically says that the probability of nuclear war does not get much scholarly attention. In fact, we put a fair bit of time into trying to find every previous study that we could, and there was really, really little that we were able to find, and maybe we missed a few things, but my guess is that this is just about all that’s out there and it’s really not very much at all. We can only speculate on why there has not been more research of this type, my best guess is that the people who have studied nuclear war — and there’s a much larger literature on other aspects of nuclear war — they just do not approach it from a risk perspective as we do, that they are inclined to think about nuclear war from other perspectives and focus on other aspects of it.

So the intersection of people who are both interested in studying nuclear war and inclined to think in quantitative risk terms is a relatively small population of scholars, which is why there's been so little research; at least that's my best guess.

Robert: Yeah, it’s a really interesting question. I think that the tendency has been to think about it strategically, something we have control over, somebody makes a choice to push a button or not, and that makes sense from some perspective. I think there’s also a way in which we want to think about it as something unthinkable. There hasn’t been a nuclear detonation in a long time and we hope that there will never be another one, but I think that it’s important to think about it this way so that we can find the ways that we can mitigate the risk. I think that’s something that’s been neglected.

Seth: Just one quick clarification, there have been very recent nuclear detonations, but those have all been test detonations, not detonations in conflict.

Robert: Fair enough. Right, not a use in anger.

Ariel: That actually brings up a question that I have. As you guys point out in the paper, we’ve had one nuclear war and that was World War II, so we essentially have one data point. How do you address probability with so little actual data?

Seth: I would say “carefully,” and this is why the paper itself is very cautious with respect to quantification. We don’t actually include any numbers for the probability of nuclear war in this paper.

Calculating probabilities is easy when you have a large data set of that type of event. If you want to calculate the probability of dying in a car crash, for example, there's lots of data on that because it's something that happens with a fairly high frequency. For nuclear war, there's just one data point, and it occurred under circumstances, World War II, that are very different from what we have right now. Maybe there would be another world war, but no two world wars are the same. So we have to, instead, look at all the different types of evidence that we can bring in to get some understanding of how nuclear war could occur, which includes evidence about the process of going from calm into periods of tension, or from the thought of going to nuclear war all the way to the actual decision to initiate one. And then we also look at a wider set of historical data, which is something we did in this paper, looking at incidents that did not end up as nuclear wars but pushed at least a little bit in that direction, to see what we can learn about how likely it is for things to go in the direction of nuclear war, which tells us at least something about how likely it is to get there all the way.
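
For a rough sense of how little a single occurrence constrains an annual probability, here is a generic textbook illustration in Python, a Beta-Binomial update under a uniform prior (Laplace's rule of succession). To be clear, this is not the method the GCRI paper uses; the paper avoids putting numbers on the probability precisely because assumptions like treating each year as an independent trial are hard to defend.

```python
# A generic sparse-data illustration (Laplace's rule of succession), NOT the
# approach taken in the paper under discussion. It treats each year since 1945
# as an independent trial, which is itself a questionable modeling assumption.

def rule_of_succession(occurrences, trials):
    """Posterior mean probability per trial under a uniform prior."""
    return (occurrences + 1) / (trials + 2)

# One nuclear war in roughly 73 years of observation at the time of this podcast:
print(rule_of_succession(1, 73))  # ~0.027 per year
```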

Ariel: Robert, I wanted to turn to you on that note, you were the person who did a lot of work figuring out what these 60 historical events were. How did you choose them?

Robert: Well, I wouldn't really say I chose them; I tried to just find every event that was there. There are a few things that we left out because we thought they fell below some threshold of seriousness, but in theory you could probably expand the scope even a little wider than we did. But to some extent we just looked at what's publicly known. I think the data set is really valuable, I hope it's valuable, but one of the issues with it is that it's kind of a convenience sample of the things that we know about, and some areas, some parts of history, are much better reported on than others. For example, we know a lot about the Cuban Missile Crisis in the 1960s, a lot of research has been done on that, and there are times when the US government has been fairly transparent about incidents, but we know less about other periods and other countries. We don't have incidents from China's nuclear program, but that doesn't mean there weren't any; it just means they're hard to find out about, and that's an area that would be really interesting to do more research on.

Ariel: So, what was the threshold you were looking at to say, “Okay, I think this could have gone nuclear”?

Robert: Yeah, that’s a really good question. It’s somewhat hard to say. I think that a lot of these things are judgment calls. If you look at the history of incidents, I think a number of them have been blown a little bit out of proportion. As they’ve been retold, people like to say we came close to nuclear war, and that’s not always true. There are other incidents which are genuinely hair-raising and then there are some incidents that seem very minor, that you could say maybe it could have gotten to a nuclear war. But there was some safety incident on an Air Force Base and they didn’t follow procedures, and you could maybe tell yourself a story in which that led to a nuclear war, but at some point you make a judgment call and say, well, that doesn’t seem like a serious issue.

But it wasn’t like we have a really clear, well-defined line. In some ways, we’d like to broaden the data set so that we can include even smaller incidents just because the more incidents, the better as far as understanding, not the more incidents the better as far as being safe.

Ariel: Right. I'd like this question to go to both of you. As you were looking through these historical events (you mentioned that they're already public record, so they're not new per se), were there any that surprised you, and which one or two did you find the most hair-raising?

Robert: Well, I would say one that surprised me, and this may just be because of my ignorance of certain parts of geopolitical history, was an incident with the USS Liberty in the Mediterranean, in which the Israelis mistook it for an Egyptian destroyer and decided to take it out, not realizing it was actually an American research vessel. They attacked it, and the US scrambled planes to respond. The problem was that the planes they would have ordinarily scrambled were out on some other sorties, some exercise, something like that, and they ended up scrambling planes which had a nuclear payload on them. These planes were recalled pretty quickly; word got to Washington and the Secretary of Defense got on the line and said, "No, recall those planes," so it didn't necessarily get that far. But I found it a really shocking incident because it was essentially a friendly-fire confusion, and there were a number of cases like that in which nuclear weapons were involved simply because they happened to be on equipment, where they shouldn't have been, that was used to respond to some kind of real or false emergency. That seems like a bigger issue than I would've at first expected: just the fact that nuclear weapons are lying around somewhere where they could be involved in something.

Ariel: Wow, okay. And Seth?

Seth: Yeah. For me this was a really eye-opening experience. I had some familiarity with the history of incidents involving nuclear weapons, but there turned out to be much more that's gone on over the years than I really had any sense for. Some of that is because I'm not a historian, this is not my specialty, but there were any number of events in which it appears that nuclear weapons were, or at least may have been, seriously considered for use in a conflict.

Just to pick one example, 1954 and 1955 saw what's known as the first Taiwan Straits Crisis, and the second crisis, by the way, in 1958, also included plans for nuclear weapons use. But in the first one there were plans made up by the United States: the Joint Chiefs of Staff allegedly recommended that nuclear weapons be used against China if the conflict intensified, and President Eisenhower was apparently pretty receptive to this idea. In the end a ceasefire was negotiated, so it didn't come to that. Had that ceasefire not been made, the historical record is not clear on whether the US would've used nuclear weapons or not; maybe even the US leadership hadn't made any final decision on the matter. But there were any number of these events, especially in the years and decades right after World War II when nuclear weapons were still relatively new, in which the use of nuclear weapons in conflict seemed to at least get serious consideration that I might not have expected.

I'm accustomed to thinking of nuclear weapons as having a fairly substantial taboo attached to them, but I feel like the taboo has perhaps strengthened over the years, such that leadership now is less inclined to give the use of nuclear weapons serious consideration than it was back then. That may be mistaken, but that's the impression I get: that we may have been quite fortunate to have gotten through the first couple of decades after World War II without an additional nuclear war, and that such a war might be less likely at this time, though still not entirely impossible by any means.

Ariel: Are you saying that you think the risk is higher now?

Seth: I think the risk is probably higher now. I think I would probably say that the risk is higher now than it was, say, 10 years ago because various relations between nuclear armed states have gotten worse, certainly including between the United States and Russia, but whether the probability of nuclear war is higher now versus in, say, the ’50s or the ’60s, that’s much harder to say. That’s a degree of detail that I don’t think we can really comment on conclusively based on the research that we have at this point.

Ariel: Okay. In a little while I’m going to want to come back to current events and ask about that, but before I do that I want to touch first on the model itself, which lists four steps to a potential nuclear war: initiating the event, crisis, nuclear weapon use and full-scale nuclear war. Could you talk about what each of those four steps might be? And then I’m going to have follow-up questions about that next.

Seth: I can say a little bit about that. The model you’re describing is a model that was used by our colleague, Martin Hellman, in a paper that he did on the probability of nuclear war, and that was probably the first paper that develops the study of the probability of nuclear war using the sort of methodology that we use in this paper, which is to develop nuclear war scenarios.

So the four steps in this model are four steps to go from a period of calm into a full-scale nuclear war. His paper was looking at the probability of nuclear war based on an event that is similar to the Cuban Missile Crisis, and what’s distinctive about the Cuban Missile Crisis is we may have come close to going directly to nuclear war without any other type of conflicts in the first place. So that’s where the initiating event and the crisis in this model comes from, it’s this idea that there will be some of event that leads to a crisis, and the crisis will go straight to nuclear weapons use which could then scale to a full-scale nuclear war. The value of breaking it into those four steps is then you can look at each step in turn, think through the conditions for each of them to occur and maybe the probability of going from one step to the next, which you can use to evaluate the overall probability of that type of nuclear war. That’s for one specific type of nuclear war. Our paper then tries to scan across the full range of different types of nuclear war, different nuclear war scenarios, and put that all into one broader model.

Ariel: Okay. Yeah, your paper talks about 14 scenarios, correct?

Seth: That’s correct, yes.

Ariel: Okay, yeah. So I guess I have two questions for you: one, how did you come up with these 14 scenarios, and are there maybe a couple that you think are most worrisome?

Seth: So the first question we can definitely answer: we came up with them through our read of the nuclear war literature and our overall understanding of the risk, and then by iterating as we put the model together, thinking through what made the most sense for how to organize the different types of nuclear war scenarios. Through that process, we ended up with this model.

As far as which ones seem to be the most worrisome, I would say a big question is whether we should be more worried about intentional versus accidental, or inadvertent nuclear war. I feel like I still don’t actually have a good answer to that question. Basically, should we be more worried about nuclear war that happens when a nuclear armed country decides to go ahead and start that nuclear war versus one where there’s some type of accident or error, like a false alarm or the detonation of a nuclear weapon that was not intended to be an act of war? I still feel like I don’t have a good sense for that.

Maybe the one thing I do feel is that it seems less likely that we would end up in a nuclear war from a detonation of a nuclear weapon that was not intentionally an act of war just because it feels to me like those events are less likely to happen. This would be nuclear terrorism or the accidental detonation of nuclear weapons, and even if it did happen it’s relatively likely that they would be correctly diagnosed as not being an act of war. I’m not certain of this. I can think of some reasons why maybe we should be worried about that type of scenario, but especially looking at the historical data it felt like those historical incidents were a bit more of a stretch, a bit further away from actually ending up in nuclear war.

Robert, I’m actually curious, your reaction to that, if you agree or disagree with that.

Robert: Well, I don't think that non-state actors using a nuclear weapon is the big risk right now. But as far as whether it's more likely that we're going to get into a nuclear war through some kind of human error or technological mistake, or whether it will be a deliberate act of war, I can think of scary things that have happened on both sides. I mean, the major thing that looms in one's mind when you think about this is the Cuban Missile Crisis, and that's an example of a crisis during which there were a lot of incidents where you think, well, this could've gone really badly, this could've gone the other way. So a crisis like that, where tensions escalate and each country, in this case the US and Russia, thought the other might seriously threaten the homeland, is very scary.

On the other hand, there are incidents like the 1995 Norwegian rocket incident, which I find fairly alarming. In that incident, Norway was launching a scientific research rocket for studying the weather and had informed Russia that they were going to do this, but somehow that message hadn't gotten passed along to the radar technicians, so the radar technicians saw what looked like a submarine-launched ballistic missile that could have been used to create an EMP, a burst over Russia which would maybe take out radar and could be the first move in a full-scale attack. This is scary because it got passed up the chain and supposedly President Boris Yeltsin, it was Yeltsin at the time, actually activated the nuclear football in case he needed to authorize a response.

Now, we don’t really have a great sense how close anyone came to this, this is a little hyperbole after the fact, but this kind of thing seems like you could get there. And 1995 wasn’t a time of big tension between the US and Russia, so this kind of thing is also pretty scary and I don’t really know, I think that which risk you would find scarier depends a little bit on the current geopolitical climate. Right now, I might be most worried that the US would launch a bloody-nose attack against North Korea and North Korea would respond with a nuclear weapon, so it depends a little bit. I don’t know the answer either, I guess, is my answer.

Ariel: Okay. You guys brought up a whole bunch of things that I had planned to ask about, which is good. I mean, one of my questions had been are you more worried about intentional or accidental nuclear war, and I guess the short answer is, you don’t know? Is that fair to say?

Seth: Yeah, that’s pretty fair to say. The short answer is, at least at this time, they both seem very much worth worrying about.

As far as which one we should be more worried about, this is actually a very important detail to try to resolve for policy purposes, because it speaks directly to how we should manage our nuclear weapons. For example, if we are especially worried about accidental or inadvertent nuclear war, then we should keep nuclear weapons on a relatively low launch posture. They should not be on hair-trigger alert, because when things are on high-alert status, it takes relatively little for nuclear weapons to be launched, which makes it easier for a mistake to lead to a launch. Whereas if we are more worried about intentional nuclear war, then there may be some value to having them on high-alert status in order to have a more effective deterrent, to convince the other side not to launch their nuclear weapons. So this is an important matter to try to resolve, but at this point, based on the research that we have so far, it remains, I think, somewhat ambiguous.

Ariel: I do want to follow up with that. From everything I've read, there doesn't seem to be any real benefit to having things like our intercontinental ballistic missiles on hair-trigger alert, and my understanding is that those are the ones that are on hair-trigger alert, because the submarines and the bombers still have the capability to strike back. Do you disagree with that?

Seth: I can’t say for sure whether or not I do disagree with that because it’s not something that I have looked at closely enough, so I would hesitate to comment on that matter. My general understanding is that hair-trigger alert is used as a means to enhance deterrence in order to make it less likely that either side would use their nuclear weapons in the first place, but regarding the specifics of it, that’s not something that I’ve personally looked at closely enough to really be able to comment on.

Robert: I think Seth's right that it's a question that needs more research in a lot of ways, and we didn't figure out the answer to it in this paper. I will say, I would personally sleep better if they weren't on hair-trigger alert. My suspicion is that the big risk is not that one side launches some kind of decapitating first strike, I don't think that's really a very high risk, so I'm not as concerned as someone else might be about how well we need to deter that, how quickly we need to be able to respond. Whereas I am very concerned about the possibility of an accident, because reading these incidents will make you concerned about it, I think. Some of them are really frightening. So that's my intuition, but, as Seth says, I don't think we really know. At least in terms of this model, there's more studying we need to do.

Seth: If I may, to come back to one of your earlier questions about the motivation for doing this research in the first place: for some of these very basic nuclear weapons policy questions, like "should nuclear weapons be on hair-trigger alert, is that safer or more dangerous," we can talk a little bit about what the trade-offs might be, but we don't really have much to say about how that trade-off actually would be resolved. This is where I think it's important for the international security community to try harder to analyze the risks in structured and perhaps even quantitative terms, so that we can try to answer these questions more rigorously than just "this is my intuition, this is your intuition." That's really one of the main values of doing this type of research: being able to answer these important policy questions with more confidence, and perhaps with more consensus across different points of view, than we would otherwise be able to have.

Ariel: Right. I had wanted to continue with some of the risk questions, but while we're on the points that you're making, Seth, what do you see moving forward with this paper? I mean, it was a bummer to read the paper and not get what the probabilities of nuclear war actually are, just a model for how we can get there. How do you see either you, or other organizations, or researchers moving forward to start calculating what the probability could actually be?

Seth: The paper does not give us final answers for what the probability would be, but it definitely makes some important steps in that direction. Additional steps would include things like exploring the historical incident data set more carefully to check whether important incidents have been missed, and to assess, for each of the incidents, how close we really think it came to nuclear war. And this is something that the literature on these incidents actually diverges on. Some people look at these incidents and see them as really close calls; other people look at them and see them as evidence that the system works as it should, that, sure, there were some alarms, but the alarms were handled the way they should be handled and the tools are in place to make sure that those don't end in nuclear war. So exactly how close these various incidents got is one important way forward towards quantifying the probability.

Another one is to come up with some sense of what the actual population of historical incidents is relative to the data set that we have. We are presumably missing some number of historical incidents. Some of them might be smaller and less important, but there might be some big ones that happened that we don't know about, either because they are only covered in literatures in other languages, since we only did research in English, or because all of the evidence about them sits in classified government records held by whichever governments were involved in the incident, and so we need to-

Ariel: Actually, I do want to interrupt with a quick question there, and my apologies for not having read this more closely: I know there were incidents involving the US and Russia, and I think you had some about Israel. Were there incidents mentioning China or any of the European countries that have nuclear weapons?

Seth: Yeah, I think there were probably incidents involving all of the nuclear armed countries, certainly involving China. For example, China had a war with the Soviet Union over their border some years ago and there was at least some talk of nuclear weapons involved in that. Also, the one I mentioned earlier, the Taiwan Straits Crises, those involved China. Then there were multiple incidents between India and Pakistan, especially regarding the situation in Kashmir. With France, I believe we included one incident in which a French nuclear bomber got a faulty signal to take off in combat and then it was eventually recalled before it got too far. There might’ve been something with the UK also. Robert, do you recall if there were any with the UK?

Robert: Yes, there was. During the Falklands War, apparently, they set out with nuclear depth charges. It's honestly not really clear to me why you would use a nuclear depth charge, and there's no evidence they ever intended to use them, but they sent out nuclear-armed ships, essentially, to deal with a crisis in the Falklands.

There’s also, I think, an incident in South Africa as well when South Africa was briefly a nuclear state.

Ariel: Okay. Thanks. It’s not at all disturbing.

Robert: It's very disturbing. I will say, I think that China is the one we know the least about. In some of the incidents that Seth mentioned with China, the nuclear armed power that might have used nuclear weapons was the United States. So there is the Soviet-China incident, but we don't really know a lot about the Chinese program and Chinese incidents. I think some of that is because it's not reported in English, and to some extent it's also that it's classified and the Chinese are not as open about what's going on.

Seth: Yeah, the Chinese are definitely much, much less transparent than the United States, as are the Russians. I mean, the United States might be the most transparent out of all of the nuclear armed countries.

I remember some years ago, when I was spending time at the United Nations, I got the impression that the Russians and the Chinese were actually not quite sure what to make of the Americans' transparency, that they found it hard to believe that the US government was not just putting out loads of propaganda and misinformation. It didn't make sense to them that we actually put out a lot of honest data about government activities here, that that's just the standard, and that you can actually trust this information, this data. So yeah, we may be significantly underestimating the number of incidents involving China, and perhaps Russia and other countries, because their governments are less transparent.

Ariel: Okay. That definitely addresses a question that I had, and my apologies for interrupting you earlier.

Seth: No, that’s fine. But this is one aspect of the research that still remains to be done that would help us figure out what the probabilities might be. It would be a mistake to just calculate them based on the data set as it currently stands, because this is likely to be only a portion of the actual historical incidents that may have ended in nuclear war.

So these are the sorts of details and nuances that were, unfortunately, beyond the scope of the project that we were able to do, but it would be important work for us or other research groups to do to take us closer to having good probability estimates.

Ariel: Okay. I want to ask a few questions that, again, are probably going to be you guys guessing as opposed to having good, hard information, and I also wanted to touch a little bit on some current events. So first, one of the things that I hear a lot is that if a nuclear war is going to happen, it’s much more likely to happen between India and Pakistan than, say, the US and Russia or US and … I don’t know about US and North Korea at this point, but I’m curious what your take on that is, do you feel that India and Pakistan are actually the greatest risk or do you think that’s up in the air?

Robert: I mean, it’s a really tough question. I would say that India and Pakistan is one of the scariest situations for sure. I don’t think they have actually come that close, but it’s not that difficult to imagine a scenario in which they would. I mean, these are nuclear powers that occasionally shoot at each other across the line of control, so I do think that’s very scary.

But I also think, and this is an intuition, not a conclusion we have from the paper, that the danger of something happening between the United States and Russia is probably underestimated. We're not in the Cold War anymore, but relations aren't necessarily good; it's not clear what relations are. People will say things like, "Well, neither side wants a war." Obviously neither side wants a war, but I think there's a danger of the kind of inadvertent escalation, of miscalculation, that hasn't really gone away. So that's something I think is probably not given enough attention. I'm also concerned about the situation in North Korea. I think that that is now an issue which we have to take somewhat seriously.

Seth: I think the last five years or so have been a really good learning opportunity for all of us on these matters. I remember having conversations with people about this maybe five years ago, and they thought the idea of a nuclear war between the United States and Russia was just ridiculous, that that was antiquated Cold War talk, that the world had changed. And they were right in their characterization of the world as it was at that moment, but I was always uncomfortable with that, because the world could change again. And sure enough, in the last five years the world has changed very significantly, in ways that I think most people would agree make the probability of nuclear war between the United States and Russia substantially higher than it was five years ago, especially starting with the Ukraine crisis.

There's also just a lot of basic volatility in the international system that I think is maybe underappreciated; we might like to think of it as more deterministic, more logical, than it actually is. The classic example is that World War I maybe almost didn't happen, that it only happened because a very specific sequence of events led to the assassination of Archduke Ferdinand. Had that gone a little bit differently, he wouldn't have been assassinated, World War I wouldn't have happened, and the world we live in now would be very different from what it is. Or, to take a more recent example, it's entirely possible that had the FBI director in 2016 not made an unusual decision regarding the disclosure of information about one candidate's emails a couple of weeks before the election, the outcome of the 2016 US election might've gone differently and international politics would look quite different than it does right now. Who knows what will happen next year or the year after that.

So I think we can maybe make some generalizations about which conflicts seem more likely or less likely, especially at the moment, but we should be really cautious about what we think it’s going to be overall over 5, 10, 20, 30 year periods just because things really can change substantially in ways that may be hard to see in advance.

Robert: Yeah, for me, one of the lessons of World War I is not so much that it might not have happened, I think it probably would have anyway — although Seth is right, things can be very contingent — but it's more that nobody really wanted World War I. I mean, at the time people thought it wouldn't happen because it was sort of bad for everyone and no one thought, "Well, this is in our interest to pursue it," but wars can happen that way, where countries end up thinking, for one reason or another, that they need to do one thing or another that leads to war, when in fact everyone would have preferred to get together and avoid it. It's a suboptimal equilibrium. So that's one thing.

The other thing is that, as Seth says, things change. I'm not that concerned about what's going on in the week that we're recording this, but this week we had the Russian ambassador saying he would shoot down US missiles aimed at Syria and the United States' president responding on Twitter that they had better get ready for his smart missiles. This, I suspect, won't escalate to a nuclear war; I'm not losing that much sleep over it. But this is the kind of thing you would like to see a lot less of, the kind of thing that's worrying and that maybe you wouldn't have anticipated 10 years ago.

Seth: When you say you're not losing much sleep over this, you're speaking as someone who has, as I understand it, very recently actually, literally, lost sleep over the threat of nuclear war, correct?

Robert: That’s true. I was woken up early in the morning by an alert saying a ballistic missile was coming to my state, and that was very upsetting.

Ariel: Yes. So we should clarify, Robert lives in Hawaii.

Robert: I live in Hawaii. And because I take the risk of nuclear war seriously, I might've been more upset than some people, although I think that a large percentage of the population of Hawaii thought to themselves, "Maybe I'm going to die this morning. In fact, maybe my family's going to die, and my neighbors and the people at the coffee shop, and our cats and the guests who are visiting us," and it really brought home the danger. Not that it isn't already obvious that nuclear war is unthinkable, but it's different when you actually face the idea. I also had relatively recently read Hiroshima, John Hersey's account of, really, most of the aftermath of the bombing of Hiroshima, and it was easy to put myself in that situation and say, "Well, maybe I will be suffering from burns or looking for clean water," and of course, obviously, none of us deserve it. We may be responsible for US policy in some way because the United States is a democracy, but my friends, my family, my cat, none of us want any part of this. We don't want to get involved in a war with North Korea. So this really, I'd say, really hit home.

Ariel: Well, I’m sorry you had to go through that.

Robert: Thank you.

Ariel: I hope you don’t have to deal with it again. I hope none of us have to deal with that.

I do want to touch on what you’ve both been talking about, though, in terms of trying to determine the probability of a nuclear war over the short term where we’re all saying, “Oh, it probably won’t happen in the next week,” but in the next hundred years it could. How do you look at the distinction in time in terms of figuring out the probability of whether something like this could happen?

Seth: That's a good technical question. Arguably, we shouldn't be talking about the probability of nuclear war as one thing. If anything, we should talk about the rate, or the frequency, of it that we might expect. If we're going to talk about the probability of something, that something should be a fairly specific, distinct event. For example, an example we use in the paper: what's the probability of a given team, say the Cleveland Indians, winning the World Series? It's meaningful to ask what the probability is of them winning the World Series in, say, 2018, but as for the probability of them winning the World Series overall, well, if you wait long enough, even the Cleveland Indians will probably eventually win the World Series, as long as they continue to play. When we wrote the paper we actually looked it up, and it said that they have about a 17% chance of winning the 2018 World Series, even though they haven't won a World Series since like 1948. Poor Cleveland... sorry, I'm from Pittsburgh, so I get to gloat a little bit.

But yeah, we should distinguish between saying what is the probability of any nuclear war happening this week or this year, versus how often we might expect nuclear wars to occur or what the total probability of any nuclear war happening over a century or whatever time period it might be.
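
To illustrate that rate-versus-probability distinction with a concrete, deliberately simplified calculation, here is a small sketch assuming events arrive independently at a constant annual rate, an assumption the real world almost certainly violates. The rate used is a placeholder, not an estimate from the paper.

```python
import math

def prob_at_least_one(annual_rate, years):
    """Probability of at least one event over a time horizon, assuming a
    constant, independent arrival rate (a Poisson-style simplification)."""
    return 1.0 - math.exp(-annual_rate * years)

hypothetical_rate = 0.01  # one event per century on average; illustrative only
print(prob_at_least_one(hypothetical_rate, 1))    # ~0.01 for a single year
print(prob_at_least_one(hypothetical_rate, 100))  # ~0.63 over a full century
```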

Robert: Yeah. I think that over the course of the century, I mean, as I say, I’m probably not losing that much sleep on any given week, but over the course of a century if there’s a probability of something really catastrophic, you have to do everything you can to try to mitigate that risk.

I think, honestly, some terrible things are going to happen in the 21st century. I don't know what they are, but that's just how life is. Maybe they will involve a nuclear war of some kind. But you can also differentiate among types of nuclear war. If one nuclear bomb is used in anger in the 21st century, that's terrible, but it wouldn't be all that surprising, and it wouldn't mean the destruction of the human race. But then there are the kinds of nuclear wars that could potentially trigger a nuclear winter by kicking so much soot up into the atmosphere and blocking out the sun, and that might actually threaten not just the people killed in the initial bombing, but the entire human race. That is something we need to look at, in some sense, even more seriously, even though the chance of it is probably a fair amount smaller than the chance of one nuclear weapon being used. Not that one nuclear weapon being used wouldn't be an incredibly catastrophic event as well, but I think with that kind of risk you really need to be very careful to try to minimize it as much as possible.

Ariel: Real quick, I got to do a podcast with Brian Toon and Alan Robock a little while ago on nuclear winter, so we’ll link to that in the transcript for anyone who wants to learn about nuclear winter, and you brought up a point that I was also curious about, and that is: what is the likelihood, do you guys think, of just one nuclear weapon being used and limited retaliation? Do you think that is actually possible or do you think if a nuclear weapon is used, it’s more likely to completely escalate into full-scale nuclear war?

Robert: I personally do think that's possible, because a number of the scenarios that would involve using a nuclear weapon are not between the United States and Russia, or even the United States and China, so I think that some scenarios involve only a few nuclear weapons. If it were an incident with North Korea, you might worry that it would spread to Russia or China, but you can also see a scenario in which North Korea uses one or two nuclear weapons. Even with India and Pakistan, which each have, what, a hundred or so nuclear weapons, I wouldn't necessarily assume they would use them all. So there are scenarios in which just one or a few nuclear weapons would be used. I suspect those are the most likely scenarios, but it's really hard to know. We don't know the answer to that question.

Seth: There are even scenarios between the United States and Russia that involve one or just a small number of nuclear weapons. The Russian military has the concept of the de-escalatory nuclear strike, which is the idea that if a major conflict is emerging and might not be going in a favorable way for Russia, especially since their conventional military is not as strong as ours, they may use a single nuclear weapon, basically, to demonstrate their seriousness on the matter in hopes of persuading us to back down. Now, whether we would actually back down or escalate it into an all-out nuclear war, I don't think that's something we can really know in advance, but it's at least plausible. It's certainly plausible that that's what would happen, and presumably Russia considers this plausible, which is why they talk about it in the first place. And not to just point fingers at Russia, this is essentially the same posture that NATO had at an earlier point in the Cold War, when the Soviet Union had the larger conventional military and our plan was to use nuclear weapons on a limited basis in order to prevent the Soviet Union from conquering Western Europe with their military. So it is possible.

I think this is one of the biggest points of uncertainty for the overall risk: if there is an initial use of nuclear weapons, how likely is it that additional nuclear weapons are used, how many, and in what ways? I feel like, despite having studied this a modest amount, I don't really have a good answer to that question. This is something that may be hard to figure out in general, because it could ultimately depend on things like the personalities involved in that particular conflict, who the political and military leadership are and what they think of all of this. That's something that's pretty hard for us as outside analysts to characterize. But I think both possibilities, either no escalation or lots of escalation, are possible, as is everything in between.

Ariel: All right, so we’ve gone through most of the questions that I had about this paper now, thank you very much for answering those. You guys have also published a working paper this month called A Model for the Impacts of Nuclear War, but I was hoping you could maybe give us a quick summary of what is covered in that paper and why we should read it.

Seth: Risk overall is commonly quantified as the probability of some type of event multiplied by the severity of the impacts. So our first paper was on the probability side, and this one's on the impact side. It scans across the full range of different types of impacts that nuclear war could have, looking at the five major effects of nuclear weapon detonations, which are thermal radiation, blast, ionizing radiation, electromagnetic pulse, and finally human perceptions, the ways that a detonation affects how people think and, in turn, how we act. In this paper we built out a pretty detailed model that looks at all of the different details, or at least a lot of the various details, of what each of those five effects of nuclear weapon detonations would do and what that means in human terms.
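
As a deliberately toy version of that probability-times-severity framing, here is a minimal sketch that sums the product over a few scenarios. Every number and scenario label is an invented placeholder; the working paper itself builds a much more detailed, non-numeric model of the impact pathways.

```python
# Expected harm as a sum over scenarios of probability times severity.
# All values below are hypothetical placeholders for illustration only.

scenarios = [
    # (label, hypothetical annual probability, hypothetical severity score)
    ("single detonation, no escalation",   0.005,  1.0),
    ("regional exchange",                  0.001,  50.0),
    ("full-scale war with nuclear winter", 0.0002, 1000.0),
]

expected_harm = sum(p * severity for _, p, severity in scenarios)
print(expected_harm)  # 0.255 in these made-up severity units
```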

Ariel: Were there any major or interesting findings from that that you want to share?

Seth: Well, the first thing that really struck me was, "Wow, there are a lot of ways of being killed by nuclear weapons." Most of the time, when we think about nuclear detonations and how you can get killed by them, you think, all right, there's the initial explosion, and whether it's the blast itself, or the buildings falling on you, or the fire, or maybe a really high dose of radiation that you can get if you're close enough to the detonation, that's probably how you die. In our world of talking about global catastrophic risks, we also think about the risk of nuclear winter and, in particular, the effect that can have on global agriculture. But there are a lot of other things that can happen too, especially related to the effect on physical infrastructure, or I should say civil infrastructure: roads, telecommunications, the overall economy. When cities are destroyed in a war, that takes out potentially major nodes in the global economy, which can have any number of secondary effects, among other things.

It’s just a really wide array of effects, and that’s one thing I’m happy about with this paper: for perhaps the first time, it really tries to lay out all of these effects in one place, in a model form that can be used for a much more complete accounting of the total impact of nuclear war.

Ariel: Wow. Okay. Robert, was there anything you wanted to add there?

Robert: Well, I agree with Seth, it’s astounding, the range, the sheer panoply of bad things that could happen. But I think that once you get into a situation where cities are being destroyed by nuclear weapons, or really anything is being destroyed by nuclear weapons, it can get unpredictable really fast. You don’t know the effect on the global system. A lot of times, I think, when you talk about catastrophic risk, you’re not simply talking about the impact of the initial event, but the long-term consequences it could have — starting more wars, ongoing famines, a shock to the economic system that causes political problems — so these are things that we need to look at more. I mean, it would be the same with any kind of thing we would call a catastrophic risk. If there were a pandemic disease, the main concern might not be that the pandemic disease would wipe out everyone, but that the aftermath would cause so many problems that it would be difficult to recover from. I think that would be the same issue if a lot of nuclear weapons were used.

Seth: Just to follow up on that, there are some important points here. One is that the secondary effects are more opaque. They’re less clear. It’s hard to know in advance what would happen. But then the second is the question of how much we should study them. A lot of people look at the secondary effects and say, “Oh, it’s too hard to study. It’s too unclear. Let’s focus our attention on these other things that are easier to study.” And maybe there’s something to be said for that: if there’s really just no way of knowing what might happen, then we should at least focus on the part that we are able to understand. I’m not convinced that that’s true, maybe it is, but I think it’s worth more effort than there has been to try to understand the secondary effects and see what we can say about them. I think there are a number of things that we can say. The various systems are not completely unknown; they’re the systems that we live in now, and we can say at least a few intelligent things about what might happen to them after a nuclear war or after other types of events.

Ariel: Okay. My final question for both of you then is, as we’re talking about all these horrible things that could destroy humanity or at the very least, just kill and horribly maim way too many people, was there anything in your research that gave you hope?

Seth: That’s a good question. I feel like one thing that gave me some hope is that, when I was working on the probability paper, it seemed that at least some of the events and historical incidents that I had been worried about might not have actually come as close to nuclear war as I previously thought they had. Also, a lot of the incidents were earlier within, say, the ’40s, ’50s, ’60s, and less within the recent decades. That gave me some hope that maybe things are moving in the right direction.

But the other is that as you lay out all the different elements of both the probability and the impacts and see in full how it all works, that really often points to opportunities that may be out there to reduce the risk, and hopefully some of those opportunities can be taken.

Robert: Yeah, I’d agree with that. I’d say there were certainly things in the list of historical incidents that I found really frightening, but I also thought that in a large number of incidents the system, more or less, worked the way it should have: they caught the error, whatever kind it was, and fixed it quickly. It’s still alarming, I would still like there not to be incidents, and you can imagine that some of those might not have been fixed, but they were not all as bad as I had imagined at first. So that’s one thing.

I think the other thing is, and I think Seth you were sort of indicating this, there’s something we can do, we can think about how to reduce the risk, and we’re not the only ones doing this kind of work. I think that people are starting to take efforts to reduce the risk of really major catastrophes more seriously now, and that kind of work does give me hope.

Ariel: Excellent. I’m going to end on something that … It was just an interesting comment that I heard recently, and that was: Of all the existential risks that humanity faces, nuclear weapons actually seem the most hopeful because there’s something that we can so clearly do something about. If we just had no nuclear weapons, nuclear weapons wouldn’t be a risk, and I thought that was an interesting way to look at it.

Seth: I can actually comment on that idea. I would add that you would need not just to not have any nuclear weapons, but also not have the capability to make new nuclear weapons. There is some concern that if there aren’t any nuclear weapons, then in a crisis there may be a rush to build some in order to give that side the advantage. So in order to really eliminate the probability of nuclear war, you would need to eliminate both the weapons themselves and the capacity to create them, and you would probably also want to have some monitoring measures so that the various countries had confidence that the other sides weren’t cheating. I apologize for being a bit of a killjoy on that one.

Robert: I’m afraid you can’t totally eliminate the risk of any catastrophe, but there are ways we can mitigate the risk of nuclear war and other major risks too. There’s work that can be done to reduce the risk.

Ariel: Okay, let’s end on that note. Thank you both very much!

Seth: Yeah. Thanks for having us.

Robert: Thanks, Ariel.

Ariel: If you’d like to read the papers discussed in this podcast or if you want to learn more about the threat of nuclear weapons and what you can do about it, please visit futureoflife.org and find this podcast on the homepage, where we’ll be sharing links in the introduction.

[end of recorded material]

Podcast: Navigating AI Safety – From Malicious Use to Accidents

Is the malicious use of artificial intelligence inevitable? If the history of technological progress has taught us anything, it’s that every “beneficial” technological breakthrough can be used to cause harm. How can we keep bad actors from using otherwise beneficial AI technology to hurt others? How can we ensure that AI technology is designed thoughtfully to prevent accidental harm or misuse?

On this month’s podcast, Ariel spoke with FLI co-founder Victoria Krakovna and Shahar Avin from the Center for the Study of Existential Risk (CSER). They talk about CSER’s recent report on forecasting, preventing, and mitigating the malicious uses of AI, along with the many efforts to ensure safe and beneficial AI.

Topics discussed in this episode include:

  • the Facebook Cambridge Analytica scandal,
  • Goodhart’s Law with AI systems,
  • spear phishing with machine learning algorithms,
  • why it’s so easy to fool ML systems,
  • and why developing AI is still worth it in the end.

In this interview we discuss The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation, the original FLI grants, and the RFP examples for the 2018 round of FLI grants. This podcast was edited by Tucker Davey. You can listen to it above or read the transcript below.


Ariel: The challenge is daunting and the stakes are high. So ends the executive summary of the recent report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation. I’m Ariel Conn with the Future of Life Institute, and I’m excited to have Shahar Avin and Victoria Krakovna joining me today to talk about this report along with the current state of AI safety research and where we’ve come in the last three years.

But first, if you’ve been enjoying our podcast, please make sure you’ve subscribed to this channel on SoundCloud, iTunes, or whatever your favorite podcast platform happens to be. In addition to the monthly podcast I’ve been recording, Lucas Perry will also be creating a new podcast series that will focus on AI safety and AI alignment, where he will be interviewing technical and non-technical experts from a wide variety of domains. His upcoming interview is with Dylan Hadfield-Menell, a technical AI researcher who works on cooperative inverse reinforcement learning and inferring human preferences. The best way to keep up with new content is by subscribing. And now, back to our interview with Shahar and Victoria.

Shahar is a Research Associate at the Center for the Study of Existential Risk, which I’ll be referring to as CSER for the rest of this podcast, and he is also the lead co-author on the Malicious Use of Artificial Intelligence report. Victoria is a co-founder of the Future of Life Institute and she’s a research scientist at DeepMind working on technical AI safety.

Victoria and Shahar, thank you so much for joining me today.

Shahar: Thank you for having us.

Victoria: Excited to be here.

Ariel: So I want to go back three years, to when FLI started our grant program, which helped fund this report on the malicious use of artificial intelligence, and I was hoping you could both talk for maybe just a minute or two about what the state of AI safety research was three years ago, and what prompted FLI to take on a lot of these grant research issues — essentially what prompted a lot of the research that we’re seeing today? Victoria, maybe it makes sense to start with you quickly on that.

Victoria: Well three years ago, AI safety was less mainstream in the AI research community than it is today, particularly long-term AI safety. So part of what FLI has been working on and why FLI started this grant program was to stimulate more work into AI safety and especially its longer-term aspects that have to do with powerful general intelligence, and to make it a more mainstream topic in the AI research field.

Three years ago, there were fewer people working in it, and many of the people who were working in it were a little bit disconnected from the rest of the AI research community. So part of what we were aiming for with our Puerto Rico conference and our grant program, was to connect these communities better, and to make sure that this kind of research actually happens and that the conversation shifts from just talking about AI risks in the abstract to actually doing technical work, and making sure that the technical problems get solved and that we start working on these problems well in advance before it is clear that, let’s say general AI, would appear soon.

I think part of the idea with the grant program originally, was also to bring in new researchers into AI safety and long-term AI safety. So to get people in the AI community interested in working on these problems, and for those people whose research was already related to the area, to focus more on the safety aspects of their research.

Ariel: I’m going to want to come back to that idea and how far we’ve come in the last three years, but before we do that, Shahar, I want to ask you a bit about the report itself.

So this started as a workshop that Victoria had also actually participated in last year, and then you’ve turned it into this report. I want you to talk about what prompted that, and also this idea that’s mentioned in the report, that no one’s really looking at how artificial intelligence could be used maliciously. And yet, with every technology and advance that’s happened throughout history, I can’t think of anything that people haven’t at least attempted to use to cause harm; whether they’ve succeeded or not, almost everything gets used for harm in some way. So I’m curious why there haven’t been more people considering this issue yet?

Shahar: So going back to maybe a few months before the workshop, which as you said was in February 2017: both Miles Brundage at the Future of Humanity Institute and I at the Center for the Study of Existential Risk had this inkling that there were more and more corners of malicious use of AI being researched, and people were getting quite concerned. We were in discussions with the Electronic Frontier Foundation about the DARPA Cyber Grand Challenge and progress being made towards the use of artificial intelligence in offensive cybersecurity. I think Miles was very well connected to the circle who were looking at lethal autonomous weapon systems and the increasing use of autonomy in drones. And we were both seeing stories like the Facebook story that has been in the news recently — early versions of that were already coming up back then.

So it’s not that people were not looking at malicious uses of AI, but it seemed to us that there wasn’t an overarching perspective that isn’t tied to particular domains. That is, not “what will malicious use of AI do to cybersecurity? What will malicious use of AI look like in politics? What does malicious use of AI look like in warfare?” but rather, across the board, if you look at this technology, what new kinds of malicious actions does it enable, and what are the commonalities across those different domains? It seemed that that “across the board,” more technology-focused perspective, rather than the “domain of application” perspective, was something that was missing. And maybe that’s less surprising, right? People get very tied down to a particular scenario, a particular domain that they have expertise on, and from the technologists’ side, many of them just wouldn’t know all of the legal minutiae of warfare. One thing that we found was that there weren’t enough channels of communication between the cybersecurity community and the AI research community, and similarly between the political scientists and the AI research community. So it did require quite an interdisciplinary workshop to get all of these things on the table and tease out some of the commonalities, which is what we then tried to do with the report.

Ariel: So actually, you mentioned the Facebook thing and I was a little bit curious about that. Does that fall under the umbrella of this report or is that a separate issue?

Shahar: It’s not clear whether it would fall directly under the report, because the way we define malicious could be seen as problematic. It’s the best that we could do for this kind of report, which is to say that there is a deliberate attempt to cause harm using the technology. It’s not clear whether, in the Facebook case, there was a deliberate attempt to cause harm, or whether there was disregard for harm that could be caused as a side effect, or whether it was just the use of this in an arena where there are legitimate moves and some people realized that the technology could be used to gain an upper hand within that arena.

But there are whole scenarios that sit just next to it, that look very similar, but that involve centralized use of this kind of surveillance, diminishing privacy, and potentially the use of AI to manipulate individuals, manipulate their behavior, and target messaging at particular individuals.

There are clearly imaginable scenarios in which this is done maliciously to keep a corrupt government in power, to overturn a government in another nation, kind of overriding the self-determination of the members of their country. There are not going to be clear rules about what is obviously malicious and what is just part of the game. I don’t know where to put Facebook’s and Cambridge Analytica’s case, but there are clearly cases that I think universally would be considered as malicious that from the technology side look very similar.

Ariel: So this gets into a quick definition that I would like you to give us and that is for the term ‘dual use.’ I was at a conference somewhat recently and a government official who was there, not a high level, but someone who should have been familiar with the term ‘dual use’ was not. So I would like to make sure that we all know what that means.

Shahar: So I’m not, of course, a legal expert, but the term did come up a lot in the workshop and in the report. ‘Dual use,’ as far as I can understand it, refers to technologies or materials that both have peace-time or peaceful purposes and uses, but also wartime, or harmful uses. A classical example would be certain kinds of fertilizer that could be used to grow more crops, but could also be used to make homegrown explosives. And this matters because you might want to regulate explosives, but you definitely don’t want to limit people’s access to get fertilizer and so you’re in a bind. How do you make sure that people who have a legitimate peaceful use of a particular technology or material get to have that access without too much hassle that will increase the cost or make things more burdensome, but at the same time, make sure that malicious actors don’t get access to capabilities or technologies or materials that they can use to do harm.

I’ve also heard the term ‘omni use’ applied to artificial intelligence. This is the idea that a technology can have so many uses across the board that regulating it because of its potential for causing harm comes at a very, very high price, because it is so foundational for so many other things. So one can think of electricity: it is true that you can use electricity to harm people, but vetting every user of the electric grid before they are allowed to consume electricity seems very extreme, because there is so much benefit to be gained from just having access to electricity as a utility that you need to find other ways to regulate. Computing is often considered ‘omni use,’ and it may well be that artificial intelligence is such a technology, one that will be foundational for so many applications that it will be ‘omni use,’ and so the way to stop malicious actors from having access to it is going to be fairly complicated, but it’s probably not going to be any kind of heavy-handed regulation.

Ariel: Okay. Thank you. So going back a little bit to the report more specifically, I don’t know how detailed we want to get with everything, but I was hoping you could touch a little bit on a few of the big topics that are in the report. For example, you talk about changes in the landscape of threats, where there is an expansion of existing threats, there’s an intro to new threats, and typical threats will be modified. Can you speak somewhat briefly as to what each of those mean?

Shahar: So I guess what I was saying, the biggest change is that machine learning, at least in some domains, now works. That means that you don’t need to have someone write out the code in order to have a computer that is performant at the particular task, if you can have the right kind of labeled data or the right kind of simulator in which you can train an algorithm to perform that action. That means that, for example, if there is a human expert with a lot of tacit knowledge in a particular domain, let’s say the use of a sniper rifle, it may be possible to train a camera that sits on top of a rifle, coupled with a machine learning algorithm that does the targeting for you, so that now any soldier becomes as expert as an expert marksman. And of course, the moment you’ve trained this model once, making copies of it is essentially free or very close to free, the same as it is with software.

Another is the ability to go through very large spaces of options, using heuristics to search through that space more effectively for good solutions. One example of that would be AlphaGo, which is a great technological achievement and has absolutely no malicious use aspects, but you can imagine, as an analogy, similar kinds of technologies being used to find weaknesses in software, discovering vulnerabilities and so on. And finally, one example we’ve seen come up a lot is the capabilities in machine vision. The fact that you can now look at an image and tell what is in that image, through training (something that computers were just not able to do a decade ago, at least nowhere near human levels of performance), starts unlocking potential threats both in autonomous targeting, say on top of drones, and in manipulation. If I can know whether a picture is a good representation of something or not, then my ability to create forgeries significantly increases. This is the technology of generative adversarial networks, which we’ve seen used to create fake audio and which may well create fake videos in the near future.

All of these new capabilities, plus the fact that access to the technology is becoming so easy — I mean, these technologies are very democratized at the moment. There are papers on arXiv, there are good tutorials on YouTube. People are very keen to have more people join the AI revolution, and for good reason. Plus, moving these trained models around is very cheap (it’s just the cost of copying software), and the computing hardware required to run those models is widely available. This suggests that the availability of these malicious capabilities is going to rapidly increase, and that the ability to perform certain kinds of attacks will no longer be limited to a few humans, but will become much more widespread.

Ariel: And so I have one more question for you, Shahar, and then I’m going to bring Victoria back in. You’re talking about the new threats, and this expansion of threats and one of the things that I saw in the report that I’ve also seen in other issues related to AI is, we’ve had computers around for a couple decades now, we’re used to issues pertaining to phishing or hacking or spam. We recognize computer vulnerabilities. We know these are an issue. We know that there’s lots of companies that are trying to help us defend our computers against malicious cyber attacks, stuff like that. But one of the things that you get into in the report is this idea of “human vulnerabilities” — that these attacks are no longer just against the computers, but they are also going to be against us.

Shahar: I think for many people, this has been one of the really worrying things about the Cambridge Analytica, Facebook issue that is in the news. It’s the idea that because of our particular psychological tendencies, because of who we are, because of how we consume information, and how that information shapes what we like and what we don’t like, what we are likely to do and what we are unlikely to do, the ability of the people who control the information that we get, gives them some capability to control us. And this is not new, right?

People who are making newspapers or running radio stations or national TV stations have known for a very long time that the ability to shape the message is the ability to influence people’s decisions. But coupling that with algorithms that are able to run experiments on millions or billions of people simultaneously, with very tight feedback loops — you make a small change in the feed of one individual and see whether their behavior changes, you run many of these experiments, and you get very good data. That is something that was never available in the age of broadcast. To some extent, it was available in the age of software. When software starts moving into big data and big data analytics, the boundaries start to blur between those kinds of technologies and AI technologies.

This is the kind of manipulation that you seem to be asking about that we definitely flag in the report, both in terms of political security, the ability of large communities to govern themselves in a way that they find to truthfully represent their own preferences, but also, on a more small scale, with the social side of cyber attacks. So, if I can manipulate an individual, or a few individuals in a company to disclose their passwords or to download or click a link that they shouldn’t have, through modeling of their preferences and their desires, then that is a way in that might be a lot easier than trying to break the system through its computers.

Ariel: Okay, so one other thing that I think I saw come up, and I started to allude to this — there’s, like I said, the idea that we can defend our computers against attacks and we can upgrade our software to fix vulnerabilities, but then how do we sort of “upgrade” people to defend themselves? Is that possible? Or is it a case of we just keep trying to develop new software to help protect people?

Shahar: I think the answer is both. One thing that did come up a lot is that, unfortunately, unlike computers, you cannot just download a patch to everyone’s psychology. We have slow processes for doing that. So we can incorporate ideas of what is a trusted computer, what is a trusted source, into the education system and get people to be more aware of the risks. You can definitely design the technology to make much more explicit where its vulnerabilities and its more trusted parts are, which is something that we don’t do very well at the moment. The little lock on the browser is kind of the high end of our ability to design systems to disclose where security is and why it matters, and there is much more to be done here, because just awareness of the amount of vulnerability is very low.

So there is probably some more that we can do with education and with notifying the public, but it should also be expected that this ability is limited, and it’s also, to a large extent, an unfair burden to put on the population at large. It is much more important, I think, that the technology is designed in the first place to be as explicit and transparent as possible about its levels of security, and if those levels of security are not high enough, then that in turn should lead to demands for more secure systems.

Ariel: So one of the things that came up in the report that I found rather disconcerting, was this idea of spear phishing. So can you explain what that is?

Shahar: We are familiar with phishing in general, which is when you pretend to be someone or something that you’re not in order to gain your victim’s trust and get them to disclose information that they should not be disclosing to you as a malicious actor. So you could pretend to be the bank and ask them to put in their username and password, and now you have access to their bank account and can transfer away their funds. If this is part of a much larger campaign, you could just pretend to be their friend, or their secretary, or someone who wants to give them a prize, get them to trust you, get one of the passwords that maybe they are using, and maybe all you do with that is use that trust to reach someone else whose access matters much more. So now that I have the username and password, say for the email or the Facebook account of some low-ranking employee in a company, I can start messaging their boss, pretending to be them, and maybe get even more passwords and more access through that.

Phishing is usually kind of a “spray and pray” approach. You have a message like, “I’m a Nigerian prince, I have all of this money stuck in Africa, I’ll give you a cut if you help me move it out of the country, you just need to send me some money.” You send this to millions of people, and maybe one or two fall for it. The cost for the sender is not very high, but the success rate is also very, very low.

Spear phishing on the other hand, is when you find a particular target, and you spend quite a lot of time profiling them and understanding what their interests are, what their social circles are, and then you craft a message that is very likely to work on them, because it plays to their ego, it plays to their normal routine, it plays on their interests and so on.

In the report we talk about this research by ZeroFOX, where they took a very simple version of this. They said: let’s look at what people tweet about and take that as an indication of the stuff that they’re interested in. We will train a machine learning algorithm to build a model of the topics people are interested in from their tweets, craft a malicious tweet based on those topics of interest, and have that be a link to a malicious site. So instead of a generic “Check this out, super cool website” with a link to a malicious website, which most people know not to click on, it will be, “Oh, you are clearly interested in sports in this particular country, have you seen what happened with the new hire on this team?” Or, “You’re interested in archeology, crazy new report about recent finds in the pyramids,” or something. And what they showed was that once they’d created the bot, that bot then crafted those targeted spear phishing messages for a large number of users, and in principle they could scale it up indefinitely because now it’s software, and the click-through rate was very high. I think it was something like 30 percent, which is orders of magnitude more than you get with phishing.

So automating spear phishing changes what used to be a trade-off: spray and pray, where you target millions of people but very few of them click, versus spear phishing, where you target only a few individuals with very high success rates. Now you can target millions of people and customize the message to each one, so you have high success rates for all of them. Which means that you and me, who previously wouldn’t be very high on the target list for cyber criminals or other cyber attackers, can now become targets simply because the cost is very low.

Ariel: So the cost is low, I don’t think I’m the only person who likes to think that I’m pretty good at recognizing sort of these phishing scams and stuff like that. I’m assuming these are going to also become harder for us to identify?

Shahar: Yep. So the idea is that the moment you have access to people’s data, because they’re explicit on social media about their interests and about their circles of friends, the better you get at crafting messages, say by comparing them to authentic messages from people and saying, “oh, this is not quite right, we are going to tweak the algorithm until we get something that looks a lot like something a human would write.” Quite quickly you could get to the point where computers are generating, to begin with, texts that are indistinguishable from what a human would write, but increasingly also images, audio segments, maybe entire websites. As long as the motivation or the potential for profit is there, it seems like the technology, either what we have now or what we can foresee in the next five years, would allow these kinds of advances to take place.

Ariel: Okay. So I want to touch quickly on the idea of adversarial examples. There was an XKCD cartoon that came out a week or two ago about self driving cars, and the character says, “I worry about self driving car safety features. What’s to stop someone from painting fake lines on the road or dropping a cutout of a pedestrian onto a highway to make cars swerve and crash?” and then realizes all of those things would also work on human drivers. Sort of a personal story: I used to live at the top of a street called Climax, and I have never seen a street sign stolen more in my life; often the street sign just wasn’t there. So my guess is it’s not that hard to steal a stop sign if someone really wanted to mess around with drivers, and yet we don’t see that happen very often.

So I was hoping both of you could weigh in a little bit on what you think artificial intelligence is going to change about these types of scenarios where it seems like the risk will be higher for things like adversarial examples versus just stealing a stop sign.

Victoria: I agree that there is certainly a reason for optimism in the fact that most people just aren’t going to mess with the technology, that there aren’t that many actual bad actors out there who want to mess it up. On the other hand, as Shahar said earlier, democratizing both the technology and the ways to mess with it, to interfere with it, does make that more likely. For example, the ways in which you could provide adversarial examples to cars, can be quite a bit more subtle than stealing a stop sign or dropping a fake body on the road or anything like that. For example, you can put patches on a stop sign that look like noise or just look like rectangles in certain places and humans might not even think to remove them, because to humans they’re not a problem. But an autonomous car might interpret that as a speed limit sign instead of a stop sign, and similarly, more generally people can use adversarial patches to fool various vision systems, for example if they don’t want to be identified by a surveillance camera or something like that.

So a lot of these methods people can just read about online; there are papers on arXiv. And I think the fact that they are so widely available might make it easier for people to interfere with technology, and basically might make this happen more often. It’s also the case that the vulnerabilities of AI are different from the vulnerabilities of humans, so it might fail in ways that humans are not used to, and in ways in which humans would not fail. So all of these things need to be considered, and of course, as technologists, we need to think about ways in which things can go wrong, whether it is presently highly likely or not.
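To make the adversarial example idea a bit more concrete, here is a minimal sketch of the classic fast gradient sign method in PyTorch, which nudges each input pixel slightly in the direction that most increases the classifier’s loss. The tiny untrained model and random input are placeholders; neither the report nor the speakers prescribe this particular code.

```python
# Minimal fast-gradient-sign (FGSM) sketch. The model and input are toy
# placeholders; with a real trained classifier, the perturbation below is
# typically imperceptible to a human but can flip the predicted class.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in "image classifier": 3x32x32 images -> 10 classes (untrained).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.rand(1, 3, 32, 32)   # stand-in for a real image, pixels in [0, 1]
y = torch.tensor([3])          # stand-in for its true label
epsilon = 0.03                 # perturbation budget per pixel

x.requires_grad_(True)
loss = F.cross_entropy(model(x), y)
loss.backward()

# Move every pixel a tiny step in the direction that increases the loss.
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("prediction on original:   ", model(x).argmax(dim=1).item())
print("prediction on adversarial:", model(x_adv).argmax(dim=1).item())
```

Against a real trained vision model, a few lines like these typically produce a perturbation that a human would not notice but that changes the prediction, which is part of why patches and stickers of the kind Victoria describes can work at all.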

Ariel: So that leads to another question that I want to ask, but before I go there, Shahar, was there anything you wanted to add?

Shahar: I think that covers almost all of the basics, but I’d maybe stress a couple of these points. One thing about machines failing in ways that are different from how humans fail, it means that you can craft an attack that would only mess up a self driving car, but wouldn’t mess up a human driver. And that means let’s say, you can go in the middle of the night and put some stickers on and you are long gone from the scene by the time something bad happens. So this diminished ability to attribute the attack, might be something that means that more people feel like they can get away with it.

Another one is that we see people much more willing to perform malicious or borderline acts online. So it’s important: I mean, we often talk about adversarial examples as things that affect vision systems, because that’s where a lot of the literature is, but it is very likely, and in fact there are already examples, that things like anomaly detection based on machine-learned patterns, malicious code detection based on machine-learned patterns, anomaly detection in networks and so on all have their own kinds of adversarial examples as well. And so thinking about adversarial examples against defensive systems, and adversarial examples against systems that are available online, brings us back to the fact that one attacker somewhere in the world could have access to your system, so the fact that most people are not attackers doesn’t really help you defense-wise.

Ariel: And, so this whole report is about how AI can be misused, but obviously the AI safety community and AI safety research goes far beyond that. So especially in the short term, do you see misuse or just general safety and design issues to be a bigger deal?

Victoria: I think it is quite difficult to say which of them would be a bigger deal. I think both misuse and accidents are something that are going to increase in importance and become more challenging and these are things that we really need to be working on as a research community.

Shahar: Yeah, I agree. We wrote this report not because we don’t think accident risk and safety risk matter — we think they are very important. We just thought that there were already some pretty good technical reports out there outlining the accident risks with near-term and long-term machine learning and some of the research that could be used to address them, and we felt that a similar thing was missing for misuse, which is why we wrote this report.

Both are going to be very important, and to some extent there is going to be an interplay. It is possible that systems that are more interpretable are also easier to secure. It might be the case that if there is some restriction in the diffusion of capabilities that also means that there is less incentive to cut corners to out-compete someone else by skimping on safety and so on. So there are strategic questions across both misuse and accidents, but I agree with Victoria, probably if we don’t do our job, we are just going to see more and more of both of these categories causing harm in the world, and more reason to work on both of them. I think both fields need to grow.

Victoria: I just wanted to add, a common cause of both accident risks and misuse risks that might happen in the future is just that these technologies are advancing quickly and there are often unforeseen and surprising ways in which they can fail, either by accident or by having vulnerabilities that can be misused by bad actors. And so as the technology continues to advance quickly we really need to be on the lookout for new ways that it can fail, new accidents but also new ways in which it can be used for harm by bad actors.

Ariel: So one of the things that I got out of this report, and that I think is also coming through now is, it’s kind of depressing. And I found myself often wondering … So at FLI, especially now we’ve got the new grants that are focused more on AGI, we’re worried about some of these bigger, longer-term issues, but with these shorter-term things, I sometimes find myself wondering if we’re even going to make it to AGI, or if something is going to happen that prevents that development in some way. So I was hoping you could speak to that a little bit.

Shahar: Maybe I’ll start with the Malicious Use report, and apologize for its somewhat gloomy perspective. It should probably be mentioned that I think almost all of the authors of the report are somewhere between fairly and very optimistic about artificial intelligence. So it’s much more that we see where this technology is going and we want to see it developed quickly, at least in various narrow domains that are of very high importance, like medicine, like self driving cars — I’m personally quite a big fan. We think that if we can foresee the misuse risks and design around or against them, then we will eventually end up with a technology that is more mature, that is more acceptable, that is more trusted because it is trustworthy, because it is secure. We think it is going to be much better to plan for these things in advance.

It is also, again, say we use electricity as an analogy, if I just sat down at the beginning of the age of electricity and I wrote a report about how many people were going to be electrocuted, it would look like a very sad thing. And it’s true, there has been a rapid increase in the number of people who die from electrocution compared to before the invention of electricity and much safety has been built since then to make sure that that risk is minimized, but of course, the benefits have far, far, far outweighed the risks when it comes to electricity and we expect, probably, hopefully, if we take the right actions, like we lay out in the report, then the same is going to be true for misuse risk for AI. At least half of the report, all of Appendix B and a good chunk of the parts before it, talk about what we can do to mitigate those risks, so hopefully the message is not entirely doom and gloom.

Victoria: I think that the things we need to do remain the same no matter how far away we expect these different developments to happen. We need to be looking out for ways that things can fail. We need to be thinking in advance about ways that things can fail, and not wait until problems show up and we actually see that they’re happening. Of course, we often will see problems show up, but in these matters an ounce of prevention can be worth a pound of cure, and there are some mistakes that might just be too costly. For example, if you have some advanced AI that is running the electrical grid or the financial system, we really don’t want that thing to hack its reward function.

So there are various predictions about how soon different transformative developments of AI might happen and it is possible that things might go awry with AI before we get to general intelligence and what we need to do is basically work hard to try to prevent these kinds of accidents or misuse from happening and try to make sure that AI is ultimately beneficial, because the whole point of building it is because it would be able to solve big problems that we cannot solve by ourselves. So let’s make sure that we get there and that we sort of handle this with responsibility and foresight the whole way.

Ariel: I want to go back to the very first comments that you made about where we were three years ago. How have things changed in the last three years and where do you see the AI safety community today?

Victoria: In the last three years, we’ve seen the AI safety research community get a fair bit bigger and topics of AI safety have become more mainstream, so I will say that long-term AI safety is definitely less controversial and there are more people engaging with the questions and actually working on them. While near-term safety, like questions of fairness and privacy and technological unemployment and so on, I would say that’s definitely mainstream at this point and a lot of people are thinking about that and working on that.

In terms of long term AI safety or AGI safety we’ve seen teams spring up, for example, both DeepMind and OpenAI have a safety team that’s focusing on these sort of technical problems, which includes myself on the DeepMind side. There have been some really interesting bits of progress in technical AI safety. For example, there has been some progress in reward learning and generally value learning. For example, the cooperative inverse reinforcement learning work from Berkeley. There has been some great work from MIRI on logical induction and quantilizing agents and that sort of thing. There have been some papers at mainstream machine learning conferences that focus on technical AI safety, for example, there was an interruptibility paper at NIPS last year and generally I’ve been seeing more presence of these topics in the big conferences, which is really encouraging.

On a more meta level, it has been really exciting to see the Concrete Problems in AI Safety research agenda come out two years ago. I think that’s really been helpful to the field. So these are only some of the exciting advances that have happened.

Ariel: Great. And so, Victoria, I do want to turn now to some of the stuff about FLI’s newest grants. We have an RFP that included quite a few examples and I was hoping you could explain at least two or three of them, but before we get to that if you could quickly define what artificial general intelligence (AGI) is, what we mean when we refer to long-term AI? I think those are the two big ones that have come up so far.

Victoria: So, artificial general intelligence is this idea of an AI system that can learn to solve many different tasks. Some people define this in terms of human-level intelligence as an AI system that will be able to learn to do all human jobs, for example. And this contrasts to the kind of AI systems that we have today which we could call “narrow AI,” in the sense that they specialize in some task or class of tasks that they can do.

So, for example, AlphaZero is a system that is really good at various games like Go and Chess and so on, but it would not be able to, for example, clean up a room, because that’s not in its class of tasks. Whereas if you look at human intelligence, we would say that humans are our go-to example of general intelligence, because we can learn to do new things, we can adapt to new tasks and new environments that we haven’t seen before, and we can transfer the knowledge that we have acquired through previous experience, which might not be from exactly the same settings, to whatever we are trying to do at the moment.

So, AGI is the idea of building an AI system that is also able to do that — not necessarily in the same way as humans, like it doesn’t necessarily have to be human-like to be able to perform the same tasks, or it doesn’t have to be structured the way a human mind is structured. So the definition of AGI is about what it’s capable of rather than how it can do those things. I guess the emphasis there is on the word general.

In terms of the FLI grant program this year, it is specifically focused on the AGI safety issue, which we also call long-term AI safety. Long term here doesn’t necessarily mean that it’s 100 years away. We don’t know how far away AGI actually is; the opinions of experts vary quite widely on that. But it’s more emphasizing that it’s not an immediate problem in the sense that we don’t have AGI yet, but we are trying to foresee what kind of problems might happen with AGI and make sure that if and when AGI is built that it is as safe and aligned with human preferences as possible.

And in particular as a result of the mainstreaming of AI safety that has happened in the past two years, partly, as I like to think, due to FLI’s efforts, at this point it makes sense to focus on long-term safety more specifically since this is still the most neglected area in the AI safety field. I’ve been very happy to see lots and lots of work happening these days on adversarial examples, fairness, privacy, unemployment, security and so on.  I think this allows us to really zoom in and focus on AGI safety specifically to make sure that there’s enough good technical work going on in this field and that the big technical problems get as much progress as possible and that the research community continues to grow and do well.

In terms of the kind of problems that I would want to see solved, I think some of the most difficult problems in AI safety that sort of feed into a lot of the problem areas that we have are things like Goodhart’s Law. Goodhart’s Law is basically that, when a metric becomes a target, it ceases to be a good metric. And the way this applies to AI is that if we make some kind of specification of what objective we want the AI system to optimize for — for example this could be a reward function, or a utility function, or something like that — then, this specification becomes sort of a proxy or a metric for our real preferences, which are really hard to pin down in full detail. Then if the AI system explicitly tries to optimize for the metric or for that proxy, for whatever we specify, for the reward function that we gave, then it will often find some ways to follow the letter but not the spirit of that specification.

Ariel: Can you give a real life example of Goodhart’s Law today that people can use as an analogy?

Victoria: Certainly. So Goodhart’s Law was not originally coined in AI. This is something that generally exists in economics and in human organizations. For example, if employees at a company have their own incentives in some way, like they are incentivized to clock in as many hours as possible, then they might find a way to do that without actually doing a lot of work. If you’re not measuring that then the number of hours spent at work might be correlated with how much output you produce, but if you just start rewarding people for the number of hours then maybe they’ll just play video games all day, but they’ll be in the office. That could be a human example.

There are also a lot of AI examples these days of reward functions that turn out not to give good incentives to AI systems.

Ariel: For a human example, would the issues that we’re seeing with standardized testing be an example of this?

Victoria: Oh, certainly, yes. I think standardized testing is a great example where when students are optimizing for doing well on the tests, then the test is a metric and maybe the real thing you want is learning, but if they are just optimizing for doing well on the test, then actually learning can suffer because they find some way to just memorize or study for particular problems that will show up on the test, which is not necessarily a good way to learn.

And if we get back to AI examples, there was a nice example from OpenAI last year where they had this reinforcement learning agent that was playing a boat racing game and the objective of the boat racing game was to go along the racetrack as fast as possible and finish the race before the other boats do, and to encourage the player to go along the track there were some reward points — little blocks that you have to hit to get rewards — that were along the track, and then the agent just found a degenerate solution where it would just go in a circle and hit the same blocks over and over again and get lots of reward, but it was not actually playing the game or winning the race or anything like that. This is an example of Goodhart’s Law in action. There are plenty of examples of this sort with present day reinforcement learning systems. Often when people are designing a reward function for a reinforcement learning system they end up adjusting it a number of times to eliminate these sort of degenerate solutions that happen.

And this is not limited to reinforcement learning agents. For example, recently there was a great paper that came out about many examples of Goodhart’s Law in evolutionary algorithms. For example, if some evolved agents were incentivized to move quickly in some direction, then they might just evolve to be really tall and then they fall in this direction instead of actually learning to move. There are lots and lots of examples of this and I think that as AI systems become more advanced and more powerful, then I think they’ll just get more clever at finding these sort of loopholes in our specifications of what we want them to do. Goodhart’s Law is, I would say, part of what’s behind various other AI safety issues. For example, negative side effects are often caused by the agent’s specification being incomplete, so there’s something that we didn’t specify.

For example, if we want a robot to carry a box from point A to point B, and we just reward it for getting the box to point B as fast as possible, then if there’s something in the path of the robot, for example a vase, it will not have an incentive to go around the vase; it will just go right through the vase and break it, just to get to point B as fast as possible. And this is an issue because our specification did not include a term for the state of the vase. So when the agent is just optimizing for this reward that’s all about the box, it doesn’t have an incentive to avoid disruptions to the environment.
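Here is a toy, purely illustrative version of that box-and-vase example (the two candidate plans, step counts, and penalty value are made up): with a reward that only counts delivery speed, the highest-scoring plan goes straight through the vase, and adding a term for the vase changes which plan wins.

```python
# Toy illustration of an incomplete reward specification (the box-and-vase case).
# The two candidate plans and all numbers are made up for illustration only.

plans = {
    "short path through the vase": {"steps": 4, "breaks_vase": True},
    "detour around the vase": {"steps": 6, "breaks_vase": False},
}

def incomplete_reward(plan):
    # Only rewards delivering the box quickly; no term for the vase.
    return 10 - plan["steps"]

def complete_reward(plan, vase_penalty=5):
    # Same objective plus a penalty for the side effect we actually care about.
    return 10 - plan["steps"] - (vase_penalty if plan["breaks_vase"] else 0)

for name, reward_fn in [("incomplete", incomplete_reward), ("complete", complete_reward)]:
    best_plan = max(plans, key=lambda p: reward_fn(plans[p]))
    print(f"{name} specification -> agent picks: {best_plan}")

# Prints:
# incomplete specification -> agent picks: short path through the vase
# complete specification -> agent picks: detour around the vase
```

The point is not the particular penalty number, but that anything left out of the specification is something the optimizer is free to trample.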

Ariel: So I want to interrupt with a quick question. These examples so far, we’re obviously worried about them with a technology as powerful as AGI, but they’re also things that apply today. As you mentioned, Goodhart’s Law doesn’t even just apply to AI. What progress has been made so far? Are we seeing progress already in addressing some of these issues?

Victoria: We haven’t seen so much progress in addressing these questions in a very general way, because when you’re building a narrow AI system you can often get away with a trial-and-error approach: you run it and maybe it does something stupid, finds some degenerate solution; you tweak your reward function, run it again, and maybe it finds a different degenerate solution; and so on and so forth until you arrive at some reward function that doesn’t lead to obvious failure cases like that. For many narrow systems and narrow applications, where you can foresee all the ways in which things can go wrong and just penalize all of those ways, or build a reward function that avoids all of those failure modes, there isn’t so much need to find a general solution to these problems. But as we get closer to general intelligence, there will be more need for more principled and more general approaches to these problems.

For example, how do we build an agent that has some idea of what side effects are, or what it means to disrupt the environment that it’s in, no matter what environment you put it in? That’s something we don’t have yet. One of the promising approaches that has been gaining traction recently is reward learning. For example, there was a paper in collaboration between DeepMind and OpenAI called Deep Reinforcement Learning from Human Preferences, where instead of directly specifying a reward function for the agent, it learns a reward function from human feedback. For example, if your agent is this simulated little noodle or hopper that’s trying to do a backflip, then the human would just look at two videos of the agent trying to do a backflip and say, “Well, this one looks more like a backflip.” And so you have a bunch of data from the human about what is more similar to what the human wants the agent to do.

With this kind of human feedback, unlike, for example, demonstrations, the agent can learn something that the human might not be able to demonstrate very easily. For example, even if I cannot do a backflip myself, I can still judge whether someone else has successfully done a backflip or whether this reinforcement agent has done a backflip. This is promising for getting agents to potentially solve problems that humans cannot solve or do things that humans cannot demonstrate. Of course, with human feedback and human-in-the-loop kind of work, there is always the question of scalability because human time is expensive and we want the agent to learn as efficiently as possible from limited human feedback and we also want to make sure that the agent actually gets human feedback in all the relevant situations so it learns to generalize correctly to new situations. There are a lot of remaining open problems in this area as well, but the progress so far has been quite encouraging.
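As a bare-bones sketch of the reward-learning idea Victoria describes (a simplification in the spirit of that paper, not the authors’ implementation; the trajectory segments and the “human” preference labels below are synthetic), a small network can be fit so that the probability that one segment is preferred over another depends on their predicted total reward:

```python
# Sketch: fit a reward model from pairwise preferences ("clip A looks more like
# a backflip than clip B"). Synthetic data stands in for real trajectories and
# real human judgments.
import torch
import torch.nn as nn

torch.manual_seed(0)

STATE_DIM = 8
reward_model = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

def segment_return(segment):
    """Sum the predicted reward over the states in one trajectory segment."""
    return reward_model(segment).sum()

# Fake dataset: pairs of segments (each 10 states of dimension 8) plus a label
# saying which one the "human" preferred (1.0 = first segment preferred).
pairs = [
    (torch.randn(10, STATE_DIM), torch.randn(10, STATE_DIM),
     torch.tensor(float(torch.rand(()) > 0.5)))
    for _ in range(64)
]

for epoch in range(50):
    total_loss = 0.0
    for seg_a, seg_b, pref_a in pairs:
        # Bradley-Terry style model: P(A preferred) = sigmoid(R(A) - R(B)).
        logit = segment_return(seg_a) - segment_return(seg_b)
        loss = nn.functional.binary_cross_entropy_with_logits(logit, pref_a)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    if epoch % 10 == 0:
        print(f"epoch {epoch}: average loss {total_loss / len(pairs):.3f}")
```

In the real setup, the preference labels come from people comparing video clips, and the learned reward model is then handed to a standard RL algorithm in place of a hand-written reward function.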

Ariel: Are there others that you want to talk about?

Victoria: Maybe I’ll talk about one other question, which is that of interpretability. Interpretability of AI systems is a big area right now in near-term AI safety, one that increasingly more people in the research community are thinking about and working on, and it is also quite relevant in long-term AI safety. This generally has to do with being able to understand why your system does things a certain way, or makes certain decisions or predictions, or, in the case of an agent, why it takes certain actions, and also understanding what different components of the system are looking for in the data, or how the system is influenced by different inputs, and so on. Basically, making it less of a black box. I think there is a reputation for deep learning systems in particular that they are seen as black boxes, and it is true that they are quite complex, but they don’t necessarily have to be black boxes, and there has certainly been progress in trying to explain why they do things.

Ariel: Do you have real world examples?

Victoria: So, for example, if you have some AI system that’s used for medical diagnosis, then on the one hand you could have something simple like a decision tree that just looks at your x-ray and if there is something in a certain position then it gives you a certain diagnosis, and otherwise it doesn’t and so on. Or you could have a more complex system like a neural network that takes into account a lot more factors and then at the end it says, like maybe this person has cancer or maybe this person has something else. But it might not be immediately clear why that diagnosis was made. Particularly in sensitive applications like that, what sometimes happens is that people end up using simpler systems that they find more understandable where they can say why a certain diagnosis was made, even if those systems are less accurate, and that’s one of the important cases for interpretability where if we figure out how to make these more powerful systems more interpretable, for example, through visualization techniques, then they would actually become more useful in these really important applications where it actually matters not just to predict well, but to explain where the prediction came from.

And another area, another example is an algorithm that’s deciding whether to give someone a loan or a mortgage, then if someone’s loan application got rejected then they would really want to know why it got rejected. So the algorithm has to be able to point at some variables or some other aspect of the data that influences decisions or you might need to be able to explain how the data will need to change for the decision to change, what variables would need to be changed by a certain amount for the decision to be different. So these are just some examples of how this can be important and how this is already important. And this kind of interpretability of present day systems is of course already on a lot of people’s minds. I think it is also important to think about interpretability in the longer term as we build more general AI systems that will continue to be important or maybe even become more important to be able to look inside them and be able to check if they have particular concepts that they’re representing.

Like, for example, especially from a safety perspective, whether your system is thinking about the off switch and whether it’s going to be turned off; that might be something good to monitor for. We also would want to be able to explain how our systems fail and why they fail. This is, of course, quite relevant today: if, let’s say, your medical diagnosis AI makes a mistake, we want to know what led to that, why it made the wrong diagnosis. Also, on the longer term, we want to know why an AI system hacks its reward function: what is it thinking — well, “thinking” with quotes, of course — while it’s following a degenerate solution instead of the kind of solution we would want it to find. So, what is the boat race agent that I mentioned earlier paying attention to while it’s going in circles and collecting the same rewards over and over again instead of playing the game, that kind of thing. I think the particular application of interpretability techniques to safety problems is going to be important, and it’s one of the examples of the kind of work that we’re looking for in the RFP.

Ariel: Awesome. Okay, and so, we’ve been talking about how all these things can go wrong and we’re trying to do all this research to make sure things don’t go wrong, and yet basically we think it’s worthwhile to continue designing artificial intelligence, that no one’s looking at this and saying “Oh my god, artificial intelligence is awful, we need to stop studying it or developing it.” So what are the benefits that basically make these risks worth the risk?

Shahar: So I think one thing is in the domain of narrow applications, it’s very easy to make analogies to software, right? For the things that we have been able to hand over to computers, they really have been the most boring and tedious and repetitive things that humans can do and we now no longer need to do them and productivity has gone up and people are generally happier and they can get paid more for doing more interesting things and we can just build bigger systems because we can hand off the control of them to machines that don’t need to sleep and don’t make small mistakes in calculations. Now the promise of turning that and adding to that all of the narrow things that experts can do, whether it’s improving medical diagnosis, whether it’s maybe farther down the line some elements of drug discovery, whether it’s piloting a car or operating machinery, many of these areas where human labor is currently required because there is a fuzziness to the task, it does not enable a software engineer to come in and code an algorithm, but maybe with machine learning in the not too distant future we’ll be able to turn them over to machines.

It means taking some skills that only a few individuals in the world can do and making those available to everyone around the world in some domains. As for concrete examples: the ones that I have, I try to find the companies that do them and get involved with them, because I want to see them happen sooner; and the ones that I can’t imagine yet, someone will come along and make a company, or a not-for-profit, out of them. But we’ve seen applications from agriculture, to medicine, to computer security, to entertainment and art, and driving and transport, and in all of these I think we’re just gonna be seeing even more. I think we’re gonna have more creative products out there that were designed in collaboration between humans and machines. We’re gonna see more creative solutions to scientific and engineering problems. We’re gonna see those professions where really good advice is very valuable, but there are only so many people who can help you — so if I’m thinking of doctors and lawyers, taking some of that advice and making it universally accessible through an app just makes life smoother. These are some of the examples that come to my mind.

Ariel: Okay, great. Victoria what are the benefits that you think make these risks worth addressing?

Victoria: I think there are many ways in which AI systems can make our lives a lot better and make the world a lot better, especially as we build more general systems that are more adaptable. For example, these systems could help us with designing better institutions and better infrastructure, better health systems or electrical systems or what have you. Even now, there are examples like the Google project on optimizing data center energy use with machine learning, which is something that DeepMind was working on, where the use of machine learning algorithms to manage energy use in the data centers improved their energy efficiency by, I think, something like 40 percent. That’s of course with fairly narrow AI systems.

I think as we build more general AI systems we can expect, we can hope for, really creative and innovative solutions to the big problems that humans face. So you can think of something like AlphaGo’s famous “move 37” that overturned thousands of years of human wisdom in Go. What if you can build even more general and even more creative systems and apply them to real world problems? I think there is great promise in that. I think this can really transform the world in a positive direction, and we just have to make sure that as these systems are built, we think about safety from the get-go, think about it in advance, and try to build them to be as resistant to accidents and misuse as possible, so that all these benefits can actually be achieved.

The things I mentioned were only examples of the possible benefits. Imagine if you could have an AI scientist that’s trying to develop better drugs against diseases that have really resisted treatment, or more generally just doing science faster and better, if you actually have more general AI systems that can think as flexibly as humans can about these sorts of difficult problems. And they would not have some of the limitations that humans have, where, for example, our attention is limited and our memory is limited, while AI could be, at least theoretically, unlimited in its processing power and in the resources available to it; it can be more parallelized, it can be more coordinated. And I think all of the big problems that are so far unsolved are these sorts of coordination problems that require putting together a lot of different pieces of information and a lot of data. And I think there are massive benefits to be reaped there if we can only get to that point safely.

Ariel: Okay, great. Well thank you both so much for being here. I really enjoyed talking with you.

Shahar: Thank you for having us. It’s been really fun.

Victoria: Yeah, thank you so much.

[end of recorded material]

Podcast: AI and the Value Alignment Problem with Meia Chita-Tegmark and Lucas Perry

What does it mean to create beneficial artificial intelligence? How can we expect to align AIs with human values if humans can’t even agree on what we value? Building safe and beneficial AI involves tricky technical research problems, but it also requires input from philosophers, ethicists, and psychologists on these fundamental questions. How can we ensure the most effective collaboration?

Ariel spoke with FLI’s Meia Chita-Tegmark and Lucas Perry on this month’s podcast about the value alignment problem: the challenge of aligning the goals and actions of AI systems with the goals and intentions of humans. 

Topics discussed in this episode include:

  • how AGI can inform human values,
  • the role of psychology in value alignment,
  • how the value alignment problem includes ethics, technical safety research, and international coordination,
  • a recent value alignment workshop in Long Beach,
  • and the possibility of creating suffering risks (s-risks).

This podcast was edited by Tucker Davey. You can listen to it above or read the transcript below.

 

Ariel: I’m Ariel Conn with the Future of Life Institute, and I’m excited to have FLI’s Lucas Perry and Meia Chita-Tegmark with me today to talk about AI, ethics and, more specifically, the value alignment problem. But first, if you’ve been enjoying our podcast, please take a moment to subscribe and like this podcast. You can find us on iTunes, SoundCloud, Google Play, and all of the other major podcast platforms.

And now, AI, ethics, and the value alignment problem. First, consider the statement “I believe that harming animals is bad.” Now, that statement can mean something very different to a vegetarian than it does to an omnivore. Both people can honestly say that they don’t want to harm animals, but how they define “harm” is likely very different, and these types of differences in values are common between countries and cultures, and even just between individuals within the same town. And then we want to throw AI into the mix. How can we train AIs to respond ethically to situations when the people involved still can’t come to an agreement about what an ethical response should be?

The problem is even more complicated because often we don’t even know what we really want for ourselves, let alone how to ask an AI to help us get what we want. And as we’ve learned with stories like that of King Midas, we need to be really careful what we ask for. That is, when King Midas asked the genie to turn everything to gold, he didn’t really want everything — like his daughter and his food — turned to gold. And we would prefer that an AI we design recognize that there’s often implied meaning in what we say, even if we don’t say something explicitly. For example, if we jump into an autonomous car and ask it to drive us to the airport as fast as possible, implicit in that request is the assumption that, while we might be OK with some moderate speeding, we intend for the car to still follow most rules of the road, and not drive so fast as to put anyone’s life in danger or take illegal routes. That is, when we say “as fast as possible,” we mean “as fast as possible within the rules of law,” and not within the laws of physics. And these examples are just the tiniest tip of the iceberg, given that I didn’t even mention artificial general intelligence (AGI) and how that can be developed such that its goals align with our values.

So as I mentioned a few minutes ago, I’m really excited to have Lucas and Meia joining me today. Meia is a co-founder of the Future of Life Institute. She’s interested in how social sciences can contribute to keeping AI beneficial, and her background is in social psychology. Lucas works on AI and nuclear weapons risk-related projects at FLI. His background is in philosophy with a focus on ethics. Meia and Lucas, thanks for joining us today.

Meia: It’s a pleasure. Thank you.

Lucas: Thanks for having us.

Ariel: So before we get into anything else, one of the big topics that comes up a lot when we talk about AI and ethics is this concept value alignment. I was hoping you could both maybe talk just a minute about what value alignment is and why it’s important to this question of AI and ethics.

Lucas: So value alignment, in my view, is bringing AI’s goals, actions, intentions and decision-making processes in accordance with what humans deem to be the good or what we see as valuable or what our ethics actually are.

Meia: So for me, from the point of view of psychology, of course, I have to put the humans at the center of my inquiry. So from that point of view, value alignment … You can think about it also in terms of humans’ relationships with other humans. But I think it’s even more interesting when you add artificial agents into the mix. Because now you have an entity that is so wildly different from humans yet we would like it to embrace our goals and our values in order to keep it beneficial for us. So I think the question of value alignment is very central to keeping AI beneficial.

Lucas: Yeah. So just to expand on what I said earlier: the project of value alignment is, in the end, creating beneficial AI. It’s working on what it means for something to be beneficial, what beneficial AI exactly entails, and then learning how to technically instantiate that into machines and AI systems. Also, building the proper social and political context for that sort of technical work to be done and for it to be fulfilled and manifested in our machines and AIs.

Ariel: So when you’re thinking of AI and ethics, is value alignment basically synonymous, just another way of saying AI and ethics or is it a subset within this big topic of AI and ethics?

Lucas: I think they have different connotations. If one’s thinking about AI ethics, I think that one tends to be more focused on applied ethics and normative ethics. One might be thinking about the application of AI systems and algorithms and machine learning in various domains in the present day and in the near future. So one might think about automation and other sorts of things. I think that when one is thinking about value alignment, it’s much more broad and expands also into metaethics, and it really couches and frames the problem of AI ethics as something which happens over decades and which has a tremendous impact. I think that value alignment has a much broader connotation than what AI ethics has traditionally had.

Meia: I think it all depends on how you define value alignment. I think if you take the very broad definition that Lucas has just proposed, then yes, it probably includes AI ethics. But you can also think of it more narrowly, as simply instantiating your own values into AI systems and having them adopt your goals. In that case, I think there are other issues as well, because if you think about it from the point of view of psychology, for example, then it’s not just about which values get instantiated and how you do that, how you solve the technical problem. We also know that humans, even if they know what goals they have and what values they uphold, sometimes find it very, very hard to actually act in accordance with them, because they have all sorts of cognitive and emotional and affective limitations. So in that case, I think value alignment, in this narrow sense, is basically not sufficient. We also need to think about AIs and applications of AIs in terms of how they help us and how they make sure that we gain the cognitive competencies that we need to be moral beings and to be really what we should be, not just what we are.

Lucas: Right. I guess to expand on what I was just saying: value alignment, I think in the more traditional sense, is more expansive and inclusive, in that it’s recognizing a different sort of problem than AI ethics alone has. I think that when one is thinking about value alignment, there are elements of thinking somewhat about machine ethics, but also about the social, political, technical and ethical issues surrounding the end goal of eventually creating AGI. Whereas AI ethics can be more narrowly interpreted as certain sorts of specific cases where AI is having impacts and implications in our lives in the next 10 years. Whereas value alignment is really thinking about the instantiation of ethics in machines and making machine systems that are corrigible and robust and docile, which will create a world that we’re all happy about living in.

Ariel: Okay. So I think that actually is going to flow really nicely into my next question, and that is, at FLI we tend to focus on existential risks. I was hoping you could talk a little bit about how issues of value alignment are connected to the existential risks that we concern ourselves with.

Lucas: Right. So, we can think of AI systems as being very powerful optimizers. We can imagine there being a list of all possible futures, and what intelligence is good for is modeling the world and then committing to and doing actions which constrain the set of all possible worlds to ones which are desirable. So intelligence is sort of the means by which we get to an end, and ethics is the end towards which we strive. That is how these two things are really integral and work together: AI without ethics makes no sense, and ethics without AI, or intelligence in general, also just doesn’t work. In terms of existential risk, there are possible futures that intelligence can lead us to where earth-originating intelligent life no longer exists, either intentionally or by accident. So value alignment fits in by constraining the set of all possible futures, by working on the technical work, by doing political and social work and also work in ethics, to constrain the actions of AI systems such that existential risks do not occur: such that the AI does not generate an existential risk through some sort of technical oversight, some misalignment of values, or some misunderstanding of what we want.

Meia: So we should remember that Homo sapiens represents an existential risk to itself as well. We are creating nuclear weapons. We have more of them than we need. So many, in fact, that we could destroy the entire planet with them. Not to mention that Homo sapiens has also represented an existential risk for all other species. The problem with AI is that we’re introducing into the mix a whole new agent that is by definition supposed to be more intelligent, more powerful than us, and also autonomous. So as Lucas mentioned, it’s very important to think through what kinds of things and abilities we delegate to these AIs and how we can make sure that they have the survival and the flourishing of our species in mind. So I think this is where value alignment comes in as a safeguard against these very terrible and global risks that we can imagine coming from AI.

Lucas: Right. What makes doing that so difficult, beyond the technical issue of just having AI researchers and AI safety researchers know how to get AI systems to actually do what we want without creating a universe of paperclips, is that there’s also this terrible social and political context in which this is all happening, where there are really strong game-theoretic incentives to be the first person to create artificial general intelligence. So in a race to create AI, a lot of these efforts that seem very obvious and necessary could be cut in favor of more raw power. I think that’s probably one of the biggest risks for us not succeeding in creating value-aligned AI.

Ariel: Okay. Right now it’s predominantly technical AI people who are considering mostly technical AI problems. Usually, to solve those problems, you need a technical approach. But when it comes to things like value alignment and ethics, most of the time I’m hearing people suggest that we can’t leave that up to just the technical AI researchers. So I was hoping you could talk a little bit about who should be part of this discussion, why we need more people involved, how we can get more people involved, stuff like that.

Lucas: Sure. So maybe if I just break the problem down into what I view to be the three different parts, then talking about it will make a little bit more sense. We can break down the value alignment problem into three separate parts. The first is the technical issues: the issues surrounding actually creating artificial intelligence. The second is ethics: the end towards which we strive, the set of possible futures in which we would be happy living. And then there’s also governance, coordination, and the international problem. So we can view this as a problem of intelligence, a problem of agreeing on the end towards which intelligence is driven, and also the political and social context in which all of this happens.

So thus far, there’s certainly been a focus on the technical issue. So there’s been a big rise in the field of AI safety and in attempts to generate beneficial AI, attempts at creating safe AGI and mechanisms for avoiding reward hacking and other sorts of things that happen when systems are trying to optimize their utility function. The Concrete Problems in AI Safety paper has been really important and sort of illustrates some of these technical issues. But even between technical AI safety research and ethics there’s disagreement about things like machine ethics. So how important is machine ethics? Where does machine ethics fit into technical AI safety research? How much time and energy should we put into certain kinds of technical AI research versus how much time and effort should we put into issues in governance and coordination and addressing the AI arms race issues? How much of ethics do we really need to solve?

So I think there’s a really important and open question regarding how we apply and invest our limited resources in addressing these three important cornerstones of value alignment: the technical issues, the issues in ethics, and the issues in governance and coordination. How do we optimize working on these issues given the timeline that we have? How many resources should we put into each one? I think that’s an open question, and one that certainly needs to be addressed more: how we’re going to move forward given limited resources.

Meia: I do think though the focus so far has been so much on the technical aspect. As you were saying, Lucas, there are other aspects to this problem that need to be tackled. What I’d like to emphasize is that we cannot solve the problem if we don’t pay attention to the other aspects as well. So I’m going to try to defend, for example, psychology here, which has been largely ignored I think in the conversation.

So from the point of view of psychology, I think the value alignment problem is twofold in a way. It’s about a triad of interactions: human, AI, other humans, right? So we are extremely social animals. We interact a lot with other humans. We need to align our goals and values with theirs. Psychology has focused a lot on that. We have a very sophisticated set of psychological mechanisms that allow us to engage in very rich social interactions. But even so, we don’t always get it right. Societies have created a lot of suffering, a lot of moral harm, injustice, unfairness throughout the ages. For example, we are very ill-prepared by our own instincts and emotions to deal with inter-group relations. So that’s very hard.

Now, people coming from the technical side, they can say, “We’re just going to have AI learn our preferences.” Inverse reinforcement learning is a proposal that basically explains how to keep humans in the loop. It’s a proposal for programming AI such that it gets its reward not from achieving a goal, but from getting good feedback from a human because it achieved a goal. So the hope is that this way AI can be correctable and can learn from human preferences.
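
One crude way to picture the loop Meia describes is sketched below: an agent that is uncertain which of several goals the human has in mind acts, asks for approval, and updates its belief from that noisy signal. The candidate goals, the approval noise model, and the two-dimensional world are hypothetical simplifications, not how inverse reinforcement learning is actually implemented in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

# The agent does not know the human's true goal; it keeps a belief over candidates.
candidate_goals = np.array([[0.0, 0.0], [5.0, 5.0], [5.0, 0.0]])
true_goal = candidate_goals[1]          # known only to the simulated human
belief = np.ones(len(candidate_goals)) / len(candidate_goals)

def human_approval(position):
    """Noisy simulated human: more likely to approve the closer the agent
    gets to the goal the human actually has in mind."""
    distance = np.linalg.norm(position - true_goal)
    return rng.random() < np.exp(-0.5 * distance)

position = np.array([2.0, 1.0])
for _ in range(60):
    # Act toward the currently most probable goal.
    target = candidate_goals[np.argmax(belief)]
    direction = target - position
    distance = np.linalg.norm(direction)
    if distance > 1e-6:
        position = position + min(0.5, distance) * direction / distance

    # Ask the human, then update the belief over goals with Bayes' rule,
    # using the same approval model the agent assumes the human follows.
    approved = human_approval(position)
    likelihoods = np.exp(-0.5 * np.linalg.norm(candidate_goals - position, axis=1))
    belief *= likelihoods if approved else (1.0 - likelihoods)
    belief /= belief.sum()

print("final belief over candidate goals:", np.round(belief, 2))
print("final agent position:", np.round(position, 2))
```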

As a psychologist, I am intrigued, but I understand that this is actually very hard. Are we humans even capable of conveying the right information about our preferences? Do we even have access to them ourselves, or is this all happening at some sort of subconscious level? Sometimes knowing what we want is really hard. How do we even choose between our own competing preferences? So this involves a lot more sophisticated abilities like impulse control, executive function, etc. I think that if we don’t pay attention to that as well, in addition to solving the technical problem, we are very likely to not get it right.

Ariel: So I’m going to want to come back to this question of who should be involved and how we can get more people involved, but one of the reasons that I’m talking to the both of you today is because you actually have made some steps in broadening this discussion already, in that you set up a workshop that brought together a multidisciplinary team to talk about value alignment. I was hoping you could tell us a bit more about how that workshop went, what interesting insights were expressed during the workshop, what you got out of it, why you think it’s important to the discussion, etc.

Meia: Just to give a few facts about the workshop. The workshop took place in December 2017 in Long Beach, California. We were very lucky to have two wonderful partners in co-organizing this workshop. The Berggruen Institute and the Canadian Institute for Advanced Research. And the idea for the workshop was very much to have a very interdisciplinary conversation about value alignment and reframe it as not just a technical problem but also one that involves disciplines such as philosophy and psychology, political science and so on. So we were very lucky actually to have a fantastic group of people there representing all these disciplines. The conversation was very lively and we discussed topics all the way from near term considerations in AI and how we align AI to our goals and also all the way to thinking about AGI and even super intelligence. So it was a fascinating range both of topics discussed and also perspectives being represented.

Lucas: So my inspiration for the workshop was being really interested in ethics and the end towards which this is all going. What really is the point of creating AGI and perhaps even eventually superintelligence? What is it that is good, and what is it that is valuable? Broadening from that and becoming more interested in value alignment, the conversation thus far has primarily been understood as something that is purely technical. So value alignment has only been seen as something that is for technical AI safety researchers to work on, because there are technical issues regarding AI safety and how you get AIs to do really simple things without destroying the world or ruining a million other things that we care about. But this is really, as we discussed earlier, an interdependent issue that covers issues in metaethics and normative ethics and applied ethics. It covers issues in psychology. It covers issues in law, policy, governance, coordination. It covers the AI arms race issue. Solving the value alignment problem and creating a future with beneficial AI is a civilizational project where we need everyone working on all these different issues: on issues of value, on issues of game theory among countries, and on the technical issues, obviously.

So what I really wanted to do was I wanted to start this workshop in order to broaden the discussion. To reframe value alignment as not just something in technical AI research but something that really needs voices from all disciplines and all expertise in order to have a really robust conversation that reflects the interdependent nature of the issue and where different sorts of expertise on the different parts of the issue can really come together and work on it.

Ariel: Is there anything specific that you can tell us about what came out of the workshop? Were there any comments that you thought were especially insightful or ideas that you think are important for people to be considering?

Lucas: I mean, I think that for me one of the takeaways from the workshop is that there’s still a mountain of work to do and that there are a ton of open questions. This is a very, very difficult issue. One thing I took away from the workshop was that we couldn’t even agree on the minimal conditions under which it would be okay to safely deploy AGI. There are issues in value alignment, from both the technical side and the ethical side, that seem extremely trivial, but on which I think there is very little understanding or agreement right now.

Meia: I think the workshop was a start and one good thing that happened during the workshop is I felt that the different disciplines or rather their representatives were able to sort of air out their frustrations and also express their expectations of the others. So I remember this quite iconic moment when one roboticist simply said, “But I really want you ethics people to just tell me what to implement in my system. What do you want my system to do?” So I think that was actually very illustrative of what Lucas was saying — the need for more joint work. I think there was a lot of expectations I think from both the technical people towards the ethicists but also from the ethicists in terms of like, “What are you doing? Explain to us what are the actual ethical issues that you think you are facing with the things that you are building?” So I think there’s a lot of catching up to do on both sides and there’s much work to be done in terms of making these connections and bridging the gaps.

Ariel: So you referred to this as sort of a first step or an initial step. What would you like to see happen next?

Lucas: I don’t have any concrete or specific ideas for what exactly should happen next. I think that’s a really difficult question. Certainly, things that most people would want or expect. I think in the general literature and conversations that we were having, I think that value alignment, as a word and as something that we understand, needs to be expanded outside of the technical context. I don’t think that it’s expanded that far. I think that more ethicists and more moral psychologists and people in law policy and governance need to come in and need to work on this issue. I’d like to see more coordinated collaborations, specifically involving interdisciplinary crowds informing each other and addressing issues and identifying issues and really some sorts of formal mechanisms for interdisciplinary coordination on value alignment.

It would be really great if people in technical research, in technical AI safety research, and in ethics and governance could also identify all of the issues in their own fields whose resolution requires answers from other fields. So for example, inverse reinforcement learning is something that Meia was talking about earlier, and I think it’s something that we can clearly see as being interdependent with a ton of issues in law and also in ethics and in value theory. So that would be an example of an issue, or node, in the landscape of all issues in technical safety research that is interdisciplinary.

So I think it would be super awesome if everyone from their own respective fields are able to really identify the core issues which are interdisciplinary and able to dissect them into the constituent components and sort of divide them among the disciplines and work together on them and identify the different timelines at which different issues need to be worked on. Also, just coordinate on all those things.

Ariel: Okay. Then, Lucas, you talked a little bit about nodes and a landscape, but I don’t think we’ve explicitly pointed out that you did create a landscape of value alignment research so far. Can you talk a little bit about what that is and how people can use it?

Lucas: Yeah. For sure. With the help of other colleagues at the Future of Life Institute, like Jessica Cussins and Richard Mallah, we’ve gone ahead and created a value alignment conceptual landscape. What this is, is a really big tree, almost like an evolutionary tree that you would see, but it’s a conceptual mapping and landscape of the value alignment problem. It’s broken down into the three constituent components, which we were talking about earlier: the technical issues, the issues in technically creating safe AI systems; issues in ethics, breaking that down into issues in metaethics and normative ethics and applied ethics and moral psychology and descriptive ethics, where we’re trying to really understand values, what it means for something to be valuable, and what is the end towards which intelligence will be aimed; and then also the last section, which is governance. So, issues in coordination and policy and law in creating a world where AI safety research can proceed and where we don’t develop or allow a sort of winner-take-all scenario that rushes us towards the end without a final and safe solution for fully autonomous, powerful systems.

So what the landscape here does is it sort of outlines all of the different conceptual nodes in each of these areas. It lays out what all the core concepts are, how they’re all related. It defines the concepts and also gives descriptions about how the concepts fit into each of these different sections of ethics, governance, and technical AI safety research. So the hope here is that people from different disciplines can come and see the truly interdisciplinary nature of the value alignment problem, to see where ethics and governance and the technical AI safety research stuff all fits in together and how this all together really forms, I think, the essential corners of the value alignment problem. It’s also nice for researchers and other persons to understand the concepts and the landscape of the other parts of this problem.

I think that, for example, technical AI safety researchers probably don’t know much about metaethics, or they don’t spend too much time thinking about normative ethics. I’m sure that ethicists don’t spend very much time thinking about technical value alignment and how inverse reinforcement learning is actually done and what it means to do robust human imitation in machines. What are the actual technical, ethical mechanisms that are going to go into AI systems? So I think that this is a step in laying out the conceptual landscape, in introducing people to each other’s concepts. It’s a nice visual way of interacting with, I think, a lot of information and exploring all these different really interesting nodes that cover a lot of very deep, profound moral issues, very difficult and interesting technical issues, and issues in law, policy and governance that are really important and profound and quite interesting.

Ariel: So you’ve referred to this as the value alignment problem a couple times. I’m curious, do you see this … I’d like both of you to answer this. Do you see this as a problem that can be solved or is this something that we just always keep working towards and it’s going to influence — whatever the current general consensus is will influence how we’re designing AI and possibly AGI, but it’s not ever like, “Okay. Now we’ve solved the value alignment problem.” Does that make sense?

Lucas: I mean, I think that that sort of question really depends on your metaethics, right? So if you think there are moral facts, if you think that moral statements can be true or false and aren’t just subjectively dependent upon whatever our current values and preferences historically and evolutionarily and accidentally happen to be, then there is an end towards which intelligence can be aimed that would be objectively good and which would be the end toward which we would strive. In that case, if we had solved the technical issue and the governance issue, and we knew that there was a concrete end towards which we would strive that was the actual good, then the value alignment problem would be solved. But if you don’t think that there is a concrete end, a concrete good, something that is objectively valuable across all agents, then the value alignment problem, or value alignment in general, is an ongoing process and evolution.

In terms of the technical and governance sides of those, I think that there’s nothing in the laws of physics, or I think in computer science or in game theory, that says that we can’t solve those parts of the problem. Those ones seem intrinsically like they can be solved. That’s nothing to say about how easy or how hard it is to solve them. But whether or not there is an end towards value alignment, I think, depends on difficult questions in metaethics and whether something like moral error theory is true, where all moral statements are simply false and morality is maybe just a human invention, which has no real answers or whose answers are all false. I think that’s the crux of whether or not value alignment can “be solved,” because I think the technical issues and the issues in governance are things which are in principle able to be solved.

Ariel: And Meia?

Meia: I think that regardless of whether there is an absolute end to this problem or not, there’s a lot of work that we need to do in between. I also think that in order to even achieve this end, we need more intelligence, but as we create more intelligent agents, again, this problem gets magnified. So there’s always going to be a race between the intelligence that we’re creating and making sure that it is beneficial. I think at every step of the way, the more we increase the intelligence, the more we need to think about the broader implications. I think in the end we should think of artificial intelligence also not just as a way to amplify our own intelligence but also as a way to amplify our moral competence as well. As a way to gain more answers regarding ethics and what our ultimate goals should be.

So I think that the interesting questions that we can do something about are somewhere sort of in between. We will not have the answer before we are creating AI. So we always have to figure out a way to keep up with the development of intelligence in terms of our development of moral competence.

Ariel: Meia, I want to stick with you for just a minute. When we talked for the FLI end-of-year podcast, one of the things you said you were looking forward to in 2018 is broadening this conversation. I was hoping you could talk a little bit more about some of what you would like to see happen this year in terms of getting other people involved in the conversation, and who you would like to see taking more of an interest in this?

Meia: So I think that, unfortunately, especially in academia, we’ve defined our work so much around these things that we call disciplines. I think we are now faced with problems, especially in AI, that are really very interdisciplinary. We cannot get the answers from just one discipline. So I would actually like to see in 2018, for example, more funding agencies proposing and creating funding sources for interdisciplinary projects. The way it works, especially in academia, is that you propose grants to granting agencies that are defined very much around disciplines.

Another thing that would be wonderful to start happening is our education system is also very much defined and described around these disciplines. So I feel that, for example, there’s a lack of courses, for example, that teach students in technical fields things about ethics, moral psychology, social sciences and so on. The converse is also true; in social sciences and in philosophy we hear very little about advancements in artificial intelligence and what’s new and what are the problems that are there. So I’d like to see more of that. I’d like to see more courses like this developed. I think a friend of mine and I, we’ve spent some time thinking about how many courses are there that have an interdisciplinary nature and actually talk about the societal impacts of AI and there’s a handful in the entire world. I think we counted about five or six of them. So there’s a shortage of that as well.

But then also educating the general public. I think thinking about the implications of AI and also the societal implications of AI and also the value alignment problem is something that’s probably easier for the general public to grasp rather than thinking about the technical aspects of how to make it more powerful or how to make it more intelligent. So I think there’s a lot to be done in educating, funding, and also just simply having these conversations. I also very much admire what Lucas has been doing. I hope he will expand on it, creating this conceptual landscape so that we have people from different disciplines understanding their terms, their concepts, each other’s theoretical frameworks with which they work. So I think all of this is valuable and we need to start. It won’t be completely fixed in 2018 I think. But I think it’s a good time to work towards these goals.

Ariel: Okay. Lucas, is there anything that you wanted to add about what you’d like to see happen this year?

Lucas: I mean, yeah. Nothing else, I think, to add on to what I said earlier. Obviously we just need as many people from as many disciplines working on this issue as possible, because it’s so important. But just to go back a little bit, I was also really liking what Meia said about how AI systems and intelligence can help us with our ethics and with our governance. That seems like a really good way forward, potentially: if, as our AI systems grow more powerful in their intelligence, they’re able to inform us more about our own ethics and our own preferences and our own values, about our own biases, and about what sorts of values and moral systems are really conducive to the thriving of human civilization and what sorts of moralities lead to navigating the space of all possible minds in a way that is truly beneficial.

So yeah. I guess I’ll be excited to see more ways in which intelligence and AI systems can be deployed for really tackling the question of what beneficial AI exactly entails. What does beneficial mean? We all want beneficial AI, but what is beneficial, what does that mean? What does that mean for us in a world in which no one can agree on what beneficial exactly entails? So yeah, I’m just excited to see how this is going to work out, how it’s going to evolve and hopefully we’ll have a lot more people joining this work on this issue.

Ariel: So your comment reminded me of a quote that I read recently that I thought was pretty interesting. I’ve been reading Paula Boddington’s book Toward a Code of Ethics for Artificial Intelligence. This was actually funded, at least in part if not completely, by FLI grants. But she says, “It’s worth pointing out that if we need AI to help us make moral decisions better, this casts doubt on the attempts to ensure humans always retain control over AI.” I’m wondering if you have any comments on that.

Lucas: Yeah. I don’t know. I think this is a specific way of viewing the issue, or a specific way of viewing what AI systems are for and the sort of future that we want. In the end, is the best of all possible futures a world in which human beings ultimately retain full control over AI systems? I mean, if AI systems are autonomous and if value alignment actually succeeds, then I would hope that we created AI systems which are more moral than we are: AI systems which have better ethics, which are less biased, which are more rational, which are more benevolent and compassionate than we are. If value alignment is able to succeed and if we’re able to create autonomous intelligent systems of that sort of caliber of ethics and benevolence and intelligence, then I’m not really sure what the point is of maintaining any sort of meaningful human control.

Meia: I agree with you, Lucas. That if we do manage to create … In this case, I think it would have to be artificial general intelligence that is more moral, more beneficial, more compassionate than we are, then the issue of control, it’s probably not so important. But in the meantime, I think, while we are sort of tinkering with artificial intelligent systems, I think the issue of control is very important.

Lucas: Yeah. For sure.

Meia: Because we wouldn’t want to cut ourselves out of the loop too early, before we’ve managed to properly test the system and make sure that it is indeed doing what we intended it to do.

Lucas: Right. Right. I think that the process of that requires a lot of our own moral evolution, something which we humans are really bad and slow at. As president of FLI, Max Tegmark likes to talk about the race between our growing wisdom and the growing power of our technology. Now, human beings are really kind of bad at keeping our wisdom in pace with the growing power of our technology. If we look at the moral evolution of our species, we can see huge eras in which things that we now recognize as wrong, like slavery or the subjugation of women, were seen as normal and mundane and innocuous. Today we have issues with factory farming and animal suffering and income inequality, and just tons of people who are living with exorbitant wealth that doesn’t really create much utility for them, whereas there are tons of other people who are in poverty and who are still starving to death. There are all sorts of things that we can see in the past as being obviously morally wrong.

Meia: Under the present too.

Lucas: Yeah. So then we can see that obviously there must be things like that today. We wonder, “Okay. What are the sorts of things today that we see as innocuous and normal and mundane that the people of tomorrow, as William MacAskill says, will see us as moral monsters for? How are we moral monsters today, but we simply can’t see it?” So as we create powerful intelligence systems and we’re working on our ethics and we’re trying to really converge on constraining the set of all possible worlds into ones which are good and valuable and ethical, it really demands a moral evolution of ourselves, one that we have to figure out ways to catalyze and work on and move through, I think, faster.

Ariel: Thank you. So as you consider attempts to solve the value alignment problem, what are you most worried about, either in terms of us solving it badly or not quickly enough or something along those lines? What is giving you the most hope in terms of us being able to address this problem?

Lucas: I mean, I think just technically speaking, ignoring the likelihood of this, the worst of all possible outcomes would be something like an s-risk. An s-risk is a subset of x-risks; s-risk stands for suffering risk. So this is a sort of risk whereby, through some sort of value misalignment, whether it be intentional or, much more likely, accidental, some seemingly astronomical amount of suffering is produced by deploying a misaligned AI system. The way that this would function is as follows: given certain assumptions about the philosophy of mind and about consciousness in machines (if we understand consciousness and experience to potentially be substrate-independent, meaning that consciousness can be instantiated in machine systems, that you don’t just need meat to be conscious but rather something like integrated information or information processing or computation), then the invention of AI systems and superintelligence, and the spreading of intelligence that optimizes towards any sort of arbitrary end, could potentially lead to vast amounts of digital suffering. That suffering might arise accidentally, or through subroutines or simulations that would be epistemically useful but involve a great amount of suffering. Couple that with the fact that these artificially intelligent systems would be running on silicon and not on squishy, wet human neurons, so they would be running at digital time scales and not biological time scales; subjectively, we might infer that a second for a simulated person on a computer would contain far more experience than a second for a biological person. So an s-risk would be something really bad: any sort of way that AI can be misaligned and lead to a great amount of suffering. There are a bunch of different ways that this could happen.

So something like an s-risk would be super terrible, but it’s not really clear how likely it would be. But yeah, I think that beyond that, obviously we’re worried about existential risk: ways that this could curtail or destroy the development of earth-originating intelligent life. The way this really might happen is, I think, most likely because of the winner-take-all scenario that you have with AI. We’ve had nuclear weapons for a very long time now, and we’re super lucky that nothing bad has happened. But I think human civilization is really good at getting stuck in bad equilibria, where we get locked into positions that are not easy to escape from. It’s really not easy to disarm and get out of the nuclear weapons situation once we’ve discovered them. And once we start to develop more powerful and robust AI systems, I think a race towards AGI and towards more and more powerful AI might be very, very hard to stop if we don’t make significant progress soon, if we’re not able to get a ban on lethal autonomous weapons, and if we’re not able to introduce any real global coordination. Then we might all just start racing towards more powerful systems, a race towards AGI that would cut corners on safety and potentially make an existential risk or suffering risk more likely.

Ariel: Are you hopeful for anything?

Lucas: I mean, yeah. If we get it right, then the next billion years can be super amazing, right? It’s just kind of hard to internalize that and think about that. It’s really hard to say I think how likely it is that we’ll succeed in any direction. But yeah, I’m hopeful that if we succeed in value alignment that the future can be unimaginably good.

Ariel: And Meia?

Meia: What’s scary to me is that it might be too easy to create intelligence. That there’s nothing in the laws of physics making it hard for us. Thus I think that it might happen too fast. Evolution took a long time to figure out how to make us intelligent, but that was probably just because it was trying to optimize for things like energy consumption and making us a certain size. So that’s scary. It’s scary that it’s happening so fast. I’m particularly scared that it might be easy to crack general artificial intelligence. I keep asking Max, “Max, but isn’t there anything in the laws of physics that might make it tricky?” His answer, and also that of other physicists I’ve been discussing this with, is, “No, it doesn’t seem to be the case.”

Now, what makes me hopeful is that we are creating this. Stuart Russell likes to give this example of a message from an alien civilization, an alien intelligence, that says, “We will be arriving in 50 years.” Then he poses the question, “What would you do to prepare for that?” But I think with artificial intelligence it’s different. It’s not like it’s arriving as a given, in a certain form or shape that we cannot do anything about. We are actually creating artificial intelligence. I think that’s what makes me hopeful: that if we actually research it right, if we think hard about what we want, and if we work hard at getting our own act together, first of all, and also at making sure that it is and stays beneficial, we have a good chance to succeed.

Now, there’ll be a lot of challenges in between from very near-term issues like Lucas was mentioning, for example, autonomous weapons, weaponizing our AI and giving it the right to harm and kill humans, to other issues regarding income inequality enhanced by technological development and so on, to down the road how do we make sure that autonomous AI systems actually adopt our goals. But I do feel that it is important to try and it’s important to work at it. That’s what I’m trying to do and that’s what I hope others will join us in doing.

Ariel: All right. Well, thank you both again for joining us today.

Lucas: Thanks for having us.

Meia: Thanks for having us. This was wonderful.

Ariel: If you’re interested in learning more about the value alignment landscape that Lucas was talking about, please visit FutureofLife.org/valuealignmentmap. We’ll also link to this in the transcript for this podcast. If you enjoyed this podcast, please subscribe, give it a like, and share it on social media. We’ll be back again next month with another conversation among experts.

[end of recorded material]

Podcast: Top AI Breakthroughs and Challenges of 2017 with Richard Mallah and Chelsea Finn

AlphaZero, progress in meta-learning, the role of AI in fake news, the difficulty of developing fair machine learning — 2017 was another year of big breakthroughs and big challenges for AI researchers!

To discuss this more, we invited FLI’s Richard Mallah and Chelsea Finn from UC Berkeley to join Ariel for this month’s podcast. They talked about some of the technical progress they were most excited to see and what they’re looking forward to in the coming year.

You can listen to the podcast here, or read the transcript below.

Ariel: I’m Ariel Conn with the Future of Life Institute. In 2017, we saw an increase in investments into artificial intelligence. More students are applying for AI programs, and more AI labs are cropping up around the world. With 2017 now solidly behind us, we wanted to take a look back at the year and go over some of the biggest AI breakthroughs. To do so, I have Richard Mallah and Chelsea Finn with me today.

Richard is the director of AI projects with us at the Future of Life Institute, where he does meta-research, analysis and advocacy to keep AI safe and beneficial. Richard has almost two decades of AI experience in industry and is currently also head of AI R&D at the recruiting automation firm, Avrio AI. He’s also co-founder and chief data science officer at the content marketing planning firm, MarketMuse.

Chelsea is a PhD candidate in computer science at UC Berkeley and she’s interested in how learning algorithms can enable robots to acquire common sense, allowing them to learn a variety of complex sensorimotor skills in real-world settings. She completed her bachelor’s degree at MIT and has also spent time at Google Brain.

Richard and Chelsea, thank you so much for being here.

Chelsea: Happy to be here.

Richard: As am I.

Ariel: Now normally I spend time putting together questions for the guests, but today Richard and Chelsea chose the topics. Many of the breakthroughs they’re excited about were more about behind-the-scenes technical advances that may not have been quite as exciting for the general media. However, there was one exception to that, and that’s AlphaZero.

AlphaZero, which was DeepMind’s follow-up to AlphaGo, made a big splash with the popular press in December when it achieved superhuman skill at chess, shogi, and Go without any help from humans. So Richard and Chelsea, I’m hoping you can tell us more about what AlphaZero is, how it works and why it’s a big deal. Chelsea, why don’t we start with you?

Chelsea: Yeah, so DeepMind first started with developing AlphaGo a few years ago, and AlphaGo started its learning by watching human experts play, watching how they choose moves and how they analyze the board — and then, once it had started from that human expertise, it continued learning on its own.

What’s exciting about AlphaZero is that the system started entirely on its own without any human knowledge. It started just by what’s called “self-play,” where the agent, where the artificial player is essentially just playing against itself from the very beginning and learning completely on its own.

And I think that one of the really exciting things about this research and this result was that AlphaZero was able to outperform the original AlphaGo program, and in particular was able to outperform it by removing the human expertise, by removing the human input. And so I think that this suggests that maybe if we could move towards removing the human biases and removing the human input and move more towards what’s called unsupervised learning, where these systems are learning completely on their own, then we might be able to build better and more capable artificial intelligence systems.

Ariel: And Richard, is there anything you wanted to add?

Richard: So, what was particularly exciting about AlphaZero is that it’s able to do this with essentially a technique very similar to what Paul Christiano, of AI safety fame, has called “capability amplification.” It’s similar in that it’s learning a function to predict a prior, or an expectation, over which moves are likely at a given point, as well as a function to predict which player will win, and it’s able to do these in an iterative manner. It’s able to apply what’s called an “amplification scheme” in the more general sense. In this case it was Monte Carlo tree search, but in the more general case it could be other, more appropriate amplification schemes for taking a simple function and iterating it many times to make it stronger, essentially producing a stronger, amplified function whose results are then summarized back into the simpler one.
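
A toy sketch of that amplify-and-distill loop is below, shrunk to a trivially small game in which players take one or two stones and whoever takes the last stone wins. A plain value table stands in for the neural network and a one-ply lookahead stands in for Monte Carlo tree search; the game, learning rate, and exploration scheme are all invented for illustration and are far simpler than the real system.

```python
import random

# A toy AlphaZero-style loop on a tiny game: players alternately take 1 or 2
# stones from a pile, and whoever takes the last stone wins. The "network" is
# just a value table, the amplification step is a one-ply search that backs up
# current value estimates, and distillation nudges the table toward the
# search-backed result.
PILE = 12
value = {n: 0.0 for n in range(PILE + 1)}  # value for the player to move with n stones
value[0] = -1.0                            # no stones left: the previous player won

def legal_moves(n):
    return [m for m in (1, 2) if m <= n]

def amplified_value_and_move(n):
    """Search amplification: look one move ahead and back up the negated
    value estimate of the resulting position (it is the opponent's turn)."""
    best_move, best_val = None, -float("inf")
    for m in legal_moves(n):
        v = -value[n - m]
        if v > best_val:
            best_move, best_val = m, v
    return best_val, best_move

def self_play_and_train(games=5000, lr=0.2, explore=0.2):
    for _ in range(games):
        n = PILE
        while n > 0:
            target, move = amplified_value_and_move(n)
            # Distillation: move the plain value estimate toward the
            # stronger, search-backed estimate.
            value[n] += lr * (target - value[n])
            if random.random() < explore:       # occasional exploration
                move = random.choice(legal_moves(n))
            n -= move

self_play_and_train()
# Under optimal play, piles that are multiples of 3 are lost for the player
# to move; the learned values should come out near -1 there and near +1 elsewhere.
for n in range(1, PILE + 1):
    print(n, round(value[n], 2))
```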

Ariel: So I do have a quick follow up question here. With AlphaZero, it’s a program that’s living within a world that has very strict rules. What is the next step towards moving outside of that world with very strict rules and into the much messier real world?

Chelsea: That’s a really good point. The catch with these results, with these types of games (and even video games, which are a little bit messier than the strict rules of a board game) is that all of these games can be perfectly simulated. You can perfectly simulate what will happen when you make a certain move or take a certain action, whether in a video game, in the game of Go, in the game of chess, et cetera. And therefore, you can train these systems with many, many lifetimes of data.

The real physical world on the other hand, we can’t simulate. We don’t know how to simulate the complex physics of the real world. As a result, you’re limited by the number of robots that you have if you’re interested in robots, or if you’re interested in healthcare, you’re limited by the number of patients that you have. And you’re also limited by safety concerns, the cost of failure, et cetera.

I think that we still have a long way to go towards taking these sorts of advances into real world settings where there’s a lot of noise, there’s a lot of complexity in the environment, and I think that these results are inspiring, and we can take some of the ideas from these approaches and apply them to these sorts of systems, but we need to keep in mind that there are a lot of challenges ahead of us.

Richard: So between real-world systems and something like the game of Go, there are also incremental improvements, like introducing support for partial observability, or more stochastic environments, or more continuous environments as opposed to the very discrete ones. For these challenges, assuming that we have a situation where we can actually simulate what we would like to see, or use a simulation to help get training data on the fly, then in those cases we’re likely to be able to make some progress using a technique like this, with some extensions or modifications to support those criteria.

Ariel: Okay. Now, I’m not sure if this is a natural jump to the next topic or not, but you’ve both mentioned that one of the big things that you saw happening last year were new creative approaches to unsupervised learning, and Richard in an email to me you mentioned “word translation without parallel data.” So I was hoping you could talk a little bit more about what these new creative approaches are and what you’re excited about there.

Richard: So this year, we saw an application of vector spaces, or word embeddings, which are essentially these multidimensional spaces where the relationships between points are semantically meaningful. The space itself is learned by a relatively shallow deep-learning network, but the meaningfulness that is imbued in the space can actually be used. We saw this year that by taking vector spaces trained in different languages, created from corpora of different languages, and comparing them with techniques that rationalize the differences between those spaces, we’re actually able to translate words between language pairs in ways that, in some cases, exceed supervised approaches, which typically rely on parallel sets of documents that have the same meaning in different languages. In this case, we’re able to do something very similar to what the Star Trek universal translator does: by consuming enough of the alien language, or the foreign language I should say, it’s able to model the relationships between concepts and then realign those with the concepts that are already known.

Chelsea, would you like to comment on that?

Chelsea: I don’t think I have too much to add. I’m also excited about the translation results and I’ve also seen similar, I guess, works that are looking at unsupervised learning, not for translation, that have a little bit of a similar vein, but they’re fairly technical in terms of the actual approach.

Ariel: Yeah, I’m wondering if either of you want to try to take a stab at explaining how this works without mentioning vector spaces?

Richard: That’s difficult because it is a space, I mean it’s a very geometric concept, and it’s because we’re aligning shapes within that space that we actually get the magic happening.

Ariel: So would it be something like you have different languages going in, some sort of document or various documents from different languages going in, and this program just sort of maps them into this space so that it figures out which words are parallel to each other then?

Richard: Well, it figures out the relationships between words, and based on the shape of those relationships, it’s able to take those shapes and rotate them so that they match up.

Chelsea: Yeah, perhaps it could be helpful to give an example. I think that generally in language you’re trying to get across concepts, and there is structure within the language. I mean, there’s the structure that you learn about in grade school when you’re learning vocabulary: you learn about verbs, you learn about nouns, you learn about people, and you learn about different words that describe these different things. And different languages share this sort of structure in terms of what they’re trying to communicate.

And so, what these algorithms do is they are given basically data of people talking in English, or people writing documents in English, and they’re also given data in another language — and the first one doesn’t necessarily need to be English. They’re given data in one language and data in another language. This data doesn’t match up. It’s not like one document that’s been translated into another, it’s just pieces of language, documents, conversations, et cetera, and by using the structure that exists, and the data such as nouns, verbs, animals, people, it can basically figure out how to map from the structure of one language to the structure of another language. It can recognize this similar structure in both languages and then figure out basically a mapping from one to the other.
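
To make the geometry a little more concrete, here is a minimal sketch of the alignment step in code. It uses small synthetic “embeddings” and assumes the word correspondences are already known, so that the classical orthogonal Procrustes solution applies; the actual unsupervised systems, such as the “word translation without parallel data” work Richard mentions, have to discover the correspondences too (for example adversarially), so treat this purely as an illustration of the rotation idea. All sizes and the noise level are invented for the example.

```python
# Sketch: two embedding "spaces" with shared structure can be aligned by a rotation.
import numpy as np

rng = np.random.default_rng(0)

d, n_words = 5, 1000
X = rng.normal(size=(n_words, d))            # "language one" word vectors (synthetic)

# Build a hidden ground-truth rotation and pretend Y is the "other language":
# same relational structure, different orientation, plus a little noise.
A = rng.normal(size=(d, d))
Q_true, _ = np.linalg.qr(A)                  # a random orthogonal matrix
Y = X @ Q_true + 0.01 * rng.normal(size=(n_words, d))

# Orthogonal Procrustes: find the rotation W minimizing ||X W - Y||_F.
# (Unsupervised methods also have to discover which row matches which; we cheat
# and assume the rows already correspond, to keep the sketch short.)
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

print("rotation recovered:", np.allclose(W, Q_true, atol=0.05))

# Once aligned, a word in space X is "translated" by mapping it with W and taking
# its nearest neighbour in space Y.
query = X[42] @ W
nearest = int(np.argmin(np.linalg.norm(Y - query, axis=1)))
print("nearest neighbour of word 42 after mapping:", nearest)   # expect 42
```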

Ariel: Okay. So I think, I want to keep moving forward, but continuing with the concept of learning, and Chelsea I want to stick with you for a minute. You mentioned that there were some really big metalearning advances that occurred last year, and you also mentioned a workshop and symposium at NIPS. I was wondering if you could talk a little more about that.

Chelsea: Yeah, I think that there’s been a lot of excitement around metalearning, or learning to learn. There were two gatherings at NIPS this year, one symposium and one workshop, and both were well attended. Actually, metalearning has a fairly long history, so it’s by no means a recent or a new topic, but I think that it has renewed attention within the machine learning community.

And so, I guess I can describe metalearning. It’s essentially having systems that learn how to learn. There are a number of different applications for such systems. One of them is an application that’s often referred to as AutoML, or automatic machine learning, where these systems can essentially optimize the hyperparameters, basically figure out the best set of hyperparameters and then run a learning algorithm with them, essentially taking over the job of the machine learning researcher who is tuning different models on different data sets. And this can basically allow people to more easily train models on a data set.
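
As a concrete picture of that first application, here is a minimal sketch of hyperparameter search: an outer loop that tries candidate settings and keeps whichever does best on held-out data. The model (ridge regression), the data, and the search range are all invented for the example; real AutoML systems search far richer spaces with far smarter strategies.

```python
# Sketch: random search over a single hyperparameter (regularization strength).
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data with noise and irrelevant features.
n, d = 200, 20
X = rng.normal(size=(n, d))
true_w = np.concatenate([rng.normal(size=5), np.zeros(d - 5)])
y = X @ true_w + 0.5 * rng.normal(size=n)

X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]


def fit_ridge(X, y, lam):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)


def val_error(lam):
    w = fit_ridge(X_train, y_train, lam)
    return np.mean((X_val @ w - y_val) ** 2)


# Random search: sample candidate hyperparameters, keep the best on validation data.
candidates = 10 ** rng.uniform(-4, 2, size=30)
best_lam = min(candidates, key=val_error)
print(f"best lambda ~ {best_lam:.4g}, validation MSE = {val_error(best_lam):.3f}")
```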

Another application of metalearning that I’m really excited about is enabling systems to reuse data and reuse experience from other tasks when trying to solve new tasks. So in machine learning, there’s this paradigm of creating everything from scratch, and as a result, if you’re training from scratch, from zero prior knowledge, then it’s going to take a lot of data. It’s going to take a lot of time to train because you’re starting from nothing. But if instead you’re starting from previous experience in a different environment or on a different task, and you can basically learn how to efficiently learn from that data, then when you see a new task that you haven’t seen before, you should be able to solve it much more efficiently.

And so, one example of this is what’s called One-Shot Learning or Few-Shot Learning, where you learn essentially how to learn from a few examples, such that when you see a new setting and you just get one or a few examples, labeled examples, labeled data points, you can figure out the new task and solve the new task just from a small number of examples.

One explicit example of how humans do this is that you can have someone point out a Segway to you on the street, and even if you’ve never seen a Segway before or never heard of the concept of a Segway, just from that one example of a human pointing it out to you, you can then recognize other examples of Segways. And the way that you do that is basically by learning how to recognize objects over the course of your lifetime.
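
Here is a toy sketch of that “learning to learn” loop in code. It is loosely inspired by MAML-style methods (the line of work Chelsea herself is known for) but is not a faithful reimplementation: the tasks are simple synthetic linear-regression problems, the model is linear, the meta-update uses the first-order shortcut, and every number in it is invented for the example.

```python
# Sketch: meta-learn an initialization so that ONE gradient step on a handful of
# examples from a brand-new task already gives a reasonable model.
import numpy as np

rng = np.random.default_rng(2)


def sample_task():
    """Each task is a 1D linear function y = a*x + b with task-specific a, b."""
    return rng.uniform(0.5, 2.5), rng.uniform(-1.0, 1.0)


def sample_data(task, k):
    a, b = task
    x = rng.uniform(-3, 3, size=k)
    return x, a * x + b


def loss_and_grad(theta, x, y):
    """Squared-error loss and gradient for the model y_hat = theta[0]*x + theta[1]."""
    err = theta[0] * x + theta[1] - y
    grad = np.array([np.mean(2 * err * x), np.mean(2 * err)])
    return np.mean(err ** 2), grad


theta = np.zeros(2)                        # the meta-learned initialization
inner_lr, outer_lr, k_shot = 0.05, 0.01, 5

for step in range(3000):
    task = sample_task()
    x_support, y_support = sample_data(task, k_shot)   # a few labeled examples
    x_query, y_query = sample_data(task, 20)           # held-out examples, same task

    # Inner loop: adapt to this task with one gradient step from the shared init.
    _, g_support = loss_and_grad(theta, x_support, y_support)
    adapted = theta - inner_lr * g_support

    # Outer loop (first-order shortcut): nudge the initialization using the adapted
    # model's gradient on the query set, so adaptation works better on the next task.
    _, g_query = loss_and_grad(adapted, x_query, y_query)
    theta = theta - outer_lr * g_query

# Meta-test: a brand-new task, five examples, one gradient step.
task = sample_task()
x_s, y_s = sample_data(task, k_shot)
adapted = theta - inner_lr * loss_and_grad(theta, x_s, y_s)[1]
x_t, y_t = sample_data(task, 100)
print("post-adaptation test MSE:", round(loss_and_grad(adapted, x_t, y_t)[0], 3))
```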

Ariel: And are there examples of programs doing this already? Or we’re just making progress towards programs being able to do this more effectively?

Chelsea: There are some examples of programs being able to do this in terms of image recognition. There’s been a number of works that have been able to do this with real images. I think that more recently we’ve started to see systems being applied to robotics, which I think is one of the more exciting applications of this setting because when you’re training a robot in the real world, you can’t have the robot collect millions of data points or days of experience in order to learn a single task. You need it to share and reuse experiences from other tasks when trying to learn a new task.

So one example of this is that you can have a robot be able to manipulate a new object that it’s never seen before based on just one demonstration of how to manipulate that object from a human.

Ariel: Okay, thanks.

I want to move to a topic that is obviously of great interest to FLI, and that is technical safety advances that occurred last year. Again in an email to me, you both mentioned “inverse reward design” and “deep reinforcement learning from human preferences” as two areas related to the safety issue that were advanced last year. I was hoping you could both talk a little bit about what you saw happening last year that gives you hope for developing safer AI and beneficial AI.

Richard: So, as I mentioned, both inverse reward design and deep reinforcement learning from human preferences are exciting papers that came out this year.

So inverse reward design is where the AI system is trying to understand what the original designer or the original user intends for the system to do. If it’s in some new setting, a test setting where some potentially problematic new things have been introduced relative to training time, then it tries specifically to back those out or to mitigate their effects, so that’s kind of exciting.

Deep reinforcement learning from human preferences is an algorithm for very efficiently getting feedback from humans based on trajectories, in the context of reinforcement learning systems. So these are systems that are trying to learn some way to plan, let’s say a path through a game environment, or in general trying to learn a policy of what to do in a given scenario. This algorithm, deep RL from human preferences, shows little snippets of potential paths to humans and has them simply choose which is better, very similar to what goes on at an optometrist: does A look better or does B look better? And just from that, very sophisticated behaviors can be learned from human preferences at a scale that was not possible before.
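
A minimal sketch of the reward-learning step Richard describes might look like the following. The “human” here is simulated by a hidden reward vector, the reward model is linear in some made-up trajectory features, and all of the numbers are invented for the example; the actual work uses deep networks, real human raters, and trajectory snippets from a live reinforcement learning agent, so this only illustrates the underlying comparison-fitting idea.

```python
# Sketch: fit a reward model from pairwise comparisons ("A looks better than B")
# using a Bradley-Terry-style logistic model on reward differences.
import numpy as np

rng = np.random.default_rng(3)

d = 4
true_w = np.array([1.0, -2.0, 0.5, 0.0])      # hidden preferences of the simulated "human"


def segment_features():
    """Stand-in for the summed features of a short trajectory snippet."""
    return rng.normal(size=d)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))


# Collect comparisons: the simulated human prefers the snippet with higher true reward.
comparisons = []
for _ in range(2000):
    f_a, f_b = segment_features(), segment_features()
    label = 1.0 if f_a @ true_w > f_b @ true_w else 0.0   # 1 means "A looked better"
    comparisons.append((f_a, f_b, label))

# Fit w so that P(A preferred) = sigmoid(r(A) - r(B)), with a linear reward r(x) = w @ x.
w = np.zeros(d)
lr = 0.05
for epoch in range(50):
    for f_a, f_b, label in comparisons:
        p = sigmoid(w @ (f_a - f_b))
        w += lr * (label - p) * (f_a - f_b)     # logistic-regression gradient step

# The learned reward should point in roughly the same direction as the hidden one.
cos = w @ true_w / (np.linalg.norm(w) * np.linalg.norm(true_w))
print("cosine similarity with the hidden reward:", round(float(cos), 3))
```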

Ariel: Chelsea, is there anything that you wanted to add?

Chelsea: Yeah. So in general, I guess, going back to AlphaZero and going back to games in general, there’s a very clear objective for achieving the goal, which is whether or not you won the game, or your score at the game. It’s very clear what the objective is and what each system should be optimizing for. AlphaZero, when playing Go, should be optimizing for winning the game, and if a system is playing Atari games, it should be optimizing for maximizing the score.

But in the real world, when you’re training systems, when you’re training agents to do things, when you’re training an AI to have a conversation with you, when you’re training a robot to set the table for you, there is no score function. The real world doesn’t just give you a score function, doesn’t tell you whether or not you’re winning or losing. And I think that this research is exciting and really important because it gives us another mechanism for telling robots, telling these AI systems how to do the tasks that we want them to do.

And the human preferences work, for example, instead of having us specify some sort of goal that we want the robot to achieve, or give it a demonstration of what we want it to achieve, or hand it some sort of reward function, lets us say, “okay, this is not what I want, this is what I want,” throughout the process of learning. And then as a result, at the end you can basically guarantee that if it was able to optimize for your preferences successfully, then you’ll end up with behavior that you’re happy with.

Ariel: Excellent. So I’m sort of curious, before we started recording, Chelsea, you were telling me a little bit about your own research. Are you doing anything with this type of work? Or is your work a little different?

Chelsea: Yeah. So more recently I’ve been working on metalearning, and some of the metalearning work that I talked about previously, like learning just from a single demonstration and reusing data and experience from other tasks, has been what I’ve been focusing on recently in terms of getting robots to be able to do things in the real world, such as manipulating objects, pushing objects around, using a spatula, stuff like that.

I’ve also done work on reinforcement learning where you essentially give a robot an objective, tell it to try to get the object as close as possible to the goal, and I think that the human preferences work provides a nice alternative to the classic setting, to the classic framework of reinforcement learning, that we could potentially apply to real robotic systems.

Ariel: Chelsea, I’m going to stick with you for one more question. In your list of breakthroughs that you’re excited about, one of the things that you mentioned is very near and dear to my heart, and that was better communication, and specifically better communication of the research. And I was hoping you could talk a little bit about some of the websites and methods of communicating that you saw develop and grow last year.

Chelsea: Yes. I think that more and more we’re seeing researchers put their work out in blog posts and try to make their work more accessible to the average user by explaining it in terms that are easier to understand, by motivating it in words that are easier for the average person to understand and I think that this is a great way to communicate the research in a clear way to a broader audience.

In addition, I’ve been quite excited about an effort, I think led by Chris Olah, on building what is called distill.pub. It’s a website and a journal, an academic journal, that tries to move away from this paradigm of publishing research on paper, on trees essentially. Because we have such rich digital technology that allows us to communicate in many different ways, it makes sense to move past just completely written forms of research dissemination. And I think that’s what distill.pub does, is it allows us, allows researchers to communicate research ideas in the form of animations, in the form of interactive demonstrations on a computer screen, and I think this is a big step forward and has a lot of potential in terms of moving forward the communication of research, the dissemination of research among the research community as well as beyond to people that are less familiar with the technical concepts in the field.

Ariel: That sounds awesome, Chelsea, thank you. And distill.pub is probably pretty straightforward, but we’ll still link to it on the post that goes along with this podcast if anyone wants to click straight through.

And Richard, I want to switch back over to you. You mentioned that there was more impressive output from GANs last year, generative adversarial networks.

Richard: Yes.

Ariel: Can you tell us what a generative adversarial network is?

Richard: So a generative adversarial network is an AI system with two parts: essentially a generator or creator that comes up with novel artifacts, and a critic that tries to determine whether what’s being generated is a good or legitimate or realistic type of thing. Both are learned in parallel as training data is streamed into the system, and in this way the generator learns relatively efficiently how to create things that are good or realistic.
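
Here is a minimal sketch of that two-part setup, written against PyTorch (assumed to be installed) and targeting a toy one-dimensional “dataset”. The network sizes, learning rates and target distribution are all invented for the example; it is only meant to show the training loop Richard describes, and even this toy version may not match the target spread exactly, since GAN training is notoriously finicky.

```python
# Sketch: a tiny GAN whose generator tries to imitate samples from a 1-D Gaussian.
import torch
import torch.nn as nn

# Generator: noise -> sample.  Discriminator/critic: sample -> "is this real?" score.
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()


def real_batch(n=64):
    # The "dataset": samples from a 1-D Gaussian the generator must learn to imitate.
    return torch.randn(n, 1) * 1.5 + 4.0


for step in range(5000):
    # Train the critic: label real samples 1, generated samples 0.
    x_real = real_batch()
    x_fake = G(torch.randn(64, 1)).detach()
    loss_d = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator: try to make the critic call its samples real.
    x_fake = G(torch.randn(64, 1))
    loss_g = bce(D(x_fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 1))
print("generated mean/std:", samples.mean().item(), samples.std().item())  # aim: ~4.0 / ~1.5
```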

Ariel: So I was hoping you could talk a little bit about what you saw there that was exciting.

Richard: Sure, so new architectures and new algorithms and simply more horsepower as well have led to more impressive output. Particularly exciting are conditional generative adversarial networks, where there can be structured biases or new types of inputs that one wants to base some output around.

Chelsea: Yeah, one thing to potentially add is that I think the research on GANs is really exciting, and I think that it will not only produce advances in generating images of realistic quality, but also in generating other types of things, like generating behavior potentially, or generating speech, or generating language. We haven’t seen as many advances in those areas as in generating images; thus far the most impressive advances have been in generating images. I think that those are areas to watch out for as well.

One thing to be concerned about in terms of GANs is the ability for people to generate fake images, fake videos of different events happening and putting those fake images and fake videos into the media, because while there might be ways to detect whether or not these images are made-up or are counterfeited essentially, the public might choose to believe something that they see. If you see something, you’re very likely to believe it, and this might exacerbate all of the, I guess, fake news issues that we’ve had recently.

Ariel: Yeah, so that actually brings up something that I did want to get into, and honestly, that, Chelsea, what you just talked about, is some of the scariest stuff I’ve seen, just because it seems like it has the potential to create sort of a domino effect of triggering all of these other problems just with one fake video. So I’m curious, how do we address something like that? Can we? And are there other issues that you’ve seen crop in the last year that also have you concerned?

Chelsea: I think there are potentially ways to address the problem. If it seems like it’s becoming a real danger in the imminent future, then I think that media websites, including social media websites, should take measures to detect fake images and fake videos and either prevent them from being displayed or put up a warning that the content was detected as fake, to explicitly try to mitigate the effects.

But, that said, I haven’t put that much thought into it. I do think it’s something that we should be concerned about, and the potential solution that I mentioned, I think that even if it can help solve some of the problems, I think that we don’t have a solution to the problem yet.

Ariel: Okay, thank you. I want to move on to the last question that I have that you both brought up, and that was, last year we saw an increased discussion of fairness in machine learning. And Chelsea, you mentioned there was a NIPS tutorial on this and the keynote mentioned it at NIPS as well. So I was hoping you could talk a bit about what that means, what we saw happen, and how you hope this will play out to better programs in the future.

Chelsea: So, there’s been a lot of discussion in how we can build machine-learning systems, build AI systems such that when they make decisions, they are fair and they aren’t biased. And all this discussion has been around fairness in machine learning, and actually one of the interesting things about the discussion from a technical point of view is how you even define fairness and how you define removing biases and such, because a lot of the biases are inherent to the data itself. And how you try to remove those biases can be a bit controversial.

Ariel: Can you give us some examples?

Chelsea: So one example is, if you’re trying to build an autonomous car system that is trying to avoid hitting pedestrians, to recognize pedestrians and respond to them appropriately, and these systems are trained in environments and in communities that are predominantly of one race, for example in Caucasian communities, and you then deploy this system in settings where there are people of color and in other environments that it hasn’t seen before, then the resulting system won’t have as good accuracy in settings that it hasn’t seen before and will be inherently biased when, for example, it tries to recognize people of color. And this is a problem.

Some other examples of this: if machine learning systems are making decisions about who to give health insurance to, or if speech recognition systems are trying to recognize different people’s speech, and these systems are trained on a small part of the community that is not representative of the population as a whole, then they won’t be able to accurately make decisions about the entire population. Or if they’re trained on data that was collected by humans and that carries the same biases as humans, then they will make the same mistakes and inherit the same biases that humans have.

I think that for the people who have been researching fairness in machine learning systems, unfortunately one of the conclusions they’ve reached so far is that there isn’t a one-size-fits-all solution to all of these different problems, and in many cases we’ll have to think about fairness in individual contexts.
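
As a small illustration of both points, the under-representation problem Chelsea describes and the difficulty of pinning “fairness” down to a single number, here is a toy sketch that trains a classifier on data dominated by one synthetic group and then reports two different per-group quantities, accuracy and positive-prediction rate. Everything in it (the groups, features, decision rules, and the use of scikit-learn’s logistic regression) is invented or chosen purely for the example; real fairness audits are far more involved.

```python
# Sketch: measure per-group accuracy and positive-prediction rate for a classifier
# trained on data that over-represents one group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)


def make_group(n, mean, w):
    """Each group has its own feature distribution AND its own ground-truth rule."""
    X = rng.normal(size=(n, 2)) + mean
    y = (X @ w + 0.3 * rng.normal(size=n) > 0).astype(int)
    return X, y


GROUPS = {
    "A": (np.array([0.0, 0.0]), np.array([1.0, 0.5])),    # well represented in training
    "B": (np.array([1.5, -1.0]), np.array([-0.5, 1.0])),  # under-represented
}

Xa, ya = make_group(2000, *GROUPS["A"])
Xb, yb = make_group(100, *GROUPS["B"])
clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: two different "fairness" readouts.
for name, (mean, w) in GROUPS.items():
    X_test, y_test = make_group(1000, mean, w)
    pred = clf.predict(X_test)
    acc = (pred == y_test).mean()      # accuracy parity: is it equally accurate per group?
    pos_rate = pred.mean()             # demographic parity: equal positive-prediction rates?
    print(f"group {name}: accuracy={acc:.2f}, positive rate={pos_rate:.2f}")
```

The two readouts will generally disagree about how “unfair” the model is, which is one concrete version of the definitional problem Chelsea raises next.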

Richard: Chelsea, you mentioned that some of the remediations for fairness issues in machine learning are themselves controversial. Can you go into an example or so about that?

Chelsea: Yeah, I guess part of what I meant there is that even coming up with a definition for what is fair is unclear. It’s unclear what even the problem specification is, and without a problem specification, without a definition of what you want your system to be doing, creating a system that’s fair is a challenge if you don’t have a definition for what fair is.

Richard: I see.

Ariel: So then, my last question to you both, as we look towards 2018, what are you most excited or hopeful to see?

Richard: I’m very hopeful for the FLI grants program that we announced at the very end of 2017 leading to some very interesting and helpful AI safety papers and AI safety research in general that will build on past research and break new ground and will enable additional future research to be built on top of it to make the prospect of general intelligence safer and something that we don’t need to fear as much. But that is a hope.

Ariel: And Chelsea, what about you?

Chelsea: I think I’m excited to see where metalearning goes. I think that there’s a lot more people that are paying attention to it and starting to research into “learning to learn” topics. I’m also excited to see more advances in machine learning for robotics. I think that, unlike other fields in machine learning like machine translation, image recognition, et cetera, I think that robotics still has a long way to go in terms of being useful and solving a range of complex tasks and I hope that we can continue to make strides in machine learning for robotics in the coming year and beyond.

Ariel: Excellent. Well, thank you both so much for joining me today.

Richard: Sure, thank you.

Chelsea: Yeah, I enjoyed talking to you.

 

This podcast was edited by Tucker Davey.

Podcast: Beneficial AI and Existential Hope in 2018

For most of us, 2017 has been a roller coaster, from increased nuclear threats to incredible advancements in AI to crazy news cycles. But while it’s easy to be discouraged by various news stories, we at FLI find ourselves hopeful that we can still create a bright future. In this episode, the FLI team discusses the past year and the momentum we’ve built, including: the Asilomar Principles, our 2018 AI safety grants competition, the recent Long Beach workshop on Value Alignment, and how we’ve honored one of civilization’s greatest heroes.

Full transcript:

Ariel: I’m Ariel Conn with the Future of Life Institute. As you may have noticed, 2017 was quite the dramatic year. In fact, without me even mentioning anything specific, I’m willing to bet that you already have some examples forming in your mind of what a crazy year this was. But while it’s easy to be discouraged by various news stories, we at FLI find ourselves hopeful that we can still create a bright future. But I’ll let Max Tegmark, president of FLI, tell you a little more about that.

Max: I think it’s important when we reflect back on the year’s news to understand how things are all connected. For example, the drama we’ve been following with Kim Jong Un and Donald Trump and Putin and nuclear weapons is really very connected to all the developments in artificial intelligence, because in both cases we have a technology which is so powerful that it’s not clear that we humans have sufficient wisdom to manage it well. And that’s why I think it’s so important that we all continue working towards developing this wisdom further, to make sure that we can use these powerful technologies, like nuclear energy, artificial intelligence, and biotechnology, to really help rather than to harm us.

Ariel: And it’s worth remembering that part of what made this such a dramatic year was that there were also some really positive things that happened. For example, in March of this year, I sat in a sweltering room in New York City as a group of dedicated, caring individuals from around the world discussed how they planned to convince the United Nations to ban nuclear weapons once and for all. I don’t think anyone in the room that day realized that not only would they succeed, but by December of this year, the International Campaign to Abolish Nuclear Weapons, led by Beatrice Fihn, would be awarded the Nobel Peace Prize for their efforts. And while we did what we could to help that effort, our own big story had to be the Beneficial AI Conference that we hosted in Asilomar, California. Many of us at FLI were excited to talk about Asilomar, but I’ll let Anthony Aguirre, Max, and Victoria Krakovna start.

Anthony: I would say pretty unquestionably the big thing that I felt was most important and felt most excited about was the big meeting in Asilomar and centrally putting together the Asilomar Principles.

Max: I’m going to select the Asilomar conference that we organized early this year, whose output was the 23 Asilomar Principles, which has since been signed by over a thousand AI researchers around the world.

Vika: I was really excited about the Asilomar conference that we organized this year. This was the sequel to FLI’s Puerto Rico Conference, which was at the time a real game changer in terms of making AI safety more mainstream and connecting people working in AI safety with the machine learning community and integrating those two. I think Asilomar did a great job of continuing to build on that.

Max: I’m very excited about this because I feel that it really has helped mainstream AI safety work, not just near-term AI safety stuff, like how to transform today’s buggy and hackable computers into robust systems that you can really trust, but also larger issues. The Asilomar Principles actually contain the word superintelligence, contain the phrase existential risk, contain the phrase recursive self-improvement, and yet they have been signed by a real who’s who in AI. So from now on, it’s impossible for anyone to dismiss these kinds of concerns, this kind of safety research, by saying that it’s just people who have no clue about AI.

Anthony: That was a process that started in 2016, brainstorming at FLI and then with the wider community, and then getting rounds of feedback and so on. But it was exciting both to see how much cohesion there was in the community and how much support there was for getting behind some sort of principles governing AI. But also, just to see the process unfold, because one of the things that I’m quite frustrated about often is this sense that there’s this technology that’s just unrolling like a steamroller and it’s going to go where it’s going to go, and we don’t have any agency over where that is. And so to see people really putting thought into what is the world we would like there to be in ten, fifteen, twenty, fifty years, and how can we distill what it is that we like about that world into principles like these… that felt really, really good. It felt like an incredibly useful thing for society as a whole, and in this case for the people who are deeply engaged with AI, to be thinking through in a real way, rather than just asking how we can put out the next fire or how we can turn the progress one more step forward; to really think about the destination.

Ariel: But what’s that next step? How do we transition from Principles that we all agree on to actions that we can also all get behind? Jessica Cussins joined FLI later in the year, but when asked what she was excited about as far as FLI was concerned, she immediately mentioned the implementation of things like the Asilomar Principles.

Jessica: I’m most excited about the developments we’ve seen over the last year related to safe, beneficial and ethical AI. I think FLI has been a really important player in this. We had the Beneficial AI conference in January that resulted in the Asilomar AI Principles. It’s been really amazing to see how much traction those principles have gotten and to see a growing consensus around the importance of being thoughtful about the design of AI systems, the challenges of algorithmic bias, of data control and manipulation, and of accountability and governance. So the thing I’m most excited about right now is the growing number of initiatives we’re seeing around the world related to ethical and beneficial AI.

Anthony: What’s been great to see is the development of ideas both from FLI and from many other organizations of what policies might be good. What concrete legislative actions there might be or standards, organizations or non-profits, agreements between companies and so on might be interesting.

But I think we’re only at the step of formulating those things, and not that much action has been taken anywhere in terms of actually doing them; little bits of legislation here and there. But I think we’re getting to the point where lots of governments, lots of companies, lots of organizations are going to be publishing and creating and passing more and more of these things. I think seeing that play out, and working really hard to ensure that it plays out in a way that’s favorable in as many ways and for as many people as possible, is super important and something we’re excited to do.

Vika: I think that the Asilomar Principles are a great common point for the research community and others to agree on what we are going for, what’s important.

Besides having the principles as an output, the event itself was really good for building connections between different people from interdisciplinary backgrounds, from different related fields who are interested in the questions of safety and ethics.

And we also had this workshop that was adjacent to Asilomar where our grant winners actually presented their work. I think it was great to have a concrete discussion of research and the progress we’ve made so far and not just abstract discussions of the future, and I hope that we can have more such technical events, discussing research progress and making the discussion of AI safety really concrete as time goes on.

Ariel: And what is the current state of AI safety research? Richard Mallah took on the task of answering that question for the Asilomar conference, while Tucker Davey has spent the last year interviewing various FLI grant winners to better understand their work.

Richard: I presented a landscape of technical AI safety research threads. This lays out hundreds of different types of research areas and how they are related to each other. All different areas that need a lot more research going into them than they have today to help keep AI safe and beneficent and robust. I was really excited to be at Asilomar and to have co-organized Asilomar and that so many really awesome people were there and collaborating on these different types of issues. And that they were using that landscape that I put together as sort of a touchpoint and way to coordinate. That was pretty exciting.

Tucker: I just found it really inspiring interviewing all of our AI grant recipients. It’s kind of been an ongoing project, interviewing these researchers and writing about what they’re doing. Just for me, getting recently involved in AI, it’s been incredibly interesting to get half an hour or an hour with these researchers to talk in depth about their work and really to learn more about a research landscape that I hadn’t been aware of before working at FLI. Being a part of those interviews and learning more about the people we’re working with, these people who are really spearheading AI safety, was really inspiring.

Ariel: And with that, we have a big announcement.

Richard: So, FLI is launching a new grants program in 2018. This time around, we will be focusing more on artificial general intelligence, artificial superintelligence, and ways that we can do technical research and other kinds of research today, on today’s systems or things that we can analyze, model, or make theoretical progress on today, that are likely to still be relevant when AGI comes about. This is quite exciting, and I’m excited to be part of the ideation and administration around that.

Max: I’m particularly excited about the new grants program that we’re launching for AI safety research. Since AI safety research itself has become so much more mainstream, since we did our last grants program three years ago, there’s now quite a bit of funding for a number of near term challenges. And I feel that we at FLI should focus on things more related to challenges and opportunities from super intelligence, since there is virtually no funding for that kind of safety research. It’s going to be really exciting to see what proposals come in and what research teams get selected by the review panels. Above all, how this kind of research hopefully will contribute to making sure that we can use this powerful technology to create a really awesome future.

Vika: I think this grant program could really build on the impact of our previous grant program. I’m really excited that it’s going to focus more on long term AI safety research, which is still the most neglected area.

AI safety has really caught on in the past two years, and there’s been a lot more work on it going on, which is great. And part of what this means is that we at FLI can focus more on the long term. The long-term work has also been getting more attention, and this grant program can help us build on that and make sure that the important problems get solved. This is really exciting.

Max: I just came back from spending a week at the NIPS Conference, the biggest artificial intelligence conference of the year. It’s fascinating how rapidly everything is proceeding. AlphaZero has now defeated not just human chess players and Go players, but it has also defeated human AI researchers, who after spending 30 years handcrafting artificial intelligence software to play computer chess, got all their work completely crushed by AlphaZero, which just learned to do much better than that from scratch in four hours.

So, AI is really happening, whether we like it or not. The challenge we face is simply to complement that through AI safety research and a lot of good thinking to make sure that this helps humanity flourish rather than flounder.

Ariel: In the spirit of flourishing, FLI also turned its attention this year to the movement to ban lethal autonomous weapons. While there is great debate around how to define autonomous weapons and whether or not they should be developed, more people tend to agree that the topic should at least come before the UN for negotiations. And so we helped create the video Slaughterbots to help drive this conversation. I’ll let Max take it from here.

Max: Slaughterbots: autonomous little drones that can anonymously murder people without any human control. Fortunately, they don’t exist yet. We hope that an international treaty is going to keep it that way, even though we almost have the technology to build them already; we would just need to integrate and then mass-produce tech we already have. So to help with this, we made this video called Slaughterbots. It was really impressive to see it get over forty million views and make the news throughout the world. I was very happy that Stuart Russell, whom we partnered with on this, also presented it to the diplomats at the United Nations in Geneva when they were discussing whether to move towards a treaty, drawing a line in the sand.

Anthony: Pushing on the autonomous weapons front, it’s been really scary, I would say, to think through that issue. But a little bit like the issue of AI in general, there’s a potentially scary side but there’s also a potentially helpful side, in that I think this is an issue that is a little bit tractable. Even a relatively small group of committed individuals can make a difference. So I’m excited to see how much movement we can get on the autonomous weapons front. It doesn’t seem at all like a hopeless issue to me, and I think, and hope, that 2018 will be kind of a turning point for that issue. It’s kind of flown under the radar, but it really is coming up now, and it will be at least interesting. Hopefully it will be exciting and happy and so on as well as interesting. It will at least be interesting to see how it plays out on the world stage.

Jessica: For 2018, I’m hopeful that we will see the continued growth of the global momentum against lethal autonomous weapons. Already, this year a lot has happened at the United Nations and across communities around the world, including thousands of AI and robotics researchers speaking out and saying they don’t want to see their work used to create these kinds of destabilizing weapons of mass destruction. One thing I’m really excited for 2018 is to see a louder, rallying call for an international ban of lethal autonomous weapons.

Ariel: Yet one of the biggest questions we face when trying to anticipate autonomous weapons and artificial intelligence in general, and even artificial general intelligence – one of the biggest questions is: when? When will these technologies be developed? If we could answer that, then solving problems around those technologies could become both more doable and possibly more pressing. This is an issue Anthony has been considering.

Anthony: Of most interest has been the overall set of projects to predict artificial intelligence timelines and milestones. This is something that I’ve been doing through this prediction website, Metaculus, which I’ve been a part of, and also something where I took part in a very small workshop run by the Foresight Institute over the summer. It’s both a super important question, because I think the overall urgency with which we have to deal with certain issues really depends on how far away they are, and an instructive one, in that even posing the questions of what we want to know exactly really forces you to think through what it is that you care about, how you would estimate things, and what different considerations there are in terms of this sort of big question.

We have this sort of big question, like when is really powerful AI going to appear? But when you dig into that, what exactly is really powerful? What does appear mean? Does that mean in sort of an academic setting? Does it mean it becomes part of everybody’s life?

So there are all kinds of nuances to that overall big question that lots of people are asking. Just getting into refining the questions, trying to pin down what it is that we mean, and making them exact so that they can be things people can make precise and numerical predictions about, has been really, really interesting and elucidating to me in understanding what all the issues are. I’m excited to see how that continues to unfold as we get more questions and more predictions and more expertise focused on that. Also a little bit nervous, because the timelines seem to be getting shorter and shorter and the urgency of the issue seems to be getting greater and greater. So that’s a bit of a fire under us, I think, to keep acting and keep a lot of intense effort on making sure that as AI gets more powerful, we get better at managing it.

Ariel: One of the current questions AI researchers are struggling with is the problem of value alignment, especially when considering more powerful AI. Meia Chita-Tegmark and Lucas Perry recently co-organized an event to get more people thinking creatively about how to address this.

Meia: So we just organized a workshop about the ethics of value alignment together with a few partner organizations, the Berggruen Institute and also CFAR.

Lucas: This was a workshop that recently took place in California, and just to remind everyone, value alignment is the process by which we bring AI’s actions, goals, and intentions into alignment and accordance with what is deemed to be good, or with human values, preferences, goals, and intentions.

Meia: And we had a fantastic group of thinkers there. We had philosophers. We had social scientists, AI researchers, political scientists. We were all discussing this very important issue of how do we get an artificial intelligence that is aligned to our own goals and our own values.

It was really important to have the perspectives of ethicists and moral psychologists, for example, because this question is not just about the technical aspect of how do you actually implement it, but also about whose values do we want implemented and who should be part of the conversation and who gets excluded and what process do we want to establish to collect all the preferences and values that we want implemented in AI. That was really fantastic. It was a very nice start to what I hope will continue to be a really fruitful collaboration between different disciplines on this very important topic.

Lucas: I think one essential take-away from that was that value alignment is truly something that is interdisciplinary. It’s normally been something which has been couched and understood in the context of technical AI safety research, but value alignment, at least in my view, also inherently includes ethics and governance. It seems that the project of creating beneficial AI through efforts and value alignment can really only happen when we have lots of different people from lots of different disciplines working together on this supremely hard issue.

Meia: I think the issue with AI is something that … first of all, it concerns such a great number of people. It concerns all of us. It will impact, and it already is impacting, all of our experiences. And there are different disciplines that look at this impact in different ways.

Of course, technical AI researchers will focus on developing this technology, but it’s very important to think about how this technology co-evolves with us. For example, I’m a psychologist. I like to think about how it impacts our own psyche, how it impacts the way we act in the world, the way we behave. Stuart Russell many times likes to point out that one danger that can come with very intelligent machines is a subtle one: not necessarily what they will do, but what we will not do because of them. He calls this enfeeblement. What are the capacities that are being stifled because we no longer engage in some of the cognitive tasks that we’re now delegating to AIs?

So that’s just one example of how psychologists can help really bring more light and make us reflect on what it is that we want from our machines, how we want to interact with them, and how we want to design them such that they actually empower us rather than enfeeble us.

Lucas: Yeah, I think that one thing essential to FLI’s mission and goal is the generation of beneficial AI. To me, and I think to many other people coming out of this Ethics of Value Alignment conference, what beneficial exactly entails and what beneficial looks like is still a really open question, both in the short term and in the long term. I’d be really interested in seeing both FLI and other organizations pursue questions in value alignment more vigorously: issues with regard to the ethics of AI, and issues regarding values and the sort of world that we want to live in.

Ariel: And what sort of world do we want to live in? If you’ve made it this far through the podcast, you might be tempted to think that all we worry about is AI. And we do think a lot about AI. But our primary goal is to help society flourish. And so this year, we created the Future of Life Award to be presented to people who act heroically to ensure our survival and hopefully move us closer to that ideal world. Our inaugural award was presented in honor of Vasili Arkhipov who stood up to his commander on a Soviet submarine, and prevented the launch of a nuclear weapon during the height of tensions in the Cold War.

Tucker: One thing that particularly stuck out to me was our inaugural Future of Life Award and we presented this award to Vasili Arkhipov who was a Soviet officer in the Cold War and arguably saved the world and is the reason we’re all alive today. He’s now passed, but FLI presented a generous award to his daughter and his grandson. It was really cool to be a part of this because it seemed like the first award of its kind.

Meia: So, of course, with FLI we have all these big projects that take a lot of time. But I think for me, one of the more exciting and heartwarming and wonderful moments that I was able to experience due to our work here at FLI was a train ride from London to Cambridge with Elena and Sergei, the daughter and the grandson of Vasili Arkhipov. Vasili Arkhipov is the Russian naval officer who helped prevent nuclear war during the Cuban missile crisis. The Future of Life Institute awarded him the Future of Life Award this year. He is now dead, unfortunately, but his daughter and his grandson were there in London to receive it.

Vika: It was great to get to meet them in person and to all go on stage together and have them talk about their attitude towards the dilemma that Vasili Arkhipov had faced, and how it is relevant today, and how we should be really careful with nuclear weapons and protect our future. It was really inspiring.

At that event, Max was giving his talk about his book, and then at the end we had the Arkhipovs come up on stage, and it was kind of fun for me to translate their speech for the audience. I could not fully transmit all the eloquence, but I thought it was a very special moment.

Meia: It was just so amazing to really listen to their stories about the father, the grandfather, and look at photos that they had brought all the way from Moscow. This person who has become the hero for so many people that are really concerned about this essential risk, it was nice to really imagine him in his capacity as a son, as a grandfather, as a husband, as a human being. It was very inspiring and touching.

One of the nice things was they showed a photo of him that actually had notes he had written on the back of it. That was his favorite photo. And one of the comments he made is that he felt it was the most beautiful photo of himself because there was no glint in his eyes, just this pure sort of concentration. I thought that said a lot about his character. He rarely smiled in photos, and he always looked very pensive, very much like you’d imagine a hero who saved the world would be.

Tucker: It was especially interesting for me to work on the press release for this award and to reach out to people from different news outlets, like The Guardian and The Atlantic, and to actually see them write about this award.

I think something like the Future of Life Award is inspiring because it highlights people in the past that have done an incredible service to civilization, but I also think it’s interesting to look forward and think about who might be the future Vasili Arkhipov that saves the world.

Ariel: As Tucker just mentioned, this award was covered by news outlets like the Guardian and the Atlantic. And in fact, we’ve been incredibly fortunate to have many of our events covered by major news. However, there are even more projects we’ve worked on that we think are just as important and that we’re just as excited about that most people probably aren’t aware of.

Jessica: So people may not know that FLI recently joined the Partnership on AI. This was the group that was founded by Google, Amazon, Facebook, Apple, and others to think about issues like safety, fairness, and impact from AI systems. So I’m excited about this because I think it’s really great to see this kind of social commitment from industry, and it’s going to be critical to have the support and engagement from these players to really see AI being developed in a way that’s positive for everyone. So I’m really happy that FLI is now one of the partners of what will likely be an important initiative for AI.

Anthony: I attended the first meeting of the Partnership on AI in October. And at that meeting there was so much discussion of some of the principles themselves, directly but also in a broad sense; so much discussion from all of the key organizations that are engaged with AI, almost all of whom had representation there, about how we are going to make these things happen. If we value transparency, if we value fairness, if we value safety and trust in AI systems, how are we going to actually get together and formulate best practices and policies, and groups and data sets and things, to make all that happen? And to see the speed at which, I would say, the field has moved from purely “wow, we can do this” to “how are we going to do this right, how are we going to do this well, and what does this all mean” has been a ray of hope, I would say.

AI is moving so fast, but it was good to see that the sort of wisdom race hasn’t been conceded entirely; that there’s a dedicated group of people working really hard to figure out how to do it well.

Ariel: And then there’s Dave Stanley, who has been the force around many of the behind-the-scenes projects that our volunteers have been working on that have helped FLI grow this year.

Dave: Another project that has very much been ongoing and relates more to the website is basically our effort to take the English content on the website that’s been fairly influential in English-speaking countries, about AI safety and nuclear weapons, and make it available in a lot of other languages to maximize the impact that it’s having.

Right now, thanks to the efforts of our volunteers, we have 55 translations available on our website right now in nine different languages, which are Russian, Chinese, French, Polish, Spanish, German, Hindi, Japanese, and Korean. All in all, this represents about 1000 hours of volunteer time put in by our volunteers. I’d just like to give a shoutout to some of the volunteers who have been involved. They are Alan Yan, Kevin Wang, Kazue Evans, Jake Beebe, Jason Orlosky, Li Na, Bena Lim, Alina Kovtun, Ben Peterson, Carolyn Wu, Zhaoran Joanna Wang, Mayumi Nakamura, Derek Su, Dipti Pandey, Marvin, Vera Koroleva, Grzegorz Orwiński, Szymon Radziszewicz, Natalia Berezovskaya, Vladimir Nimensky, Natalia Kuzmenko, George Godula, Eric Gastfriend, Olivier Grondin, Claire Park, Kristy Wen, Yishuai Du, and Revathi Vinoth Kumar.

Ariel: As we’ve worked to establish AI safety as a global effort, Dave and the volunteers were behind the trip Richard took to China, where he participated in the Global Mobile Internet Conference in Beijing earlier this year.

Dave: So basically, this was something that was actually prompted and largely organized by one of FLI’s volunteers, George Godula, who’s based in Shanghai right now.

Basically, this is partially motivated by the fact that recently, China’s been promoting a lot of investment in artificial intelligence research, and they’ve made it a national objective to become a leader in AI research by 2025. So FLI and the team have been making some efforts to basically try to build connections with China and raise awareness about AI safety, at least our view on AI safety and engage in dialogue there.

It culminated with George organizing this trip for Richard, and a large portion of the FLI volunteer team participating in support for that trip: identifying contacts for Richard to connect with over there, researching the landscape, and providing general support. And that’s been coupled with an effort to take some of the existing articles that FLI has on the website about AI safety and translate them into Chinese to make them accessible to that audience.

Ariel: In fact, Richard has spoken at many conferences, workshops and other events this year, and he’s noted a distinct shift in how AI researchers view AI safety.

Richard: This is a single example of many of these things I’ve done throughout the year. Yesterday I gave a talk about AI safety and beneficence to a bunch of machine learning and artificial intelligence researchers and entrepreneurs in Boston, here where I’m based. Every time I do this, it’s really fulfilling that so many of these people, who really are pushing the leading edge of what AI does in many respects, realize that these are extremely valid concerns and that there are new types of technical avenues to help keep things better for the future. The fact that I’m not receiving pushback anymore, compared to many years ago when I would talk about these things, shows that people really are trying to gauge and understand and kind of weave themselves into whatever is going to turn into the best outcome for humanity, given the type of leverage that advanced AI will bring us. I think people are starting to really get what’s at stake.

Ariel: And this isn’t just the case among AI researchers. Throughout the year, we’ve seen this discussion about AI safety broaden into various groups outside of traditional AI circles, and we’re hopeful this trend will continue in 2018.

Meia: I think that 2017 has been fantastic for starting this project of getting more thinkers from different disciplines to really engage with the topic of artificial intelligence, but I think we have just managed to scratch the surface of this topic and this collaboration. So I would really like to work more on strengthening this conversation and this flow of ideas between different disciplines. I think we can achieve so much more if we can make sure that we hear each other, that we go past our own disciplinary jargon, and that we truly are able to communicate and join each other in research projects where we can bring different tools and different skills to the table.

Ariel: The landscape on AI safety research that Richard presented at Asilomar at the start of the year was designed to enable greater understanding among researchers. Lucas rounded off the year with another version of the landscape. This one looking at ethics and value alignment with the goal, in part, of bringing more experts from other fields into the conversation.

Lucas: One thing that I’m also really excited about for next year is seeing our conceptual landscapes of both AI safety and value alignment being used in more educational contexts, and in contexts in which they can foster interdisciplinary conversations regarding issues in AI. Their virtue is that they map out the conceptual landscape of AI safety and of value alignment, but also include definitions and descriptions of jargon. Given this, each functions as a means by which you can introduce people to AI safety, value alignment, and AI risk, but it also serves as a means of introducing experts to the conceptual mappings of the spaces that other experts are engaged with, so they can learn each other’s jargon and really have conversations that are fruitful and streamlined.

Ariel: As we look to 2018, we hope to develop more programs, work on more projects, and participate in more events that will help draw greater attention to the various issues we care about. We hope to not only spread awareness, but also to empower people to take action to ensure that humanity continues to flourish in the future.

Dave: There are a few things coming up that I’m really excited about. The first one is that we’re going to try to release some new interactive apps on the website. Hopefully these will be pages that can gather a lot of attention and educate people about the issues that we’re focused on, mainly nuclear weapons, answering questions to give people a better picture of the geopolitical and economic factors that motivate countries to keep their nuclear weapons, and how this relates to public support, based on polling data, for whether the general public wants to keep these weapons or not.

Meia: One thing that made me very excited in 2017, and whose evolution I’m looking forward to seeing in 2018, was the public’s engagement with this topic. I’ve had the luck to be in the audience for many of the book talks that Max has given for his book “Life 3.0: Being Human in the Age of Artificial Intelligence,” and it was fascinating just listening to the questions. They’ve become so much more sophisticated and nuanced than a few years ago. I’m very curious to see how this evolves in 2018, and I hope that FLI will contribute to this conversation and to making it richer. I think I’d like people in general to engage with this topic much more, and to refine their understanding of it.

Tucker: Well, I think in general it’s been amazing to watch FLI this year because we’ve made big splashes in so many different things: the Asilomar conference, our Slaughterbots video, helping with the nuclear ban. But one thing that I’m particularly interested in is working more this coming year to engage my generation on these topics. I sometimes sense a lot of defeatism and hopelessness in people in my generation, a feeling that there’s nothing we can do to solve civilization’s biggest problems. Being at FLI has given me the opposite perspective. Sometimes I’m still subject to that defeatism, but working here really gives me a sense that we can actually do a lot to solve these problems. I’d really like to find ways to engage more people in my generation and make them feel like they actually have some sense of agency to solve a lot of our biggest challenges.

Ariel: Learn about these issues and more, join the conversation, and find out how you can get involved by visiting futureoflife.org.

[end]


Podcast: Balancing the Risks of Future Technologies with Andrew Maynard and Jack Stilgoe

What does it mean for technology to “get it right,” and why do tech companies ignore long-term risks in their research? How can we balance near-term and long-term AI risks? And as tech companies become increasingly powerful, how can we ensure that the public has a say in determining our collective future?

To discuss how we can best prepare for societal risks, Ariel spoke with Andrew Maynard and Jack Stilgoe on this month’s podcast. Andrew directs the Risk Innovation Lab in the Arizona State University School for the Future of Innovation in Society, where his work focuses on exploring how emerging and converging technologies can be developed and used responsibly within an increasingly complex world. Jack is a senior lecturer in science and technology studies at University College London where he works on science and innovation policy with a particular interest in emerging technologies.

The following transcript has been edited for brevity, but you can listen to the podcast above or read the full transcript here.

Ariel: Before we get into anything else, could you first define what risk is?

Andrew: The official definition of risk looks at the potential of something to cause harm, but it also looks at the probability. Say you’re looking at exposure to a chemical: risk is all about the hazardous nature of that chemical, its potential to cause some sort of damage to the environment or the human body, and then the exposure that translates that potential into some sort of probability. That is typically how we think about risk when we’re looking at regulating things.

I actually think about risk slightly differently, because that concept of risk runs out of steam really fast, especially when you’re dealing with uncertainties, existential risk, and perceptions about risk when people are trying to make hard decisions and they can’t make sense of the information they’re getting. So I tend to think of risk as a threat to something that’s important or of value. That thing of value might be your health, it might be the environment; but it might be your job, it might be your sense of purpose or your sense of identity or your beliefs or your religion or your politics or your worldview.

As soon as we start thinking about risk in that sense, it becomes much broader, much more complex, but it also allows us to explore that intersection between different communities and their different ideas about what’s important and worth protecting.
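To make the regulatory notion Andrew starts from concrete, here is a minimal sketch (not from the podcast; the function name and all numbers are illustrative assumptions) of risk as hazard severity weighted by probability of exposure:

```python
# Toy illustration of the regulatory definition of risk Andrew describes:
# the severity of the hazard weighted by the probability of exposure.
# All values are made up for illustration.

def regulatory_risk(hazard_severity: float, exposure_probability: float) -> float:
    return hazard_severity * exposure_probability

# A severe hazard (9/10) with very low exposure probability can score lower
# than a mild hazard (3/10) with high exposure probability.
print(regulatory_risk(9, 0.001))  # 0.009
print(regulatory_risk(3, 0.2))    # 0.6
```

Andrew’s broader framing, risk as a threat to anything of value, is precisely what this simple calculation leaves out.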

Jack: I would draw attention to all of those things that are incalculable. When we are dealing with new technologies, they are often things to which we cannot assign probabilities and we don’t know very much about what the likely outcomes are going to be.

I think there is also a question of what isn’t captured when we talk about risk. Not all of the impacts of technology might be considered risk impacts. I’d say that we should also pay attention to all the things that are not to do with technology going wrong, but are also to do with technology going right. Technologies don’t just create new risks, they also benefit some people more than others. And they can create huge inequalities. If they’re governed well, they can also help close inequalities. But if we just focus on risk, then we lose some of those other concerns as well.

Andrew: Jack, so this obviously really interests me because to me an inequality is a threat to something that’s important to someone. Do you have any specific examples of what you think about when you think about inequalities or equality gaps?

Jack: Before we get into examples, the important thing is to bear in mind a trend with technology, which is that technology tends to benefit the powerful. That’s an overall trend before we talk about any specifics, which quite often goes against the rhetoric of technological change, because, often, technologies are sold as being emancipatory and helping the worst off in society – which they do, but typically they also help the better off even more. So there’s that general question.

I think in the specific, we can talk about what sorts of technologies do close inequities and which tend to exacerbate inequities. But it seems to me that just defining that as a social risk isn’t quite getting there.

Ariel: I would consider increasing inequality to be a risk. Can you guys talk about why it’s so hard to get agreement on what we actually define as a risk?

Andrew: People very quickly slip into defining risk in very convenient ways. So if you have a company or an organization that really wants to do something – and that doing something may be all the way from making a bucket load of money to changing the world in the ways they think are good – there’s a tendency for them to define risk in ways that benefit them.

So, for instance, if you are the maker of an incredibly expensive drug, and you work out that that drug is going to be beneficial in certain ways with minimal side effects, but it’s only going to be available to a very small number of very rich people, you can easily define risk in terms of the things that your drug does not do, so you can claim with confidence that this is a risk-free or low-risk product. But that’s an approach where you work out where the big risks are with your product and you bury them, and you focus on the things where you think there is not a risk with your product.

That sort of extends across many, many different areas – this tendency to bury the big risks associated with a new technology and highlight the low risks to make your tech look much better than it is so you can reach the aims that you’re trying to achieve.

Jack: I quite agree, Andrew. I think what tends to happen is that the definition of risk gets socialized as being that stuff that society’s allowed to think about whereas the benefits are sort of privatized. The innovators are there to define who benefits and in what ways.

Andrew: I would agree. Though it also gets quite complex in terms of the social dialogue around that and who actually is part of those conversations and who has a say in those conversations.

To get back to your point, Ariel, I think there are a lot of organizations and individuals that want to do what they think is the right thing. But they also want the ability to decide for themselves what the right thing is rather than listening to other people.

Ariel: How do we address that?

Andrew: It’s a knotty problem, and it has its roots in how we are as people and as a society, how we’ve evolved. I think there are a number of ways forward toward beginning to pick apart the problem. A lot of those are associated with work carried out in the social sciences and humanities around how you make these processes more inclusive, how you bring more people to the table, how you begin listening to different perspectives and different sets of values and incorporating them into decisions rather than marginalizing groups that are inconvenient.

Jack: If you regard these things as legitimately political discussions rather than just technical discussions, then the solution is to democratize them and to try to wrest control over the direction of technology away from just the innovators and to see that as the subject of proper democratic conversation.

Andrew: And there are some very practical things here. This is where Jack and I might actually diverge in our perspectives. But from a purely business sense, if you’re trying to develop a new product or a new technology and get it to market, the last thing you can afford to do is ignore the nature of the population, the society that you’re trying to put that technology into. Because if you do, you’re going to run up against roadblocks where people decide they either don’t like the tech or they don’t like the way that you’ve made decisions around it or they don’t like the way that you’ve implemented it.

So from a business perspective, taking a long-term strategy, it makes far more sense to engage with these different communities and develop a dialogue around them so you understand the nature of the landscape that you’re developing a technology into. You can see ways of partnering with communities to make sure that that technology really does have a broad beneficial impact.

Ariel: Why do you think companies resist doing that?

Andrew: I think we’ve had centuries of training that says you don’t ask awkward questions because they potentially lead to you not being able to do what you want to do. It’s partly the mentality around innovation. But, also, it’s hard work. It takes a lot of effort, and it actually takes quite a lot of humility as well.

Jack: There’s a sort of well-defined law in technological change, which is that we overestimate the effect of technology in the short term and underestimate the effect of technology in the long term. Given that companies and innovators have to make short time horizon decisions, often they don’t have the capacity to take on board these big world-changing implications of technology.

If you look at something like the motorcar, it would have been inconceivable for Henry Ford to have imagined the world in which his technology would exist in 50 years’ time. We now know that the motorcar led to the reshaping of large parts of America, and to an absolutely catastrophic level of public health risk, while also bringing about clear benefits of mobility. But those are big, long-term changes that evolve very slowly, far more slowly than any company could appreciate.

Andrew: So can I play devil’s advocate here, Jack? With hindsight should Henry Ford have developed his production line process differently to avoid some of the impacts we now see of motor vehicles?

Jack: You’re right to say that with hindsight it’s really hard to see what he might have done differently, because the point is that the changes I was talking about are systemic ones, with responsibility shared across large parts of the system. Now, could we have done better at anticipating some of those things? Yes, I think we could have, and I think had motorcar manufacturers talked to regulators and civil society at the time, they could have anticipated some of those things, because there are also barriers that stop innovators from anticipating. There are actually things that force innovators’ time horizons to narrow.

Andrew: That’s one of the points that really interests me. It’s not a case of “do we, don’t we” with a certain technology, but could we do things better so that we see more of the longer-term benefits and fewer of the hurdles that maybe we could have avoided if we had been a little smarter from the get-go?

Ariel: But how much do you think we can actually anticipate?

Andrew: Well, the basic answer is very little indeed. The one thing that we know about anticipating the future is that we’re always going to get it wrong. But I think that we can put plausible bounds around likely things that are going to happen. Simply from what we know about how people make decisions and the evidence around that, we know that if you ignore certain pieces of information, certain evidence, you’re going to make worse decisions in terms of projecting or predicting future pathways than if you’re actually open to evaluating different types of evidence.

By evidence, I’m not just meaning the scientific evidence, but I’m also thinking about what people believe or hold as valuable within society and what motivates them to do certain things and react in certain ways. All of that is important evidence in terms of getting a sense of what the boundaries are of a future trajectory.

Jack: Yes, we will always get our predictions wrong, but if anticipation is about preparing us for the future rather than predicting the future, then rightness or wrongness isn’t really the target. Instead, I would draw attention to the history of cases in which there has been willful ignorance of particular perspectives or particular evidence that has only been recognized later – which, as you know better than anybody, includes evidence of public health risk that has been swept under the carpet. We have to look first at the sorts of incentives that prompt innovators to overlook that evidence.

Andrew: I think that’s so important. It’s worthwhile bringing up the “Late Lessons from Early Warnings” report that came out of Europe a few years ago, which was a series of case studies of technological innovations over the last 100 years or so, looking at where innovators, companies, and even regulators either missed important early warnings or willfully ignored them, and how that led to far greater adverse impacts than there really should have been. I think there are a lot of lessons to be learned from those.

Ariel: I’d like to take that and move into some more specific examples now. Jack, I know you’re interested in self-driving vehicles. I was curious, how do we start applying that to these new technologies that will probably be, literally, on the road soon?

Jack: It’s extremely convenient for innovators to define risks in particular ways that suit their own ambitions. I think you see this in the way that the self-driving cars debate is playing out. In part, that’s because the debate is a largely American one and it emanates from an American car culture.

Here in Europe, we see a very different approach to transport with a very different emerging debate. Take the trolley problem, the classic example of a risk issue that engineers very conveniently treat as an algorithmic challenge: how do we maximize public benefit and reduce public risk? Here in Europe, where our transport systems are complicated and multimodal, and where our cities are complicated, messy things, the risks of self-driving cars start to expand pretty substantially in all sorts of dimensions.

So the sorts of concerns that I would see for the future of self-driving cars relate more to what are sometimes called second order consequences. What sorts of worlds are these technologies likely to enable? What sorts of opportunities are they likely to constrain? I think that’s a far more important debate than the debate about how many lives a self-driving car will either save or take in its algorithmic decision-making.

Andrew: Jack, you have referred to the trolley problem as trolleys and follies. One of the things I really grapple with, and I think it’s very similar to what you were saying, is that the trolley problem seems to be a false or a misleading articulation of risk. It’s something which is philosophical and hypothetical, but actually doesn’t seem to bear much relation to the very real challenges and opportunities that we’re grappling with with these technologies.

Now, the really interesting thing here is, I get really excited about the self-driving vehicle technologies, partly living here in Tempe where Google and Uber and various other companies are testing them on the road now. But you have quite a different perspective in terms of how fast we’re going with the technology and how little thought there is into the longer term social consequences. But to put my full cards on the table, I can’t wait for better technologies in this area.

Jack: Well, without wishing to be too congenial, I am also excited about the potential for the technology. But what I know about past technology suggests that it may well end up gloriously suboptimal. I’m interested in a future involving self-driving cars that might actually realize some of the enormous benefits, for example, bringing accessibility to people who currently can’t drive, and the enormous benefits to public safety and to congestion. But making that work will not just involve a repetition of the current dynamics of technological change. I think current ownership models in the US and current modes of transport in the US just are not conducive to making that happen. So I would love to see governments taking control of this and actually making it work, in the same way as governments in the past have taken control of transport and built public-value transport systems.

Ariel: If governments are taking control of this and they’re having it done right, what does that mean?

Jack: The first thing that I don’t see any of within the self-driving car debate, because I just think we’re at too early a stage, is an articulation of what we want from self-driving cars. We have the Google vision, the Waymo vision, of the benefits of self-driving cars, which is largely about public safety. But there’s no consideration of what it would take to get that right. I think that’s going to look very different. To an extent, Tempe is an easy case, because the roads in Arizona are extremely well organized, it’s sunny, and pedestrians behave themselves. But what you’re not going to be able to do is take that technology, transport it to central London, and expect it to do the same job.

So some understanding of desirable systems across different places is really important. That, I’m afraid, does mean sharing control between the innovators and the people who have responsibility for public safety, public transport and public space.

Andrew: Even though most people in this field and other similar fields are doing it for what they claim is future benefit and the public good, there’s a huge gap between good intentions of doing the right thing and actually being able to achieve something positive for society. I think the danger is that good intentions go bad very fast if you don’t have the right processes and structures in place to translate them into something that benefits society. To do that, you’ve got to have partnerships and engagement with the agencies and authorities that have oversight over these technologies, but also with the communities and the people that are either going to be impacted by them or benefit from them.

Jack: I think that’s right. Just letting the benefits as stated by the innovators speak for themselves hasn’t worked in the past, and it won’t work here. We have to allow some sort of democratic discussion about that.

Ariel: I want to move forward in the future to more advanced technology, looking at more advanced artificial intelligence, even super intelligence. How do we address risks that are associated with that when a large number of researchers don’t even think this technology can be developed, or if it is developed, it’s still hundreds of years away? How do you address these really big unknowns and uncertainties?

Andrew: That’s a huge question. So I’m speaking here as something of a cynic of some of the projections of superintelligence. I think you’ve got to develop a balance between near and mid-term risks, but at the same time, work out how you take early action on trajectories so you’re less likely to see the emergence of those longer-term existential risks. One of the things that actually really concerns me here is if you become too focused on some of the highly speculative existential risks, you end up missing things which could be catastrophic in a smaller sense in the near to mid-term.

Pouring millions upon millions of dollars into solving a hypothetical problem around superintelligence and the threat to humanity sometime in the future, at the expense of looking at nearer-term things such as algorithmic bias, autonomous decision-making that cuts people out of the loop and a whole number of other things, is a risk balance that doesn’t make sense to me. Somehow, you’ve got to deal with these emerging issues, but in a way which is sophisticated enough that you’re not setting yourself up for problems in the future.

Jack: I think getting that balance right is crucial. I agree with your assessment that that balance is far too much, at the moment, in the direction of the speculative and long-term. One of the reasons why it is, is because that’s an extremely interesting set of engineering challenges. So I think the question would be on whose shoulders does the responsibility lie for acting once you recognize threats or risks like that? Typically, what you find when a community of scientists gathers to assess risks is that they frame the issue in ways that lead to scientific or technical solutions. It’s telling, I think, that in the discussion about superintelligence, the answer, either in the foreground or in the background, is normally more AI not less AI. And the answer is normally to be delivered by engineers rather than to be governed by politicians.

That said, I think there’s sort of cause for optimism if you look at the recent campaign around autonomous weapons. That would seem to be a clear recognition of a technologically mediated issue where the necessary action is not on the part of the innovators themselves but on all the people who are in control of our armed forces.

Andrew: I think you’re exactly right, Jack. I should clarify that even though there is a lot of discussion around speculative existential risks, there is also a lot of action on nearer-term issues such as the lethal autonomous weapons. But one of the things that I’ve been particularly struck with in conversations is the fear amongst technologists of losing control over the technology and the narrative. I’ve had conversations where people have said that they’re really worried about the potential down sides, the potential risks of where artificial intelligence is going. But they’re convinced that they can solve those problems without telling anybody else about them, and they’re scared that if they tell a broad public about those risks that they’ll be inhibited in doing the research and the development that they really want to do.

That really comes down to not wanting to relinquish control over technology. But I think that there has to be some relinquishment there if we’re going to have responsible development of these technologies that really focuses on how they could impact people both in the short as well as the long-term, and how as a society we find pathways forwards.

Ariel: Andrew, I’m really glad you brought that up. That’s one that I’m not convinced by, this idea that if we tell the public what the risks are, then suddenly the researchers won’t be able to do the research they want. Do you see that as a real risk for researchers?

Andrew: I think there is a risk there, but it’s rather complex. Most of the time, the public actually doesn’t care about these things. There are one or two examples – genetically modified organisms are the one that always comes up – but that is a very unique and very distinct example. Most of the time, if you talk broadly about what’s happening with a new technology, people will say, “that’s interesting,” and get on with their lives. So there’s much less risk in talking about it than I think people realize.

The other thing, though, is that even if there is a risk of people saying “hold on a minute, we don’t like what’s happening here,” it’s better to have that feedback sooner rather than later, because the reality is that people are going to find out what’s happening. If they discover that, as a company or a research agency or a scientific group, you’ve been doing things that are dangerous and you haven’t been telling them about it, when they find out after the fact, people get mad. That’s where things get really messy.

[What’s also] interesting – you’ve got a whole group of people in the technology sphere who are very clearly trying to do what they think is the right thing. They’re not in it primarily for fame and money, but they’re in it because they believe that something has to change to build a beneficial future.

The challenge is, these technologists, if they don’t realize the messiness of working with people and society and they think just in terms of technological solutions, they’re going to hit roadblocks that they can’t get over. So this to me is why it’s really important that you’ve got to have the conversations. You’ve got to take the risk to talk about where things are going with the broader population. You’ve got to risk your vision having to be pulled back a little bit so it’s more successful in the long-term.

Ariel: I was hoping you could both touch on the impact of media as well and how that’s driving the discussion.

Jack: I think blaming the media is always the convenient thing to do. They’re the convenient target. The question is really about the culture, which is extremely technologically utopian and which wants to believe that there are simple technological solutions to some of our most pressing problems. In that culture, it is understandable if seemingly seductive ideas, whether about artificial intelligence or about new transport systems, take hold. I would love there to be a more skeptical attitude, so that when those sorts of claims are made, just as when any sort of political claim is made, they are scrutinized and become the starting point for a vigorous debate about the world we want to live in. I think that is exactly what is missing from our current technological discourse.

Andrew: The media is a product of society. We are titillated by extreme, scary scenarios. The media is a medium through which that actually happens. I work a lot with journalists, and I’ve had very few experiences with being misrepresented or misquoted where it wasn’t my fault in the first place.

So I think we’ve got to think of two things when we think of media coverage. First of all, we’ve got to get smarter in how we actually communicate, and by we I mean the people that feel we’ve got something to say here. We’ve got to work out how to communicate in a way that makes sense with the journalists and the media that we’re communicating through. We’ve also got to realize that even though we might be outraged by a misrepresentation, that usually doesn’t get as much traction in society as we think it does. So we’ve got to be a little bit more laid back about how we see things reported.

Ariel: Is there anything else that you think is important to add?

Andrew: I would just sort of wrap things up. There has been a lot of agreement, but actually, and this is an important thing, it’s because most people, including people that are often portrayed as just being naysayers, are trying to ask difficult questions so we can actually build a better future through technology and through innovation in all its forms. I think it’s really important to realize that just because somebody asks difficult questions doesn’t mean they’re trying to stop progress, but they’re trying to make sure that that progress is better for everybody.

Jack: Hear, hear.

Podcast: AI Ethics, the Trolley Problem, and a Twitter Ghost Story with Joshua Greene and Iyad Rahwan

As technically challenging as it may be to develop safe and beneficial AI, this challenge also raises some thorny questions regarding ethics and morality, which are just as important to address before AI is too advanced. How do we teach machines to be moral when people can’t even agree on what moral behavior is? And how do we help people deal with and benefit from the tremendous disruptive change that we anticipate from AI?

To help consider these questions, Joshua Greene and Iyad Rahwan kindly agreed to join the podcast. Josh is a professor of psychology and a member of the Center for Brain Science faculty at Harvard University, where his lab has used behavioral and neuroscientific methods to study moral judgment, focusing on the interplay between emotion and reason in moral dilemmas. He’s the author of Moral Tribes: Emotion, Reason and the Gap Between Us and Them. Iyad is the AT&T Career Development Professor and an associate professor of Media Arts and Sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. He created the Moral Machine, which is “a platform for gathering human perspective on moral decisions made by machine intelligence.”

In this episode, we discuss the trolley problem with autonomous cars, how automation will affect rural areas more than cities, how we can address potential inequality issues AI may bring about, and a new way to write ghost stories.

This transcript has been heavily edited for brevity. You can read the full conversation here.

Ariel: How do we anticipate that AI and automation will impact society in the next few years?

Iyad: AI has the potential to extract better value from the data we’re collecting from all the gadgets, devices and sensors around us. We could use this data to make better decisions, whether it’s micro-decisions in an autonomous car that takes us from A to B safer and faster, or whether it’s medical decision-making that enables us to diagnose diseases better, or whether it’s even scientific discovery, allowing us to do science more effectively, efficiently and more intelligently.

Joshua: Artificial intelligence also has the capacity to displace human value. To take the example of using artificial intelligence to diagnose disease. On the one hand it’s wonderful if you have a system that has taken in all of the medical knowledge we have in a way that no human could and uses it to make better decisions. But at the same time that also means that lots of doctors might be out of a job or have a lot less to do. This is the double-edged sword of artificial intelligence, the value it creates and the human value that it displaces.

Ariel: Can you explain what the trolley problem is and how it connects to the question of what autonomous vehicles should do in situations where there is no good option?

Joshua: One of the original versions of the trolley problem goes like this (we’ll call it “the switch case”): A trolley is headed towards five people and if you don’t do anything, they’re going to be killed, but you can hit a switch that will turn the trolley away from the five and onto a side track. However on that side track, there’s one unsuspecting person and if you do that, that person will be killed.

The question is: is it okay to hit the switch to save those five people’s lives, but at the cost of one life? In this case, most people tend to say yes. Then we can vary it a little bit. In “the footbridge case,” the situation changes as follows: the trolley is now headed towards five people on a single track; over that track is a footbridge, and on that footbridge is a large person wearing a very large backpack. You’re also on the bridge, and the only way that you can save those five people from being hit by the trolley is to push that big person off of the footbridge and onto the tracks below.

Assume that it will work, do you think it’s okay to push the guy off the footbridge in order to save five lives? Here, most people say no, and so we have this interesting paradox. In both cases, you’re trading one life for five, yet in one case it seems like it’s the right thing to do, in the other case it seems like it’s the wrong thing to do.

One of the classic objections to these dilemmas is that they’re unrealistic. My view is that the point is not that they’re realistic, but instead that they function like high contrast stimuli. If you’re a vision researcher and you’re using flashing black and white checkerboards to study the visual system, you’re not using that because that’s a typical thing that you look at, you’re using it because it’s something that drives the visual system in a way that reveals its structure and dispositions.

In the same way, these high contrast, extreme moral dilemmas can be useful to sharpen our understanding of the more ordinary processes that we bring to moral thinking.

Iyad: The trolley problem can translate, in a cartoonish way, to a scenario in which an autonomous car is faced with only two options. The car is going at the speed limit on a street and, due to mechanical failure, is unable to stop and is going to hit a group of five pedestrians. The car can swerve and hit a bystander. Should the car swerve, or should it just plow through the five pedestrians?

This has a structure similar to the trolley problem because you’re making similar tradeoffs between one and five people and the decision is not being taken on the spot, it’s actually happening at the time of the programming of the car.

There is another complication, in which the person being sacrificed to save the greater number of people is the person in the car. Suppose the car can swerve to avoid the five pedestrians but, as a result, falls off a cliff. That adds another complication, especially since programmers are going to have to appeal to customers. If customers don’t feel safe in those cars because of some hypothetical situation in which they might be sacrificed, that pits the financial incentives against the potentially socially desirable outcome, which can create problems.

A question that raises itself is: is this ever going to happen? How many times do we face these kinds of situations as we drive today? So the argument goes: these situations are going to be so rare that they’re irrelevant, and autonomous cars promise to be so much safer than the human-driven cars we have today that the benefits significantly outweigh the costs.

There is obviously truth to this argument, if you take the trolley problem scenario literally. But what the autonomous car version of the trolley problem is doing is abstracting the tradeoffs that are taking place every microsecond, even now.

Imagine you’re driving on the road and there is a large truck in the lane to your left, and as a result you choose to stick a little bit further to the right, just to minimize risk in case the truck drifts out of its lane. Now suppose there could be a cyclist up ahead on the right-hand side. What you’re effectively doing with this small maneuver is slightly reducing the risk to yourself but slightly increasing the risk to the cyclist. These sorts of decisions are being made millions and millions of times every day.
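To make that microsecond tradeoff concrete, here is a toy sketch (not from the conversation; the function, weights, and probabilities are all illustrative assumptions) of choosing a lane position by minimizing weighted expected harm:

```python
# Toy illustration of the lane-position tradeoff Iyad describes: every
# candidate position shifts risk between road users, and the weights chosen
# encode a value judgment. All numbers are made up.

def expected_harm(offset_m, w_self=1.0, w_cyclist=1.0):
    """Weighted expected harm for a lateral offset (metres from lane centre,
    positive = away from the truck, toward the cyclist side)."""
    p_truck = max(0.0, 0.020 - 0.015 * offset_m)    # falls as we move away from the truck
    p_cyclist = 0.005 + 0.010 * offset_m            # rises as we move toward the cyclist
    return w_self * p_truck + w_cyclist * p_cyclist

candidates = [i / 10 for i in range(11)]  # lateral offsets from 0.0 m to 1.0 m
best = min(candidates, key=expected_harm)
print(f"chosen offset: {best:.1f} m")
```

Changing the ratio of w_self to w_cyclist changes the position the car chooses, which is exactly the kind of implicit value judgment the conversation turns to next.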

Ariel: Applying the trolley problem to self-driving cars seems to be forcing the vehicle and thus the programmer of the vehicle to make a judgment call about whose life is more valuable. Can we not come up with some other parameters that don’t say that one person’s life is more valuable than someone else’s?

Joshua: I don’t think there’s any way to avoid doing that. If you’re a driver, there’s no way to avoid answering the question of how cautious or how aggressive you’re going to be. You can decline to answer it explicitly; you can say, “I don’t want to think about that, I just want to drive and see what happens.” But you are going to be implicitly answering that question through your behavior, and in the same way, autonomous vehicles can’t avoid the question. Whether people are designing the machines, training the machines, or explicitly programming them to behave in certain ways, they are going to do things that affect the outcome.

The cars will constantly be making decisions that inevitably involve value judgments of some kind.

Ariel: To what extent have we actually asked customers what it is that they want from the car? In a completely ethical world, I would like the car to protect the person who’s more vulnerable, who would be the cyclist. In practice, I have a bad feeling I’d probably protect myself.

Iyad: We could say we want to treat everyone equally. But you also have this self-protective instinct, which presumably, as a consumer, is what you want to buy for yourself and your family; on the other hand, you also care about vulnerable people. Different reasonable and moral people can disagree on what the more important factors and considerations should be, and I think this is precisely why we have to think about this problem explicitly, rather than leave it purely to any particular single group of people – whether that’s programmers or car companies – to decide.

Joshua: When we think about problems like this, we have a tendency to binarize it, but it’s not a binary choice between protecting that person or not. It’s really going to be matters of degree. Imagine there’s a cyclist in front of you going at cyclist speed and you either have to wait behind this person for another five minutes creeping along much slower than you would ordinarily go, or you have to swerve into the other lane where there’s oncoming traffic at various distances. Very few people might say I will sit behind this cyclist for 10 minutes before I would go into the other lane and risk damage to myself or another car. But very few people would just blow by the cyclist in a way that really puts that person’s life in peril.

It’s a very hard question to answer because the answers don’t come in the form of something that you can write out in a sentence like, “give priority to the cyclist.” You have to say exactly how much priority in contrast to the other factors that will be in play for this decision. And that’s what makes this problem so interesting and also devilishly hard to think about.

Ariel: Why do you think this is something that we have to deal with when we’re programming something in advance and not something that we as a society should be addressing when it’s people driving?

Iyad: We very much value the convenience of getting from A to B. Our lifetime odds of dying in a car accident are more than 1%, yet somehow we’ve decided to put up with this because of the convenience. As long as people don’t run a red light and aren’t drunk, we don’t really blame them for fatal accidents; we just call them accidents.

But now, thanks to autonomous vehicles that can make decisions and reevaluate situations hundreds or thousands of times per second and adjust their plan and so on – we potentially have the luxury to make those decisions a bit better and I think this is why things are different now.

Joshua: With the human we can say, “Look, you’re driving, you’re responsible, and if you make a mistake and hurt somebody, you’re going to be in trouble and you’re going to pay the cost.” You can’t say that to a car, even a car that’s very smart by 2017 standards. The car isn’t going to be incentivized to behave better – the motivation has to be explicitly trained or programmed in.

Iyad: Economists say you can incentivize the people who make the cars to program them appropriately by fining them and engineering the product liability law in such a way that would hold them accountable and responsible for damages, and this may be the way in which we implement this feedback loop. But I think the question remains what should the standards be against which we hold those cars accountable.

Joshua: Let’s say somebody says, “Okay, I make self-driving cars and I want to make them safe because I know I’m accountable.” They still have to program or train the car. So there’s no avoiding that step, whether it’s done through traditional legalistic incentives or other kinds of incentives.

Ariel: I want to ask about some other research you both do. Iyad, you look at how AI and automation impact us, and whether that could be influenced by whether we live in smaller towns or larger cities. Can you talk about that?

Iyad: Clearly there are areas that may potentially benefit from AI because it improves productivity and it may lead to greater wealth, but it can also lead to labor displacement. It could cause unemployment if people aren’t able to retool and improve their skills so that they can work with these new AI tools and find employment opportunities.

Should we expect to experience this to a greater or lesser degree in smaller versus bigger cities? On the one hand, there are lots of creative jobs in big cities and, because creativity is so hard to automate, that should make big cities more resilient to these shocks. On the other hand, if you go back to Adam Smith and the idea of the division of labor, the whole idea is that individuals become really good at one thing. And this is precisely what spurred urbanization in the first industrial revolution. Even though the system is collectively more productive, individuals may be more automatable in terms of their narrowly defined tasks.

But when we did the analysis, we found that indeed larger cities are more resilient in relative terms. The preliminary findings are that in bigger cities there is more production that requires social interaction and very advanced skills like scientific and engineering skills. People are better able to complement the machines because they have technical knowledge, so they’re able to use new intelligent tools that are becoming available, but they also work in larger teams on more complex products and services.

Ariel: Josh, you’ve done a lot of work with the idea of “us versus them.” And especially as we’re looking in this country and others at the political situation where it’s increasingly polarized along this line of city versus smaller town, do you anticipate some of what Iyad is talking about making the situation worse?

Joshua: I certainly think we should be prepared for the possibility that it will make the situation worse. The central idea is that as technology advances, you can produce more and more value with less and less human input, although the human input that you need is more and more highly skilled.

If you look at something like TurboTax: before, you had lots and lots of accountants, and many of those accountants are being replaced by a smaller number of programmers, super-expert accountants, and people on the business side of these enterprises. If that continues, then yes, you have more and more wealth being concentrated in the hands of the people whose high skill levels complement the technology, and there is less and less for people with lower skill levels to do. Not everybody agrees with that argument, but I think it’s one that we ignore at our peril.

Ariel: Do you anticipate that AI itself would become a “them,” or do you think it would be people working with AI versus people who don’t have access to AI?

Joshua: The idea of the AI itself becoming the “them,” I am agnostic as to whether or not that could happen eventually, but this would involve advances in artificial intelligence beyond anything we understand right now. Whereas the problem that we were talking about earlier – humans being divided into a technological, educated, and highly-paid elite as one group and then the larger group of people who are not doing as well financially – that “us-them” divide, you don’t need to look into the future, you can see it right now.

Iyad: I don’t think that the robot will be the “them” on their own, but I think the machines and the people who are very good at using the machines to their advantage, whether it’s economic or otherwise, will collectively be a “them.” It’s the people who are extremely tech savvy, who are using those machines to be more productive or to win wars and things like that. There would be some sort of evolutionary race between human-machine collectives.

Joshua: I think it’s possible that people who are technologically enhanced could have a competitive advantage and set off an economic arms race or perhaps even literal arms race of a kind that we haven’t seen. I hesitate to say, “Oh, that’s definitely going to happen.” I’m just saying it’s a possibility that makes a certain kind of sense.

Ariel: Do either of you have ideas on how we can continue to advance AI and address these divisive issues?

Iyad: There are two new tools at our disposal: experimentation and machine-augmented regulation.

Today, [there are] cars with a bull bar in front of them. These metallic bars at the front of the car increase safety for the passengers in the case of a collision, but they have a disproportionate impact on other cars, on pedestrians, and on cyclists, and they’re much more likely to kill them in the case of an accident. By making this comparison, by identifying that cars with bull bars are worse for certain groups, the tradeoff was judged unacceptable, and many countries have banned them, for example the UK, Australia, and many European countries.

If a similar tradeoff were being caused by a software feature, we wouldn’t know unless we allowed for experimentation as well as monitoring – unless we looked at the data to identify whether a particular algorithm is making cars very safe for customers, but at the expense of a particular group.

In some cases, these systems are going to be so sophisticated and the data is going to be so abundant that we won’t be able to observe them and regulate them in time. Think of algorithmic trading programs. No human being is able to observe these things fast enough to intervene, but you could potentially insert another algorithm, a regulatory algorithm or an oversight algorithm, that will observe other AI systems in real time on our behalf, to make sure that they behave.
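A minimal sketch of what such an oversight algorithm might look like is below. It is hypothetical, not something described in the conversation: a monitor that watches the stream of trades from another automated system and halts it when simple, pre-agreed risk limits are breached. The class, thresholds, and interfaces are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    symbol: str
    quantity: int   # signed: positive = buy, negative = sell
    price: float

class OversightMonitor:
    """Hypothetical regulatory algorithm observing another AI system in real time."""

    def __init__(self, max_position=10_000, max_notional_per_trade=1_000_000):
        self.max_position = max_position
        self.max_notional_per_trade = max_notional_per_trade
        self.positions = {}   # symbol -> net quantity held
        self.halted = False

    def observe(self, trade: Trade) -> bool:
        """Return True if the trade may proceed, False if the system is halted."""
        if self.halted:
            return False
        notional = abs(trade.quantity * trade.price)
        new_position = self.positions.get(trade.symbol, 0) + trade.quantity
        if notional > self.max_notional_per_trade or abs(new_position) > self.max_position:
            self.halted = True   # in practice: raise an alert, cancel orders, notify a regulator
            return False
        self.positions[trade.symbol] = new_position
        return True
```

The point, as Iyad notes, is that the check happens at machine speed, on our behalf, rather than waiting for a human to notice.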

Joshua: There are two general categories of strategies for making things go well. There are technical solutions to things and then there’s the broader social problem of having a system of governance that can be counted on to produce outcomes that are good for the public in general.

The thing that I’m most worried about is that if we don’t get our politics in order, especially in the United States, we’re not going to have a system in place that’s going to be able to put the public’s interest first. Ultimately, it’s going to come down to the quality of the government that we have in place, and quality means having a government that distributes benefits to people in what we would consider a fair way and takes care to make sure that things don’t go terribly wrong in unexpected ways and generally represents the interests of the people.

I think we should be working on both of these in parallel. We should be developing technical solutions to more localized problems where you need an AI solution to solve a problem created by AI. But I also think we have to get back to basics when it comes to the fundamental principles of our democracy and preserving them.

Ariel: As we move towards smarter and more ubiquitous AI, what worries you most and what are you most excited about?

Joshua: I’m pretty confident that a lot of labor is going to be displaced by artificial intelligence. I think it is going to be enormously politically and socially disruptive, and I think we need to plan now. With self-driving cars especially in the trucking industry, I think that’s going to be the first and most obvious place where millions of people are going to be out of work and it’s not going to be clear what’s going to replace it for them.

I’m excited about the possibility of AI producing value for people in a way that has not been possible before on a large scale. Imagine if anywhere in the world that’s connected to the Internet, you could get the best possible medical diagnosis for whatever is ailing you. That would be an incredible life-saving thing. And as AI teaching and learning systems get more sophisticated, I think it’s possible that people could actually get very high quality educations with minimal human involvement and that means that people all over the world could unlock their potential. And I think that that would be a wonderful transformative thing.

Iyad: I’m worried about the way in which AI and specifically autonomous weapons are going to alter the calculus of war. In order to aggress on another nation, you have to mobilize humans, you have to get political support from the electorate, you have to handle the very difficult process of bringing back people in coffins, and the impact that this has on electorates.

This creates a big check on power, and it makes people think very hard about making these kinds of decisions. With AI, when you’re able to wage wars with very little loss of life, especially if you’re a very advanced nation at the forefront of this technology, then you have disproportionate power. It’s kind of like a nuclear weapon, but maybe more so, because it’s much more customizable. It’s not all-or-nothing – you could start all sorts of wars everywhere.

I think it’s going to be a very interesting shift in the way superpowers think about wars and I worry that this might make them trigger happy. I think a new social contract needs to be written so that this power is kept in check and that there’s more thought that goes into this.

On the other hand, I’m very excited about the abundance that will be created by AI technologies. We’re going to optimize the use of our resources in many ways. In health and in transportation, in energy consumption and so on, there are so many examples in recent years in which AI systems are able to discover ways in which even the smartest humans haven’t been able to optimize.

Ariel: One final thought: This podcast is going live on Halloween, so I want to end on a spooky note. And quite conveniently, Iyad’s group has created Shelley, which is a Twitter chatbot that will help you craft scary ghost stories. Shelley is, of course, a nod to Mary Shelley who wrote Frankenstein, which is the most famous horror story about technology. Iyad, I was hoping you could tell us a bit about how Shelley works.

Iyad: Yes, well this is our second attempt at doing something spooky for Halloween. Last year we launched the Nightmare Machine, which used deep neural networks and style-transfer algorithms to take ordinary photos and convert them into haunted houses and zombie-infested places. That was quite interesting; it was a lot of fun. More recently, we’ve launched Shelley, which people can visit at shelley.ai, and it is named after Mary Shelley, who authored Frankenstein.

This is a neural network that generates text, and it’s been trained on a very large dataset of over 100,000 short horror stories from a subreddit called No Sleep. So it’s basically got a lot of human knowledge about what makes things spooky and scary. The nice thing is that it generates part of the story, people can tweet a continuation of the story back at it, and then they basically take turns with the AI to craft stories. We feature those stories on the website afterwards. If I’m correct, this is the first collaborative human-AI horror-writing exercise ever.
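The turn-taking loop Iyad describes can be sketched in a few lines. The sketch below is hypothetical and is not Shelley’s actual code: a tiny bigram sampler stands in for the neural network trained on the No Sleep corpus, and a console prompt stands in for a reply tweet.

```python
import random
from collections import defaultdict

# Stand-in training text; Shelley itself was trained on 100,000+ horror stories.
CORPUS = (
    "the house was silent and the lights went out "
    "something moved in the dark and the door creaked open "
    "i heard my name whispered from the empty room"
)

def train_bigrams(text):
    """Build a word -> possible-next-words table from the corpus."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, seed_word="the", n_words=12):
    """Sample a short continuation starting from seed_word."""
    word, out = seed_word, [seed_word]
    for _ in range(n_words):
        choices = model.get(word)
        if not choices:
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

def collaborative_story(turns=2):
    model = train_bigrams(CORPUS)
    story = "it began when"
    for _ in range(turns):
        story += " " + generate(model)            # the AI's turn
        story += " " + input("Your turn: ")       # the human's turn (a tweet, in Shelley's case)
    return story
```

The real system swaps the bigram sampler for a far more capable neural text generator, but the alternation between machine fragment and human reply is the same.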