FLI Podcast: On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark

Neither Yuval Noah Harari nor Max Tegmark needs much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity’s future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise — in physics, artificial intelligence, history, philosophy and anthropology — to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies and the role of myths and stories in fostering societal collaboration and meaning. We hope that you’ll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us.

Topics discussed include:

  • Max and Yuval’s views and intuitions about consciousness
  • How they ground and think about morality
  • Effective altruism and its cause areas of global health/poverty, animal suffering, and existential risk
  • The function of myths and stories in human society
  • How emerging science, technology, and global paradigms challenge the foundations of many of our stories
  • Technological risks of the 21st century

Timestamps:

0:00 Intro

3:14 Grounding morality and the need for a science of consciousness

11:45 The effective altruism community and its main cause areas

13:05 Global health

14:44 Animal suffering and factory farming

17:38 Existential risk and the ethics of the long-term future

23:07 Nuclear war as a neglected global risk

24:45 On the risks of near-term AI and of artificial general intelligence and superintelligence

28:37 On creating new stories for the challenges of the 21st century

32:33 The risks of big data and AI-enabled human hacking and monitoring

47:40 What does it mean to be human and what should we want to want?

52:29 On positive global visions for the future

59:29 Goodbyes and appreciations

01:00:20 Outro and supporting the Future of Life Institute Podcast

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today, I’m excited to be bringing you a conversation between professor, philosopher, and historian Yuval Noah Harari and MIT physicist and AI researcher, as well as Future of Life Institute president, Max Tegmark. Yuval is the author of the popular science bestsellers Sapiens: A Brief History of Humankind, Homo Deus: A Brief History of Tomorrow, and 21 Lessons for the 21st Century. Max is the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence.

This episode covers a variety of topics related to the interests and work of both Max and Yuval. It requires some background knowledge for everything to make sense, so I’ll try to provide the necessary information here in the intro for listeners unfamiliar with Max’s work in particular. If you already feel well acquainted with Max’s work, feel free to skip ahead a minute or use the timestamps in the description for the podcast.

Topics discussed in this episode include: morality, consciousness, the effective altruism community, animal suffering, existential risk, the function of myths and stories in our world, and the benefits and risks of emerging technology. For those new to the podcast or effective altruism, effective altruism or EA for short is a philosophical and social movement that uses evidence and reasoning to determine the most effective ways of benefiting and improving the lives of others. And existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, to kill large swaths of the global population and leave the survivors unable to rebuild society to current living standards. Advanced emerging technologies are the most likely source of existential risk in the 21st century, for example through unfortunate uses of synthetic biology, nuclear weapons, and powerful future artificial intelligence misaligned with human values and objectives.

The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate

These contributions make it possible for us to bring you conversations like these and to develop the podcast further. You can also follow us on your preferred listening platform by searching for us directly or following the links on the page for this podcast found in the description. 

And with that, here is our conversation between Max Tegmark and Yuval Noah Harari.

Max Tegmark: Maybe to start at a place where I think you and I both agree, even though it’s controversial: I get the sense from reading your books that you feel that morality has to be grounded in experience, subjective experience, which is just what I like to call consciousness. I love the argument you’ve given, for example, to people who think consciousness is just bullshit and irrelevant: you challenge them to tell you what’s wrong with torture if it’s just a bunch of electrons and quarks moving around this way rather than that way.

Yuval Noah Harari: Yeah. I think that there is no morality without consciousness and without subjective experiences. At least for me, this is very, very obvious. One of my concerns, again, if I think about the potential rise of AI, is that AI will be superintelligent but completely non-conscious, which is something that we never had to deal with before. So much of the philosophical and theological discussion has been about what happens when there is a greater intelligence in the world. We’ve been discussing this for thousands of years, with God of course as the object of discussion, but the assumption always was that this greater intelligence would be A) conscious in some sense, and B) good, infinitely good.

And therefore I think that the question we are facing today is completely different, and to a large extent I suspect that we are really facing philosophical bankruptcy: what we’ve done for thousands of years didn’t really prepare us for the kind of challenge that we have now.

Max Tegmark: I certainly agree that we have a very urgent challenge there. I think there is an additional risk, which comes from the fact that, and I’m embarrassed as a scientist to admit this, we actually don’t know for sure which kinds of information processing are conscious and which are not. For many, many years, I’ve been told, for example, that it’s okay to put lobsters in hot water and boil them alive before we eat them because they don’t feel any suffering. And then I guess some guy asked the lobster, does this hurt? And it didn’t say anything, and it was a self-serving argument. But then a recent study came out showing that lobsters actually do feel pain, and they have now banned lobster boiling in Switzerland.

I’m very nervous whenever we humans make these very self-serving arguments saying, don’t worry about the slaves, it’s okay, they don’t feel, they don’t have a soul, they won’t suffer; or women don’t have a soul; or animals can’t suffer. I’m very nervous that we’re going to make the same mistake with machines just because it’s so convenient, when I feel the honest truth is: yeah, maybe future superintelligent machines won’t have any experience, but maybe they will. And I think we really have a moral imperative there to do the science to answer that question, because otherwise we might be creating enormous amounts of suffering that we don’t even know exists.

Yuval Noah Harari: For this reason and for several other reasons, I think we need to invest as much time and energy in researching consciousness as we do in researching and developing intelligence. If we develop sophisticated artificial intelligence before we really understand consciousness, there are a lot of really big ethical problems that we just don’t know how to solve. One of them is the potential existence of some kind of consciousness in these AI systems, but there are many, many others.

Max Tegmark: I’m so glad to hear you say this actually because I think we really need to distinguish between artificial intelligence and artificial consciousness. Some people just take for granted that they’re the same thing.

Yuval Noah Harari: Yeah, I’m really amazed by it. I’ve been having quite a lot of discussions about these issues in the last two or three years, and I’m repeatedly amazed that a lot of brilliant people just don’t understand the difference between intelligence and consciousness. It comes up in discussions about animals, but it also comes up in discussions about computers and about AI. To some extent the confusion is understandable, because in humans and other mammals and other animals, consciousness and intelligence really go together, but we can’t assume that this is a law of nature and that it’s always like that. In a very, very simple way, I would say that intelligence is the ability to solve problems. Consciousness is the ability to feel things like pain and pleasure and love and hate.

Now, in humans and chimpanzees and dogs and maybe even lobsters, we solve problems by having feelings. For a lot of the problems we solve, who to mate with and where to invest our money and who to vote for in the elections, we rely on our feelings to make these decisions, but computers make decisions in a completely different way. At least today, very few people would argue that computers are conscious, and still they can solve certain types of problems much, much better than we do.

They have high intelligence in a particular field without having any consciousness and maybe they will eventually reach superintelligence without ever developing consciousness. And we don’t know enough about these ideas of consciousness and superintelligence, but it’s at least feasible that you can solve all problems better than human beings and still have zero consciousness. You just do it in a different way. Just like airplanes fly much faster than birds without ever developing feathers.

Max Tegmark: Right. That’s definitely one of the reasons why people are so confused. There are two other reasons I’ve noticed, even among very smart people, why they are utterly confused on this. One is that there are so many different definitions of consciousness. Some people define consciousness in a way that’s almost equivalent to intelligence. But if you define it the way you did, as the ability to feel things, simply having subjective experience, then I think a lot of people get confused because they have always thought of subjective experience, and intelligence for that matter, as something mysterious that can only exist in biological organisms like us. Whereas what I think we’re really learning from the whole last century of progress in science is that no, intelligence and consciousness are all about information processing.

People fall prey to this carbon chauvinism idea that it’s only carbon or meat that can have these traits, whereas in fact it really doesn’t matter whether the information is processed by a carbon atom in a neuron in the brain or by a silicon atom in a computer.

Yuval Noah Harari: I’m not sure I completely agree. I mean, we still don’t have enough data on that. There doesn’t seem to be any reason that we know of that consciousness would be limited to carbon based life forms, but so far this is the case. So maybe we don’t know something. My hunch is that it could be possible to have non-organic consciousness, but until we have better evidence, there is an open possibility that maybe there is something about organic biochemistry, which is essential and we just don’t understand.

And also, the other open question is that we are not really sure that consciousness is just about information processing. I mean, at present this is the dominant view in the life sciences, but we don’t really know, because we don’t understand consciousness. My personal hunch is that non-organic consciousness is possible, but I wouldn’t say that we know that for certain. And the other point is that, really, if you think about it in the broadest sense possible, I think there is an entire potential universe of different conscious states and we know just a tiny, tiny bit of it.

Max Tegmark: Yeah.

Yuval Noah Harari: Again, thinking a little about different life forms: human beings are just one type of life form, and there are millions of other life forms that have existed and billions of potential life forms that never existed but might exist in the future. And it’s a bit like that with consciousness. We really know only human consciousness; we don’t even understand the consciousness of other animals, and beyond that there is potentially an infinite number of conscious states or traits that never existed and might exist in the future.

Max Tegmark: I agree with all of that. And if you can have non-organic consciousness, artificial consciousness, which would be my guess, although we don’t know it, then I think it’s quite clear that the mind space of possible artificial consciousness is vastly larger than anything that evolution has given us, so we have to have a very open mind.

If we simply take away from this that we should understand which entities, biological and otherwise, are conscious and can experience suffering, pleasure and so on, and we try to base our morality on this idea that we want to create more positive experiences and eliminate suffering, then this leads straight into what I find very much at the core of the so-called effective altruism community, which we at the Future of Life Institute view ourselves as part of, where the idea is that we want to help do what we can to make a future that’s good in that sense: lots of positive experiences, not negative ones, and we want to do it effectively.

We want to put our limited time and money and so on into those efforts which will make the biggest difference. And the EA community has for a number of years been highlighting a top three list of issues that they feel are the ones that are most worth putting effort into in this sense. One of them is global health, which is very, very non-controversial. Another one is animal suffering and reducing it. And the third one is preventing life from going extinct by doing something stupid with technology.

I’m very curious whether you feel that the EA movement has basically picked out the correct three things to focus on or whether you have things you would subtract from that list or add to it. Global health, animal suffering, X-risk.

Yuval Noah Harari: Well, I think that nobody can do everything, so whether you’re an individual or an organization, it’s a good idea to pick a good cause and then focus on it and not spend too much time wondering about all the other things that you might do. I mean, these three causes are certainly some of the most important in the world. I would just say, about the first one, that it’s not easy at all to determine what the goals are. I mean, as long as health means simply fighting illnesses and sicknesses and bringing people up to what is considered a normal level of health, then that’s not very problematic.

But in the coming decades, I think that the healthcare industry will focus more and more not on fixing problems but rather on enhancing abilities, enhancing experiences, enhancing bodies and brains and minds and so forth. And that’s much, much more complicated, both because of the potential issues of inequality and simply because we don’t know what to aim for. One of the reasons that, when you asked me at first about morality, I focused on suffering and not on happiness is that suffering is a much clearer concept than happiness. That’s why, when you talk about health care, if you think about this image of the line of normal health, the baseline of what a healthy human being is, it’s much easier to deal with things falling under this line than with things that are potentially above this line. So even this first issue will become extremely complicated in the coming decades.

Max Tegmark: And then for the second issue, on animal suffering, you’ve used some pretty strong words before. You’ve said that industrial farming is one of the worst crimes in history and you’ve called the fate of industrially farmed animals one of the most pressing ethical questions of our time. A lot of people would be quite shocked to hear you using such strong words about this, since they routinely eat factory farmed meat. How do you explain this to them?

Yuval Noah Harari: This is quite straightforward. I mean, we are talking about billions upon billions of animals. The majority of large animals in the world today are either humans or domesticated animals: cows and pigs and chickens and so forth. So we’re talking about a lot of animals and we are talking about a lot of pain and misery. The industrially farmed cow and chicken are probably competing for the title of the most miserable creature that ever existed. They are capable of experiencing a wide range of sensations and emotions, and in most of these industrial facilities they are experiencing the worst possible sensations and emotions.

Max Tegmark: In my case, you’re preaching to the choir here. I find this so disgusting that my wife and I just decided to be mostly vegan. I don’t go preach to other people about what they should do, but I just don’t want to be a part of this. It reminds me so much of things you’ve written yourself, about how people used to justify having slaves by saying, “It’s the white man’s burden. We’re helping the slaves. It’s good for them.” And in much the same way now, we make these very self-serving arguments for why we should be doing this. What do you personally take away from this? Do you eat meat now, for example?

Yuval Noah Harari: Personally, I define myself as vegan-ish. I mean, I’m not strictly vegan. I don’t want to make a kind of religion out of it and start thinking in terms of purity and whatever. I try to limit as far as possible my involvement with industries that harm animals for no good reason, and it’s not just meat and dairy and eggs, it can be other things as well. The chains of causality in the world today are so complicated that you cannot really extricate yourself completely. It’s just impossible. So for me, and also what I tell other people, is just do your best. Again, don’t make it into a kind of religious issue. If somebody comes and tells you, “I’m now thinking about this animal suffering and I’ve decided to have one day a week without meat,” then don’t start blaming this person for eating meat the other six days. Just congratulate them on making one step in the right direction.

Max Tegmark: Yeah, that sounds not just like good morality but also good psychology if you actually want to nudge things in the right direction. And then coming to the third one, existential risk. There, I love how Nick Bostrom asks us to compare two scenarios: one in which some calamity kills 99% of all people, and another where it kills 100% of all people, and then he asks how much worse the second one is. The point, obviously, being that if we kill everybody, we might actually forfeit having billions or quadrillions or more future minds experiencing amazing things for billions of years. This is not something I’ve seen you talk as much about in your writing, so I’m very curious how you think about this morally: how do you weigh future experiences that could exist against the ones that we know exist now?

Yuval Noah Harari: I don’t really know. I don’t think that we understand consciousness and experience well enough to even start making such calculations. In general, my suspicion, at least based on our current knowledge, is that it’s simply not a mathematical entity that can be calculated. We know all these philosophical riddles that people sometimes enjoy debating so much, where you have five people of this kind and a hundred people of that kind and who should you save, and so forth and so on. It’s all based on the assumption that experience is a mathematical entity that can be added and subtracted, and my suspicion is that it’s just not like that.

To some extent, yes, we make these kinds of comparisons and calculations all the time, but on a deeper level, I think it’s taking us in the wrong direction. At least at our present level of knowledge, it’s not like eating ice cream is one point of happiness and killing somebody is a million points of misery, so that if by killing somebody we can allow 1,000,001 people to enjoy ice cream, it’s worth it.

I think the problem here is not that we’ve given the wrong points to the different experiences; it’s that experience is not a mathematical entity in the first place. And again, I know that in some cases we have to do these kinds of calculations, but I would be extremely careful about it, and I would definitely not use it as the basis for building entire moral and philosophical projects.

Max Tegmark: I certainly agree with you that it’s an extremely difficult set of questions you get into if you try to trade off positives against negatives, like in the ice cream versus murder case you mentioned there. But I still feel that, all in all, as a species we tend to be a little bit too sloppy and flippant about the future, maybe partly because we haven’t evolved to think much about what happens in billions of years anyway. Look at how reckless we’ve been with nuclear weapons, for example. I was recently involved with our organization giving an award to honor Vasily Arkhipov, who quite likely prevented nuclear war between the US and the Soviet Union, and most people hadn’t even heard about that for 40 years. More people have heard of Justin Bieber than of Vasily Arkhipov, even though that war would really, unambiguously have been a really, really bad thing, and I would argue that we should celebrate people who do courageous acts that prevent nuclear war, for instance.

In the same spirit, I often feel concerned that there’s so little attention paid to risks that we drive ourselves extinct or cause giant catastrophes, compared to how much attention we pay to the Kardashians or whether we can get 1% less unemployment next year. So I’m curious if you have some sympathy for my angst here, or whether you think I’m overreacting.

Yuval Noah Harari: I completely agree. I often put it this way: we are now kind of irresponsible gods. Certainly with regard to the other animals and the ecological system, and with regard to ourselves, we have really divine powers of creation and destruction, but we don’t take our job seriously enough. We tend to be very irresponsible in our thinking and in our behavior. On the other hand, part of the problem is that the number of potential apocalypses has grown exponentially over the last 50 years. And as a scholar and as a communicator, I think it’s part of our job to be extremely careful in the way that we discuss these issues with the general public, and it’s very important to focus the discussion on the more likely scenarios, because if we just go on bombarding people with all kinds of potential scenarios of complete destruction, very soon we just lose people’s attention.

They become extremely pessimistic, feeling that everything is hopeless, so why worry about all that? So I think part of the job of the scientific community and of people who deal with these kinds of issues is to really identify the most likely scenarios and focus the discussion on those, even if there are some other scenarios which have a small chance of occurring and completely destroying all of humanity and maybe all of life. We just can’t deal with everything at the same time.

Max Tegmark: I completely agree with that, with one caveat. I think what you said is very much in the spirit of effective altruism: we want to focus on the things that really matter the most and not turn everybody into hypochondriacs, paranoid and worried about everything. The one caveat I would give is that we shouldn’t just look at the probability of each bad thing happening; we should look at the expected damage it will do, so the probability times how bad it is.

Yuval Noah Harari: I agree.

Max Tegmark: Because with nuclear war, for example, maybe the chance of having an accidental nuclear war between the US and Russia is only 1% per year, or 10% per year, or one in a thousand per year. But the nuclear winter caused by that, with soot and smoke in the atmosphere blocking out the sun for years, could easily kill 7 billion people, so most people on Earth, through mass starvation, because it would be about 20 degrees Celsius colder. That means that on average, if it’s a 1% chance per year, which seems small, you’re still killing on average 70 million people per year. That’s the number that sort of matters, I think, and it means we should make it a higher priority to reduce that risk.
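
For readers who want the expected-damage arithmetic Max is using here spelled out, below is a minimal sketch in Python. The per-year probabilities and the 7 billion death toll are the illustrative figures he cites in the conversation, not precise estimates.

```python
# Expected damage = (probability of the event per year) x (harm if it happens).
deaths_if_nuclear_winter = 7_000_000_000  # illustrative figure from the conversation

for p_per_year in (0.001, 0.01, 0.1):  # the three illustrative probabilities Max mentions
    expected_deaths_per_year = p_per_year * deaths_if_nuclear_winter
    print(f"p = {p_per_year:>5}: ~{expected_deaths_per_year:,.0f} expected deaths per year")

# At 1% per year this works out to ~70,000,000 expected deaths per year,
# which is the "70 million" figure Max quotes.
```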

Yuval Noah Harari: With nuclear war, I would say that we are not concerned enough. I mean, too many people, including politicians, have this weird impression that, well, “Nuclear war? That’s history. That was in the 60s and 70s that people worried about it.”

Max Tegmark: Exactly.

Yuval Noah Harari: “It’s not a 21st century issue.” This is ridiculous. I mean, we are now in even greater danger, at least in terms of the technology, than we were during the Cuban missile crisis. But you must remember this, in Stanley Kubrick’s Dr. Strangelove…

Max Tegmark: One of my favorite films of all time.

Yuval Noah Harari: Yeah. And the subtitle of the film is “How I Learned to Stop Worrying and Love the Bomb.”

Max Tegmark: Exactly.

Yuval Noah Harari: And the funny thing is, it actually happened. People stopped worrying about it. Maybe they don’t love it very much, but compared to the 50s and 60s, people just don’t talk about it. Look at the Brexit debate in Britain: Britain is one of the leading nuclear powers in the world, and it’s not even mentioned. It’s not part of the discussion anymore. And that’s very problematic, because I think that this is a very serious existential threat. But I’ll take a counterexample, which is in the field of AI. I understand the philosophical importance of discussing the possibility of general AI emerging in the future and then rapidly taking over the world, and you know, all the paperclip scenarios and so forth.

But I think that at the present moment it really distracts people’s attention from the immediate dangers of the AI arms race, which has a far, far higher chance of materializing in the next, say, 10, 20, 30 years. And we need to focus people’s minds on these short term dangers. I know that there is a small chance that general AI would be upon us, say, in the next 30 years, but I think it’s a very, very small chance, whereas the chance that this kind of primitive AI will completely disrupt the economy, the political system and human life in the next 30 years is about 100%. It’s bound to happen.

Max Tegmark: Yeah.

Yuval Noah Harari: And I worry far more about what primitive AI will do to the job market, to the military, to people’s daily lives than about a general AI appearing in the more distant future.

Max Tegmark: Yeah, I have a few reactions to this. We can talk more about artificial general intelligence and superintelligence later if we get time. But there was a recent survey of AI researchers around the world asking what they thought, and I was interested to note that actually most of them guessed that we will get artificial general intelligence within decades. So I wouldn’t say that the chance is small, but I would agree with you that it is certainly not going to happen tomorrow.

But if we eat our vitamins, you and I, and meditate and go to the gym, it’s quite likely we will actually get to experience it. More importantly, coming back to what you said earlier, I see all of these risks as really being one and the same risk, in the sense that what’s happened, of course, is that science has kept getting ever more powerful, and science definitely gives us ever more powerful technology. And I love technology. I’m a nerd. I work at a university that has technology in its name, and I’m optimistic we can create an inspiring high tech future for life if we win what I like to call the wisdom race.

That is the race between the growing power of the technology and the growing wisdom with which we manage it, or, putting it in the words you just used, whether we can basically learn to take more seriously our job as stewards of this planet. You can look at every science and see exactly the same thing happening. We physicists are kind of proud that we gave the world cell phones and computers and lasers, but our problem child has obviously been nuclear energy, nuclear weapons in particular. Chemists are proud that they gave the world all these great new materials, and their problem child is climate change. Biologists, in my book, have actually done the best so far: they got together in the 70s and persuaded leaders to ban biological weapons and, more broadly, to draw a clear red line between acceptable and unacceptable uses of biology.

And that’s why today most people think of biology as really a force for good, something that cures people or helps them live healthier lives. And I think AI is lagging a little bit in time right now. It’s finally getting to the point where AI researchers are starting to have an impact, and they’re grappling with the same kind of question. They haven’t had big disasters yet, so they’re in the biology camp there, but they’re trying to figure out where to draw the line between acceptable and unacceptable uses, so you don’t get a crazy military AI arms race in lethal autonomous weapons, so you don’t create very destabilizing income inequality, so AI doesn’t create 1984 on steroids, et cetera.

And I wanted to ask you about what sort of new story you feel we need as a society in order to tackle these challenges. I’ve been very, very persuaded by your arguments that stories are central to society, to our ability to collaborate and accomplish things, but you’ve also made a really compelling case, I think, that the most popular recent stories are all getting less powerful or popular: communism, and now there’s a lot of disappointment with liberalism too. It feels like a lot of people are craving a new story that involves technology somehow, one that can help us get our act together and also help us feel meaning and purpose in this world. But I’ve never seen in your books a clear answer to what you feel this new story should be.

Yuval Noah Harari: Because I don’t know. If I knew the new story, I would tell it. I think we are now in a kind of double bind; we have to fight on two different fronts. On the one hand, we are witnessing in the last few years the collapse of the last big modern story, of liberal democracy and liberalism more generally, which has been, I would say, as a story the best story humans ever came up with, and it did create the best world that humans ever enjoyed. I mean, the world of the late 20th century and early 21st century, with all its problems, is still better for humans, not for cows or chickens, but for humans, it’s still better than at any previous moment in history.

There are many problems, but to anybody who says that this was a bad idea, I would like to hear which year you are thinking about as a better year. Now, in 2019, when was it better? In 1919? In 1719? In 1219? I mean, for me, it’s obvious this has been the best story we have come up with.

Max Tegmark: That’s so true. I have to just admit that whenever I read the news for too long, I start getting depressed. But then I always cheer myself up by reading history and reminding myself it was always worse in the past.

Yuval Noah Harari: That never fails. I mean, the last four years have been quite bad, things are deteriorating, but we are still better off than in any previous era. But people are losing faith in this story, and we are reaching really a situation of zero story. All the big stories of the 20th century have collapsed or are collapsing, and the vacuum is currently filled by nostalgic fantasies, nationalistic and religious fantasies, which simply don’t offer any real solutions to the problems of the 21st century. So on the one hand we have the task of supporting or reviving the liberal democratic system, which is so far the only game in town. I keep listening to the critics, and they have a lot of valid criticism, but I’m waiting for the alternative, and the only thing I hear is completely unrealistic nostalgic fantasies about going back to some past golden era that, as a historian, I know was far, far worse. And even if it was not so far worse, you just can’t go back there. You can’t recreate the 19th century or the Middle Ages under the conditions of the 21st century. It’s impossible.

So we have this one struggle to maintain what we have already achieved. But at the same time, on a much deeper level, my suspicion is that the liberal story, at least as we know it, is really not up to the challenges of the 21st century, because it’s built on foundations that the new science, and especially the new technologies of artificial intelligence and bioengineering, are just destroying: the belief we inherited in the autonomous individual, in free will, in all these basically liberal mythologies. They will become increasingly untenable in contact with powerful new bioengineering and artificial intelligence.

To put it in a very, very concise way, I think we are entering the era of hacking human beings, not just hacking smartphones and bank accounts, but really hacking Homo sapiens, which was impossible before. I mean, AI gives us the necessary computing power and biology gives us the necessary biological knowledge, and when you combine the two you get the ability to hack human beings. If you continue to try and build society on the philosophical ideas of the 18th century about the individual and free will and all that, in a world where it’s technically feasible to hack millions of people systematically, it’s just not going to work. We need an updated story. I’ll just finish this thought: our problem is that we need to defend the story from the nostalgic fantasies at the same time as we are replacing it with something else, and it’s just very, very difficult.

When I began writing my books, like five years ago, I thought the real project was to really go down to the foundations of the liberal story, expose the difficulties and build something new. And then you had all these nostalgic populist eruptions of the last four or five years, and I personally find myself more and more engaged in defending the old-fashioned liberal story instead of replacing it. Intellectually, it’s very frustrating, because I think the really important intellectual work is finding the new story, but politically it’s far more urgent. If we allow the emergence of some kind of populist authoritarian regimes, then whatever comes out of it will not be a better story.

Max Tegmark: Yeah, unfortunately I agree with your assessment here. I love to travel. I work in basically a United Nations-like environment at my university, with students from all around the world, and I have this very strong sense that people around the world are feeling increasingly lost today, because the stories that used to give them a sense of purpose and meaning and so on are dissolving in front of their eyes. And of course, we don’t like to feel lost, and then we’re likely to jump on whatever branches are held out for us, and they are often just retrograde things: let’s go back to the good old days, and all sorts of other unrealistic ideas. But I agree with you that the rise in populism we’re seeing now is not the cause. It’s a symptom of people feeling lost.

So I think I was a little bit unfair to ask you to answer, in a few minutes, the toughest question of our time: what should our new story be? But maybe we could break it into pieces a little bit and ask what at least some elements are that we would like the new story to have. It should accomplish, of course, multiple things. It has to incorporate technology in a meaningful way, which our past stories did not, and it has to incorporate AI and progress in biotech, for example. And it also has to be a truly global story this time, I think, one which isn’t just a story about how America is going to get better off or China is going to get better off, but about how we’re all going to get better off together.

And we can put up a whole bunch of other requirements. If we start maybe with this part about the global nature of the story: people disagree violently about so many things around the world, but are there any ingredients of the story at all, any principles or ideas, that you think people around the world would already agree to?

Yuval Noah Harari: Again, I don’t really know. I mean, I don’t know what the new story would look like. Historically, these kinds of really grand narratives aren’t created by two or three people having a discussion and thinking, okay, what new story should we tell? It’s far deeper and more powerful forces that come together to create these new stories. Even trying to say, okay, we don’t have the full view, but let’s try to put a few ingredients in place: the whole thing about a story is that the whole comes before the parts. The narrative is far more important than the individual facts that build it up.

So I’m not sure that we can start creating the story by just, okay, let’s put down the first few sentences, and who knows how it will continue. You write books, I write books; we know that the first few sentences are usually the last sentences that you write.

Max Tegmark: That’s right.

Yuval Noah Harari: Only when you know what the whole book is going to look like do you go back to the beginning and write the first few sentences.

Max Tegmark: Yeah. And sometimes the very last thing you write is the new title.

Yuval Noah Harari: So I agree that whatever the new story is going to be, it’s going to be global. The world is now too small and too interconnected to have a story for just one part of the world. It won’t work. And it will also have to take very seriously both the most updated science and the most updated technology, something that liberal democracy as we know it does not: it’s basically still in the 18th century, taking an 18th century story and simply following it to its logical conclusions. For me, maybe the most amazing thing about liberal democracy is that it has really completely disregarded all the discoveries of the life sciences over the last two centuries.

Max Tegmark: And of the technical sciences!

Yuval Noah Harari: I mean, it’s as if Darwin never existed and we know nothing about evolution. You could basically meet these folks from the middle of the 18th century, whether it’s Rousseau, Jefferson, and all these guys, and they would be surprised by some of the conclusions we have drawn from the basis they provided us, but fundamentally nothing has changed. Darwin didn’t really change anything. Computers didn’t really change anything. And I think the next story won’t have that luxury of being able to ignore the discoveries of science and technology.

The number one thing it will have to take into account is how humans live in a world where there is somebody out there that knows you better than you know yourself, but that somebody isn’t God; that somebody is a technological system, which might not be a good system at all. That’s a question we never had to face before. We could always comfort ourselves with the idea that we are kind of a black box to the rest of humanity: nobody can really understand me better than I understand myself. The king, the emperor, the church, they don’t really know what’s happening within me. Maybe God knows. So we had a lot of discussions about what to do with the existence of a God who knows us better than we know ourselves, but we didn’t really have to deal with a non-divine system that can hack us.

And this system is emerging. I think it will be in place within our lifetime, in contrast to general artificial intelligence, which I’m skeptical whether I’ll see in my lifetime. I’m convinced we will see, if we live long enough, a system that knows us better than we know ourselves, and the basic premises of democracy, of free market capitalism, even of religion, just don’t work in such a world. How does democracy function in a world where somebody understands the voter better than the voter understands herself or himself? And the same with the free market: if the customer is not right, if the algorithm is right, then we need a completely different economic system. That’s the big question that I think we should be focusing on. I don’t have the answer, but whatever story will be relevant to the 21st century will have to answer this question.

Max Tegmark: I certainly agree with you that democracy has totally failed to adapt to the developments in the life sciences, and I would add, to the developments in the natural sciences too. I watched all of the debates between Trump and Clinton in the last election here in the US, and I don’t recall artificial intelligence getting mentioned even a single time, not even when they talked about jobs. And the voting system we have, with an electoral college system here, means it doesn’t even matter how people vote except in a few swing states, so there’s very little influence from the voter on what actually happens, even though we now have blockchain and could easily implement technical solutions where people would be able to have much more influence. It just reflects that we basically declared victory on our democratic system hundreds of years ago and haven’t updated it.

And I’m very interested in how we can dramatically revamp it, if we believe in some form of democracy, so that we as individuals actually can have more influence on how our society is run, and how we can have good reason to actually trust that the system, if it is able to hack us, is actually working in our best interest. There’s a key tenet in religions that you’re supposed to be able to trust God as having your best interest in mind, and I think many people in the world today do not trust that their political leaders actually have their best interest in mind.

Yuval Noah Harari: Certainly, I mean, that’s the issue: you give really divine powers to far-from-divine systems. But we shouldn’t be too pessimistic. I mean, the technology is not inherently evil either, and what history teaches us about technology is that technology is never deterministic. You can use the same technologies to create very different kinds of societies. We saw that in the 20th century, when the same technologies were used to build communist dictatorships and liberal democracies. There was no real technological difference between the USSR and the USA; it was just people making different decisions about what to do with the same technology.

I don’t think that the new technology is inherently anti-democratic or inherently anti-liberal. It really is about the choices that people make, even about what kind of technological tools to develop. If I think about, again, AI and surveillance: at present we see all over the world that corporations and governments are developing AI tools to monitor individuals, but technically we can do exactly the opposite. We can create tools that monitor and survey governments and corporations in the service of individuals, for instance to fight corruption in government. As an individual, it’s very difficult for me to, say, monitor nepotism, politicians appointing all kinds of family members to lucrative positions in the government or in the civil service. But it should be very easy to build an AI tool that goes over the immense amount of information involved, and in the end you just get a simple application on your smartphone: you enter the name of a politician and you immediately see, within two seconds, who he or she appointed from their family and friends, and to what positions. It should be very easy to do. I don’t see the Chinese government creating such an application anytime soon, but people can create it.
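
To make the kind of application Yuval sketches here slightly more concrete, below is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical: the appointments.csv file, its column names, and the list of relatives and friends are stand-ins for the public appointment registries and disclosure records a real tool would have to ingest.

```python
import csv
from collections import defaultdict

def load_appointments(path):
    """Read a hypothetical CSV of public appointments with columns:
    appointer, appointee, position, year."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def nepotism_report(appointments, politician, relatives_and_friends):
    """Return the positions the given politician handed to people on a
    (hypothetical) list of their relatives and friends."""
    watchlist = {name.lower() for name in relatives_and_friends}
    flagged = defaultdict(list)
    for row in appointments:
        if (row["appointer"].lower() == politician.lower()
                and row["appointee"].lower() in watchlist):
            flagged[row["appointee"]].append((row["position"], row["year"]))
    return dict(flagged)

# Illustrative usage with made-up names and a made-up file:
# appointments = load_appointments("appointments.csv")
# print(nepotism_report(appointments, "Jane Doe", ["John Doe", "J. Smith"]))
```

The point of the sketch is only that, once the relevant records exist in structured form, the query itself is trivial; gathering and standardizing that data is where the real difficulty lies.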

Or if you think about the fake news epidemic, basically what’s happening is that corporations and governments are hacking us in their service, but the technology can work the other way around. We can develop an antivirus for the mind, the same way we developed antivirus for the computer. We need to develop an antivirus for the mind, an AI system that serves me and not a corporation or a government, and it gets to know my weaknesses in order to protect me against manipulation.

At present, what’s happening is that the hackers are hacking me. They get to know my weaknesses, and that’s how they are able to manipulate me. For instance, with fake news: if they discover that I already have a bias against immigrants, they show me one fake news story, maybe about a group of immigrants raping local women, and I easily believe that, because I already have this bias. My neighbor may have an opposite bias. She may think that anybody who opposes immigration is a fascist, and the same hackers will find that out and will show her a fake news story about, I don’t know, right wing extremists murdering immigrants, and she will believe that.

And then if I meet my neighbor, there is no way we can have a conversation about immigration. Now, we can and should develop an AI system that serves me and my neighbor and alerts us: look, somebody is trying to hack you, somebody is trying to manipulate you. And if we learn to trust this system, that it serves us and doesn’t serve any corporation or government, it can be an important tool in protecting our minds from being manipulated. Another tool in the same field: we are now basically feeding enormous amounts of mental junk food to our minds.

We spend hours every day basically feeding our hatred, our fear, our anger, and that’s a terrible and stupid thing to do. The thing is that people discovered that the easiest way to grab our attention is by pressing the hate button in the mind or the fear button in the mind, and we are very vulnerable to that.

Now, just imagine that somebody develops a tool that shows you what’s happening to your brain or to your mind as you’re watching these YouTube clips. Maybe it doesn’t block you; it’s not Big Brother that blocks all these things. It’s just like when you buy a product and it shows you how many calories are in the product and how much saturated fat and how much sugar there is in the product, so at least in some cases you learn to make better decisions. Just imagine that you have this small window in your computer which tells you what’s happening to your brain as you’re watching this video, what’s happening to your levels of hatred or fear or anger, and then you make your own decision. But at least you are more aware of what kind of food you’re giving to your mind.

Max Tegmark: Yeah. This is something I am also very interested in seeing more of: AI systems that empower the individual in all the ways that you mentioned. We at the Future of Life Institute are actually very interested in supporting this kind of thing on the nerdy technical side, and I think this also drives home the very important fact that technology is not good or evil. Technology is an amoral tool that can be used both for good things and for bad things. That’s exactly why I feel it’s so important that we develop the wisdom to use it for good things rather than bad things. In that sense, AI is no different from fire, which can be used for good things and for bad things, but we as a society have developed a lot of wisdom now in fire management. We educate our kids about it. We have fire extinguishers and fire trucks. And with artificial intelligence and other powerful tech, I feel we need to do better at similarly developing the wisdom that can steer the technology toward better uses.

Now we’re reaching the end of the hour here, so I’d like to finish with two more questions. One of them is about what we ultimately want it to mean to be human as we get ever more tech. You put it so beautifully, I think it was in Sapiens, that tech progress is gradually taking us beyond asking what we want, to asking instead what we want to want. And I guess, even more broadly, how we want to brand ourselves, how we want to think about ourselves as humans in the high tech future.

I’m quite curious. First of all, you personally: if you think about yourself in 30 or 40 years, what do you want to want, and what sort of society would you like to live in, say, in 2060, if you could have it your way?

Yuval Noah Harari: It’s a profound question. It’s a difficult question. My initial answer is that I would really like not just to know the truth about myself, but to want to know the truth about myself. Usually the main obstacle to knowing the truth about yourself is that you don’t want to know it. It’s always accessible to you. I mean, we’ve been told for thousands of years by all the big names in philosophy and religion, almost all of whom say the same thing: get to know yourself better. It’s maybe the most important thing in life. We haven’t really progressed much in the last thousands of years, and the reason is that yes, we keep getting this advice, but we don’t really want to do it.

Working on our motivation in this field, I think, would be very good for us. It would also protect us from all the naive utopias, which tend to draw far more of our attention. I mean, especially as technology gives us all, or at least some of us, more and more power, the temptations of naive utopias are going to be more and more irresistible, and I think the most powerful check on these naive utopias is really getting to know yourself better.

Max Tegmark: Would you like what it means to be Yuval in 2060 to be more on the hedonistic side, that you have all these blissful experiences and serene meditation and so on, or would you like there to be a lot of challenges in there that give you a sense of meaning or purpose? Would you like to be somehow upgraded with technology?

Yuval Noah Harari: None of the above, at least if I think deeply enough about these issues. Yes, I would like to be upgraded, but only in the right way, and I’m not sure what the right way is. I’m not a great believer in blissful experiences, in meditation or otherwise. They tend to be traps, this idea that this is what we’ve been looking for all our lives. For millions of years, all the animals have constantly looked for blissful experiences, and after a couple of million years of evolution, it doesn’t seem that it brings us anywhere. And especially in meditation, you learn that these kinds of blissful experiences can be the most deceptive, because you fall under the impression that this is the goal you should be aiming at.

This is a really good meditation, this is a really deep meditation, simply because you’re very pleased with yourself, and then you spend countless hours later on trying to get back there, or regretting that you are not there, and in the end it’s just another experience. What we are experiencing right now, as we talk on the phone to each other and I feel something in my stomach and you feel something in your head, this is as special and amazing as the most blissful experience of meditation. The only difference is that we’ve gotten used to it, so we are not amazed by it. But right now we are experiencing the most amazing thing in the universe, and we just take it for granted, partly because we are distracted by this notion that out there, there is something really, really special that we should be experiencing. So I’m a bit suspicious of blissful experiences.

Again, I would just basically repeat that to really understand yourself also means to really understand the nature of these experiences, and if you really understand that, then so many of these big questions will be answered. Similarly with the question we dealt with at the beginning, of how to evaluate different experiences and what kinds of experiences we should be creating for humans or for artificial consciousness: for that, you need to deeply understand the nature of experience. Otherwise, there are so many naive utopias that can tempt you. So I would focus on that.

When I say that I want to know the truth about myself, it really also means to understand the nature of these experiences.

Max Tegmark: On to my very last question, coming back to this idea of story, and ending on a positive, inspiring note. I’ve been thinking back to times when new stories led to very positive change, and I started thinking about a particular Swedish story. The year was 1945, and people all over Europe were looking at each other saying, “We screwed up again.” How about, people were saying then, instead of using all this technology to build ever more powerful weapons, we instead use it to create a society that benefits everybody, where we can have free health care, free university for everybody, free retirement, and build a real welfare state? And I’m sure there were a lot of curmudgeons around who said, “Aw, you know, that’s just hopeless, naive dreamery. Go smoke some weed and hug a tree, because it’s never going to work.” Right?

But this story, this optimistic vision, was sufficiently concrete and sufficiently both bold and realistic-seeming that it actually caught on. We did this in Sweden, and it actually conquered the world. Not like when the Vikings tried and failed to do it with swords, but this idea conquered the world, and now so many rich countries have copied it. I keep wondering if there is another new vision or story like this, some sort of welfare 3.0, which incorporates all of the exciting new technology that has happened since ’45, on the biotech side, on the AI side, et cetera, to envision a society which is truly bold and sufficiently appealing that people around the world could rally around it.

I feel that a shared positive experience like that is something that, more than anything else, can really help foster collaboration around the world. And I’m curious what you would say: what do you think of as a bold, positive vision for the planet now, going beyond what you spoke about earlier about yourself personally, getting to know yourself and so on?

Yuval Noah Harari: I think we can aim towards what you define as welfare 3.0, which is, again, based on a better understanding of humanity. The welfare state which many countries have built over the last decades has been an amazing human achievement, and it achieved many concrete results in fields where we knew what to aim for, like health care. Okay, let’s vaccinate all the children in the country, and let’s make sure everybody has enough to eat. We succeeded in doing that. A kind of welfare 3.0 program would try to expand that to other fields in which our achievements are far more moderate, simply because we don’t know what to aim for. We don’t know what we need to do.

If you think about mental health, it’s much more difficult than providing food to people, because we have a very poor understanding of the human mind and of what mental health is. Even if you think about food, one of the scandals of science is that we still don’t know what to eat. We basically solved the problem of enough food, and now we actually have the opposite problem, of people eating too much rather than too little. But beyond the question of quantity, it’s I think one of the biggest scandals of science that after centuries we still don’t know what we should eat, mainly because so many of these miracle diets are one-size-fits-all, as if everybody should eat the same thing, whereas obviously diet should be tailored to individuals.

So if you harness the power of AI and big data and machine learning and biotechnology, you could create the best dietary system in the world, one that tells people individually what would be good for them to eat. And this would have enormous side benefits in reducing medical problems, reducing waste of food and resources, helping with the climate crisis, and so forth. So this is just one example.

Max Tegmark: Yeah. Just on that example, I would argue that part of the problem, beyond the fact that we just don't know enough, is that there are a lot of lobbyists telling people what to eat, knowing full well that it's bad for them, just because that way they'll make more of a profit. Which gets back to your question of hacking: how can we prevent ourselves from getting hacked by powerful forces that don't have our best interest in mind? But the things you mentioned seemed like a bit of a first-world perspective, which is easy to adopt when we live in Israel or Sweden. Of course there are many people on the planet who still live in pretty miserable situations, where we actually can quite easily articulate how to make things at least a bit better.

But then also in our societies, I mean, you touched on mental health. There's a significant rise in depression in the United States. Life expectancy in the US has gone down three years in a row, which does not suggest that people are getting happier here. I'm wondering whether, in the positive vision of the future that we can hopefully end on here, you'd also want to throw in some ingredients about the sort of society where we don't just have the lowest rung of the Maslow pyramid taken care of, food and shelter and stuff, but where we also feel meaning and purpose and meaningful connections with our fellow lifeforms.

Yuval Noah Harari: I think it's not just a first-world issue. Again, even if you think about food, even in developing countries more people today die from diabetes and diseases related to overeating or being overweight than from starvation. And mental health issues are certainly not just a problem for the first world; people are suffering from them in all countries. Part of the issue is that mental health care is far, far more expensive, certainly if you think in terms of going to therapy once or twice a week, than just giving vaccinations or antibiotics. So it's much more difficult to create a robust mental health system in poor countries, but we should aim there. It's certainly not just for the first world. And if we really understand humans better, we can provide much better health care, both physical and mental, for everybody on the planet, not just for Americans or Israelis or Swedes.

Max Tegmark: In terms of physical health, it's usually a lot cheaper and simpler not to treat diseases, but to prevent them from happening in the first place, by reducing smoking, reducing how much extremely unhealthy food people eat, et cetera. In the same way with mental health, presumably a key driver of a lot of the problems we have is that we have put ourselves in a human-made environment which is incredibly different from the environment we evolved to flourish in, an environment often optimized for the ability to produce stuff rather than for human happiness. So rather than just trying to develop new pills to help us live in this environment, I'm wondering whether you think that deliberately changing our environment to be more conducive to human happiness might improve our happiness a lot, without having to treat mental health disorders at all.

Yuval Noah Harari: It will demand enormous amounts of resources and energy. But if you are looking for a big project for the 21st century, then yeah, that's definitely a good project to undertake.

Max Tegmark: Okay. That's probably a good challenge from you on which to end this conversation. I'm extremely grateful for having had this opportunity to talk with you about these things. These are ideas I will continue thinking about with great enthusiasm for a long time to come, and I very much hope we can stay in touch and actually meet in person before too long.

Yuval Noah Harari: Yeah. Thank you for hosting me.

Max Tegmark: I really can’t think of anyone on the planet who thinks more profoundly about the big picture of the human condition here than you and it’s such an honor.

Yuval Noah Harari: Thank you. It was a pleasure for me too. There are not a lot of opportunities to really go deep into these issues. I mean, usually you get pulled away to questions about the 2020 presidential elections and things like that, which are important. But we still also have to give some time to the big picture.

Max Tegmark: Yeah. Wonderful. So once again, todah, thank you so much.

Lucas Perry: Thanks so much for tuning in and being a part of our final episode of 2019. Many well and warm wishes for a happy and healthy new year from myself and the rest of the Future of Life Institute team. This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team

As 2019 is coming to an end and the opportunities of 2020 begin to emerge, it’s a great time to reflect on the past year and our reasons for hope in the year to come. We spend much of our time on this podcast discussing risks that will possibly lead to the extinction or the permanent and drastic curtailing of the potential of Earth-originating intelligent life. While this is important and useful, much has been done at FLI and in the broader world to address these issues in service of the common good. It can be skillful to reflect on this progress to see how far we’ve come, to develop hope for the future, and to map out our path ahead. This podcast is a special end of the year episode focused on meeting and introducing the FLI team, discussing what we’ve accomplished and are working on, and sharing our feelings and reasons for existential hope going into 2020 and beyond.

Topics discussed include:

  • Introductions to the FLI team and our work
  • Motivations for our projects and existential risk mitigation efforts
  • The goals and outcomes of our work
  • Our favorite projects at FLI in 2019
  • Optimistic directions for projects in 2020
  • Reasons for existential hope going into 2020 and beyond

Timestamps:

0:00 Intro

1:30 Meeting the Future of Life Institute team

18:30 Motivations for our projects and work at FLI

30:04 What we hope will result from our work at FLI

44:44 Favorite accomplishments of FLI in 2019

01:06:20 Project directions we are most excited about for 2020

01:19:43 Reasons for existential hope in 2020 and beyond

01:38:30 Outro

 

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is a special end of the year episode structured as an interview with members of the FLI core team. The purpose of this episode is to introduce the members of our team and their roles, explore the projects and work we’ve been up to at FLI throughout the year, and discuss future project directions we are excited about for 2020. Some topics we explore are the motivations behind our work and projects, what we are hoping will result from them, favorite accomplishments at FLI in 2019, and general trends and reasons we see for existential hope going into 2020 and beyond.

If you find this podcast interesting and valuable, you can follow us on your preferred listening platform, like iTunes, SoundCloud, Google Play, Stitcher, and Spotify.

If you're curious to learn more about the Future of Life Institute, our team, our projects, and our feelings about the state and ongoing efforts related to existential risk mitigation, then I feel you'll find this podcast valuable. So, to get things started, we're going to have the team introduce ourselves and our roles at the Future of Life Institute.

Jared Brown: My name is Jared Brown, and I'm the Senior Advisor for Government Affairs at the Future of Life Institute. I help inform and execute FLI's strategic advocacy work on governmental policy. It sounds a little bit behind the scenes because it is; I primarily work in the U.S. and in global forums like the United Nations.

Kirsten Gronlund: My name is Kirsten and I am the Editorial Director for The Future of Life Institute. Basically, I run the website. I also create new content and manage the content that’s being created to help communicate the issues that FLI works on. I have been helping to produce a lot of our podcasts. I’ve been working on getting some new long form articles written; we just came out with one about CRISPR and gene drives. Right now I’m actually working on putting together a book list for recommended reading for things related to effective altruism and AI and existential risk. I also do social media, and write the newsletter, and a lot of things. I would say that my job is to figure out what is most important to communicate about what FLI does, and then to figure out how it’s best to communicate those things to our audience. Experimenting with different forms of content, experimenting with different messaging. Communication, basically, and writing and editing.

Meia Chita-Tegmark: I am Meia Chita-Tegmark. I am one of the co-founders of the Future of Life Institute. I am also the treasurer of the Institute, and recently I’ve been focusing many of my efforts on the Future of Life website and our outreach projects. For my day job, I am a postdoc in the human-robot interaction lab at Tufts University. My training is in social psychology, so my research actually focuses on the human end of the human-robot interaction. I mostly study uses of assistive robots in healthcare and I’m also very interested in ethical implications of using, or sometimes not using, these technologies. Now, with the Future of Life Institute, as a co-founder, I am obviously involved in a lot of the decision-making regarding the different projects that we are pursuing, but my main focus right now is the FLI website and our outreach efforts.

Tucker Davey: I'm Tucker Davey. I've been a member of the FLI core team for a few years. And for the past few months, I've been pivoting towards focusing on projects related to FLI's AI communication strategy, especially projects related to advanced AI and artificial general intelligence, and considering how FLI can best message about these topics. Basically these projects are looking at what we believe about the existential risk of advanced AI, and we're working to refine our core assumptions and adapt to a quickly changing public understanding of AI. In the past five years, there's been much more money and hype going towards advanced AI, and people have new ideas in their heads about the risk and the hope from AI. And so our communication strategy has to adapt to those changes. So that's kind of a taste of the questions we're working on, and it's been really interesting to work with the policy team on these questions.

Jessica Cussins Newman: My name is Jessica Cussins Newman, and I am an AI policy specialist with the Future of Life Institute. I work on AI policy, governance, and ethics, primarily. Over the past year, there have been significant developments in all of these fields, and FLI continues to be a key stakeholder and contributor to numerous AI governance forums. So it’s been exciting to work on a team that’s helping to facilitate the development of safe and beneficial AI, both nationally and globally. To give an example of some of the initiatives that we’ve been involved with this year, we provided comments to the European Commission’s high level expert group on AI, to the Defense Innovation Board’s work on AI ethical principles, to the National Institute of Standards and Technology, or NIST, which developed a plan for federal engagement on technical AI standards.

We're also continuing to participate in several multi-stakeholder initiatives, such as the Partnership on AI, the CNAS AI Task Force, and the UN Secretary General's high-level panel on digital cooperation, among others. I think all of this is helping to lay the groundwork for more trustworthy AI, and we've also engaged in direct policy work. Earlier this year we co-hosted an AI policy briefing at the California state legislature, and met with the White House Office of Science and Technology Policy. Lastly, on the educational side of this work, we maintain an online resource for global AI policy. This includes information about national AI strategies and provides background resources and policy recommendations around some of the key issues.

Ian Rusconi: My name is Ian Rusconi and I edit and produce these podcasts. Since FLI’s podcasts aren’t recorded in a controlled studio setting, the interviews often come with a host of technical issues, so some of what I do for these podcasts overlaps with forensic audio enhancement, removing noise from recordings; removing as much of the reverb as possible from recordings, which works better sometimes than others; removing clicks and pops and sampling errors and restoring the quality of clipping audio that was recorded too loudly. And then comes the actual editing, getting rid of all the breathing and lip smacking noises that people find off-putting, and cutting out all of the dead space and vocal dithering, um, uh, like, you know, because we aim for a tight final product that can sometimes end up as much as half the length of the original conversation even before any parts of the conversation are cut out.

Part of working in an audio only format is keeping things to the minimum amount of information required to get your point across, because there is nothing else that distracts the listener from what’s going on. When you’re working with video, you can see people’s body language, and that’s so much of communication. When it’s audio only, you can’t. So a lot of the time, if there is a divergent conversational thread that may be an interesting and related point, it doesn’t actually fit into the core of the information that we’re trying to access, and you can construct a more meaningful narrative by cutting out superfluous details.

Emilia Javorsky: My name's Emilia Javorsky and at the Future of Life Institute, I work on the topic of lethal autonomous weapons, mainly focusing on education and advocacy efforts. It's an issue that I care very deeply about, and I think it's one of the more pressing ones of our time. I actually come from a slightly atypical background to be engaged in this issue. I'm a physician and a scientist by training, but what's conserved there is a discussion of how we use AI in high-stakes environments where life and death decisions are being made. And so when you are talking about decisions to prevent harm, which is my field of medicine, or, in the case of lethal autonomous weapons, the decision to enact lethal harm, there are fundamentally different moral questions, and also system performance questions, that come up.

Key ones that I think about a lot are system reliability, accountability, transparency. But when it comes to thinking about lethal autonomous weapons in the context of the battlefield, there’s also this inherent scalability issue that arises. When you’re talking about scalable weapon systems, that quickly introduces unique security challenges in terms of proliferation and an ability to become what you could quite easily define as weapons of mass destruction. 

There’s also the broader moral questions at play here, and the question of whether we as a society want to delegate the decision to take a life to machines. And I personally believe that if we allow autonomous weapons to move forward and we don’t do something to really set a stake in the ground, it could set an irrecoverable precedent when we think about getting ever more powerful AI aligned with our values in the future. It is a very near term issue that requires action.

Anthony Aguirre: I’m Anthony Aguirre. I’m a professor of physics at the University of California at Santa Cruz, and I’m one of FLI’s founders, part of the core team, and probably work mostly on the policy related aspects of artificial intelligence and a few other topics. 

I'd say there are two major efforts that I'm heading up. One is the overall FLI artificial intelligence policy effort. That encompasses a little bit of our efforts on lethal autonomous weapons, but it's mostly about wider issues of how artificial intelligence development should be thought about, how it should be governed, and what kind of soft or hard regulations we might contemplate about it. There are global efforts, which are really ramping up now, both in the US and Europe and elsewhere, to think about how artificial intelligence should be rolled out in a way that's ethical, that keeps with the ideals of society, that's safe and robust, and in general is beneficial, rather than running into a whole bunch of negative side effects. That's part of it.

And then the second thing is I’ve been thinking a lot about what sort of institutions and platforms and capabilities might be useful for society down the line that we can start to create, and nurture and grow now. So I’ve been doing a lot of thinking about… let’s imagine that we’re in some society 10 or 20 or 30 years from now that’s working well, how did it solve some of the problems that we see on the horizon? If we can come up with ways that this fictitious society in principle solved those problems, can we try to lay the groundwork for possibly actually solving those problems by creating new structures and institutions now that can grow into things that could help solve those problems in the future?

So an example of that is Metaculus. This is a prediction platform that I've been involved with over the last few years. It's an effort to create a way to better predict what's going to happen and make better decisions, both for individual organizations like FLI itself, and for the world in general. This is kind of a capability that it would be good if the world had, making better predictions about all kinds of things and making better decisions. So that's one example, but there are a few others that I've been contemplating and trying to get spun up.

Max Tegmark: Hi, I’m Max Tegmark, and I think of myself as having two jobs. During the day, I do artificial intelligence research at MIT, and on nights and weekends, I help lead the Future of Life Institute. My day job at MIT used to be focused on cosmology, because I was always drawn to the very biggest questions. The bigger the better, and studying our universe and its origins seemed to be kind of as big as it gets. But in recent years, I’ve felt increasingly fascinated that we have to understand more about how our own brains work, how our intelligence works, and building better artificial intelligence. Asking the question, how can we make sure that this technology, which I think is going to be the most powerful ever, actually becomes the best thing ever to happen to humanity, and not the worst.

Because all technology is really a double-edged sword. It’s not good or evil, it’s just a tool that we can do good or bad things with. If we think about some of the really horrible things that have happened because of AI systems, so far, it’s largely been not because of evil, but just because people didn’t understand how the system worked, and it did something really bad. So what my MIT research group is focused on is exactly tackling that. How can you take today’s AI systems, which are often very capable, but total black boxes… So that if you ask your system, “Why should this person be released on probation, but not this one?” You’re not going to get any better answer than, “I was trained on three terabytes of data and this is my answer. Beep, beep. Boop, boop.” Whereas, I feel we really have the potential to make systems that are just as capable, and much more intelligible. 

Trust should be earned and trust should be built based on us actually being able to peek inside the system and say, “Ah, this is why it works.” And the reason we have founded the Future of Life Institute was because all of us founders, we love technology, and we felt that the reason we would prefer living today rather than any time in the past, is all because of technology. But, for the first time in cosmic history, this technology is also on the verge of giving us the ability to actually self-destruct as a civilization. If we build AI, which can amplify human intelligence like never before, and eventually supersede it, then just imagine your least favorite leader on the planet, and imagine them having artificial general intelligence so they can impose their will on the rest of Earth.

How does that make you feel? It does not make me feel great, and I had a New Year's resolution in 2014 that I was no longer allowed to complain about stuff if I didn't actually put some real effort into doing something about it. This is why I put so much effort into FLI. The solution is not to try to stop technology; it just ain't going to happen. The solution is instead to win what I like to call the wisdom race: make sure that the wisdom with which we manage our technology grows faster than the power of the technology.

Lucas Perry: Awesome, excellent. As for me, I’m Lucas Perry, and I’m the project manager for the Future of Life Institute. I’ve been with FLI for about four years now, and have focused on enabling and delivering projects having to do with existential risk mitigation. Beyond basic operations tasks at FLI that help keep things going, I’ve seen my work as having three cornerstones, these being supporting research on technical AI alignment, on advocacy relating to existential risks and related issues, and on direct work via our projects focused on existential risk. 

In terms of advocacy related work, you may know me as the host of the AI Alignment Podcast Series, and more recently the host of the Future of Life Institute Podcast. I see my work on the AI Alignment Podcast Series as promoting and broadening the discussion around AI alignment and AI safety to a diverse audience of both technical experts and persons interested in the issue.

There I am striving to include a diverse range of voices from many different disciplines, insofar as they can inform the AI alignment problem. The Future of Life Institute Podcast is a bit more general, though often dealing with related issues. There I strive to have conversations about avant-garde subjects as they relate to technological risk, existential risk, and cultivating the wisdom with which to manage powerful and emerging technologies. For the AI Alignment Podcast, our most popular episode of all time so far is On Becoming a Moral Realist with Peter Singer, and a close second and third were On Consciousness, Qualia, and Meaning with Mike Johnson and Andres Gomez Emilsson, and An Overview of Technical AI Alignment with Rohin Shah, which has two parts. These were really great episodes, and I suggest you check them out if they sound interesting to you. You can do that under the podcast tab on our site or by finding us on your preferred listening platform.

As for the main FLI Podcast Series, our most popular episode has been an interview with FLI President Max Tegmark called Life 3.0: Being Human in the Age of Artificial Intelligence. A podcast similar to this one from last year, called Existential Hope in 2019 and Beyond, was the second most listened to FLI podcast. And the third is a more recent podcast called The Climate Crisis As An Existential Threat with Simon Beard and Hayden Belfield.

As for the other avenue of my work, my support of research can be stated quite simply as fostering review of grant applications, and also reviewing interim reports for disbursing funds related to AGI safety grants. And then, just touching again on my direct work on our projects: if you see some project put out by the Future of Life Institute, I usually have at least some involvement with it from a logistics, operations, execution, or ideation standpoint.

And moving into the next line of questioning here for the team, what would you all say motivates your interest in existential risk and the work that you do at FLI? Is there anything in particular that is motivating this work for you?

Ian Rusconi: What motivates my interest in existential risk in general I think is that it’s extraordinarily interdisciplinary. But my interest in what I do at FLI is mostly that I’m really happy to have a hand in producing content that I find compelling. But it isn’t just the subjects and the topics that we cover in these podcasts, it’s how you and Ariel have done so. One of the reasons I have so much respect for the work that you two have done and consequently enjoy working on it so much is the comprehensive approach that you take in your lines of questioning.

You aren’t afraid to get into the weeds with interviewees on very specific technical details, but still seek to clarify jargon and encapsulate explanations, and there’s always an eye towards painting a broader picture so we can contextualize a subject’s placement in a field as a whole. I think that FLI’s podcasts often do a tightrope act, walking the line between popular audience and field specialists in a way that doesn’t treat the former like children, and doesn’t bore the latter with a lack of substance. And that’s a really hard thing to do. And I think it’s a rare opportunity to be able to help create something like this.

Kirsten Gronlund: I guess really broadly, I feel like there’s sort of this sense generally that a lot of these technologies and things that we’re coming up with are going to fix a lot of issues on their own. Like new technology will help us feed more people, and help us end poverty, and I think that that’s not true. We already have the resources to deal with a lot of these problems, and we haven’t been. So I think, really, we need to figure out a way to use what is coming out and the things that we’re inventing to help people. Otherwise we’re going to end up with a lot of new technology making the top 1% way more wealthy, and everyone else potentially worse off.

So I think for me that’s really what it is, is to try to communicate to people that these technologies are not, on their own, the solution, and we need to all work together to figure out how to implement them, and how to restructure things in society more generally so that we can use these really amazing tools to make the world better.

Lucas Perry: Yeah. I’m just thinking about how technology enables abundance and how it seems like there are not limits to human greed, and there are limits to human greed. Human greed can potentially want infinite power, but also there’s radically diminishing returns on one’s own happiness and wellbeing as one gains more access to more abundance. It seems like there’s kind of a duality there. 

Kirsten Gronlund: I agree. I mean, I think that’s a very effective altruist way to look at it. That those same resources, if everyone has some power and some money, people will on average be happier than if you have all of it and everyone else has less. But I feel like people, at least people who are in the position to accumulate way more money than they could ever use, tend to not think of it that way, which is unfortunate.

Tucker Davey: In general with working with FLI, I think I’m motivated by some mix of fear and hope. And I would say the general fear is that, if we as a species don’t figure out how to cooperate on advanced technology, and if we don’t agree to avoid certain dangerous paths, we’ll inevitably find some way to destroy ourselves, whether it’s through AI or nuclear weapons or synthetic biology. But then that’s also balanced by a hope that there’s so much potential for large scale cooperation to achieve our goals on these issues, and so many more people are working on these topics as opposed to five years ago. And I think there really is a lot of consensus on some broad shared goals. So I have a hope that through cooperation and better coordination we can better tackle some of these really big issues.

Emilia Javorsky: Part of the reason as a physician I went into the research side of it is this idea of wanting to help people at scale. I really love the idea of how do we use science and translational medicine, not just to help one person, but to help whole populations of people. And so for me, this issue of lethal autonomous weapons is the converse of that. This is something that really has the capacity to both destroy lives at scale in the near term, and also as we think towards questions like value alignment and longer term, more existential questions, it’s something that for me is just very motivating. 

Jared Brown: This is going to sound a little cheesy and maybe even a little selfish, but my main motivation is my kids. I know that they have a long life ahead of them, hopefully, and there’s various different versions of the future that’ll better or worse for them. And I know that emerging technology policy is going to be key to maximizing the benefit of their future and everybody else’s, and that’s ultimately what motivates me. I’ve been thinking about tech policy basically ever since I started researching and reading Futurism books when my daughter was born about eight years ago, and that’s what really got me into the field and motivated to work on it full-time.

Meia Chita-Tegmark: I like to think of my work as being ultimately about people. I think that one of the most interesting aspects of this human drama is our relationship with technology, which recently has become ever more promising and also ever more dangerous. So, I want to study that, and I feel crazy lucky that there are universities willing to pay me to do it. And also, to the best of my abilities, I want to try to nudge people, and the technologies that they develop, in more positive directions. I'd like to see a world where technology is used to save lives and not to take lives. I'd like to see technologies that are used for nurture and care rather than power and manipulation.

Jessica Cussins Newman: I think the integration of machine intelligence into the world around us is one of the most impactful changes that we'll experience in our lifetimes. I'm really excited about the beneficial uses of AI, but I worry about its impacts, and about the questions of not just what we can build, but what we should build. I worry that we could see these technologies being destabilizing, or that we won't be sufficiently thoughtful about ensuring that the systems aren't developed or used in ways that expose us to new vulnerabilities, or impose undue burdens on particular communities.

Anthony Aguirre: I would say it's kind of a combination of things. Everybody looks at the world and sees that there are all kinds of problems and issues and negative directions that lots of things are going in, and it feels frustrating and depressing. And I feel that, given that I've got a particular day job that affords me a lot of freedom, given that I have this position at the Future of Life Institute, and given that there are a lot of talented people around who I'm able to work with, there's a huge opportunity, and a rare opportunity, to actually do something.

Who knows how effective it'll actually be in the end, but to try to do something and to take advantage of the freedom, and standing, and relationships, and capabilities that I have available. I kind of see that as a duty in a sense: if you find yourself in a place where you have a certain set of capabilities, and resources, and flexibility, and safety, you kind of have a duty to make use of that for something beneficial. I sort of feel that, and so I try to do so. But I also feel like it's just super interesting thinking about the ways that you can create things that can be effective; it's just a fun intellectual challenge.

There are certainly aspects of what I do at Future of Life Institute that are sort of, “Oh, yeah, this is important so I should do it, but I don’t really feel like it.” Those are occasionally there, but mostly it feels like, “Ooh, this is really interesting and exciting, I want to get this done and see what happens.” So in that sense it’s really gratifying in both ways, to feel like it’s both potentially important and positive, but also really fun and interesting.

Max Tegmark: What really motivates me is this optimistic realization that after 13.8 billion years of cosmic history, we have reached this fork in the road where we have these conscious entities on this little spinning ball in space here who, for the first time ever, have the future in their own hands. In the stone age, who cared what you did? Life was going to be more or less the same 200 years later regardless, right? Whereas now, we can either develop super powerful technology and use it to destroy life on earth completely, go extinct and so on. Or, we can create a future where, with the help of artificial intelligence amplifying our intelligence, we can help life flourish like never before. And I’m not talking just about the next election cycle, I’m talking about for billions of years. And not just here, but throughout much of our amazing universe. So I feel actually that we have a huge responsibility, and a very exciting one, to make sure we don’t squander this opportunity, don’t blow it. That’s what lights me on fire.

Lucas Perry: So I’m deeply motivated by the possibilities of the deep future. I often take cosmological or macroscopic perspectives when thinking about my current condition or the condition of life on earth. The universe is about 13.8 billion years old and our short lives of only a few decades are couched within the context of this ancient evolving system of which we are a part. As far as we know, consciousness has only really exploded and come onto the scene in the past few hundred million years, at least in our sector of space and time, and the fate of the universe is uncertain but it seems safe to say that we have at least billions upon billions of years left before the universe perishes in some way. That means there’s likely longer than the current lifetime of the universe for earth originating intelligent life to do and experience amazing and beautiful things beyond what we can even know or conceive of today.

It seems very likely to me that the peaks and depths of human consciousness, from the worst human misery to the greatest of joy, peace, euphoria, and love, represent only a very small portion of a much larger and higher dimensional space of possible conscious experiences. So given this, I'm deeply moved by the possibility of artificial intelligence being the next stage in the evolution of life, and by the capacities for that intelligence to solve existential risk, to explore the space of consciousness, and to optimize the world for superintelligent and astronomical degrees of the most meaningful and profound states of consciousness possible. So sometimes I ask myself, what's a universe good for if not ever evolving into higher and more profound and intelligent states of conscious wellbeing? I'm not sure, and this is still an open question for sure, but this deeply motivates me, as I feel that the future can be unimaginably good to degrees and kinds of wellbeing that we can't even conceive of today. There's a lot of capacity there for the future to be something that is really, really, really worth getting excited and motivated about.

And moving along in terms of questioning again here, this question is again for the whole team: do you have anything more specifically that you hope results from your work, or is born of your work at FLI?

Jared Brown: So, I have two primary objectives. The first is sort of minor but significant. A lot of what I do on a day-to-day basis is advocate for relatively minor changes to existing and future near-term policy on emerging technology. And some of these changes won't make a world of difference unto themselves, but the small marginal benefits to the future can accumulate rather significantly over time. So, I look for as many small wins as possible in different policy-making environments, and try and achieve those on a regular basis.

And then more holistically, in the long run, I really want to help destigmatize the discussion around global catastrophic and existential risk in traditional national security and international security policy-making. It's still quite an obscure and weird thing to say to people that I work on global catastrophic and existential risk, and it really shouldn't be. I should be able to talk to most policy-makers in security-related fields and have it not come off as a weird or odd thing to be working on. Because inherently what we're talking about is the very worst of what could happen to you, or humanity, or even life as we know it on this planet. And there should be more people who work on these issues, both from an effective altruist perspective and other perspectives, going forward.

Jessica Cussins Newman: I want to raise awareness about the impacts of AI and the kinds of levers that we have available to us today to help shape these trajectories. So from designing more robust machine learning models, to establishing the institutional procedures or processes that can track and monitor those design decisions and outcomes and impacts, to developing accountability and governance mechanisms to ensure that those AI systems are contributing to a better future. We’ve built a tool that can automate decision making, but we need to retain human control and decide collectively as a society where and how to implement these new abilities.

Max Tegmark: I feel that there’s a huge disconnect right now between our potential, as the human species, and the direction we’re actually heading in. We are spending most of our discussions in news media on total BS. You know, like country A and country B are squabbling about something which is quite minor, in the grand scheme of things, and people are often treating each other very badly in the misunderstanding that they’re in some kind of zero-sum game, where one person can only get better off if someone else gets worse off. Technology is not a zero-sum game. Everybody wins at the same time, ultimately, if you do it right. 

Why are we so much better off now than 50,000 years ago or 300 years ago? It’s because we have antibiotics so we don’t die of stupid diseases all the time. It’s because we have the means to produce food and keep ourselves warm, and so on, with technology, and this is nothing compared to what AI can do.

I’m very much hoping that this mindset that we all lose together or win together is something that can catch on a bit more as people gradually realize the power of this tech. It’s not the case that either China is going to win and the U.S. is going to lose, or vice versa. What’s going to happen is either we’re both going to lose because there’s going to be some horrible conflict and it’s going to ruin things for everybody, or we’re going to have a future where people in China are much better off, and people in the U.S. and elsewhere in the world are also much better off, and everybody feels that they won. There really is no third outcome that’s particularly likely.

Lucas Perry: So, in the short term, I'm hoping that all of the projects we're engaging with help to nudge the trajectory of life on earth in a positive direction. I'm hopeful that we can mitigate an arms race in lethal autonomous weapons. I see that as being a crucial first step in coordination around AI issues such that, if it fails, it may be much harder to coordinate in the future on making sure that beneficial AI takes place. I am also hopeful that we can promote beneficial AI alignment and AI safety research further, and mainstream its objectives and its understandings about the risks posed by AI and what it means to create beneficial AI. I'm hoping that we can maximize the wisdom with which we handle technology through projects and outreach which explicitly cultivate ethics, coordination, and governance in ways that help direct and develop technologies beneficially.

I'm also hoping that we can promote and instantiate a culture of, and interest in, existential risk issues and the technical, political, and philosophical problems associated with powerful emerging technologies like AI. It would be wonderful if the conversations that we have on the podcast, at FLI, and in the surrounding community weren't just something for us. These are issues that are deeply interesting and will only become more important as technology becomes more powerful. And so I'm really hoping that one day discussions about existential risk, and all the kinds of conversations that we have on the podcast, are much more mainstream and normal, that there are serious institutions in government and society which explore these issues, and that this is part of common discourse as a society and civilization.

Emilia Javorsky: In an ideal world, for all of FLI's work in this area, a great outcome would be the realization of the Asilomar principle that an arms race in lethal autonomous weapons must be avoided. I hope that we do get there in the shorter term. I think the activities that we're doing now, increasing awareness around this issue and better understanding and characterizing the unique risks that these systems pose across the board, from a national security perspective, a human rights perspective, and an AI governance perspective, are a really big win in my book.

Meia Chita-Tegmark: When I allow myself to unreservedly daydream about how I want my work to manifest itself into the world, I always conjure up fantasy utopias in which people are cared for and are truly inspired. For example, that’s why I am very committed to fighting against the development of lethal autonomous weapons. It’s precisely because a world with such technologies would be one in which human lives would be cheap, killing would be anonymous, our moral compass would likely be very damaged by this. I want to start work on using technology to help people, maybe to heal people. In my research, I tried to think of various disabilities and how technology can help with those, but that is just one tiny aspect of a wealth of possibilities for using technology, and in particular, AI for good.

Anthony Aguirre: I'll be quite gratified if I find that some results of some of the things that I've done help society be better and more ready, and to wisely deal with challenges that are unfolding. There are a huge number of problems in society, but there is a particular subset that are sort of exponentially growing problems, because they have to do with exponentially advancing technology. And the set of people who are actually thinking proactively about the problems that those technologies are going to create, rather than just creating the technologies or sort of dealing with the problems when they arise, is quite small.

FLI is a pretty significant part of that tiny community of people who are thinking about that. But I also think it’s very important. Problems are better solved in advance, if possible. So I think anything that we can do to nudge things in the right direction, taking the relatively high point of leverage I think the Future of Life Institute has, will feel useful and worthwhile. Any of these projects being successful, I think will have a significant positive impact, and it’s just a question of buckling down and trying to get them to work.

Kirsten Gronlund: A big part of this field, not necessarily, but sort of just historically, has been that it's very male, and it's very white, and in and of itself it's a pretty privileged group of people. Something that I personally care about a lot is to try to expand some of these conversations around the future, and what we want it to look like, and how we're going to get there, and to involve more people and more diverse voices, more perspectives.

It goes along with what I was saying, that if we don’t figure out how to use these technologies in better ways, we’re just going to be contributing to people who have historically been benefiting from technology, and so I think bringing some of the people who have historically not been benefiting from technology and the way that our society is structured into these conversations, can help us figure out how to make things better. I’ve definitely been trying, while we’re doing this book guide thing, to make sure that there’s a good balance of male and female authors, people of color, et cetera and same with our podcast guests and things like that. But yeah, I mean I think there’s a lot more to be done, definitely, in that area.

Tucker Davey: So with the projects related to FLI’s AI communication strategy, I am hopeful that as an overall community, as an AI safety community, as an effective altruism community, existential risk community, we’ll be able to better understand what our core beliefs are about risks from advanced AI, and better understand how to communicate to different audiences, whether these are policymakers that we need to convince that AI is a problem worth considering, or whether it’s just the general public, or shareholders, or investors. Different audiences have different ideas of AI, and if we as a community want to be more effective at getting them to care about this issue and understand that it’s a big risk, we need to figure out better ways to communicate with them. And I’m hoping that a lot of this communications work will help the community as a whole, not just FLI, communicate with these different parties and help them understand the risks.

Ian Rusconi: Well, I can say that I’ve learned more since I started working on these podcasts about more disparate subjects than I had any idea about. Take lethal autonomous weapon systems, for example, I didn’t know anything about that subject when I started. These podcasts are extremely educational, but they’re conversational, and that makes them accessible, and I love that. And I hope that as our audience increases, other people find the same thing and keep coming back because we learn something new every time. I think that through podcasts, like the ones that we put out at FLI, we are enabling that sort of educational enrichment.

Lucas Perry: Cool. I feel the same way. So, you actually have listened to more FLI podcasts than perhaps anyone, since you’ve listened to all of them. Of all of these podcasts, do you have any specific projects, or a series that you have found particularly valuable? Any favorite podcasts, if you could mention a few, or whatever you found most valuable?

Ian Rusconi: Yeah, a couple of things. First, back in February, Ariel and Max Tegmark did a two part conversation with Matthew Meselson in advance of FLI awarding him in April, and I think that was probably the most fascinating and wide ranging single conversation I’ve ever heard. Philosophy, science history, weapons development, geopolitics, the value of the humanities from a scientific standpoint, artificial intelligence, treaty development. It was just such an incredible amount of lived experience and informed perspective in that conversation. And, in general, when people ask me what kinds of things we cover on the FLI podcast, I point them to that episode.

Second, I’m really proud of the work that we did on Not Cool, A Climate Podcast. The amount of coordination and research Ariel and Kirsten put in to make that project happen was staggering. I think my favorite episodes from there were those dealing with the social ramifications of climate change, specifically human migration. It’s not my favorite topic to think about, for sure, but I think it’s something that we all desperately need to be aware of. I’m oversimplifying things here, but Kris Ebi’s explanations of how crop failure and malnutrition and vector borne diseases can lead to migration, Cullen Hendrix touching on migration as it relates to the social changes and conflicts born of climate change, Lindsay Getschel’s discussion of climate change as a threat multiplier and the national security implications of migration.

Migration is happening all the time and it’s something that we keep proving we’re terrible at dealing with, and climate change is going to increase migration, period. And we need to figure out how to make it work and we need to do it in a way that ameliorates living standards and prevents this extreme concentrated suffering. And there are questions about how to do this while preserving cultural identity, and the social systems that we have put in place, and I know none of these are easy. But if instead we’d just take the question of, how do we reduce suffering? Well, we know how to do that and it’s not complicated per se: have compassion and act on it. We need compassionate government and governance. And that’s a thing that came up a few times, sometimes directly and sometimes obliquely, in Not Cool. The more I think about how to solve problems like these, the more I think the intelligent answer is compassion.

Lucas Perry: So, do you feel like you just learned a ton about climate change from the Not Cool podcast that you just had no idea about?

Ian Rusconi: Yeah, definitely. And that’s really something that I can say about all of FLI’s podcast series in general, is that there are so many subtopics on the things that we talk about that I always learn something new every time I’m putting together one of these episodes. 

Some of the most thought-provoking podcasts to me are the ones about the nature of intelligence and cognition, and what it means to experience something, and how we make decisions. Two of the AI Alignment Podcast episodes from this year stand out to me in particular. First was the one with Josh Green in February, which did an excellent job of explaining the symbol grounding problem and grounded cognition in an understandable and engaging way. And I'm also really interested in his lab's work using the veil of ignorance. Second was the episode with Mike Johnson and Andres Gomez Emilsson of the Qualia Research Institute in May, where I particularly liked the discussion of electromagnetic harmony in the brain, and the interaction between the consonance and dissonance of its waves, and how you can basically think of music as a means by which we can hack our brains. Again, it gets back to the fabulously, extraordinarily interdisciplinary aspect of everything that we talk about here.

Lucas Perry: Kirsten, you’ve also been integral to the podcast process. What are your favorite things that you’ve done at FLI in 2019, and are there any podcasts in particular that stand out for you?

Kirsten Gronlund: The Women For The Future campaign was definitely one of my favorite things, which was basically just trying to highlight the work of women involved in existential risk, and through that try to get more women feeling like this is something that they can do and to introduce them to the field a little bit. And then also the Not Cool Podcast that Ariel and I did. I know climate isn’t the major focus of FLI, but it is such an important issue right now, and it was really just interesting for me because I was much more closely involved with picking the guests and stuff than I have been with some of the other podcasts. So it was just cool to learn about various people and their research and what’s going to happen to us if we don’t fix the climate. 

Lucas Perry: What were some of the most interesting things that you learned from the Not Cool podcast? 

Kirsten Gronlund: Geoengineering was really crazy. I didn't really know at all what geoengineering was before working on this podcast, and I think it was Alan Robock in his interview who was saying that even just for people to learn that one of the solutions being considered for climate change right now is shooting a ton of crap into the atmosphere and basically creating a semi-nuclear winter would hopefully be enough to kind of freak people out into being like, "maybe we should try to fix this a different way." So that was really crazy.

I also thought it was interesting just learning about some of the effects of climate change that you wouldn’t necessarily think of right away. The fact that they’ve shown the links between increased temperature and upheaval in government, and they’ve shown links between increased temperature and generally bad mood, poor sleep, things like that. The quality of our crops is going to get worse, so we’re going to be eating less nutritious food.

Then some of the cool things, and I guess this ties in as well with artificial intelligence, are some of the ways that people are using technologies like AI and machine learning to try to come up with solutions. I thought that was really cool to learn about, because that's kind of what I was saying earlier: if we can figure out how to use these technologies in productive ways, they are such powerful tools and can do so much good for us. So it was cool to see that in action, in the ways that people are implementing automated systems and machine learning to reduce emissions and help out with the climate.

Lucas Perry: From my end, I'm probably most proud of our large conference, Beneficial AGI 2019, which we held to further mainstream AGI safety thinking and research; the projects that grew out of conversations which took place there were also very exciting and encouraging. I'm also very happy about the growth and development of our podcast series. This year, we've had over 200,000 listens to our podcasts. So I'm optimistic about the continued growth and development of our outreach through this medium, and our capacity to inform people about these crucial issues.

Everyone else, other than podcasts, what are some of your favorite things that you’ve done at FLI in 2019?

Tucker Davey: I would have to say the conferences. So the beneficial AGI conference was an amazing start to the year. We gathered such a great crowd in Puerto Rico, people from the machine learning side, from governance, from ethics, from psychology, and really getting a great group together to talk out some really big questions, specifically about the long-term future of AI, because there’s so many conferences nowadays about the near term impacts of AI, and very few are specifically dedicated to thinking about the long term. So it was really great to get a group together to talk about those questions and that set off a lot of good thinking for me personally. That was an excellent conference. 

And then a few months later, Anthony and a few others organized a conference called the Augmented Intelligence Summit, and that was another great collection of people from many different disciplines, basically thinking about a hopeful future with AI and trying to do world building exercises to figure out what that ideal world with AI would look like. These conferences and these events in these summits do a great job of bringing together people from different disciplines in different schools of thought to really tackle these hard questions, and everyone who attends them is really dedicated and motivated, so seeing all those faces is really inspiring.

Jessica Cussins Newman: I've really enjoyed the policy engagement that we've been able to have this year. Looking back to last year, we did see a lot of successes around the development of ethical principles for AI, and this past year there's been significant interest in actually implementing those principles in practice. So we're seeing many different governance forums, both within the U.S. and around the world, look to that next level, and I think one of my favorite things has just been seeing FLI become a trusted resource for so many of those governance and policy processes that I think will significantly shape the future of AI.

I think the thing that I continue to value significantly about FLI is its ability as an organization to just bring together an amazing network of AI researchers and scientists, and to be able to hold events, and networking and outreach activities, that can merge those communities with other people thinking about issues around governance or around ethics or other kinds of sectors and disciplines. We have been playing a key role in translating some of the technical challenges related to AI safety and security into academic and policy spheres. And so that continues to be one of my favorite things that FLI is really uniquely good at.

Jared Brown: A recent example here: the Future of Life Institute submitted comments on a regulation that the Department of Housing and Urban Development put out in the U.S. The regulation is quite complicated, but essentially they were seeking comment about how to integrate artificial intelligence systems into the legal liability framework surrounding something called the Fair Housing Act, which is an old, very important piece of civil rights legislation that protects against discrimination in the housing market. Their proposal was essentially to grant users, such as a mortgage lender, a bank issuing loans, or even a landlord deciding who to rent a place to, liability protection if the algorithm they used to make those decisions met certain technical standards. And this stems from the growing use of AI in the housing market.

Now, in theory, there’s nothing wrong with using algorithmic systems so long as they’re not biased, and they’re accurate, and well thought out. However, if you grant blanket liability protection, as HUD wanted to, you’re essentially telling that bank officer or that landlord that they should exclusively use those AI systems that have the liability protection. And if they see a problem in those AI systems, and they’ve got somebody sitting across from them and think this person really should get a loan, or this person should be able to rent my apartment because I think they’re trustworthy, but the AI algorithm says “no,” they’re not going to dispute what the AI algorithm tells them to do, because to do that, they’d take on liability of their own, and could potentially get sued. So, there’s a real danger here in moving too quickly in terms of how much legal protection we give these systems. And so, the Future of Life Institute, as well as many other groups, commented on this proposal and pointed out these flaws to the Department of Housing and Urban Development. That’s just one example of the many different things that the Future of Life Institute has done, and you can actually go online and see our public comments for yourself, if you want to.

Lucas Perry: Wonderful.

Jared Brown: Honestly, a lot of my favorite things are just these off the record type conversations that I have in countless formal and informal settings with different policymakers and people who influence policy. The policy-making world is an old-fashioned, face-to-face type business, and essentially you really have to be there, and to meet these people, and to have these conversations to really develop a level of trust, and a willingness to engage with them in order to be most effective. And thankfully I’ve had a huge range of those conversations throughout the year, especially on AI. And I’ve been really excited to see how well received Future of Life has been as an institution. Our reputation precedes us because of a lot of the great work we’ve done in the past with the Asilomar AI principles, and the AI safety grants. It’s really helped me get in the room for a lot of these conversations, and given us a lot of credibility as we discuss near-term AI policy.

In terms of bigger public projects, I also really enjoyed coordinating with some community partners across the space in our advocacy on the U.S. National Institute of Standards and Technology’s plan for engaging in the development of technical standards on AI. In the policy realm, it’s really hard to see some of the end benefit of your work, because you’re doing advocacy work, and it’s hard to get folks to really tell you why certain changes were made, and whether you were able to persuade them. But in this circumstance, I happen to know for a fact that we had a real positive effect on the end product that they developed. I talked to the lead authors and others about it, and can see the evidence of our suggested changes in the final product.

In addition to our policy and advocacy work, I really, really like that FLI continues to interface with the AI technical expert community on a regular basis. And this isn’t just through our major conferences, but also informally throughout the entire year, through various different channels and personal relationships that we’ve developed. It’s really critical for anyone’s policy work to be grounded in the technical expertise on the topic that they’re covering. And I’ve been thankful for the number of opportunities I’ve been given throughout the year to really touch base with some of the leading minds in AI about what might work best, and what might not work best from a policy perspective, to help inform our own advocacy and thinking on various different issues.

I also really enjoy the educational and outreach work that FLI is doing. As with our advocacy work, it’s sometimes very difficult to see the end benefit of the work that we do with our podcasts, and our website, and our newsletter. But I know anecdotally, from various different people, that they are listened to and read by leading policymakers and researchers in this space. And so, they have a real effect on developing a common understanding in the community and helping network and develop collaboration on some key topics that are of interest to the Future of Life and people like us.

Emilia Javorsky: 2019 was a great year at FLI. It’s my first year at FLI, so I’m really excited to be part of such an incredible team. There are two real highlights that come to mind. One was publishing an article in the British Medical Journal on this topic of engaging the medical community in the lethal autonomous weapons debate. In previous disarmament conversations, the medical community has always played an instrumental role in getting global action passed on these issues, whether you look at nuclear weapons, landmines, or biorisk. So that was something that I thought was a great contribution, because up until now, they hadn’t really been engaged in the discussion.

The other that comes to mind that was really amazing was a workshop that we hosted, where we brought together AI researchers, and roboticists, and lethal autonomous weapons experts, with a very divergent range of views on the topic, to see if they could achieve consensus on something. Anything. Going into it, we weren’t really optimistic about what that could be, and the result was actually remarkably heartening. They came up with a roadmap that outlined four components for action on lethal autonomous weapons, including things like the potential role that a moratorium may play, research areas that need exploration, non-proliferation strategies, and ways to avoid unintentional escalation. They actually published this in IEEE Spectrum, which I really recommend reading, but it was just really exciting to see how much agreement and consensus can exist among people that you would normally think have very divergent views on the topic.

Max Tegmark: To make it maximally easy for them to get along, we actually did this workshop in our house, and we had lots of wine. And because they were in our house, it was also a bit easier to exert social pressure on them to make sure they were nice to each other and had a constructive discussion. The task we gave them was simply: write down anything they all agreed should be done to reduce the risk of terrorism or destabilizing events from this tech. And you might’ve expected a priori that they would come up with a blank piece of paper, because some of these people had been arguing very publicly that we need lethal autonomous weapons, and others had been arguing very vociferously that we should ban them. Instead, it was just so touching to see that when they actually met each other, often for the first time, they could listen directly to each other, rather than seeing weird quotes in the news about each other.

Meia Chita-Tegmark: If I had to pick one thing, especially in terms of emotional intensity, it’s really been a while since I’ve been on such an emotional roller coaster as the one during the workshop related to lethal autonomous weapons. It was so inspirational to see how people that come with such diverging opinions could actually put their minds together, and work towards finding consensus. For me, that was such a hope inducing experience. It was a thrill.

Max Tegmark: They built a real camaraderie and respect for each other, and they wrote this report with five different sets of recommendations in different areas, including a moratorium on these things and all sorts of measures to reduce proliferation, and terrorism, and so on, and that made me feel more hopeful.

We got off to a great start, I feel, with our January 2019 Puerto Rico conference. This was the third one in a series where we brought together world-leading AI researchers from academia and industry, and other thinkers, to talk not about how to make AI more powerful, but how to make it beneficial. And what I was particularly excited about was that this was the first time we also had a lot of people from China. So it wasn’t just this little western club; it felt much more global. It was very heartening to see how well everybody got along and the shared visions people really had. And I hope that if the people who are actually building this stuff can all get along and can help spread this kind of constructive collaboration to the politicians and political leaders in their various countries, we’ll all be much better off.

Anthony Aguirre: That felt really worthwhile in multiple aspects. One, it was just a great meeting, getting together with this small, but really passionately positive, and smart, and well-intentioned, and friendly community. It’s so nice to get together with all those people; it’s very inspiring. But also, out of that meeting came a whole bunch of ideas for very interesting and important projects. And so some of the things that I’ve been working on are projects that came out of that meeting, and there’s a whole long list of other projects that came out of that meeting, some of which some people are doing, some of which are just sitting, gathering dust, because there aren’t enough people to do them. That feels like really good news. It’s amazing when you get a group of smart people together to think in a way that hasn’t really been widely done before. Like, “Here’s the world 20 or 30 or 50 or 100 years from now, what are the things that we’re going to want to have happened in order for the world to be good then?”

Not many people sit around thinking that way very often. So to get 50 or 100 people who are really talented together thinking about that, it’s amazing how easy it is to come up with a set of really compelling things to do. Now actually getting those done, getting the people and the money and the time and the organization to get those done is a whole different thing. But that was really cool to see, because you can easily imagine things that have a big influence 10 or 15 years from now that were born right at that meeting.

Lucas Perry: Okay, so that hits on BAGI. So, were there any other policy-related things that you’ve done at FLI in 2019 that you’re really excited about?

Anthony Aguirre: It’s been really good to see, both at FLI and globally, the new and very serious attention being paid to AI policy and technology policy in general. We created the Asilomar principles back in 2017, and now, two years later, there are multiple other sets of principles, many of which are overlapping and some of which aren’t. And more importantly, there are now institutions coming into being, and international groups like the OECD, the United Nations, and the European Union, maybe someday the US government, are actually taking seriously these sets of principles about how AI should be developed and deployed, so as to be beneficial.

There’s kind of now too much going on to keep track of: multiple bodies, conferences practically every week. So the FLI policy team has been kept busy just keeping track of what’s going on, and working hard to positively influence all these efforts. Because of course, while there’s a lot going on, it doesn’t necessarily mean that there’s a huge amount of expertise available to feed those efforts. AI is relatively new on the world’s stage, at least at the size that it’s assuming. At the intersection of AI and policy expertise, there just aren’t a huge number of people who are ready to give useful advice on the policy side, the technical side, what the ramifications are, and so on.

So I think the fact that FLI has been there from the early days of AI policy five years ago, means that we have a lot to offer to these various efforts that are going on. I feel like we’ve been able to really positively contribute here and there, taking opportunistic chances to lend our help and our expertise to all kinds of efforts that are going on and doing real serious policy work. So that’s been really interesting to see that unfold and how rapidly these various efforts are gearing up around the world. I think that’s something that FLI can really do, bringing the technical expertise to make those discussions and arguments more sophisticated, so that we can really take it to the next step and try to get something done.

Max Tegmark: Another one which was very uplifting is this tradition we have to celebrate unsung heroes. So three years ago we celebrated the guy who prevented the world from getting nuked in 1962, Vasili Arkhipov. Two years ago, we celebrated the man who probably helped us avoid getting nuked in 1983, Stanislav Petrov. And this year we celebrated an American who I think has done more than anyone else to prevent all sorts of horrible things happening with bioweapons, Matthew Meselson from Harvard, who ultimately persuaded Kissinger, who persuaded Brezhnev and everyone else that we should just ban them. 

We celebrated them all by giving them or their survivors a $50,000 award and having a ceremony where we honored them, to remind the world of how valuable it is when you can just draw a clear, moral line between the right thing to do and the wrong thing to do. Even though we call this the Future of Life award officially, informally, I like to think of this as our unsung hero award, because there really aren’t awards particularly for people who prevented shit from happening. Almost all awards are for someone causing something to happen. Yet, obviously we wouldn’t be having this conversation if there’d been a global thermonuclear war. And it’s so easy to think that just because something didn’t happen, there’s not much to think about it. I’m hoping this can help create both a greater appreciation of how vulnerable we are as a species and the value of not being too sloppy. And also, that it can help foster a tradition that if someone does something that future generations really value, we actually celebrate them and reward them. I want us to have a norm in the world where people know that if they sacrifice themselves by doing something courageous, that future generations will really value, then they will actually get appreciation. And if they’re dead, their loved ones will get appreciation.

We now feel incredibly grateful that our world isn’t radioactive rubble, or that we don’t have to read about bioterrorism attacks in the news every day. And we should show our gratitude, because this sends a signal to people today who can prevent tomorrow’s catastrophes. And the reason I think of this as an unsung hero award, and the reason these people have been unsung heroes, is because what they did was often going a little bit against what they were supposed to do at the time, according to the little system they were in, right? Arkhipov and Petrov, neither of them got any medals for averting nuclear war because their peers either were a little bit pissed at them for violating protocol, or a little bit embarrassed that we’d almost had a war by mistake. And we want to send the signal to the kids out there today that, if push comes to shove, you got to go with your own moral principles.

Lucas Perry: Beautiful. What project directions are you most excited about moving in, in 2020 and beyond?

Anthony Aguirre: Along with the ones that I’ve already mentioned, something I’ve been involved with is Metaculus, this prediction platform, and the idea there is that there are certain facts about the future world, and Metaculus is a way to predict probabilities for those facts being true about the future world. But there are also facts about the current world that we either don’t know whether they’re true or not, or disagree about whether they’re true or not. Something I’ve been thinking a lot about is how to extend the predictions of Metaculus into a general truth-seeking mechanism. If there’s something that’s contentious now, and people disagree about something that should be sort of a fact, can we come up with a reliable truth-seeking arbiter that people will believe, because it’s been right in the past, and it has a very clear, reliable track record for getting things right, in the same way that Metaculus has that record for getting predictions right?

So that’s something that interests me a lot, is kind of expanding that very strict level of accountability and track record creation from prediction to just truth-seeking. And I think that could be really valuable, because we’re entering this phase where people feel like they don’t know what’s true and facts are under contention. People simply don’t know what to believe. The institutions that they’re used to trusting to give them reliable information are either conflicting with each other or getting drowned in a sea of misinformation.

Lucas Perry: So, would this institution gain its credibility and epistemic status and respectability by taking positions on unresolved, yet concrete issues, which are likely to resolve in the short-term?

Anthony Aguirre: Or the not as short-term. But yeah, so just like in a prediction, where there might be disagreements as to what’s going to happen because nobody quite knows, and then at some point something happens and we all agree, “Oh, that happened, and some people were right and some people were wrong,” I think there are many propositions under contention now, but in a few years when the dust has settled and there’s not so much heat about them, everybody’s going to more or less agree on what the truth was.

And so I think, in a sense, this is about saying, “Here’s something that’s contentious now, let’s make a prediction about how that will turn out to be seen five or 10 or 15 years from now, when the dust has settled and people more or less agree on how this was.”

I think there’s only so long that people can go without feeling like they can actually rely on some source of information. I mean, I do think that there is a reality out there, and ultimately you have to pay a price if you are not acting in accordance with what is true about that reality. You can’t indefinitely win by just denying the truth of the way that the world is. People seem to do pretty well for a while, but I maintain my belief that eventually there will be a competitive advantage in understanding the way things actually are, rather than your fantasy of them.

In the past, we did have trusted institutions that people generally listened to and felt they were being told the basic truth by. Now, those institutions weren’t always right, and there were lots of problems with them, but we’ve lost something, in that almost nobody trusts anything anymore at some level, and we have to get that back. We will solve this problem, I think, in the sense that we sort of have to. What that solution will look like is unclear, and this is sort of an effort to feel our way towards a potential solution to that.
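To make the idea of a quantitative track record concrete, here is a minimal sketch, in Python, of one standard way to score resolved predictions (the Brier score). This is purely illustrative and not Metaculus’s actual scoring method; the example forecasts below are hypothetical.

```python
# Illustrative sketch only: scoring a track record of resolved predictions
# with the Brier score. Not Metaculus's actual methodology.

def brier_score(forecast_probability: float, outcome: bool) -> float:
    """Squared error between a probability forecast and the 0/1 outcome.
    0.0 is a perfect forecast; always guessing 50% scores 0.25."""
    return (forecast_probability - (1.0 if outcome else 0.0)) ** 2

def track_record(resolved_predictions):
    """Average Brier score over (probability, outcome) pairs; lower is better.
    A consistently low average is a concrete, checkable track record."""
    if not resolved_predictions:
        return None
    scores = [brier_score(p, happened) for p, happened in resolved_predictions]
    return sum(scores) / len(scores)

# Hypothetical resolved questions: (forecast probability, what actually happened)
history = [(0.9, True), (0.2, False), (0.7, True), (0.6, False)]
print(track_record(history))  # 0.125, i.e. better than chance-level 0.25
```

A forecaster, or the kind of truth-seeking institution described above, could publish a score like this over many resolved questions, so that its reliability is auditable rather than a matter of reputation.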

Tucker Davey: I’m definitely excited to continue this work on our AI messaging and generally just continuing the discussion about advanced AI and artificial general intelligence within the FLI team and within the broader community, to get more consensus about what we believe and how we think we should approach these topics with different communities. And I’m also excited to see how our policy team continues to make more splashes across the world, because it’s really been exciting to watch how Jared and Jessica and Anthony have been able to talk with so many diverse stakeholders and help them make better decisions about AI.

Jessica Cussins Newman: I’m most excited to see the further development of some of these global AI policy forums in 2020. For example, the OECD is establishing an AI policy observatory, which we’ll see further development on early next year. And FLI is keen to support this initiative, and I think it may be a really meaningful forum for global coordination and cooperation on some of these key global AI challenges. So I’m really excited to see what they can achieve.

Jared Brown: I’m really looking forward to the opportunity the Future of Life has to lead the implementation of a recommendation related to artificial intelligence from the UN’s High-Level Panel on Digital Cooperation. This is a group that was led by Jack Ma and Melinda Gates, and they produced an extensive report that had many different recommendations on a range of digital or cyber issues, including one specifically on artificial intelligence. And because of our past work, we were invited to be a leader on the effort to implement and further refine the recommendation on artificial intelligence. And we’ll be able to do that with cooperation from the governments of France and Finland, and also with a UN agency called the UN Global Pulse. So I’m really excited about this opportunity to help lead a major project in the global governance arena, and to help actualize how some of these early soft law norms that have developed in AI policy can be built upon for a better future.

I’m also excited about continuing to work with other civil society organizations, such as the Future of Humanity Institute, the Center for the Study of Existential Risk, and other groups that are like-minded in their approach to tech issues, and helping to inform how we work on AI policy in a number of different governance spaces, including with the European Union, the OECD, and other environments where AI policy has suddenly become the topic du jour for policy-makers.

Emilia Javorsky: Something that I’m really excited about is continuing to work on this issue of global engagement on the topic of lethal autonomous weapons, as I think this issue is heading in a very positive direction. By that I mean starting to move towards meaningful action. And really the only way we get to action on this issue is through education, because policy makers really need to understand what these systems are, what their risks are, and how AI differs from other traditional areas of technology that have really well established existing governance frameworks. So that’s something I’m really excited about for the next year. And this has been especially in the context of engaging with states at the United Nations. So it’s really exciting to continue those efforts and continue to keep this issue on the radar.

Kirsten Gronlund: I’m super excited about our website redesign. I think that’s going to enable us to reach a lot more people and communicate more effectively, and obviously it will make my life a lot easier. So I think that’s going to be great.

Lucas Perry: I’m excited about that too. I think there’s a certain amount of a maintenance period that we need to kind of go through now, with regards to the website and a bunch of the pages, so that everything is refreshed and new and structured better. 

Kirsten Gronlund: Yeah, we just need like a little facelift. We are aware that the website right now is not super user friendly, and we are doing an incredibly in-depth audit of the site to figure out, based on data, what’s working and what isn’t working, and how people would best be able to use the site to get the most out of the information that we have, because I think we have really great content, but the way that the site is organized is not super conducive to finding it, or using it.

So anyone who likes our site and our content but has trouble navigating or searching or anything: hopefully that will be getting a lot easier.

Ian Rusconi: I think I’d be interested in more conversations about ethics overall, and how ethical decision making is something that we need more of, as opposed to just economic decision making, and reasons for that with actual concrete examples. It’s one of the things that I find is a very common thread throughout almost all of the conversations that we have, but is rarely explicitly connected from one episode to another. And I think that there is some value in creating a conversational narrative about that. If we look at, say, the Not Cool Project, there are episodes about finance, and episodes about how the effects of what we’ve been doing to create a global economy have created problems. And if we look at the AI Alignment Podcasts, there are concerns about how systems will work in the future, and who they will work for, and who benefits from things. And if you look at FLI’s main podcast, there are concerns about denuclearization, and lethal autonomous weapons, and things like that, and there are major ethical considerations to be had in all of these.

And I think that there’s benefit in taking all of these ethical considerations, and talking about them specifically outside of the context of the fields that they are in, just as a way of getting more people to think about ethics. Not in opposition to thinking about, say, economics, but just to get people thinking about ethics as a stand-alone thing, before trying to introduce how it’s relevant to something. I think if more people thought about ethics, we would have a lot fewer problems than we do.

Lucas Perry: Yeah, I would be interested in that too. I would first want to know empirically how many of the decisions that the average human being makes in a day are actually informed by “ethical decision making,” and I guess my intuition at the moment is probably not that many?

Ian Rusconi: Yeah, I don’t know how much ethics plays into my autopilot-type decisions. I would assume probably not very much.

Lucas Perry: Yeah. We think about ethics explicitly a lot. I think that that definitely shapes my terminal values. But yeah, I don’t know, I feel confused about this. I don’t know how much of my moment to moment lived experience and decision making is directly born of ethical decision making. So I would be interested in that too, with that framing that I would first want to know the kinds of decision making faculties that we have, and how often each one is employed, and the extent to which improving explicit ethical decision making would help in making people more moral in general.

Ian Rusconi: Yeah, I could absolutely get behind that.

Max Tegmark: What I find also to be a concerning trend, and a predictable one, is that just like we had a lot of greenwashing in the corporate sector about environmental and climate issues, where people would pretend to care about the issues just so they didn’t really have to do much, we’re seeing a lot of what I like to call “ethics washing” now in AI, where people say, “Yeah, yeah. Okay, let’s talk about AI ethics now, like an ethics committee, and blah, blah, blah, but let’s not have any rules or regulations, or anything. We can handle this because we’re so ethical.” And interestingly, the very same people who talk the loudest about ethics are often among the ones who are the most dismissive about the bigger risks from human level AI, and beyond. And also the ones who don’t want to talk about malicious use of AI, right? They’ll be like, “Oh yeah, let’s just make sure that robots and AI systems are ethical and do exactly what they’re told,” but they don’t want to discuss what happens when some country, or some army, or some terrorist group has such systems, and tells them to do things that are horrible for other people. That’s an elephant in the room we’re looking forward to helping draw more attention to, I think, in the coming year.

And what I also feel is absolutely crucial here is to avoid splintering the planet again, into basically an eastern and a western zone of dominance that just don’t talk to each other. Trade is down between China and the West. China has its great firewall, so they don’t see much of our internet, and we also don’t see much of their internet. It’s becoming harder and harder for students to come here from China because of visas, and there’s sort of a partitioning into two different spheres of influence. And as I said before, this is a technology which could easily make everybody a hundred times better off or richer, and so on. You can imagine many futures where countries just really respect each other’s borders, and everybody can flourish. Yet, major political leaders are acting like this is some sort of zero-sum game.

I feel that this is one of the most important things to help people understand that, no, it’s not like we have a fixed amount of money or resources to divvy up. If we can avoid very disruptive conflicts, we can all have the future of our dreams.

Lucas Perry: Wonderful. I think this is a good place to end on that point. So, what are reasons that you see for existential hope, going into 2020 and beyond?

Jessica Cussins Newman: I have hope for the future because I have seen this trend where it’s no longer a fringe issue to talk about technology ethics and governance. And I think that used to be the case not so long ago. So it’s heartening that so many people and institutions, from engineers all the way up to nation states, are really taking these issues seriously now. I think that momentum is growing, and I think we’ll see engagement from even more people and more countries in the future.

I would just add that it’s a joy to work with FLI, because it’s an incredibly passionate team, and everybody has a million things going on, and still gives their all to this work and these projects. I think what unites us is that we all think these are some of the most important issues of our time, and so it’s really a pleasure to work with such a dedicated team.

Lucas Perry: Wonderful.

Jared Brown: As many of the listeners will probably realize, governments across the world have really woken up to this thing called artificial intelligence, and what it means for civil society, their governments, and the future really of humanity. And I’ve been surprised, frankly, over the past year, about how many of the new national and international strategies, the new principles, and so forth are actually quite aware of both the potential benefits and the real safety risks associated with AI. And frankly, at this time last year, I wouldn’t have thought that so many principles would have come out, that there would be so much positive work in those principles, or so much serious thought about the future of where this technology is going. And so, on the whole, I think the picture is much better than what most people might expect in terms of the level of high-level thinking that’s going on in policy-making about AI, its benefits, and its risks going forward. And so on that score, I’m quite hopeful that there are a lot of positive soft norms to work from. And hopefully we can work to implement those ideas and concepts going forward in real policy.

Lucas Perry: Awesome.

Emilia Javorsky: I am optimistic, and it comes from having had a lot of these conversations, specifically this past year, on lethal autonomous weapons, and speaking with people from a range of views and being able to sit down, come together, have a rational and respectful discussion, and identify actionable areas of consensus. That has been something that has been very heartening for me, because there is just so much positive potential for humanity waiting on the science and technology shelves of today, never mind what’s in the pipeline that’s coming up. And I think that despite all of this tribalism and hyperbole that we’re bombarded with in the media every day, there are ways to work together as a society, and as a global community, and just with each other to make sure that we realize all that positive potential, and I think that sometimes gets lost. I’m optimistic that we can make that happen and that we can find a path forward on restoring that kind of rational discourse and working together.

Tucker Davey: I think my main reasons for existential hope in 2020 and beyond are, first of all, seeing how many more people are getting involved in AI safety, in effective altruism, and existential risk mitigation. It’s really great to see the community growing, and I think just by having more people involved, that’s a huge step. As a broader existential hope, I am very interested in thinking about how we can better coordinate to collectively solve a lot of our civilizational problems, and to that end, I’m interested in ways where we can better communicate about our shared goals on certain issues, ways that we can more credibly commit to action on certain things. So these ideas of credible commitment mechanisms, whether that’s using advanced technology like blockchain or whether that’s just smarter ways to get people to commit to certain actions, I think there’s a lot of existential hope for bigger groups in society coming together and collectively coordinating to make systemic change happen.

I see a lot of potential for society to organize mass movements to address some of the biggest risks that we face. For example, I think it was last year, an AI researcher, Toby Walsh, who we’ve worked with, organized a boycott against a South Korean company that was working to develop these autonomous weapons. And within a day or two, I think, he contacted a bunch of AI researchers and they signed a pledge to boycott this group until they decided to ditch the project. And the boycott succeeded basically within two days. And I think that’s one good example of the power of boycotts, and the power of coordination and cooperation to address our shared goals. So if we can learn lessons from Toby Walsh’s boycott, as well as from the fossil fuel and nuclear divestment movements, I think we can start to realize some of our potential to push these big industries in more beneficial directions.

So whether it’s the fossil fuel industry, the nuclear weapons industry, or the AI industry, as a collective, we have a lot of power to use stigma to push these companies in better directions. No company or industry wants bad press. And if we get a bunch of researchers together to agree that a company’s doing some sort of bad practice, and then we can credibly say that, “Look, you guys will get bad press if you guys don’t change your strategy,” many of these companies might start to change their strategy. And I think if we can better coordinate and organize certain movements and boycotts to get different companies and industries to change their practices, that’s a huge source of existential hope moving forward.

Lucas Perry: Yeah. I mean, it seems like the point that you’re trying to articulate is that there are particular instances like this thing that happened with Toby Walsh that show you the efficacy of collective action around our issues.

Tucker Davey: Yeah. I think there’s a lot more agreement on certain shared goals, such as that we don’t want banks investing in fossil fuels, or we don’t want AI companies developing weapons that can make targeted kill decisions without human intervention. And if we take some of these broad shared goals and then develop some sort of plan to basically pressure these companies to change their ways or to adopt better safety measures, I think this sort of collective action can be very effective. And I think as a broader community, especially with more people in the community, we have much more of a possibility to make this happen.

So I think I see a lot of existential hope from these collective movements to push industries in more beneficial directions, because they can really help us, as individuals, feel more of a sense of agency that we can actually do something to address these risks.

Kirsten Gronlund: I feel like there’s actually been a pretty marked difference in the way that people are reacting to… at least things like climate change, and I sort of feel like more generally, there’s sort of more awareness just of the precariousness of humanity, and the fact that our continued existence and success on this planet is not a given, and we have to actually work to make sure that those things happen. Which is scary, and kind of exhausting, but I think is ultimately a really good thing, the fact that people seem to be realizing that this is a moment where we actually have to act and we have to get our shit together. We have to work together and this isn’t about politics, this isn’t about, I mean it shouldn’t be about money. I think people are starting to figure that out, and it feels like that has really become more pronounced as of late. I think especially younger generations, like obviously there’s Greta Thunberg and the youth movement on these issues. It seems like the people who are growing up now are so much more aware of things than I certainly was at that age, and that’s been cool to see, I think. They’re better than we were, and hopefully things in general are getting better.

Lucas Perry: Awesome.

Ian Rusconi: I think it’s often easier for a lot of us to feel hopeless than it is to feel hopeful. Most of the news that we get comes in the form of warnings, or the existing problems, or the latest catastrophe, and it can be hard to find a sense of agency as an individual when talking about huge global issues like lethal autonomous weapons, or climate change, or runaway AI.

People frame little issues that add up to bigger ones as things like death by 1,000 bee stings, or the straw that broke the camel’s back, and things like that, but that concept works both ways. 1,000 individual steps in a positive direction can change things for the better. And working on these podcasts has shown me the number of people taking those steps. People working on AI safety, international weapons bans, climate change mitigation efforts. There are whole fields of work, absolutely critical work, that so many people, I think, probably know nothing about. Certainly that I knew nothing about. And sometimes, knowing that there are people pulling for us, that’s all we need to be hopeful. 

And beyond that, once you know that work exists and that people are doing it, nothing is stopping you from getting informed and helping to make a difference. 

Kirsten Gronlund: I had a conversation with somebody recently who is super interested in these issues, but was feeling like they just didn’t have particularly relevant knowledge or skills. And what I would say is “neither did I when I started working for FLI,” or at least I didn’t know a lot about these specific issues. But really anyone, if you care about these things, you can bring whatever skills you have to the table, because we need all the help we can get. So don’t be intimidated, and get involved.

Ian Rusconi: I guess I think that’s one of my goals for the podcast, is that it inspires people to do better, which I think it does. And that sort of thing gives me hope.

Lucas Perry: That’s great. I feel happy to hear that, in general.

Max Tegmark: Let me first give a more practical reason for hope, and then get a little philosophical. So on the practical side, there are a lot of really good ideas that the AI community is quite unanimous about, in terms of policy and things that need to happen, that basically aren’t happening because policy makers and political leaders don’t get it yet. And I’m optimistic that we can get a lot of that stuff implemented, even though policy makers won’t pay attention now. If we get AI researchers around the world to formulate and articulate really concrete proposals and plans for policies that should be enacted, and they get totally ignored for a while? That’s fine, because eventually some bad stuff is going to happen because people weren’t listening to their advice. And whenever those bad things do happen, then leaders will be forced to listen because people will be going, “Wait, what are you going to do about this?” And if at that point, there are broad international consensus plans worked out by experts about what should be done, that’s when they actually get implemented. So the hopeful message I have to anyone working in AI policy is: don’t despair if you’re being ignored right now, keep doing all the good work and flesh out the solutions, and start building consensus for it among the experts, and there will be a time people will listen to you. 

To just end on a more philosophical note, again, I think it’s really inspiring to think how much impact intelligence has had on life so far. We realize that we’ve already completely transformed our planet with intelligence. If we can use artificial intelligence to amplify our intelligence, it will empower us to solve all the problems that we’re stumped by thus far, including curing all the diseases that kill our near and dear today. And for those so minded, even help life spread into the cosmos. Not even the sky is the limit, and the decisions about how this is going to go are going to be made within the coming decades, so within the lifetime of most people who are listening to this. There’s never been a more exciting moment to think about grand, positive visions for the future. That’s why I’m so honored and excited to get to work with the Future of Life Institute.

Anthony Aguirre: Just like disasters, I think big positive changes can arise with relatively little warning and then seem inevitable in retrospect. I really believe that people are actually wanting and yearning for a society and a future that gives them fulfillment and meaning, and that functions and works for people.

There’s a lot of talk in the AI circles about how to define intelligence, and defining intelligence as the ability to achieve one’s goals. And I do kind of believe that for all its faults, humanity is relatively intelligent as a whole. We can be kind of foolish, but I think we’re not totally incompetent at getting what we are yearning for, and what we are yearning for is a kind of just and supportive and beneficial society that we can exist in. Although there are all these ways in which the dynamics of things that we’ve set up are going awry in all kinds of ways, and people’s own self-interest fighting it out with the self-interest of others is making things go terribly wrong, I do nonetheless see lots of people who are putting interesting, passionate effort forward toward making a better society. I don’t know that that’s going to turn out to be the force that prevails, I just hope that it is, and I think it’s not time to despair.

There’s a little bit of a selection effect in the people that you encounter through something like the Future of Life Institute, but there are a lot of people out there who genuinely are trying to work toward a vision of some better future, and that’s inspiring to see. It’s easy to focus on the differences in goals, because it seems like different factions that people want totally different things. But I think that belies the fact that there are lots of commonalities that we just kind of take for granted, and accept, and brush under the rug. Putting more focus on those and focusing the effort on, “given that we can all agree that we want these things and let’s have an actual discussion about what is the best way to get those things,” that’s something that there’s sort of an answer to, in the sense that we might disagree on what our preferences are, but once we have the set of preferences we agree on, there’s kind of the correct or more correct set of answers to how to get those preferences satisfied. We actually are probably getting better, we can get better, this is an intellectual problem in some sense and a technical problem that we can solve. There’s plenty of room for progress that we can all get behind.

Again, strong selection effect. But when I think about the people that I interact with regularly through the Future of Life Institute and other organizations that I work as a part of, they’re almost universally highly-effective, intelligent, careful-thinking, well-informed, helpful, easy to get along with, cooperative people. And it’s not impossible to create or imagine a society where that’s just a lot more widespread, right? It’s really enjoyable. There’s no reason that the world can’t be more or less dominated by such people.

As economic opportunity grows and education grows and everything, there’s no reason to see that that can’t grow also, in the same way that non-violence has grown. It used to be a part of everyday life for pretty much everybody, now many people I know go through many years without having any violence perpetrated on them or vice versa. We still live in a sort of overall, somewhat violent society, but nothing like what it used to be. And that’s largely because of the creation of wealth and institutions and all these things that make it unnecessary and impossible to have that as part of everybody’s everyday life.

And there’s no reason that can’t happen in most other domains, I think it is happening. I think almost anything is possible. It’s amazing how far we’ve come, and I see no reason to think that there’s some hard limit on how far we go.

Lucas Perry: So I’m hopeful for the new year simply because in areas that are important, I think things are on average getting better more than they are getting worse. And it seems to be that much of what causes pessimism is the perception that things are getting worse, or that we have these strange nostalgias for past times that we believe to be better than the present moment.

This isn’t new thinking, and it is much in line with what Steven Pinker has said, but I feel that when we look at the facts about things like poverty, or knowledge, or global health, or education, or even the conversation surrounding AI alignment and existential risk, things really are getting better. The extent to which it seems like they aren’t, or like things are getting worse, can in many cases be traced to our trend towards more information, which creates the perception that things are getting worse: really, we are shining a light on everything that is already bad, or we are coming up with new solutions to problems which generate new problems in and of themselves. And this trend towards elucidating all of the problems which already exist, or developing technologies and solutions which generate their own novel problems, can seem scary as all of these bad things continue to come up; it seems almost never ending.

But they seem to me now more like revealed opportunities for growth and evolution of human civilization to new heights. We are clearly not at the pinnacle of life or existence or wellbeing, so as we encounter and generate and uncover more and more issues, I find hope in the fact that we can rest assured that we are actively engaged in the process of self-growth as a species. Without encountering new problems about ourselves, we are surely stagnating and risk decline. However, it seems that as we continue to find suffering and confusion and evil in the world and to notice how our new technologies and skills may contribute to these things, we have an opportunity to act upon remedying them, and then we can know that we are still growing, and that that is a good thing. And so I think that there’s hope in the fact that we’ve continued to encounter new problems, because it means that we continue to grow better. And that seems like a clearly good thing to me.

And with that, thanks so much for tuning into this Year In The Review Podcast on our activities and team as well as our feelings about existential hope moving forward. If you’re a regular listener, we want to share our deepest thanks for being a part of this conversation and thinking about these most fascinating and important of topics. And if you’re a new listener, we hope that you’ll continue to join us in our conversations about how to solve the world’s most pressing problems around existential risks and building a beautiful future for all. Many well and warm wishes for a happy and healthy end of the year for everyone listening from the Future of Life Institute team. If you find this podcast interesting, valuable, unique, or positive, consider sharing it with friends and following us on your preferred listening platform. You can find links for that on the pages for these podcasts found at futureoflife.org.

FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert

We could all be more altruistic and effective in our service of others, but what exactly is it that’s stopping us? What are the biases and cognitive failures that prevent us from properly acting in service of existential risks, statistically large numbers of people, and long-term future considerations? How can we become more effective altruists? Stefan Schubert, a researcher at University of Oxford’s Social Behaviour and Ethics Lab, explores questions like these at the intersection of moral psychology and philosophy. This conversation explores the steps that researchers like Stefan are taking to better understand psychology in service of doing the most good we can. 

Topics discussed include:

  • The psychology of existential risk, longtermism, effective altruism, and speciesism
  • Stefan’s study “The Psychology of Existential Risks: Moral Judgements about Human Extinction”
  • Various works and studies Stefan Schubert has co-authored in these spaces
  • How this enables us to be more altruistic

Timestamps:

0:00 Intro

2:31 Stefan’s academic and intellectual journey

5:20 How large is this field?

7:49 Why study the psychology of X-risk and EA?

16:54 What does a better understanding of psychology here enable?

21:10 What are the cognitive limitations psychology helps to elucidate?

23:12 Stefan’s study “The Psychology of Existential Risks: Moral Judgements about Human Extinction”

34:45 Messaging on existential risk

37:30 Further areas of study

43:29 Speciesism

49:18 Further studies and work by Stefan

Works Cited 

Understanding cause-neutrality

Considering Considerateness: Why communities of do-gooders should be exceptionally considerate

On Caring by Nate Soares

Against Empathy: The Case for Rational Compassion

Eliezer Yudkowsky’s Sequences

Whether and Where to Give

A Person-Centered Approach to Moral Judgment

Moral Aspirations and Psychological Limitations

Robin Hanson on Near and Far Mode 

Construal-Level Theory of Psychological Distance

The Puzzle of Ineffective Giving (Under Review) 

Impediments to Effective Altruism

The Many Obstacles to Effective Giving (Under Review) 

 

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

Lucas Perry: Hello everyone and welcome to the Future of Life Institute Podcast. I’m Lucas Perry.  Today, we’re speaking with Stefan Schubert about the psychology of existential risk, longtermism, and effective altruism more broadly. This episode focuses on Stefan’s reasons for exploring psychology in this space, how large this space of study currently is, the usefulness of studying psychology as it pertains to these areas, the central questions which motivate his research, a recent publication that he co-authored which motivated this interview called The Psychology of Existential Risks: Moral Judgements about Human Extinction, as well as other related work of his. 

This podcast often ranks in the top 100 of technology podcasts on Apple Music. This is a big help for increasing our audience and informing the public about existential and technological risks, as well as what we can do about them. So, if this podcast is valuable to you, consider sharing it with friends and leaving us a good review. It really helps. 

Stefan Schubert is a researcher at the Social Behaviour and Ethics Lab at the University of Oxford, working at the intersection of moral psychology and philosophy. He focuses on psychological questions of relevance to effective altruism, such as why our altruistic actions are often ineffective, and why we don’t invest more in safeguarding our common future. He was previously a researcher at the Centre for Effective Altruism and a postdoc in philosophy at the London School of Economics.

We can all be more altruistic and effective in our service of others. Expanding our moral circles of compassion farther into space and deeper into time, as well as across species, and possibly even eventually to machines, while mitigating our own tendencies towards selfishness and myopia is no easy task and requires deep self-knowledge and far more advanced psychology than I believe we have today. 

This conversation explores the first steps that researchers like Stefan are taking to better understand this space in service of doing the most good we can. 

So, here is my conversation with Stefan Schubert.

Lucas Perry: Can you take us through your intellectual and academic journey in the space of EA and longtermism, and in general, and how that brought you to what you’re working on now?

Stefan Schubert: I started out studying a range of different subjects. I guess I had a little bit of a hard time deciding what I wanted to do. So I got a master’s in political science. But then in the end, I ended up doing a PhD in philosophy at Lund University in Sweden, specifically in epistemology, the theory of knowledge. And then I went to the London School of Economics to do a postdoc. And during that time, I discovered effective altruism and got more and more involved with that.

So then I applied to the Centre for Effective Altruism, here in Oxford, to work as a researcher. And I worked there as a researcher for two years. At first, I did policy work, including reports on catastrophic risk and x-risk for a foundation and for a government. But then I also did some work of a more general, foundational, or theoretical nature, including work on the notion of cause neutrality and how we should understand that, and also on how EAs should think about everyday norms like norms of friendliness and honesty.

And even though at the time I didn’t do empirical psychological research, that work relates to my current work on psychology, because for the last two years, I’ve worked on the psychology of effective altruism at the Social Behaviour and Ethics Lab here at Oxford. This lab is headed by Nadira Farber, and I also work closely with Lucius Caviola, who did his PhD here at Oxford and recently moved to Harvard to do a postdoc.

So we have three strands of research. The first one is sort of the psychology of effective altruism in general. So why is it that people aren’t effectively altruistic? This is a bit of a puzzle, because generally people are at least somewhat effective when they’re working in their own interest. To be sure, they are not maximally effective, but when they try to buy a home or save for retirement, they do some research and sort of try to find good value for money.

But they don’t seem to do the same when they donate to charity. They aren’t as concerned with effectiveness. So this is a bit of a puzzle. And then there are two strands of research, which have to do with specific EA causes. So one is the psychology of longtermism and existential risk, and the other is the psychology of speciesism, human-animal relations. So out of these three strands of research, I focused the most on the psychology of effective altruism in general and the psychology of longtermism and existential risk.

Lucas Perry: How large is the body of work regarding the psychology of existential risk and effective altruism in general? How many people are working on this? Could you give us more insight into the state of the field and the amount of interest there?

Stefan Schubert: It’s somewhat difficult to answer, because it sort of depends on how you define these domains. There’s research which is of some relevance to effective altruism, but it’s not exactly on that. But I would say that there may be around 10 researchers or so who are sort of EAs and work on these topics for EA reasons. So you definitely want to count them. And then when we think about non-EA researchers, like other academics, there hasn’t been that much research, I would say, on the psychology of X-risk and longtermism.

There’s research on the psychology of climate change; that’s a fairly large topic. But more specifically on X-risk and longtermism, there’s less. Effective altruism in general, that’s a fairly large topic. There’s lots of research on biases like the identifiable victim effect: people’s tendency to donate to identifiable victims over larger numbers of unidentifiable, statistical victims. Maybe on the order of a few hundred papers.

And then the last topic, speciesism and human-animal relations: that’s fairly large. I know less of that literature, but my impression is that it’s fairly large.

Lucas Perry: Going back into the 20th century, much of what philosophers have done, like Peter Singer is constructing thought experiments, which isolate the morally relevant aspects of a situation, which is intended in the end to subvert psychological issues and biases in people.

So I guess I’m just reflecting here on how philosophical thought experiments are sort of the beginnings of elucidating a project of the psychology of EA or existential risk or whatever else.

Stefan Schubert: The vast majority of these papers are not directly inspired by philosophical thought experiments. It’s more like psychologists who run some experiments because there’s some theory that some other psychologist has devised. Most don’t look that much at philosophy I would say. But I think effective altruism and the fact that people are ineffectively altruistic, that’s fairly theoretically interesting for psychologists, and also for economists.

Lucas Perry: So why study psychological questions as they relate to effective altruism, and as they pertain to longtermism and longterm future considerations?

Stefan Schubert: It's maybe easiest to answer that question in the context of effective altruism in general. I should also mention that when we study this topic of effectively altruistic actions in general, what we concretely study is effective and ineffective giving. And that is because, firstly, that's what other people have studied, so it's easier to put our research into context.

The other thing is that it's quite easy to study in a lab setting, right? So you might ask people whether they would donate to the effective or the ineffective charity. You might think that career choice is actually more important than giving, or some people would argue that, but that seems more difficult to study in a lab setting. With regards to what motivates our research on effective altruism in general and effective giving, ultimately we want to help people improve their decisions. We want to make them donate more effectively, and be more effectively altruistic in general.

So how can you then do that? Well, I want to make one distinction here, which I think might be important to think about. And that is the distinction between what I call a behavioral strategy and an intellectual strategy. And the behavioral strategy is that you come up with certain framings or setups to decision problems, such that people behave in a more desirable way. So there’s literature on nudging for instance, where you sort of want to nudge people into desirable options.

So for instance, in a cafeteria where you have healthier foods at eye level and the unhealthy food is harder to reach, people will eat healthier than if it's the other way round. You could come up with interventions that similarly make people donate more effectively. So for instance, the default option could be an effective charity. We know that, in general, people often tend to go with the default option because of some kind of cognitive inertia. So that might lead to more effective donations.

I think this approach has some limitations, though. For instance, nudging might be interesting for the government, because the government has a lot of power, right? It might frame the decision on whether you want to donate your organs after you're dead. The other thing is that just creating and implementing these kinds of behavioral interventions can often be very time consuming and costly.

So one might think that this sort of intellectual strategy should be emphasized, and that it shouldn't be forgotten. With respect to the intellectual strategy, you're not solely trying to change people's behavior, though you are trying to do that as well; you're also trying to change their underlying way of thinking. So in a sense it has a lot in common with philosophical argumentation. But the difference is that you start with descriptions of people's default way of thinking.

You describe how your default way of thinking leads you to prioritize an identifiable victim over larger numbers of statistical victims. And then you provide an argument that that's wrong: statistical victims are just as real individuals as identifiable victims. So you get people to accept that their own default way of thinking about identifiable versus statistical victims is wrong, and that they shouldn't trust that default way of thinking but instead think in a different way.

I think that this strategy is actually often used, but we don't often think about it as a strategy. So for instance, Nate Soares has this blog post "On Caring" where he argues that we shouldn't trust our internal care-o-meter. And this is because how much we feel doesn't scale with the number of people that die, or with the badness of those increasing numbers. So it's an intellectual argument that takes psychological insight as a starting point, and other people have done that as well.

So the psychologist Paul Bloom has this book Against Empathy where he argues for similar conclusions. And I think Eliezer Yudkowsky uses this strategy a lot in his Sequences. I think it's often an effective strategy that should be used more.

Lucas Perry: So there's the extent to which we can learn about underlying, problematic cognition in persons and then change the world accordingly. As you said, this is framed as nudging, where you sort of manipulate the environment, without explicitly changing people's cognition, in order to produce desired behaviors. Now, my initial reaction to this is, how are you going to deal with the problem when they find out that you're doing this to them?

Now the second one here is the extent to which we can use our insights from psychological analysis and studies to change implicit and explicit models and cognition, in order to be better decision makers. If a million deaths is a statistic and a dozen deaths is a tragedy, then there is some kind of failure of empathy and compassion in the human mind. We're not evolved or set up to deal with these kinds of moral calculations.

So maybe you could do nudging by setting up the world in such a way that people are more likely to donate to charities that help out statistically large, difficult-to-empathize-with numbers of people, or you can teach them how to think better and to better act on behalf of statistically large numbers of people.

Stefan Schubert: That’s a good analysis actually. On the second approach: what I call the intellectual strategy, you are sort of teaching them to think differently. Whereas on this behavioral or nudging approach, you’re changing the world. I also think that this comment about “they might not like the way you nudged them” is a good comment. Yes, that has been discussed. I guess in some cases of nudging, it might be sort of cases of weakness of will. People might not actually want the chocolate but they fall prey to their impulses. And the same might be true with saving for retirement.

Whereas with ineffective giving, there it's much less clear. Is it really the case that people really want to donate effectively, and are therefore happy to be nudged in this way? That doesn't seem clear at all. So that's absolutely a reason against that approach.

And then with respect to arguing for certain conclusions: in the sense that it is argument or argumentation, it's more akin to philosophical argumentation. But it's different from standard analytic philosophical argumentation in that it discusses human psychology. You discuss at length how our psychological dispositions mislead us, and that's not how analytic philosophers normally do it. And of course you can argue for, for instance, effective giving in the standard philosophical vein.

And some people have done that, like the EA philosopher Theron Pummer. He has an interesting paper called "Whether and Where to Give" on this question of whether it is an obligation to donate effectively. So I think that's interesting, but one worries that there might not be that much to say about these issues, because, everything else equal, it's maybe sort of trivial that more effectiveness is better. Of course everything isn't always equal. But in general, there might not be too much interesting stuff you can say about that from a normative or philosophical point of view.

But there are tons of interesting psychological things you can say, because there are tons of ways in which people aren't effective. The other related issue is that this form of psychology might have a substantial readership. It seems to me, based on the success of Kahneman and Haidt and others, that people love to read about how their own and others' thoughts by default go wrong. Whereas, in contrast, standard analytic philosophy is not as widely read, even among the educated public.

So for those reasons, I think that this sort of more psychology-based argumentation may in some respects be more promising than purely abstract philosophical arguments for why we should be effectively altruistic.

Lucas Perry: My view or insight here is that the analytic philosopher is more so trying on the many different perspectives in his or her own head, whereas the psychologist is empirically studying what is happening in the heads of many different people. So clarifying what a perfected science of psychology in this field would look like is useful for illustrating the end goals and what we're attempting to do here. This isn't to say that this will necessarily happen in our lifetimes or anything like that, but what does a full understanding of psychology as it relates to existential risk and longtermism and effective altruism enable for human beings?

Stefan Schubert: One thing I might want to say is that psychological insights might help us to formulate a vision of how we ought to behave, what mindset we ought to have, and what we ought to be like as people, which is not only normatively valid, which is what philosophers talk about, but also persuasive. So one idea there that Lucius and I have discussed quite extensively recently is that some moral psychologists suggest that when we think about morality, we think to a large degree not in terms of whether a particular act was good or bad, but rather about whether the person who performed that act is good or bad, or whether they are virtuous or vicious.

So this is called the person centered approach to moral judgment. Based on that idea, we’ve been thinking about what lists of virtues people would need, in order to make the world better, more effectively. And ideally these should be virtues that both are appealing to common sense, or which can at least be made appealing to common sense, and which also make the world better when applied.

So we've been thinking about which such virtues one would want to have on such a list. We're not sure exactly what we'll include, but some examples might be prioritization: that you need to make sure that you prioritize the best ways of helping. And then we have another which we call science: that you do proper research on how to help effectively, or that you rely on others who do. And then collaboration: that you're willing to collaborate on moral issues, potentially even with your moral opponents.

So the details of these virtues aren't too important, but the idea is that it hopefully should seem like a moral ideal to some people to be a person who lives by these virtues. I think that to many people, philosophical arguments about the importance of being more effective and putting more emphasis on consequences, if you read them in a book of analytic philosophy, might seem pretty uninspiring. So people don't read that and think "that's what I would want to be like."

But hopefully, they could read about these kinds of virtues and think, “that’s what I would want to be like.” So to return to your question, ideally we could use psychology to sort of create such visions of some kind of moral ideal that would not just be normatively correct, but also sort of appealing and persuasive.

Lucas Perry: It's like a science which is attempting to contribute to the project of human and personal growth and evolution and enlightenment, insofar as that is possible.

Stefan Schubert: We see this as part of the larger EA project of using evidence and reason and research to make the world a better place. EA has this prioritization research where you try to find the best ways of doing good. I gave this talk at EAGx Nordics earlier this year on "Moral Aspirations and Psychological Limitations." And in that talk I said, well, what EAs normally do when they prioritize ways of doing good is, as it were, to look out into the world and think: what ways of doing good are there? What different causes are there? What sort of levers can we pull to make the world better?

So should we reduce existential risk from specific sources like advanced AI or bio risk, or is global poverty or animal welfare rather the best thing to work on? But then the other approach is to instead look inside yourself and think, well, I am not perfectly effectively altruistic, and that is because of my psychological limitations. So then we want to find out which of those psychological limitations are most impactful to work on because, for instance, they are more tractable or because it makes a bigger difference if we remove them. That's one way of thinking about this research, that we sort of take this prioritization research and turn it inwards.

Lucas Perry: Can you clarify the kinds of things that psychology is really pointing out about the human mind? Part of this is clearly about biases and poor aspects of human thinking, but what does it mean for human beings to have these bugs in human cognition? What are the kinds of things that we're discovering about the person and how he or she thinks that fail to be in alignment with the truth?

Stefan Schubert: I mean, there are many different sources of error, one might say. One thing that some people have discussed is that people are not that interested in being effectively altruistic. Why is that? Some people say that's just because they get more warm glow out of giving to someone whose suffering is more salient, and then the question arises, why do they get more warm glow out of that? Maybe that's because they just want to signal their empathy. That's one perspective, which is maybe a bit cynical: that the ultimate source of lots of ineffectiveness is just this preference for signaling and maybe a lack of genuine altruism.

Another approach would be to just say, the world is very complex and it’s very difficult to understand it and we’re just computationally constrained, so we’re not good enough at understanding it. Another approach would be to say that because the world is so complex, we evolved various broad-brushed heuristics, which generally work not too badly, but then, when we are put in some evolutionarily novel context and so on, they don’t guide us too well. That might be another source of error. In general, what I would want to emphasize is that there are likely many different sources of human errors.

Lucas Perry: You’ve discussed here how you focus and work on these problems. You mentioned that you are primarily interested in the psychology of effective altruism in so far as we can become better effective givers and understand why people are not effective givers. And then, there is the psychology of longtermism. Can you enumerate some central questions that are motivating you and your research?

Stefan Schubert: To some extent, we need more research just in order to figure out what further research we and others should do so I would say that we’re in a pre-paradigmatic stage with respect to that. There are numerous questions one can discuss with respect to psychology of longtermism and existential risks. One is just people’s empirical beliefs on how good the future will be if we don’t go extinct, what the risk of extinction is and so on. This could potentially be useful when presenting arguments for the importance of work on existential risks. Maybe it turns out that people underestimate the risk of extinction and the potential quality of the future and so on. Another issue which is interesting is moral judgments, people’s moral judgements about how bad extinction would be, and the value of a good future, and so on.

Moral judgements about human extinction, that's exactly what we studied in a recent paper that we published, which is called "The Psychology of Existential Risk: Moral Judgments about Human Extinction." In that paper, we test a thought experiment by the philosopher Derek Parfit. He has this thought experiment where he discusses three different outcomes: first, peace; second, a nuclear war that kills 99% of the world's existing population; and third, a nuclear war that kills everyone. Parfit says, then, that a war that kills everyone is the worst outcome, near-extinction is the next worst, and peace is the best. Maybe no surprises there, but the more interesting part of the discussion concerns the relative differences between these outcomes in terms of badness. Parfit effectively made an empirical prediction, saying that most people would find the difference in terms of badness between peace and near-extinction to be the greater one, but he himself thought that the difference between near-extinction and extinction is the greater difference. That's because only extinction would lead to the future forever being lost, and Parfit thought that if humanity didn't go extinct, the future could be very long and good, and therefore it would be a unique disaster if the future was lost.

On this view, extinction is uniquely bad, as we put it. It's not just bad because it would mean that many people would die, but also because it would mean that we would lose a potentially long and grand future. We tested this hypothesis in the paper, then. First, we had a preliminary study, which didn't actually pertain directly to Parfit's hypothesis. We just studied whether people would find extinction a very bad event in the first place, and we found that, yes, they do, and that they think the government should invest substantially to prevent it.

Then, we moved on to the main topic, which was Parfit's hypothesis. We made some slight changes. In the middle outcome, Parfit had 99% dying; we reduced that number to 80%. We also talked about catastrophes in general rather than nuclear wars, and we didn't want to talk about peace because we thought people might have an emotional association with the word "peace," so we just talked about no catastrophe instead. Using this paradigm, we found that Parfit was right. First, most people, just like him, thought that extinction was the worst outcome, near-extinction the next worst, and no catastrophe the best. But second, we found that most people find the difference in terms of badness between no one dying and 80% dying to be greater than the difference between 80% dying and 100% dying.

Our interpretation, then, is that this is presumably because they focus most on the immediate harm that the catastrophes cause, and in terms of the immediate harm, the difference between no one dying and 80% dying is obviously greater than that between 80% dying and 100% dying. That was a control condition in some of our experiments, but we also had other conditions where we would slightly tweak the question. We had one condition which we call the salience condition, where we made the longterm consequences of the three outcomes salient. We told participants to remember the longterm consequences of the outcomes. Here, we didn't actually add any information that they don't have access to; we just made some information more salient, and that made significantly more participants find the difference between 80% dying and 100% dying the greater one.

Then, we had yet another condition which we call the utopia condition, where we told participants that if humanity doesn't go extinct, then the future will be extremely long and extremely good, and it was said that if 80% die, then, obviously, at first, things are not so good, but after a recovery period, we would go on to this rosy future. We included this condition partly because such scenarios have been discussed to some extent by futurists, but partly also because we wanted to know, if we ramp up this goodness of the future to the maximum and maximize the opportunity costs of extinction, how many people would then find the difference between near-extinction and extinction the greater one. Indeed, we found that, given such a scenario, a large majority found the difference between 80% dying and 100% dying the larger one. So then, they did find extinction uniquely bad given this enormous opportunity cost of a utopian future.

Lucas Perry: What's going on in my head right now is that we were discussing earlier the role or not of these philosophical thought experiments in psychological analysis. You've done a great study here that helps to empirically concretize the biases and remedies for the issues that Derek Parfit had exposed and pointed to in his initial thought experiment. That was popularized by Nick Bostrom and it's one of the key thought experiments for much of the existential risk community and people committed to longtermism, because it helps to elucidate this deep and rich amount of value in the deep future and how we don't normally consider that. Your discussion here just seems to be opening up for me tons of possibilities in terms of how far and deep this can go in general. The point of Peter Singer's child drowning in a shallow pond was to isolate the bias of proximity, and Derek Parfit's thought experiment isolates the bias of familiarity and temporal bias. And continuing into the future, it's making me think, we also have biases about identity.

Derek Parfit also has thought experiments about identity, like his teleportation machine: say you stepped into a teleportation machine that annihilated all of your atoms, but before it did so, it scanned all of your information, and once it scanned you, it destroyed you and then re-assembled you on the other side of the room; or you can change the thought experiment and say on the other side of the universe. Is that really you? What does it mean to die? Those are the kinds of questions that are elicited. Listening to what you've developed and learned and reflecting on the possibilities here, it seems like you're at the beginning of a potentially extremely important and meaningful field that helps to inform decision-making on these morally crucial and philosophically interesting questions and points of view. How do you feel about that or what I'm saying?

Stefan Schubert: Okay, thank you very much and thank you also for putting this Parfit thought experiment a bit in context. What you’re saying is absolutely right, that this has been used a lot, including by Nick Bostrom and others in the longtermist community and that was indeed one reason why we wanted to test it. I also agree that there are tons of interesting philosophical thought experiments there and they should be tested more. There’s also this other field of experimental philosophy where philosophers test philosophical thought experiments themselves, but in general, I think there’s absolutely more room for empirical testing of them.

With respect to temporal bias, I guess it depends a bit on what one means by that, because we actually did get an effect from just mentioning that they should consider the longterm consequences. So I might think that to some extent it's not only that people are biased in favor of the present, but also that they don't really consider the longterm future. They sort of neglect it, and it's not something that's generally discussed among most people. I think this is also something that Parfit's thought experiment highlights. You have to think about the really longterm consequences here, and if you do think about them, then your intuitions about this thought experiment should reverse.

Lucas Perry: People’s cognitive time horizons are really short.

Stefan Schubert: Yes.

Lucas Perry: People probably have the opposite discounting of future persons that I do. Because I think that the kinds of experiences that Earth-originating intelligent life forms will be having in the next 100 to 200 years will be much deeper and more profound than what humans are capable of, I would value them more than I value persons today. Most people don't think about that. They probably just think there'll be more humans, and, apart from their bias towards present day humans, they don't even consider a time horizon long enough to really have the bias kick in. Is that what you're saying?

Stefan Schubert: Yeah, exactly. Thanks also for mentioning that. First of all, my view is that people don't even think so much about the longterm future unless prompted to do so. Second, in this first study I mentioned, which was sort of a pre-study, we asked, "How good do you think that the future's going to be?" On average, I think they said, "It's going to be slightly better than the present," and that would be very different from your view, then, that the future's going to be much better. You could argue that this view that the future is going to be about as good as the present is somewhat unlikely; I think it's going to be much better or maybe it's going to be much worse. There are several different biases or errors that are present here.

Merely making the longterm consequences of the three outcomes salient already makes people more inclined to find the difference between 80% dying and 100% dying the greater one, and there you don't add any information. Also, specifying that the longterm outcomes are going to be extremely good makes a further difference that makes most people find the difference between 80% dying and 100% dying the greater one.

Lucas Perry: I’m sure you and I, and listeners as well, have the hilarious problem of trying to explain this stuff to friends or family members or people that you meet that are curious about it and the difficulty of communicating it and imparting the moral saliency. I’m just curious to know if you have explicit messaging recommendations that you have extracted or learned from the study that you’ve done.

Stefan Schubert: You want to make the future more salient if you want people to care more about existential risk. With respect to explicit messaging more generally, like I said, there haven't been that many studies on this topic, so I can't refer to any specific study that says that this is how you should work with the messaging on this topic. But just thinking more generally, one thing I've been thinking about is that maybe, with many of these issues, it's just that it takes a while for people to get habituated to them. At first, if someone hears a very surprising statement that has very far-reaching conclusions, they might be intuitively a bit skeptical about it, independently of how reasonable that argument would be to someone who was completely unbiased. Their prior is that, probably, this is not right, and to some extent, this might even be reasonable. Maybe people should be a bit skeptical of people who say such things.

But then, what happens is that most such people, who make claims that seem very weird and very far-reaching, get discarded after some time because people poke holes in their arguments and so on. But a small subset of all such people actually stick around, and they get more and more recognition, and you could argue that that's what's now happening with people who work on longtermism and X-risk. And then, people slowly get habituated to this and they say, "Well, maybe there is something to it." It's not a fully rational process. I think this doesn't just relate to longtermism and X-risk but maybe also specifically to AI risk, where it takes time for people to accept that message.

I’m sure there are some things that you can do to speed up that process and some of them would be fairly obvious like have smart, prestigious, reasonable people talk about this stuff and not people who don’t seem as credible.

Lucas Perry: What are further areas of the psychology of longtermism or existential risk that you think would be valuable to study? And let's also touch upon other interesting areas for effective altruism as well.

Stefan Schubert: I mentioned previously people's empirical beliefs; that could be valuable. One thing I should mention there is that I think people's empirical beliefs about the distant future are massively affected by framing effects, so depending on how you ask these questions, you are going to get very different answers. It's important to remember that it's not like people have these stable beliefs that they will always report. The other thing I mentioned is moral judgments, and I said we studied moral judgements about human extinction, but there's a lot of other stuff to do. People's views on population ethics could obviously be useful: views on whether creating happy people is morally valuable, whether it's more valuable to bring a large number of people whose lives are barely worth living into existence than to bring a smaller number of very happy people into existence, and so on.

Those questions obviously have relevance for the moral value of the future. One thing I would want to say is that if you're rational, then, obviously, your view on what and how much we should do to affect the distant future should arguably be a function of your moral views, including on population ethics, on the one hand, and your empirical views of how the future's likely to pan out on the other. But then, I also think that people obviously aren't completely rational, and I think, in practice, their views on the longterm future will also be influenced by other factors. Their view on whether helping the longterm future seems like an inspiring project might depend massively on how the issue is framed. I think these aspects could be worth studying, because if we find such aspects, then we might want to emphasize the positive ones and we might want to adjust our behavior to avoid the negative ones. The goal should be to formulate a vision of longtermism that feels inspiring to people, including to people who haven't put a lot of thought into, for instance, population ethics and related matters.

There are also some other specific issues which I think could be useful to study. One is the psychology of predictions about the distant future, and the implications of so-called construal level theory for the psychology of the longterm future. Many effective altruists would know construal level theory under another name: near mode and far mode. This is Robin Hanson's terminology. Construal level theory is a theory about psychological distance and how it relates to how abstractly we construe things. It says that we conceive of different forms of distance – spatial, temporal, social – similarly. The second claim is that we conceive of items and events at greater psychological distance more abstractly: we focus more on big picture features and less on details. Robin Hanson has discussed this theory very extensively, including with respect to the longterm future. And he argues that the great psychological distance to the distant future causes us to reason in overly abstract ways, to be overconfident, and to have poor epistemics in general about the distant future.

I find this very interesting, and these kinds of ideas are mentioned a lot in EA and the X-risk community. But, to my knowledge there hasn’t been that much research which applies construal level theory specifically to the psychology of the distant future.

It’s more like people look at these general studies of construal level theory, and then they noticed that, well, the temporal distance to the distant future is obviously extremely great. Hence, these general findings should apply to a very great extent. But, to my knowledge, this hasn’t been studied so much. And given how much people discuss near or far mode in this case, it seems that there should be some empirical research.

I should also mention that I find construal level theory a very interesting and rich psychological theory in general. I could see that it could illuminate the psychology of the distant future in numerous ways. Maybe it could be some kind of theoretical framework that I could use for many studies about the distant future. So, I recommend the key paper from 2010 by Trope and Liberman on construal level theory.

Lucas Perry: I think that just hearing you say this right now, it’s sort of opening my mind up to the wide spectrum of possible applications of psychology in this area.

You mentioned population ethics. That makes me think of, in the context of EA and longtermism and life in general, the extent to which psychological study and analysis can find ethical biases and root them out and correct for them, either by nudging or by changing the explicit methods by which humans cognize about such ethics. There's also the extent to which psychology can better inform our epistemics, which is the extent to which we can be more rational.

And I'm reflecting now on how quantum physics subverts many of our Newtonian and classical mechanics intuitions about the world. And there's the extent to which psychology can also inform the way in which our social and experiential lives condition the way that we think about the world, and the extent to which that sets us astray in trying to understand the fundamental nature of reality or thinking about the longterm future or thinking about ethics or anything else. It seems like you're at the beginning stages of debugging humans on some of the most important problems that exist.

Stefan Schubert: Okay. That’s a nice way of putting it. I certainly think that there is room for way more research on the psychology of longtermism and X-risk.

Lucas Perry: Can you speak a little bit now here about speciesism? This is both an epistemic thing and an ethical thing in the sense that we’ve invented these categories of species to describe the way that evolutionary histories of beings bifurcate. And then, there’s the psychological side of the ethics of it where we unnecessarily devalue the life of other species given that they fit that other category.

Stefan Schubert: So, we have one paper under review, which is called "Why People Prioritize Humans Over Animals: A Framework for Moral Anthropocentrism."

To give you a bit of context, there's been a lot of research on speciesism and on humans prioritizing humans over animals. So, in this paper we try to take a somewhat more systematic approach and pit these different hypotheses for why humans prioritize humans over animals against each other, and look at their relative strengths as well.

And what we find is that there is truth to several of these hypotheses of why humans prioritize humans over animals. One contributing factor is just that they value individuals with greater mental capacities, and most humans have greater mental capacities than most animals.

However, that explains only part of the effect we find. We also find that people think that humans should be prioritized over animals even if they have the same mental capacity. And here, we find that this is for two different reasons.

First, according to our findings, people are what we call species relativists. And by that, we mean that they think that members of a species, including different non-human species, should prioritize other members of that species.

So, for instance, humans should prioritize other humans, and an elephant should prioritize other elephants. And that means that because humans are the ones calling the shots in the world, we have a right then, according to this species relativist view, to prioritize our own species. But other species would, if they were in power. At least that’s the implication of what the participants say, if you take them at face value. That’s species relativism.

But then, there is also the fact that they exhibit an absolute preference for humans over animals, meaning that even if we control for the mental capacities of humans and animals, and even if we control for the species relativist factor, that is, we control for who the individual who could help is, there remains a difference which can't be explained by those other factors.

So, there’s an absolute speciesist preference for humans which can’t be explained by any further factor. So, that’s an absolute speciesist preference as opposed to this species relativist view.

In total, there's a bunch of factors that together explain why humans prioritize humans over animals, and these factors may also influence each other. So, we present some evidence that if people have a speciesist preference for humans over animals, that might, in turn, lead them to believe that animals have less advanced mental capacities than they actually have. And because they have this view that individuals with lower mental capacities are less morally valuable, that leads them to further deprioritize animals.

So, these three different factors, they sort of interact with each other in intricate ways. Our paper gives this overview over these different factors which contribute to humans prioritizing humans over animals.

Lucas Perry: This helps to make clear to me that a successful psychological study with regards to at least ethical biases will isolate the salient variables which are knobs that are tweaking the moral saliency of one thing over another.

Now, you said mental capacities there. You guys aren’t bringing consciousness or sentience into this?

Stefan Schubert: We discuss different formulations at length, and we went for the somewhat generic formulation.

Lucas Perry: I think people have beliefs about the ability to rationalize and understand the world, and then how that may or may not be correlated with consciousness that most people don’t make explicit. It seems like there are some variables to unpack underneath cognitive capacity.

Stefan Schubert: I agree. This is still fairly broad-brushed. The other thing to say is that sometimes we say that this human has as advanced mental capacities as these animals. Then, participants have no reason to believe that the human has a more sophisticated sentience or is more conscious or something like that.

Lucas Perry: Our species membership tells me that we probably have more consciousness. My bedrock concern is how much the thing can suffer or not, not how well it can model the world, though those things are probably highly correlated with one another. I don't think I would be a speciesist if I thought human beings were currently the most important thing on the planet.

Stefan Schubert: You’re a speciesist if you prioritize humans over animals purely because of species membership. But, if you prioritize one species over another for some other reasons which are morally relevant, then you would not be seen as a speciesist.

Lucas Perry: Yeah, I'm excited to see what comes of that. Much like working on overcoming racism and misogyny, I think that overcoming speciesism, temporal biases, and proximity biases in physical space are some of the next stages in human moral evolution that have to come. So, I think it's honestly terrific that you're working on these issues.

Is there anything you would like to say or that you feel that we haven’t covered?

Stefan Schubert: We have one paper which is called "The Puzzle of Ineffective Giving," where we study this misconception that people have, which is that they think the difference in effectiveness between charities is much smaller than it actually is. So, experts think that the most effective charities are vastly more effective than the average charity, and people don't know that.

That seems to suggest that beliefs play a role in ineffective giving. But there was one interesting paper called "Impediments to Effective Altruism" which shows that even if you tell people that a cancer charity is less effective than an arthritis charity, they still donate to it.

So, then we have this other paper called "The Many Obstacles to Effective Giving." It's a bit similar to the speciesism paper, I guess, in that we pit different competing hypotheses that people have studied against each other. We give people different tasks, for instance, tasks which involve identifiable victims and tasks which involve ineffective but low overhead charities.

And then, we asked, well, what if we tell them how to be effective? Does that change how they behave? What's the role of that pure belief factor? What's the role of preferences? The result is a bit of a mix: both beliefs and preferences contribute to ineffective giving.

In the real world, it's likely that several beliefs and preferences that obstruct effective giving are present simultaneously. For instance, people might fail to donate to the most effective charity because, first, it's not a disaster charity, and they might have a preference for a disaster charity. And it might have a high overhead, and they might falsely believe that high overhead entails low effectiveness. And it might not highlight identifiable victims, and they have a preference for donating to identifiable victims.

Several of these obstacles are present at the same time, and in that sense, ineffective giving is overdetermined. So, fixing one specific obstacle may not make as much of a difference as one would have wanted. That might support the view that what we need is not primarily behavioral interventions that address individual obstacles, but rather a broader mindset change that can motivate people to proactively seek out the most effective ways of doing good.

Lucas Perry: One other thing that’s coming to my mind is the proximity of a cause to someone’s attention and the degree to which it allows them to be celebrated in their community for the good that they have done.

Are you suggesting that the way for remedying this is to help instill a curiosity and something resembling the EA mindset that would allow people to do the cognitive exploration and work necessary to transcend these limitations that bind them to their ineffective giving or is that unrealistic?

Stefan Schubert: First of all, let me just say that with respect to this proximity issue, that was actually another task that we had. I didn’t mention all the tasks. So, we told people that you can either help a local charity or a charity, I think it was in India. And then, we told them that the Indian charity is more effective and asked “where would you want to donate?”

So, you’re absolutely right. That’s another obstacle to effective giving, that people sometimes have preferences or beliefs that local charities are more effective even when that’s not the case. Some donor I talked to, he said, “Learning how to donate effectively, it’s actually fairly complicated, and there are lots of different things to think about.”

So, just fixing the overhead myth or something like that may not take you very far, especially if you think that the very best charities are vastly more effective than the average charity. So, what's important is not going from an average charity to a somewhat more effective charity, but to actually find the very best charities.

And to do that, we may need to address many psychological obstacles, because the most effective charities might be very weird and concerned with the longterm future or what-not. So, I do think that a mindset where people seek out effective charities, or defer to others who do, might be necessary. It's not super easy to make people adopt that mindset, definitely not.

Lucas Perry: We have charity evaluators, right? These institutions which are intended to be reputable enough that they can tell you which are the most effective charities to donate to. It wouldn’t even be enough to just market those really hard. They’d be like, “Okay, that’s cool. But, I’m still going to donate my money to seeing eye dogs because blindness is something that runs in my family and is experientially and morally salient for me.”

Is the way that we fix the world really about just getting people to give more, and what is the extent to which the institutions which exist, which require people to give, need to be corrected and fixed? There’s that tension there between just the mission of getting people to give more, and then the question of, well, why do we need to get everyone to give so much in the first place?

Stefan Schubert: This insight that ineffective giving is overdetermined, and that there are lots of things that stand in the way of effective giving, one thing I like about it is that it seems to go well with the observation that it is actually, in the real world, very difficult to make people donate effectively.

I might relate there a bit to what you mentioned about the importance of giving more. We could distinguish between different kinds of psychological limitations. First, the limitations that relate to how much we give. We're selfish, so we don't necessarily give as much of our monetary resources as we should. There are limits to altruism.

But then, there are also limits to effectiveness. We are ineffective for various reasons that we've discussed. And then, there's also the fact that we can have the wrong moral goals. Maybe we work towards short term goals, but on careful reflection we would realize that we should work towards long term goals.

And then, I was thinking, "Well, which of these obstacles should you then prioritize if you turn this sort of prioritization framework inwards?" You might think that, well, at least with respect to giving, it might be difficult for you to increase the amount that you give by more than 10 times. Americans, for instance, already donate several percent of their income. We know from historical experience that it might be hard for people to sustain very high levels of altruism, so maybe it's difficult for them to ramp up this altruism factor to an extreme amount.

But then, with effectiveness, if this story about heavy-tailed distributions of effectiveness is right, then you could increase the effectiveness of your donations a lot. And arguably, the sort of psychological price for that is lower. It’s very demanding to give up a huge proportion of your income for others, but I would say that it’s less demanding to redirect your donations to a more effective cause, even if you feel more strongly for the ineffective cause.

I think it's difficult to really internalize how enormously important it is to go for the most effective option. And then, of course, there's the third factor: to change your moral goals if necessary. If people were to reduce their donations by 99%, they would reduce their impact by 99%, and many people would feel guilty about that.

But if they reduce their impact by 99% via reducing their effectiveness by 99%, through choosing an ineffective charity, then people don't feel similarly guilty. It's similar to Nate Soares' idea of a care-o-meter: our feelings aren't adjusted for these things, so we don't feel as much about ineffectiveness as we do about altruistic sacrifice. And that might lead us to not focus enough on effectiveness, so we should really think carefully about going that extra mile for the sake of effectiveness.

Lucas Perry: Wonderful. I feel like you've given me a lot of concepts and tools that are very helpful for reinvigorating an introspective mindfulness about altruism in my own life and how that can be nurtured and developed.

So, thank you so much. I've really enjoyed this conversation for the reasons I just said. I think this is a very important new research stream in this space, and it seems small now, but I really hope that it grows. And thank you to you and your colleagues for seeding and doing the initial work in this field.

Stefan Schubert: Thank you very much. Thank you for having me. It was a pleasure.

FLI Podcast: Cosmological Koans: A Journey to the Heart of Physical Reality with Anthony Aguirre

There exist many facts about the nature of reality which stand at odds with our commonly held intuitions and experiences of the world. Ultimately, there is a relativity of the simultaneity of events and there is no universal “now.” Are these facts baked into our experience of the world? Or are our experiences and intuitions at odds with these facts? When we consider this, the origins of our mental models, and what modern physics and cosmology tell us about the nature of reality, we are beckoned to identify our commonly held experiences and intuitions, to analyze them in the light of modern science and philosophy, and to come to new implicit, explicit, and experiential understandings of reality. In his book Cosmological Koans: A Journey to the Heart of Physical Reality, FLI co-founder Anthony Aguirre explores the nature of space, time, motion, quantum physics, cosmology, the observer, identity, and existence itself through Zen koans fueled by science and designed to elicit questions, experiences, and conceptual shifts in the reader. The universe can be deeply counter-intuitive at many levels and this conversation, rooted in Anthony’s book, is an attempt at exploring this problem and articulating the contemporary frontiers of science and philosophy.

Topics discussed include:

  • What is skillful about a synergy of Zen and scientific reasoning
  • The history and philosophy of science
  • The role of the observer in science and knowledge
  • The nature of information
  • What counts as real
  • The world in and of itself and the world we experience as populated by our concepts and models of it
  • Identity in human beings and future AI systems
  • Questions of how identity should evolve
  • Responsibilities and open questions associated with architecting life 3.0

 

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

Lucas Perry: Welcome to the Future of Life Institute podcast. I'm Lucas Perry. Today, we're speaking with Anthony Aguirre. He is a cosmologist, a co-founder of the Future of Life Institute, and a co-founder of the Foundational Questions Institute. He also has a cool prediction market called Metaculus that I suggest you check out. We're discussing his book, Cosmological Koans: A Journey to the Heart of Physical Reality. This is a book about physics from a deeply philosophical perspective in the format of Zen koans. This discussion is different from the usual topics of the podcast, though there are certainly many parts that directly apply. I feel this will be of interest to people who like big questions about the nature of reality. Some of the questions and topics that we explore are: what is skillful about a synergy of Zen and scientific reasoning, the history and philosophy of science, the nature of information, and the question of what is real. We discuss the world in and of itself and the world we experience as populated by our concepts and stories about the universe. We discuss identity in people and future AI systems. We wonder about how identity should evolve in persons and AI systems. And we also get into the problem we face of architecting new forms of intelligence with their own lived experiences, and identities, and understandings of the world.

As a bit of side news, Ariel is transitioning out of her role at FLI, so I'll be taking over the main FLI podcast from here on out. This podcast will continue to deal with broad issues in the space of existential risk and areas that pertain broadly to the Future of Life Institute, like AI risk and AI alignment, as well as bio-risk and climate change, and the stewardship of technology with wisdom and benevolence in mind. And the AI Alignment Podcast will continue to explore the technical, social, political, ethical, psychological, and broadly interdisciplinary facets of the AI alignment problem. I deeply appreciated this conversation with Anthony, and I feel that conversations like these help me to live an examined life. If the topics and questions that I've mentioned are of interest to you or resonate with you, then I think you'll find this conversation valuable as well.

So let's get into our conversation with Anthony Aguirre.

We're here today to discuss your work, Cosmological Koans: A Journey to the Heart of Physical Reality. As a little bit of background, tell me a little bit about your experience as a cosmologist and someone interested in Zen whose pursuits have culminated in this book.

Anthony Aguirre: I've been a cosmologist professionally for 20 years or so, since grad school I suppose, but I've also for my whole life had just the drive to understand what reality is, what reality is all about. One approach to that, certainly to understanding physical reality, is physics and cosmology and fundamental physics and so on. I would say that the understanding of mental reality, what is going on in the interior sense, is also reality and is also crucially important. That's what we actually experience. I've long had an interest in both sides of that question. What is this interior reality? Why do we have experience the way we do? How is our mind working? As well as: what is the exterior reality of physics and the fundamental physical laws and the large scale picture of the universe and so on?

While professionally I've been very focused on the external side, and the cosmological side in particular, I've nourished that interest in the inner side as well, and in how the interior side and the exterior side connect in various ways. I think that longstanding interest has built the foundation of what then turned into this book, which I've put together over a number of years that I don't care to admit to.

Lucas Perry: There's this aspect of when we're looking outward, we're getting a story of the universe, and then that story of the universe eventually leads up into us. For example, as Carl Sagan classically pointed out, the atoms which make up your body had to be fused in supernovas, at least the things which aren't hydrogen and helium. So we're all basically complex aggregates of collapsed interstellar gas clouds. And this shows that looking outward into the cosmos is also a process of uncovering the story of the person and of the self as well.

Anthony Aguirre: Very much so, in that I think to understand how our mind works and how our body works, we have to situate that within a chain of wider and wider contexts. We have to think of ourselves as biological creatures, and that puts us in the context of biology and of evolution over the history of the earth, but that in turn is in the context of where the earth sits in cosmic evolution in the universe as a whole, and also where biology and its functioning sits within the context of physics and other sciences, information theory, computational science. I think to understand ourselves, we certainly have to understand those other layers of reality.

I think what's often assumed, though, is that to understand those other layers of reality, we don't have to understand how our mind works. I think that's tricky, because on the one hand, we're asking for descriptions of objective reality, and we're asking for laws of physics. We don't want to ask for opinions that we're going to disagree about. We want something that transcends our own minds and our ability to understand or describe those things. We're looking for something objective in that sense.

I think it's also true that many of the things that we talk about as fairly objective unavoidably contain a fairly subjective component. Once we have the idea of an objective reality out there that is independent of who's observing it, we ascribe a lot of objectivity to things that are in fact much more of a mix, that have a lot more ingredients that we have brought to them than we like to admit, and that are not wholly out there to be observed by us as impartial observers but are very much a tangled interaction between the observer and the observed.

Lucas Perry: There are many different facets and perspectives here about why taking the cosmological perspective of understanding the history of the universe, as well as the person, is deeply informative. In terms of the perspective of the Future of Life Institute, understanding cosmology tells us what is ultimately possible for life in terms of how long the universe will last and how far you can spread, and gives us fundamental facts about information and entropy, which are interesting, and which also ultimately determine the fate of intelligence and consciousness in the world. There's also this anthropic aspect that you're touching on about how observers only observe the kinds of things that observers are able to observe. We can also consider the limits of the concepts that are born of being a primate conditioned by evolution and culture, and the extent to which our concepts are lived experiences within our world model. And then there's this distinction between the map and the territory, or our world model and the world itself. And so perhaps part of fusing Zen with cosmology is experientially being mindful of not confusing the map for the territory in our moment to moment experience of things.

There’s also this scientific method for understanding what is ultimately true about the nature of reality, and then what Zen offers is an introspective technique for trying to understand the nature of the mind, the nature of consciousness, the causes and conditions which lead to suffering, and the concepts which inhabit and make up conscious experience. I think all of this thinking culminates in an authentically lived life as a scientist and as a person who wants to know the nature of things, to understand the heart of reality, to attempt to not be confused, and to live an examined life – both of the external world and the experiential world as a sentient being.

Anthony Aguirre: Something like that, except I nurture no hope of ever not being confused. I think confusion is a perfectly admirable state, in the sense that reality is confusing. You can try to think clearly, but I think there are always going to be questions of interest that you simply don’t understand. If you go into anything deeply enough, you will fairly quickly run into, wow, I don’t really get that. There are very few things that, if you push into them carefully and skeptically and open-mindedly enough, you won’t come to that point. I think I would actually be let down if I ever got to the point where I wasn’t confused about something. All the fun would be gone. But otherwise, I think I agree with you. Where shall we start?

Lucas Perry: This helps to contextualize some of the motivations here. We can start by explaining why cosmology and Zen in particular? What are the skillful means born of a fusion of these two things? Why fuse these two things? I think some number of our audience will be intrinsically skeptical of all religion or spiritual pursuits. So why do this?

Anthony Aguirre: There are two aspects to it. I think one is a methodological one, which is that Cosmological Koans is made up of these koans, and they’re not quite the same koans that you would get from a Zen teacher, but they’re sort of riddles or confrontations that are meant to take the recipient and cause them to be a little bit baffled, a little bit surprised, a little bit maybe shocked at some aspect of reality. The idea is both to confront someone with something that is weird or unusual, or that contradicts what they might have believed beforehand in a comfortable, familiar way, and make it uncomfortable and unfamiliar; and also to make the thing that is being discussed about the person rather than an abstract intellectual pursuit. Something that I like about Zen is that it’s about immediate experience. It’s about: here you are, here and now, having this experience.

Part of the hope, methodologically, of Cosmological Koans is to try to put the reader personally in the experience, rather than have it be stuff out there that physicists over there are thinking about and researching, or that we can speculate about from a purely third person point of view. It’s to emphasize that if we’re talking about the universe and the laws of physics and reality, we’re part of the universe. We’re obeying those laws of physics. We’re part of reality. We’re all mixed up in that. There can be cases where it’s useful to get a distance from that, but then there are also cases where it’s really important to understand what that all has to do with you. What does this say about me and my life, my experience, my individual subjective, first person view of the world? What does that have to do with these very third person objective things that physics studies?

Part of the point is to find an interesting and fun way to jolt someone into seeing the world in a new way. The other part is to make it about the reader in this case, or about the person asking the questions, and not just the universe out there. That’s one part of why I chose this particular format.

I think the other is a little bit more on the content side to say I think it’s dangerous to take things that were written 2,500 years ago and say, oh look, they anticipated what modern physics is finding now. They didn’t quite. Obviously, they didn’t know calculus, let alone anything else that modern physics knows. On the other hand, I think the history of thinking about reality from the inside out, from the interior perspective using a set of introspective tools that were incredibly sophisticated through thousands of years does have a lot to say about reality when the reality is both the internal reality and the external one.

In particular, when you’re talking about a person experiencing the physical world, perceiving something in the exterior physical world in some way, what goes on in that process has both a physical side to it and an internal, subjective, mental side to it, and you can observe how much of the interior gets brought to the perception. In that sense, I think the Eastern traditions are way ahead of where the West was. The West has had this idea that there’s the external world out there that sends information in, and we receive it and have a pretty much accurate view of what the world is. The idea, instead, is that what we are actually experiencing is very much a joint effort of the experiencer and that external world, building up this thing in the middle that brings the individual, along with a whole backdrop of social and biological and physical history, to every perception. I think that is something that is (a) true, and (b) there’s been a lot more investigation of that on the Eastern side and on the philosophical side, some in Western philosophy too of course, but on the philosophical side rather than just the physical side.

I think the book is also about exploring that connection. What are the connections between our personal first person, self-centered view and the external physical world? In doing that investigation, I’m happy to jump to whatever historical intellectual foundations there are, whether it’s Zen or Western philosophy or Indian philosophy or modern physics or whatever. My effort is to touch on all of those at some level in investigating that set of questions.

Lucas Perry: Human beings are the only general epistemic agents in the universe that we’re currently aware of. From the point of view of the person, all the progress we’ve made in philosophy and science, all that there has ever been historically from a first person perspective, is consciousness and its contents, and our ability to engage with those contents. It is by virtue of engaging with the contents of consciousness that we believe we gain access to the outside world. You point out here that in Western traditions, it’s been felt that we just have all of this data come in and we’re basically seeing and interacting with the world as it really is. But as we’ve increasingly uncovered, the process of science and of interrogating the external world is more like this: you have an internal, virtual world model, a simulation you construct as a representation of the world, which you use to engage with and navigate it.

From this first person experiential bedrock, Western philosophers like Descartes have tried to assume certain things about the nature of being, like “I think, therefore I am.” And from assumptions about being, the project and methodologies of science are born of that reasoning and follow from it. It seems like it took Western science a long time, perhaps up until quantum physics, to really come back to the observer, right?

Anthony Aguirre: Yeah. I would say that a significant part of the methodology of physics was at some level to explicitly get the observer out and to talk about only objectively mathematically definable things. The mathematical part is still with physics. The objective is still there, except that I think there’s a realization that one always has to, if one is being careful, talk about what actually gets observed. You could do all of classical physics at some level, physics up to the beginning of the 20th century without ever talking about the observer. You could say there is this object. It is doing this. These are the forces acting on it and so on. You don’t have to be very careful about who is measuring those properties or talking about them or in what terms.

Lucas Perry: Unless they would start to go fast and get big.

Anthony Aguirre: Before the 20th century, you didn’t care if things were going fast. In the beginning of the 20th century though, there was relativity, and there was quantum mechanics, and both of those suddenly had the agent doing the observations at their centers. In relativity, you suddenly have to worry about what reference frame you’re measuring things in, and things that you thought were objective facts, like how long the time interval is between two things that happen, suddenly were revealed to be not objective facts, but dependent on who the observer is: in particular, their reference frame, their state of motion, and so on.

Everything else, as it turned out, is really more like a property of the world that the world can either have or not when someone checks. The structure of quantum mechanics is, at some level, that things have a state, which encodes something about the object, and what it encodes is a set of questions that I could ask the object and get answers to. There’s a particular set of questions that I might ask and get definite answers to. If I ask other questions that aren’t in that list, then I still get answers, but they’re indefinite, and so I have to use probabilities to describe them.

This is a very different structure to say the object is a list of potential answers to questions that I might pose. It’s very different from saying there’s a chunk of stuff that has a position and a momentum and a force is acting on it and so on. It feels very different. While mathematically you can make the connections between those, it is a very different way of thinking about reality. That is a big change obviously and one that I think still isn’t complete in the sense that as soon as you start to talk that way and say an electron or a glass of water or whatever is a set of potential answers to questions, that’s a little bit hard to swallow, but you immediately have to ask, well, who’s asking the questions and who’s getting the answers? That’s the observer.

The structure of quantum mechanics from the beginning has been mute about that. It said: make an observation and you’ll get these probabilities. That’s just pushing the observer into the thing that by definition makes observations, but without a specification of what it means to make an observation. What’s allowed to do it and what isn’t? Can an electron observe another electron, or does it have to be a big group of electrons? What exactly counts as making an observation, and so on? There are all these questions about what this actually means that have just been sitting around since quantum mechanics was created and really haven’t been answered in any agreed upon or, I would say, really satisfactory way.

Lucas Perry: There’s a ton there. In terms of your book, there’s this fusion between what is skillful and true about Zen and what is skillful and true about science. You discussed here, historically, this transition to an emphasis on the observer and information, and how those change both epistemology and ontology. The project of Buddhism or the project of Zen is ultimately also different from the project and intentions of Western science historically, in terms of the normative, and the ethics driving it, and whether it’s even trying to make claims about those kinds of things. Maybe you could also explain a little bit there about where the projects diverge, and what they’re ultimately trying to say either about the nature of reality or the observer.

Anthony Aguirre: Certainly in physics and much of philosophy of physics I suppose, it’s purely about superior understanding of what physical reality is and how it functions and how to explain the world around us using mathematical theories but with little or no translation of that into anything normative or ethical or prescriptive in some way. It’s purely about what is, and not only is there no ought connected with it as maybe there shouldn’t be, but there’s no necessary connection between any statement of what ought to be and what is. No translation of because reality is like this, if we want this, we should do this.

Physics has got to be part of that. What we need to do in order to achieve our goals has to do with how the world works, and physics describes that so it has to be part of it and yet, it’s been somewhat disconnected from that in a way that it certainly isn’t in spiritual traditions like Buddhism where our goal in Buddhism is to reduce or eliminate suffering. This is how the mind works and therefore, this is what we need to do given the way the mind and reality works to reduce or eliminate suffering. That’s the fundamental goal, which is quite distinct from the fundamental goal of just I want to understand how reality works.

I do think there’s more to do, and obviously there are sciences that fill that role, like psychology and social science and so on, that are more about: let’s understand how the mind works, let’s understand how society works, so that given some set of goals, like greater harmony in society or greater individual happiness, we have some sense of what we should do in order to achieve those. I would say there’s a pretty big gap nowadays between those fields on the one hand and fundamental physics on the other. You can spend a lot of time doing social science or psychology without knowing any physics and vice versa, but at the same time, it’s not clear that they really should be so separate. Physics is talking about the basic nature of reality. Psychology is also talking about the basic nature of reality, but two different sides of it, the interior side and the exterior side.

Those two are very much connected, and so it shouldn’t really be possible to fully understand one without at least some of the other. That I think is also part of the motivation that I have, because I don’t think that you can have a comprehensive worldview of the type that you want to have, in order to understand what we should do, without having some of both aspects in it.

Lucas Perry: The observer has been part of the equation the whole time. It’s just that classical mechanics is such that the observer never really mattered that much, but now it matters more given astronomy and communications technologies. When determining what is, the fact that an observer is trying to determine what is, and that the observer has a particular nature, impacts the process of trying to discover what is. But not only are there supposed “is statements” that we’re trying to discover or understand; we’re also, from one perspective, conscious beings with experiences, and we have suffering and joy, and are trying to determine what we ought to do. I think what you’re pointing towards is basically an alternate unification of the problem of determining what is with the often overlooked fact that we are contextualized as creatures in the world we’re attempting to understand, making decisions about what to do next.

Anthony Aguirre: I think you can think of that in very big terms like that: in this cosmic context, what is subjectivity? What is consciousness? What does it mean to have feelings of moral value and so on? Let’s talk about that. I think it’s also worth being more concrete, in the sense that if you think about my experience as an agent in the world: insofar as I think the world is out there objectively and I’m just perceiving it more or less directly, I tend to make very real in my mind a lot of things that aren’t necessarily real. Things that are very much half created by me, I tend to then turn into objective things out there and then react to them. This is something that we all do on a personal basis all the time in our daily lives. We make up stories and then we think that those stories are real. This is just a very concrete thing that we do every day.

Sometimes that works out well and sometimes it doesn’t, because if the story that we have is different from the story that someone else has, or from the story that society has, or from some in some ways more objective story, then we have a mismatch, and we can cause a lot of poor choices and poor outcomes by doing that. Simply the very clear psychological fact, which we can discover with a little bit of self analysis, that the stories we make up aren’t as true as we usually think they are, that’s just one end of the spectrum of this process by which we as sentient beings are very much co-creating the reality that we’re inhabiting.

I think we’re comfortable with the fact that this co-creation process happens when we make up stories about what happened yesterday when I was talking to so and so. We don’t think of it so much when we’re talking about a table. We think the table is there. It’s real. If anything is real, it is. When we go deeper, we can realize that all of the things like color and solidity and endurance over time aren’t in the wave function of the atoms and the laws of physics evolving them. Those things are properties that we’ve brought as useful ways to describe the world, properties that have developed over millions of years of evolution and thousands of years of social evolution and so on. None of those things are built into the laws of nature. Those are all things that we’ve brought. That’s not to say that the table is made up. Obviously, it’s not. The table is very objective in a sense, but there’s no table built into the structure of the universe.

I think we tend to brush under the rug how much we bring to our description of reality. We say that it’s out there. We can realize this on small levels, but realizing the depth of how much we bring to our perceptions, and where that stuff comes from, which is a long, historical, complicated, information-generating process, takes a lot more diving in and thinking about.

Lucas Perry: Right. If one were god or if one were omniscient, then to know the universe at the ultimate level would be to know the cosmic wave function, and within the cosmic wave function, things like marriage and identity and the fact that I have a title and conceptual history about my life are not bedrock ontological things. Rather they’re concepts and stories that sentient beings make up due to, as you said, evolution and social conditioning and culture.

Anthony Aguirre: Right, but when you’re saying that, I think there’s a suggestion that the cosmic wave function’s description would be better in some way. I’d take issue with that, because I think if you were some super duper mega intelligence that just knew the position of every atom, or exactly the cosmic wave function, that doesn’t mean that you would know that the table in front of me is brown. That description of reality has all the particles in it and their positions, and at some level all the information that you could have of the fundamental physics, but it’s completely missing a whole bunch of other stuff, which is the ways that we categorize that information into meaningful things like solidity and color and tableness.

Lucas Perry: It seems to me that that must be contained within that ultimate description of reality because in the end, we’re just arrangements of particles and if god or the omniscient thing could take the perspective of us then they would see the table or the chair and have that same story. Our stories about the world are information built into us. Right?

Anthony Aguirre: How would it do that? What I’m saying is there’s information. Say the wave function of the universe. That’s some big chunk of information describing all kinds of different observations you could make of locations of atoms and things, but nowhere in that description is it going to tell you the things that you would need to know in order to talk about whether there’s a glass on the table in front of me because glass and table and things are not part of that wave function. Those are concepts that have to be added to it. It’s more specification that has been added that exists because of our view of the world. It only exists from the interior perspective of where we are as creatures that have evolved and are looking out.

Lucas Perry: My perspective here is that, given the full capacity of the universal wave function for the creation of all possible things, there is the total set of arbitrary concepts and stories and narratives and experiences that sentient beings might dream up, which arise within the context of that particular cosmic wave function. There could be tables and chairs, or sniffelwoops and worbblogs, but if we were god and we had the wave function, we could run it such that we created the kinds of creatures who dreamt a life of sniffelwoops and worbblogs or whatever else. To me, it seems like it’s more that it’s contained within the original thing.

Anthony Aguirre: This is where I think it’s useful to talk about information because I think that I just disagree with that idea in the sense that if you think of an eight-bit string, so there’s 256 possibilities of where the ones and zeros can be on and off, if you think of all 256 of those things, then there’s no information there. Whereas when I say actually only 128 of these are allowed because the first one is a one, you cut down the list of possibilities, but by cutting it down, now there’s information. This is exactly the way that information physically or mathematically is defined. It’s by saying if all the possibilities are on equal footing, you might say equally probable, then there’s no information there. Whereas, if some of them are more probable or even known, like this is definitely a zero or one, then that whole thing has information in it.
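A minimal Python sketch of the counting argument described here, assuming the standard convention that information gained equals the reduction in uncertainty, measured in bits (the function name is just illustrative):

```python
import math

def uncertainty_bits(num_possibilities: int) -> float:
    """Uncertainty, in bits, when every possibility is on an equal footing."""
    return math.log2(num_possibilities)

# All 2**8 = 256 eight-bit strings equally possible: maximal uncertainty,
# and, in this sense, no information in the description.
all_strings = 2 ** 8
print(uncertainty_bits(all_strings))        # 8.0 bits of uncertainty

# Learning "the first bit is a one" cuts the possibilities to 128.
constrained = 2 ** 7
print(uncertainty_bits(constrained))        # 7.0 bits of uncertainty

# Information gained = reduction in uncertainty.
print(uncertainty_bits(all_strings) - uncertainty_bits(constrained))  # 1.0 bit
```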

I think very much the same way with reality. If you think of all the possibilities and they’re all on the table with equal validity, then there’s nothing there. There’s nothing interesting. There’s no information there. It’s when you cut down the possibilities that the information appears. You can look at this in many different contexts. If you think about it in quantum mechanics, if you start some system out, it evolves into many possibilities. When you make an observation of it, you’re saying, oh, this possibility was actually realized and in that sense, you’ve created information there.

Now suppose you subscribe to the many worlds view of quantum mechanics. You would say that the world evolves into two copies, one in which thing A happened and one in which thing B happened. In that combination, A and B, there’s less information than in either A or B. If you’re observer A or if you’re observer B, you have more information than if you’re observer C looking at the combination of things. In that sense, I think we as residents, not with omniscient view, but as limited agents that have a particular point of view actually have more information about the world in a particular sense than someone who has the full view. The person with the full view can say, well, if I were this person, I would see this, or if I were this person, I would see that. They have in some sense a greater analytical power, but there’s a missing aspect of that, which is to make a choice as to which one you’re actually looking at, which one you’re actually residing in.

Lucas Perry: It’s like the world model which you’re identified with or the world model which you’re ultimately running is the point. The eight-bit string that you mentioned: that contains all possible information that can be contained within that string. Your point is that when we begin to limit it is when we begin to encode more information.

Anthony Aguirre: That’s right. There’s a famous story called the Library of Babel by Borges. It’s a library with every possible sequence of characters, just book, after book, after book. You have to ask yourself how much information there is in that library. On the one hand, it seems like a ton, because each volume you pick out has a big string of characters in it, but on the other hand, there’s nothing there. You would search practically forever, far longer than the age of the universe, before you found even a sentence that made any sense.

Lucas Perry: The books also contain the entire multi-verse, right?

Anthony Aguirre: If they go on infinitely long, if they’re not finite length books. This is a very paradoxical thing about information, I think, which is that if you combine many things with information in them, you get something without information in it. That’s very, very strange. That’s what the Library of Babel is. I think it’s many things with lots of information, but combined, they give you nothing. I think that’s in some level how the universe is that it might be a very low information thing in and of itself, but incredibly high information from the standpoint of the beings that are in it like us.

Anthony Aguirre: When you think of it that way, we become vastly, vastly more important than you might think because all of that information that the universe then contains is defined in terms of us, in terms of the point of view that we’re looking out from, without which there’s sort of nothing there. That’s a very provocative and strange view of the world, but that’s more and more the way I think maybe it is.

Lucas Perry: I’m honestly confused. Can you expand upon your example? 

Anthony Aguirre: Suppose you’ve got the Library of Babel. It’s there, it’s all written out. But suppose that once there’s a sentence like, “I am here observing the world,” you can attribute to that sentence a point of view. So once you have that sequence of words like, “I am here observing the world,” it has a subjective experience. Then almost no book in this whole library has that, but a very, very, very select few do. And then you focus on those books. You would say there’s a lot of information associated with that sub-selection of books, because making something more special means that it has more information. So once you specify something, there’s a bunch of information associated with it.

Anthony Aguirre: By picking out those particular books, now you’ve created information. What I’m saying is there’s a very particular subset of the universe or subset of the ways the universe could be, that adds a perspective that has a subjective sense of looking out at the world. And if you specify, once you focus in from all the different states of the universe to those associated … having that perspective, that creates a whole bunch of information. That’s the way that I look at our role as subjective observers in the universe, that by being in a first person perspective, you’re sub-selecting a very, very, very special set of matter and thus creating a whole ton of information relative to all possible ways that the matter could be arranged.

Lucas Perry: So for example, say the kitchen is dirty, and if you leave the kitchen alone, entropy will just continue to make the kitchen more dirty, because there are more possible states in which the kitchen is dirty than in which it is clean, and there are more possible states of the universe in which sentient human beings do not arise. But here we are, encoded on a planet with the rest of organic life … and in total, evolution and the history of life on this planet requires a large and unequal amount of information and specification.

Anthony Aguirre: Yes, I would say … We haven’t talked about entropy, and I don’t know if we should. Genericness is the opposite of information. So when something’s very specific, there’s information content, and when it’s very generic, there’s less information content. This is at some level saying, “Our first person perspective as conscious beings is very, very specific.” I think there is something very special and mysterious at least, about the fact that there’s this very particular set of stuff in the universe that seems to have a first person perspective associated with it. That’s where we are, sort of almost by definition.

That’s where I think the question of agency and observation and consciousness has something to do with how the universe is constituted, not in that it changes the universe in some way, but that connected with this particular perspective is all this information, and if the physical world is at some level made of information, that’s a very radical thing because that’s saying that through our conscious existence and our particular point of view, we’re creating information, and information is reality, and therefore we’re creating reality.

There are all these ways that we apply physics to reality that are very information theoretic. There’s this sort of claim that a more useful way to think about the constituents of reality is as informational entities. And then the second claim is that by specifying, we create information. And then the third is that by being conscious observers who come into being in the universe and then have our perspective that we look out toward the universe from, we are making a selection, we’re specifying, “This is what I see.” So we’re then creating a bunch of information and thus creating a reality.

In that sense, I’m claiming that we create a reality, not from some, “I think in my mind and therefore reality appears like magical powers,” but that if we really talk about what’s real, it isn’t just little bits of stuff I think, but it’s everything else that makes up reality and that information that makes up reality is something that we very much are part of the creation of. 

There are different definitions of information, but the way that the word is most commonly used is for Shannon information. And what that is, is an amount that is associated with a set of probabilities. So if I say I’m going to roll some dice, what am I going to roll? So you’d say, “I don’t know.” And I’d say, “Okay, so what probabilities would you ascribe to what I’m going to roll?” And you’d say, “Well probably a sixth for each side of the die.” And I would say that there’s zero information in that description. And I say that because that’s the most uncertain you could be about the rolls of the dice. There’s no information there in your description of the die.

Now I roll it, and we see that it’s a three. So now the probability of three is 100% or at least very close to it. And the probability of all the other ones is zero. And now there is information in our description. Something specific has happened, and we’ve created information. That’s not a magical thing; it’s just the information is associated with probabilities over things, and when we change the probabilities, we change how much information there is.
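A small sketch of this dice example in Python, using the standard Shannon entropy formula (the fair die and the observed three come from the example above; the function name is just illustrative):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits; terms with p = 0 contribute nothing."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Before the roll: a fair six-sided die, maximal uncertainty about the outcome.
before = [1 / 6] * 6
print(shannon_entropy(before))   # ~2.585 bits of uncertainty, zero information

# After observing a three: probability 1 on one face, 0 on the rest.
after = [0, 0, 1, 0, 0, 0]
print(shannon_entropy(after))    # 0.0 bits of uncertainty

# Information created (or gathered) by the observation = drop in uncertainty.
print(shannon_entropy(before) - shannon_entropy(after))  # ~2.585 bits
```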

Usually when we observe things, we narrow the probabilities. That’s kind of the point of making observations, to find out more about something. In that sense, we can say that we’re creating information or we’re gathering information, so we’ve created information or gathered it in that sense by doing the measurement. In that sense, any time we look at anything, we’re creating information, right?

If I just think what is behind me, well there’s probably a pillar. It might be over there, it might be over there. Now let me turn around and look. Now I’ve gathered information or created information in my description of pillar location. Now when we’re talking about a wave function and somebody measuring the wave function, and we want to keep track of all of the information and so on, it gets rather tricky because there are questions about whose probabilities are we talking about, and whose observations and what are they observing. So we have to get really careful and technical about what sort of probabilities are being defined and whose they are, and how are they evolving.

When you read something like, “Information is preserved in the universe,” what that actually means is that if I take some description of the universe now and then I close my eyes and I evolve that description using the laws of physics, the information that my description had will be preserved. So the laws of physics themselves will not change the amount of information in that description.

But as soon as I open my eyes and look, it changes, because I’ll observe something, and I’ll see that while my eyes were closed, the universe could have evolved into two different things. Now I open them and see which one it actually evolved into. Now I’ve increased the information. I’ve reduced the uncertainty. So it’s very, very subtle, the way in which the universe preserves information. The dynamics of the universe, the laws of physics, preserve the information that is associated with a description that you have of the world. There’s an incredible amount of richness there, because that’s what’s actually happening. If you want to think about what reality is, that’s what reality is, and it’s the observers who are creating that description, observing the world, and changing the description to match what they saw. Reality is a combination of those two things: the evolution of the world by the laws of physics, and the interaction of that with the person, or whatever it is, that is asking the questions and making the observations.
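To make this concrete, here is a toy sketch with a made-up four-state “universe” and an invented deterministic rule for its evolution (both are assumptions for illustration, not anything from the conversation): deterministically evolving a probability description leaves its entropy unchanged, while opening your eyes and observing the actual state collapses it.

```python
import math

def entropy_bits(dist):
    """Shannon entropy, in bits, of a probability distribution over named states."""
    return sum(-p * math.log2(p) for p in dist.values() if p > 0)

# A hypothetical deterministic "law of physics": each state maps to exactly one next state.
law = {"A": "B", "B": "C", "C": "D", "D": "A"}

def evolve(dist):
    """Evolve a description (probability distribution) one step under the deterministic law."""
    out = {state: 0.0 for state in law.values()}
    for state, p in dist.items():
        out[law[state]] += p
    return out

# My eyes-closed description: the universe is in A or B with equal probability.
description = {"A": 0.5, "B": 0.5, "C": 0.0, "D": 0.0}
print(entropy_bits(description))           # 1.0 bit of uncertainty
print(entropy_bits(evolve(description)))   # still 1.0 bit: the dynamics preserve it

# Opening my eyes and finding the universe in state "C" collapses my description.
observed = {"B": 0.0, "C": 1.0}
print(entropy_bits(observed))              # 0.0 bits: the observation reduced the uncertainty
```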

What’s very tricky is that unlike matter, information is not something that you can say, “I’ve got four bits of information here and five bits of information here, so I’m going to combine them and get nine bits of information.” Sometimes that’s true, but other times it’s very much not true. That’s what’s very, very, very tricky I think. So if I say I’ve got a die and I rolled a one with a 100% chance, that’s information. If I say I have a die and I rolled a two, or if I say I had a die and then rolled a three, all of those have information associated with them. But if I combine those in the sense that I say I have a die and I rolled a one and a two and a three and a four and a five and a six, then there’s no information associated with that.

All of the things happened, and so that’s what’s so tricky about it. It’s the same with the Library of Babel. If I take every possibility on an equal footing, then none of them is special and there’s no information associated with that. If I take a whole bunch of special things and put them in a big pot, I just have a big mess, and then there’s nothing special any more.
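A brief sketch of this non-additivity point under one common convention, treating information as the gap between maximal uncertainty and the entropy of a description (an illustrative framing, not notation from the conversation): each definite die description carries full information, while the “all outcomes on an equal footing” combination carries none.

```python
import math

def entropy_bits(probs):
    """Shannon entropy of a probability distribution, in bits."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

def information_bits(probs):
    """Information as the reduction below maximal uncertainty for this many outcomes."""
    return math.log2(len(probs)) - entropy_bits(probs)

# "I rolled a one", "I rolled a two", "I rolled a three": each is a definite,
# fully specific description of a six-sided die, so each carries full information.
rolled_one   = [1, 0, 0, 0, 0, 0]
rolled_two   = [0, 1, 0, 0, 0, 0]
rolled_three = [0, 0, 1, 0, 0, 0]
for description in (rolled_one, rolled_two, rolled_three):
    print(information_bits(description))   # ~2.585 bits each

# "I rolled a one and a two and a three and a four and a five and a six":
# every outcome on an equal footing, so the combination carries no information.
everything = [1 / 6] * 6
print(information_bits(everything))         # 0.0 bits
```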

When I say something like, “The world is made out of information,” that means that it has different sort of properties than if it was made out of stuff. Because stuff … Like you take away some stuff and there’s less stuff. Or you divide the stuff in two and each half has half as much stuff. And information is not necessarily that way. And so if you have a bunch of information or a description of something and you take a subset of it, you’ve actually made more information even though there’s less that you’re talking about.

It’s different than the way we think about the makeup of reality when you think about it as made up of stuff, and has just very different properties that are somewhat counter-intuitive when we’re used to thinking about the world as being made up of stuff.

Lucas Perry: I’m happy that we have spent this much time on just discussing information, because I think that it offers an important conceptual shift for seeing the world, and a good challenging of some commonly held intuitions – at least, that I have. The question for me now is, what are the relevant and interesting implications here for agents? The one thing that had been coming to my mind is… and to inject more Zen here… there is a koan that goes something like: “first there were mountains and then there were no mountains, and then there were mountains.”  This seems to have parallels to the view that you’re articulating, because first you’re just stupefied and bought into the reality of your conceptualizations and stories where you say “I’m actually ultimately a human being, and I have a story about my life where I got married, and I had a thing called a job, and there were tables, which were solid and brown and had other properties…” But as you were saying, there’s no tableness or table in the wave function; these are all stories and abstractions which we use because they are functional or useful for us. And then when we see that we go, “Okay, so there aren’t really mountains in the way that I thought, mountains are just stories we tell ourselves about the wave function.”

But then I think you’re pointing out here again that there’s sort of this ethical or normative imperative where it’s like, “okay, so mountains are mountains again, because I need my concept and lived experience of a mountain to exist in the world, and to exist amongst human institutions and concepts and language,” and even though I may return to this, it all may be viewed in a new light. Is this pointing in the right direction, in your opinion?

Anthony Aguirre: I think in a sense, in that we think we’re so important, and the things around us are real, and then we realize as we study physics that actually, we’re tiny little blips in this potentially infinite or at least extremely large, somewhat uncaring-seeming universe, that the things that we thought are real are kind of fictitious, and partly made up by our own history and perceptions and things, that the table isn’t really real but it’s made up of atoms or wave function or what have you.

But then I would say, why do you attribute more realness to the wave function than the table? The wave function is a sort of very impoverished description of the world that doesn’t contain tables and things. So I think there’s this pathology of saying because something is described by fundamental physical mathematical laws, it’s more real than something like a table that is described by people talking about tables to other people.

There’s something very different about those things, but is one of them more real and what does that even mean? If the table is not contained in the wave function and the wave function isn’t really contained in the table, they’re just different things. They’re both, in my view, made out of information, but rather different types and accessible to rather different things.

To me, the, “Then I realized it was a mountain again,” is that yes, the table is kind of an illusion in a sense. It’s made out of atoms and we bring all this stuff to it and we make up solidity and brownness and stuff. So it’s not a fundamental part of the universe. It’s not objectively real, but then I think at some level nothing is so purely objectively real. It’s a sliding scale, and then it’s got a place for things like the wave function of the universe and the fundamental laws of physics at the more objective end of things, and brownness and solidity at the more subjective end of things, and my feelings about tables and my thirst for water at the very subjective end of things. But I see it as a sort of continuous spectrum, and that all of those things are real, just in somewhat different ways. In that sense, I think I’ve come back to those illusory things being real again in a sense, but just from a rather different perspective, if we’re going to be Zen about it.

Lucas Perry: Yeah, it seems to be an open question in physics and cosmology. There is still argument going on right now about what it means for something to be real. I guess I would argue that something is real if it maybe has causality, or that causality would supervene upon that thing… I’m not even sure, I don’t think I’m even going to start here, I think I would probably be wrong. So…

Anthony Aguirre: Well, I think the problem is in trying to make a binary distinction between whether things are real or not or objective or not. I just think that’s the wrong way to think about it. I think there are things that are much more objective than other things, and things that are much less objective than other things, and to the extent that you want to connect real with being objective, there are then things that are more and less real.

In one of the koans in the book, I make this argument that we think of a mathematical statement like the Pythagorean theorem, say, or some other beautiful thing like Euler’s theorem relating exponentials to cosines and sines, that these are objective special things built into the universe, because we feel like once we understand these things, we see that they must have been true and existed before any people were around. Like it couldn’t be that the Pythagorean theorem just came into being when Pythagoras or someone else discovered it, or Euler’s theorem. They were true all the way back until before the first stars and whatnot.

And that’s clearly the case. There is no time at which those things became true. At the same time, suppose I just take some axioms of mathematics that we employ now, and some sort of rules for generating new true statements from them. And then I just take a computer and start churning out statements. So I churn out all possible consequences of those axioms. Now, if I let that computer churn long enough, somewhere in that string of true statements will be something that can be translated into the Pythagorean theorem or Euler’s theorem. It’s in there somewhere. But am I doing mathematics? I would say I’m not, in the sense that all I’m doing is generating an infinite number of true statements if I let this thing go on forever.

But almost all of them are super uninteresting. They’re just strings of gobbledygook that are true given the axioms and the rules for generating new true statements, but they don’t mean anything. Whereas Euler’s theorem is a very, very special statement that means something. So what we’re doing when we’re doing mathematics, we feel like what we’re doing is proving stuff to be true. And we are at some level, but I think what we’re really doing from this perspective is out of this catalog that is information-free of true statements, we’re picking out a very, very special subset that are interesting. And in making that selection, we’re once again creating information. And the information that we’re creating is really what we’re doing, I think, when we’re doing mathematics.

The information contained in the statement that the Pythagorean theorem is an interesting theorem that applies to stuff in the real world and that we should teach our kids in school, that only came into being when humans did. So although the statement has always been true, the information I think was created along with humans. So I think you kind of get to have it both ways. It is built into the universe, but at the same time, it’s created, so you discover it and you create it.

I think there’s a lot of things that are that way. And although the Pythagorean theorem feels super objective, you can’t disagree with the Pythagorean theorem in a sense, we all agree on it once we understand what it is, at the same time, it’s got this subjective aspect to it that out of all the theorems we selected, this particular one of interest … We also selected the axioms by the way, out of all different sets of axioms we could have chosen. So there’s this combination of objectivity and the subjectivity that we as humans that like to do geometry and think about the world and prove theorems and stuff have brought to it. And that combination is what’s created the information that is associated with the Pythagorean theorem.

Lucas Perry: Yeah. You threw the word “subjectivity” in there, but this process is bringing us to the truth, right? I mean, the question is again, what is true or real?

Anthony Aguirre: There are different senses of subjectivity. So there’s one sense of having an interior world view, having consciousness or awareness or something like that, being a subject. And there’s another of saying that it’s perspectival, that it’s relative or something, that different agents might not agree on it or might see it a little bit differently. So I’d want to distinguish between those two.

Lucas Perry: In which sense did you mean?

Anthony Aguirre: What I mean is that the Pythagorean theorem is quite objective in the sense that once lots of agents agree on the premises and the ground rules, we’re all going to agree on the Pythagorean theorem, whereas we might not agree on whether ice cream is good. But it’s still a little bit not objective.

Lucas Perry: It’s like a small part of all possible mathematically true statements which arise out of those axioms.

Anthony Aguirre: Yes. And that some community of agents in a historical process had to select that out. It can’t be divorced from the process and the agents that brought it into being, and so it’s not entirely objective in that sense.

Lucas Perry: Okay. Yeah, yeah, that makes sense. I see. So this is a question I was intending to ask you an hour ago, before we went down this wormhole. First, I’m interested in just the structure of your book. How do you structure your book in terms of the ideas, and what leads to what?

Anthony Aguirre: Just a brief outline of the book: there are a few different layers of structure. One is the koans themselves, which are sort of parables or little tales that encode some idea. There’s maybe a metaphor or just the idea itself, and the koans take place as part of a narrative that starts in 1610 or 1630 or so, on a trip from Italy to, in the end, Kyoto. So there’s this across-the-world journey that takes place through these koans. And they don’t come in chronological order, so you kind of have to piece together the storyline as the book goes on. But it comes together in the end, so there’s a sequence of things that are happening through the koans, and there’s a storyline that you get to see assemble itself, and it involves a genie and it involves a sword fight and it involves all kinds of fun stuff.

That’s one layer of the structure: the koans forming the narrative. Then after each koan is a commentary that’s kind of delving into the ideas, providing some background, filling in some physics, talking about what that koan was getting at. In some cases, it’s kind of a resolution to it, like here’s the paradox and here’s the resolution to that paradox. But more often, it’s: here’s the question, here’s how to understand what that question is really asking, here’s a deeper question that we don’t know the answer to, and maybe we’ll come back to it later in the book or maybe we won’t. So there’s this development of a whole bunch of physics ideas going on in those commentaries.

In terms of the physics ideas, there’s a sequence. There’s first classical physics, including relativity. The second part is quantum mechanics, essentially. The third part is statistical mechanics and information theory. The fourth part is cosmology. The fifth part is the connections to the interior sense, like subjectivity and the subject, experiments and thinking about interior sense, consciousness, and the I. And then the last part is a more philosophical section, bringing things together in the way that we’ve been discussing: how much of reality is out there, how much of it is constructed by us, or by us writ large as a society and thinking beings and biological evolution and so on. So that’s the structure of the book.

Lucas Perry: Can you read for us two of your favorite koans in the book?

Anthony Aguirre: This one alludes to a classic philosophical thought experiment of the ship of Theseus. This one’s called What Is It You Sail In? It takes place in Shanghai, China in 1620. “After such vast overland distances, you’re relieved that the next piece of your journey will be at sea, where you’ve always felt comfortable. Then you see the ship. You’ve never beheld a sorrier pile of junk. The hull seems to be made mostly of patches, and the patches appear to be made of other patches. The nails look nailed together. The sails are clearly mostly a quilt of canvas sacks and old clothing. ‘Does it float?’ you ask the first mate, packing in as much skepticism as you can fit. ‘Yes. Many repairs, true. But she is still my good companion, [Atixia 00:25:46], still the same ship she ever was.’

Is she? you wonder. Then you look down at your fingernails, your skin, the fading scar on your arm and wonder, am I? Then you look at the river, the sea, the port and all around. Is anything?”

So what this one’s getting at is this classic tale where, if you replace one board of a ship, you’d still say it’s the same ship; you’ve just replaced one little piece of it. But as you replace more and more pieces of it, at some point, every piece of the ship might be a piece that wasn’t there before. So is it the same ship or not? Every single piece has been replaced. And our body is pretty much like this; on a multi-year timescale, we replace pretty much everything.

The idea of this is to get at the fact that when we think of a thing like an identity that something has, it’s much more about the form and I would say the information content in a sense, than about the matter that it’s made up of. The matter’s very interchangeable. That’s sort of the way of kicking off a discussion of what does it mean for something to exist? What is it made of? What does it mean for something to be different than another thing? What are the different forms of existence? What is the form versus the matter?

And with the conclusion that at some level, the very idea of matter is a bit of an illusion. There’s kind of form in the sense that when you think of little bits of stuff, and you break those little bits of stuff down farther, you see that there are protons and electrons and neutrons and whatnot, but what those things are, they’re not little bits of stuff. They’re sort of amounts or properties of something. Like we think of energy or mass as a thing, but it’s better to think of it as a property that something might have if you look.

The fact that you have an electron really means that you’ve got something with a little bit of the energy property or a little bit of the mass property, a little bit of the spin property, a little bit of the electron lepton number property, and that’s it. And maybe you talk about its position or its speed or something. So it’s more like a little bundle of properties than a little bundle of stuff. And then when you think of agglomerations of atoms, it’s the same way. Like the way that they’re arranged is a sort of informational thing, and questions you can ask and get answers to.

Going back to our earlier conversation, this is just a slightly more concrete version of the claim that when we say what something’s made of, there are lots of different answers to that question that are useful in different ways. But the answer that it’s made of stuff is maybe not so useful as we usually think it is.

Lucas Perry: So just to clarify for listeners, koans in Zen are traditionally supposed to be not explicitly philosophically analytical, but experiential things which are supposed to subvert commonly held intuitions, which may take you from seeing mountains as mountains, to no mountains, to mountains again. So here there’s this perspective that there are both, supposedly, the atoms which make up me and you, and then the way in which the atoms are arranged, and then this koan that you say elicits the thought that you can remove any bit of information from me, and you can continue to remove one bit of information from me at a time, and there’s no one bit of information that I would say is essential to what I call Lucas, or what I take to be myself. Nor atoms. So then what am I? How many atoms or bits of information do you have to take away from me until I stop being Lucas? And so one may arrive at the place where you’re deeply questioning the category of Lucas altogether.

Anthony Aguirre: Yeah. The things in this book are not Zen koans in the sense that a lot of them are pretty philosophical and intellectual and analytical, which Zen koans are sort of not. But at the same time, when you delve into them and try to experience them, when you think not of the abstract idea of the ship in this koan and lepton numbers and energy and things like that, but when you apply it to yourself and think, okay, what am I if I’m not this body?, then it becomes a bit more like a genuine Zen koan. You’re sort of like, ah, I don’t know what I am. And that’s a weird place to be. I don’t know what I am.

Lucas Perry: Yeah. Sure. And the wisdom to be found is the subversion of a ton of different commonly held intuitions, which are evolutionarily conditioned, culturally conditioned, and socially conditioned. So yeah, this has to do with the sense of permanent things and objects, and then what identity ultimately is, or what our preferences are about identity, or whether there are normative or ethical imperatives about the sense of identity that we ought to take. Are there any other ideas here, some other major intuitions that you’re attempting to subvert in your book?

Anthony Aguirre: Well yeah, there’s … I guess it depends which ones you have, but I’ve subverted as many as I can. I mean, a big one I think is the idea of a sort of singular individual self, and that’s one that is really interesting to experiment with. The way we go through our lives pretty much all the time is that there’s this one-to-one correspondence between our feeling that we’re an individual self looking out at the world, there’s an “I”. We feel like there’s this little nugget of me-ness that’s experiencing the world and owns mental faculties, and then owns and steers around this body that’s made out of physical stuff.

That’s the intuition that we go through life with, but then there are all kinds of thought experiments you can do that put tension on that. And one of them that I go through a lot in the book is what happens when the body gets split or duplicated, or there are multiple copies of it and things like that. And some of those things are physically impossible or so extraordinarily difficult that they’re not worth thinking about, but some of them are very much things that might automatically happen as part of physics, if we really could instantaneously copy a person and create a duplicate of them across the room or something like that.

What does that mean? How do we think about that? When we’ve broken that one-to-one correspondence between the thing that we like to think of as ourself and our little nugget of I-ness, and the physical body, which we know is very, very closely related to that thing. When one of them bifurcates into two, it kind of throws that whole thing up in the air, like now what do we think? And it gets very unsettling to be confronted with that. There are several koans investigating that at various different levels that don’t really draw any conclusions, I would say. They’re more experiments that I’m sort of inviting other people to subject themselves to, just as I have thinking about them.

It’s very confusing how to think about them. Like, should I care if I get copied to another copy across the room and then get instantaneously destroyed? Should that bother me? Should I fear that process? What if it’s not across the room, but across the universe? And what if it’s not instantaneously that I appear across the room, but I get destroyed now, and I exist on the other side of the universe a billion years from now, the same configuration of atoms? Do I care that that happens? There are no easy answers to this, I think, and they’re not questions that you can easily dismiss.

Lucas Perry: I think that this has extremely huge ethical implications, and represents, if transcended, an important point in human evolution. There is this koan, which is something like, “If you see the Buddha on the road, kill him.” Which means if you think you’ve reached something like enlightenment, it’s not that, because enlightenment is another one of these stories. But insofar as human beings are capable of transcending illusions and reaching anything called enlightenment… I think that an introspective journey into trying to understand the self and the world is one of the most interesting pursuits a human being can do. And just to contextualize this and, I think, paint the picture better, it’s evolution that has evolved these information processing systems, with this virtual sense of self that exists in the world model we have, and the model we have about ourselves and our body, and this is because this is good for self preservation. 

So you can say, “Where do you feel you’re located?” Well, I sort of feel I’m behind my face, and I feel I have a body, and I have this large narrative of self concept and identity, which is like, “I’m Lucas. I’m from here. I have this concept of self which I’ve created, which is basically this extremely elaborate connotative web of all the things which I think make up my identity.” And under scrutiny, this is basically all conditioned, it’s all outside of myself, all prior to myself. I’m not self-made at all, yet I think that I’m some sort of separate self entity. And then along come Abrahamic religions at some point in the story of humanity, which are going to have tremendous cultural and social implications for the way that evolution has already bred ego-primates like ourselves. We’re primates with egos, and now we have Abrahamic religions, which contribute to this problem by conditioning the language and philosophy and thought of the West, which say that ultimately you’re a soul, you’re not just a physical thing.

You’re actually a soul who has a body, and you’re basically just visiting here for a while, and then the thing that is essentially you will go to the next level of existence. This leads, I think, to reifying this rational conceptualization of self and this experience itself. You feel like you have a body, you feel that your heart beats itself, you feel that you think your thoughts, and you say things like, “I have a brain.” Who is it that stands in relation to the brain? Or we might say something like, “I have a body.” Who is it that has a body? So it seems like our language is clearly conditioned and structured around our sense and understanding of self. And there’s also this sense in which you’ve been trying to subvert some sorts of ideas here, like sameness or otherness, what counts as the same ship or not. And from an ultimate physics perspective, the thing that is fusing the stars is the same thing that is thinking my thoughts. The fundamental ontology of the world is running everything, and I’m not separate from that, yet it feels like I am, and this seems to have tremendous ethical implications.

For example, people believe that people are deserving of retribution for crimes or acting immorally, as if they had chosen in some ultimate and concrete sense what to do. The ultimate spiritual experience, or at least the ultimate insight, is to see this whole thing for what it is, to realize that basically everyone is spellbound by these narratives of self and these different intuitions we have about the world, and that we’re basically bought into this story, which I think Abrahamic religions have conditioned even more deeply into us. It seems to me that atheists also experience themselves this way. We think when we die there’ll be nothing, there will just be an annihilation of the self, but part of this realization process is that there’s no self to be annihilated to begin with. There’s just consciousness and its contents, and ultimately by this process you may come to see that consciousness is something empty of self and empty of identity. It’s just another thing that is happening.

Anthony Aguirre: I think there are a lot of these cases where the mountain becomes less and then more of a mountain, and then more and then less of a mountain. You touched upon consciousness and free will and many other things that are also in this, and there’s a lot of discussion of free will in the book and we can get into that too. I think with consciousness or the self, I find myself in this strange sort of war, in the sense that on the one hand I feel like there’s a sense in which this self that we construct is kind of an illusory thing, and that the ego and the things that we attach to are kind of an illusory thing. But at the same time, A, it sure feels real, and the feeling of being Anthony, I think, is a kind of unique thing.

I don’t subscribe to the notion that there’s this little nugget of soul stuff that exists at the core of a person. It’s easy to sort of make fun of this, but at the same time I think the idea that there’s something intrinsically equally valuable to each person is really, really important. I mean it underlies a lot of our way of thinking about society and morality, in ways that I find very valuable. And so while I kind of doubt the sort of metaphysics of the individual’s soul in that sense, I worry what happens to the way we’ve constructed our scheme of values if we grade people on a sliding scale, saying you’re more valuable than this other person. I think that sense of equal intrinsic human worth is incredibly crucial and has led to a lot of moral progress. So I have this really ambivalent feeling, in that I doubt that there’s some metaphysical basis for that, but at the same time I really, really value that way of looking at the self, in terms of society and morality and so on, that we’ve constructed on top of that.

Lucas Perry: Yeah, so there’s the concept in Zen Buddhism of skillful means. So one could say that the concept of each human being having some kind of equal and intrinsic worth, which is related to their uniqueness and their fundamental nature as a human being, is itself skillful. 

Anthony Aguirre: It’s not something that, in some sense, makes rational sense. Whatever you name, some people have more of it than others: money, capability, intelligence, sensitivity.

Lucas Perry: Even consciousness.

Anthony Aguirre: Consciousness maybe. Maybe some people are just a lot more conscious than others. If we can measure it, maybe some people would be like a 10 on the dial and others would be 2. Who knows?

Lucas Perry: I think that’s absolutely probably true, because some people are brain dead. Medically there’s a sliding scale of brain activity, so yeah, I think today it seems clear that some people are more conscious than others.

Anthony Aguirre: Yes, that’s certainly true. I mean when we go to sleep, we’re less conscious. Anything that you can measure about people and their experience of the world varies, and if you could quantify it on a scale, some people would have more and some less. Nonetheless, we find it useful to maintain this idea that there is some intrinsic equality among people, and I worry what would happen if we let go of that. What kind of world would we build without that assumption? So I find it valuable to keep that assumption, but I’m conflicted about that honestly, because on what basis do we make that assumption? I really feel good about it, but I’m not sure I can point to why. Maybe that’s just what we do. We say this is an axiom: we choose to believe that there’s an intrinsic moral value to people, and I respect that, because I think you have to have axioms. But it’s an interesting place that we’ve come to, I think, in terms of the relation between our beliefs about reality and our beliefs about morality.

Lucas Perry: Yeah. I mean there’s the question, as we approach AI and superintelligence, of what authentic experiential and ethical enlightenment and idealization means. From my perspective, the development of this idea, which is correlated with the Enlightenment and humanism, is a very recent thing, the 1700s and 1800s, right? So it seems clear from a cosmological context that this norm or ethical view is obviously based on a bunch of things that are just not true, but at the same time it’s been ethically very skillful and meaningful for fixing many of the immoral things that humans do. But obviously it seems like it will give way to something else, and the question is, what else does it give way to?

So suppose we create Life 3.0 and we create AIs that do not care about getting turned off for two minutes and then waking up again, because they don’t feel the delusion of a self. That to me seems to be a step in moral evolution, and it’s why I think it would ultimately be super useful for AI design if AI designers would consider the role that identity plays in forming strong AI systems that are there to help us. We have the opportunity here to have selfless AI systems; they’re not going to be confused like we are. They’re not going to think they have souls, or feel like they have souls, or have strong senses of self. So it seems like there are opportunities here, and questions around what it means to transcend many of the aspects of human experience, and how best to instantiate that in advanced AI systems. 

Anthony Aguirre: Yeah, I think there’s a lot of valuable stuff to talk about there. In humans, there are a whole bunch of things that go together that don’t necessarily have to be packaged together. Intelligence and consciousness are packaged together; it’s not clear to what degree they have to be. It’s not clear how much consciousness and selfness have to be packaged together. It’s not clear how much consciousness or selfness and a valence to consciousness, a positive or negative experience, have to be packaged together. Could we conceive of something that is intelligent, but not conscious? I think we certainly could, depending on how intelligent it has to be; I think we already have those things, depending on what we mean by consciousness, I guess. Can we imagine something that is conscious and intelligent, but without a self, maybe? Or conscious, but it doesn’t matter to it how something goes? So it’s something that’s conscious, but can’t really have a moral weight, in the sense that it doesn’t either suffer or experience positive feelings, but it does experience.

I think there’s often a notion that if something is said to have consciousness, then we have to care about it. It’s not totally clear that that’s the case, and at what level do we have to care about something’s preferences? The rain prefers to fall down, but I don’t really care, and if I frustrate the rain by putting up an umbrella, I don’t feel bad about that. So at what level do preferences matter, and how do we define those? So there are all these really, really interesting questions, and what’s both sort of exciting and terrifying is that we have a situation in which those questions are going to play out. We’re going to be creating things that are intelligent, and we’re doing that now, depending again on how intelligent they have to be. They may or may not be conscious, they may or may not have preferences, they may or may not matter. They may or may not experience something positive or negative when those preferences are satisfied or not.

And I think we have the possibility of moral catastrophe if we do things wrong at some level, but an enormous opportunity as well, in the sense that you’ve pointed out: we may be able to create agents that are purely selfless, caring about other beings insofar as they have moral value. These beings can be absolute altruists, as Stuart has been pointing out in his book. Absolute altruism is a pretty tough one for humans to attain, but it might be really easy for beings that we construct that aren’t tied to an evolutionary history and all those sorts of things that we came out of.

It may still be that the sort of moral value of the universe centers around the beings that do have meaningful preferences, like humans: where meaning ultimately sits, what is important and what’s not, what’s valuable and what’s not. If that isn’t grounded in the preferences of experiencing conscious beings, then I don’t know where it’s grounded, so there’s a lot of questions that come up with that. Does it just disappear if those beings disappear, and so on? All incredibly important questions, I think, because we’re now at the point, in the next however many years, 50, 100, maybe less, maybe more, where our decisions are going to affect what sorts of beings the universe gets inhabited by in the far future, and we really need to avoid catastrophic blunders in how that plays out.

Lucas Perry: Yeah. There is this whole aspect of AI alignment that you’re touching on that is not just AI alignment, but AI generation and creation. The problem has been focused on how we can get AI systems, insofar as we create them, to serve the needs of human beings, to understand our preference hierarchies, to understand our metapreferences. But in the creation of Life 3.0, there’s this perspective that you’re creating something that, by virtue of how it is created, is potentially more morally relevant than you; it may be capable of much more experience, much more profound levels of experience. Which also means that there’s this aspect of AI alignment which is about qualia architecting or experience architecting, or reflecting on the fact that we’re building Life 3.0. These aren’t just systems that can process information for us; there are important questions about what it is like to be that system in terms of experience and ethics and moral relevance. If you create something with the kind of experience that you have, and it has the escape velocity to become superintelligent and populate the cosmic endowment with whatever it determines to be the good, or what we determine to be the good, what is the result of that?

One last thing that I’m nervous about is how the illusion of self will factor into a fair and valuable AI alignment. This consideration is in relation to us not being able to see what is ultimately good. We could ultimately be tied up in the preservation of our own arbitrary identities, like the Lucas identity or the Anthony identity. We could have the chance to create something like blissful, purely altruistic, benevolent Bodhisattva gods, but never do it because we have this fear and this illusion of self-annihilation. And that’s not to deny that our information can be destroyed, and maybe we care a lot about the way that the Lucas identity information is arranged, but when we question these types of intuitions that we have, it makes me question and wonder whether my conditioned identity is actually as important as I think it is, or as I experience it to be.

Anthony Aguirre: Yeah, I think this is a horrifyingly thorny question that we have to face, and my hope is that we have a long time to face it. I’m very much an advocate of creating intelligent systems that can be incredibly helpful and economically beneficial, and then reaping those benefits for a good long time while we sort ourselves out, but with a fairly strict upper limit on how intelligent and powerful we make those things. Because I think if huge gains in the capability of machine systems happen in a period of years or even decades, the chance of us getting these big questions right seems to me like almost zero. There’s a lot of argumentation about how difficult it is to build a machine system that has the same sort of general intelligence that we do. And I think part of what makes that question hard is thinking about the huge amount of effort that went in, evolutionarily and otherwise, to creating the sort of robust intelligence that humans have.

I mean we’ve built up over millions of years in this incredibly difficult adversarial environment, where robustness is incredibly important. Cleverness is pretty important, but being able to cope with a wide variety of circumstances is kind of what life and mind has done. And I think the degree to which AGI will be difficult, is at some level the degree to which it has to attain a similar level of generality and robustness, that we’ve spent just an ungodly amount of computation over the evolution of life on earth to attain. If we have to do anything like that level of computation, it’s going to take just an extraordinarily long time. But I think we don’t know to what degree all of that is necessary and to what degree we can really skip over a lot of it, in the same way that we skip over a lot of evolution of flying when we build an airplane.

But I think there’s another question, which is that of experience and feeling, where we’re even more clueless as to where we would possibly start. If we wanted to create an appreciation for music, we have no clue where to even begin with that question, right? What does it even mean to appreciate, or to listen to, or in some sense to have preferences? You can maybe make a machine that will sort different kinds of music into different categories, but do you really feel like there’s going to be any music appreciation in there, or any other human feeling? These are things that have a very, very long, complicated evolutionary history, and it’s really unclear to me that we’re going to get them in machine form without something like that. But at least as our moral system is currently construed, those are the things that actually matter.

Whether conscious beings are having a good time is pretty much the foundation of what we consider to be important, morally speaking at least, unless we have ideas like having to act in a way that pleases some deity or something like that. So I just don’t know: when you’re talking about future AI beings that have a much richer and deeper interior sense, that’s like the AGI problem squared. We can at least imagine what it would be like to make a general intelligence, and have an idea of what it would take to do that. But when you talk about creating a feeling being, with deeper, more profound feelings than we have, we have just no clue what that means in terms of actually engineering something.

Lucas Perry: So, putting on the table all of the moral anti-realism considerations and thoughts that many people in the AI alignment community may have… their view is that there’s the set of historically conditioned preferences that we have, and that’s it. We can imagine that horseshoe crabs had been able to create a being more intelligent than them, a being that was aligned to horseshoe crab preferences and preference hierarchy. And we can imagine that the horseshoe crabs were very interested in and committed to just being horseshoe crabs, because that’s what horseshoe crabs want to do. So now you have this being that was able to maintain the existential condition of the horseshoe crab for a very long time. That just seems like an obvious moral catastrophe. It seems like a waste of what could have been.

Anthony Aguirre: That’s true. But imagine instead that the horseshoe crabs created elaborate structures out of sand, decided those structures were their betters, and made it their legacy to create these intricate sand structures because the universe deserved to be inhabited by much greater beings than them. Then that’s also a moral catastrophe, right? Because the sand structures have no value whatsoever.

Lucas Perry: Yeah. I don’t want humans to do either of these things. I don’t want human beings to go around building monuments, and I don’t want us to lock into the human condition either. Both of these cases obviously seem like a horrible waste, and now you’re helping to articulate the issue that human beings are at a certain place in evolution. 

And so if we’re to create Life 3.0, then it’s also unclear epistemically how we are to evaluate what kinds of exotic qualia states are the kinds that are morally good, and I don’t even know how to begin to answer that question.

So we may be unaware of experiences that are literally, astronomically better than the kinds of experiences that we have access to, and it’s unclear to me how you would navigate effectively towards that, other than by amplifying what we already have.

Anthony Aguirre: Yeah. I guess my instinct on that is to look more on the biology side than the machine side, and to say that, as biological systems, we’re going to continue to evolve in various ways. Some of those might be natural, some of them might be engineered and so on. Maybe some of them are symbiotic, but I think it’s hard for me to imagine how we’re going to have confidence that the things that are being created have an experience that we would recognize or find valuable, if they don’t have some level of continuity with what we are, that we can directly experience. The reason I feel confidence that my dog is actually feeling some level of joy or frustration or whatever, is really by analogy, right? There’s no way that I can get inside the dog’s mind; maybe someday there will be, but there’s no way at the moment. I assume that because we have this common evolutionary heritage, the outward manifestations of those feelings correspond to some inward feelings in much the same way that they do in humans, and much the same way that they do in me. And I feel quite confident about that really, although for a long period of history, people have believed otherwise at times.

So I think realistically all we’re going to be able to do is reason by analogy, and that’s not going to work very well, I think, with machine systems, because it’s quite clear that we’ll be able to create machine systems that can wag their tails and smile and things, even though there’s manifestly nothing behind that. So at what point we would start to believe those sorts of behavioral cues and say that there’s some interior sense behind them is very, very unclear when we’re talking about a machine system. And I think we’re very likely to make all kinds of moral errors in either ascribing too much or too little interior experience to machines, because we have no real way of knowing how to make any meaningful connection between those things. I suspect that we’ll tend to make the error in both directions. We’ll create things that seem kind of lifelike and attribute all kinds of interior life to them that we shouldn’t, and if we go on long enough, we may well create things that have some interior sense that we don’t attribute to them, and make all kinds of errors that way too.

So I think it’s quite fraught actually in that sense, and I don’t know what we’re going to do about that. I mean we can always hope that the intractably hard problems that we can’t solve now will just be solved by something much smarter than us. But I do worry a little bit about attributing sort of godlike powers to something by saying, “Oh, it’s superintelligent, so it will be able to do that.” I’m not terribly optimistic. It may well be that the time at which something is so intelligent that it can solve the problem of consciousness and qualia and all these things would be so far beyond the time at which it was smart enough to completely change reality and the world and all kinds of other things, that it’s almost past the horizon of what we can think about now; it’s sort of past the singularity in that sense. We can speculate, hopefully or not hopefully, but it’s not clear on what basis we would be speculating.

Lucas Perry: Yeah. At least these are the questions that it will need to face, and we can leave it open whether and for how long it will need to address them. So we discussed who I am; I don’t know. You touched on identity and free will. I think that free will in the libertarian sense, as in I could have done otherwise, is basically one of these common sense intuitions that is functionally useful, but ultimately illusory.

Anthony Aguirre: Yeah, I disagree. I will just say briefly: I prefer to think of free will as a set of claims that may or may not be true, and I think in general it’s useful to decompose the question of free will into such a set of claims. And I think when you do that, you find that most of the claims are true, but there may be some big fuzzy metaphysical thing that you’re equating to that set of claims and then claiming it’s not true. So that’s my feeling: when you actually try to operationalize what you mean by free will, you’ll find that a lot of the things that you mean actually are properties of reality. But if you sort of invent a thing that you call free will that by its nature can’t be part of a physical world, then yes, that doesn’t exist. In a nutshell that’s my point of view, but we could go into a lot more depth some other time.

Lucas Perry: I think I understand that from that short summary. So for this last part then, as we come to the end of the conversation, can you just touch on a point I think is interesting: form is emptiness, emptiness is form. What does that mean?

Anthony Aguirre: So form is emptiness, is coming back to the discussion of earlier. That when we talk about something like a table, that thing that we call real and existing and objective in some sense, is actually composed of all kinds of ingredients that are not that thing. Our evolutionary history and our concept of solidity and shape, all of these things come together from many different sources and as the Buddhist would say, “There’s no intrinsic self existence of a table.” It very much exists relative to a whole bunch of other things, that we and many other people and processes and so on, bring into being. So that’s the form is emptiness. The emptiness is the emptiness of an intrinsic self existence, so that’s the way that I view the form is emptiness.

But turning that around, that emptiness is form: yes, even though the table is empty of inherent existence, you can still knock on it. It’s still there, it’s still real, and it’s in many ways as real as anything else. If you look for something that is more intrinsically existing than a table, you’re not really going to find it, and so we might as well call all of those things real, in which case the emptiness is form again; it’s something. That’s the way I sort of view it, and that’s the way that I’ve explored it in that section of the book.

So to talk about the ship: there’s this form of the ship that is kind of what we call the ship. That’s the arrangement of atoms and so on; it’s kind of made out of information and whatnot. That form is empty in the sense that there are all these ingredients, coming from all these different places, that come together to make that thing, but that doesn’t mean it’s non-existent or meaningless or something like that. There very much is meaning in the fact that something is a ship rather than something else; that is reality. So that’s kind of the case that I’m putting together in that last section of the book. It’s not simply either our straightforward sense of a table as a real existing thing, nor is it that everything is an illusion, like a dream, like a phantasm, and nothing is real. Neither of those is the right way to look at it.

Lucas Perry: Yeah, I think that your articulation here brings me again back, for better or for worse, to mountains, no mountains, and mountains again. I came into this conversation with my conventional view of things, and then there’s “form is emptiness.” Oh so okay, so no mountains. But then “emptiness is form.” Okay, mountains again. And given this conceptual back and forth, you can decide what to do from there.

Anthony Aguirre: So have we come back to the mountain in this conversation, at this point?

Lucas Perry: Yeah. I think we’re back to mountains. So I tremendously valued this conversation and feel that it’s given me a lot to consider. And I will re-enter the realm of feeling like a self and inhabiting a world of chairs, tables, objects and people. And will have to engage with some more thinking about information theory. And with that, thank you so much.

 

The Psychology of Existential Risk: Moral Judgments about Human Extinction

By Stefan Schubert

This blog post reports on Schubert, S.**, Caviola, L.**, Faber, N. The Psychology of Existential Risk: Moral Judgments about Human Extinction. Scientific Reports [Open Access]. It was originally posted on the University of Oxford’s Practical Ethics: Ethics in the News blog.

Humanity’s ever-increasing technological powers can, if handled well, greatly improve life on Earth. But if they’re not handled well, they may instead cause our ultimate demise: human extinction. Recent years have seen an increased focus on the threat that emerging technologies such as advanced artificial intelligence could pose to humanity’s continued survival (see, e.g., Bostrom, 2014; Ord, forthcoming). A common view among these researchers is that human extinction would be much worse, morally speaking, than almost-as-severe catastrophes from which we could recover. Since humanity’s future could be very long and very good, it is imperative that we survive, on this view.

Do laypeople share the intuition that human extinction is much worse than near-extinction? In a famous passage in Reasons and Persons, Derek Parfit predicted that they would not. Parfit invited the reader to consider three outcomes:

1) Peace
2) A nuclear war that kills 99% of the world’s existing population.
3) A nuclear war that kills 100%.

In Parfit’s view, 3) is the worst outcome, and 1) is the best outcome. The interesting part concerns the relative differences, in terms of badness, between the three outcomes. Parfit thought that the difference between 2) and 3) is greater than the difference between 1) and 2), because of the unique badness of extinction. But he also predicted that most people would disagree with him, and instead find the difference between 1) and 2) greater.

Parfit’s hypothesis is often cited and discussed, but it hasn’t previously been tested. My colleagues Lucius Caviola and Nadira Faber and I recently undertook such testing. A preliminary study showed that most people judge human extinction to be very bad, and think that governments should invest resources to prevent it. We then turned to Parfit’s question whether they find it uniquely bad even compared to near-extinction catastrophes. We used a slightly amended version of Parfit’s thought-experiment, to remove potential confounders:

A) There is no catastrophe.
B) There is a catastrophe that immediately kills 80% of the world’s population.
C) There is a catastrophe that immediately kills 100% of the world’s population.

A large majority found the difference, in terms of badness, between A) and B) to be greater than the difference between B) and C). Thus, Parfit’s hypothesis was confirmed.

However, we also found that this judgment wasn’t particularly stable. Some participants were told, after having read about the three outcomes, that they should remember to consider their respective long-term consequences. They were reminded that it is possible to recover from a catastrophe killing 80%, but not from a catastrophe killing everyone. This mere reminder made a significantly larger number of participants find the difference between B) and C) the greater one. And still greater numbers (a clear majority) found the difference between B) and C) the greater one when the descriptions specified that the future would be extraordinarily long and good if humanity survived.

Our interpretation is that when confronted with Parfit’s question, people by default focus on the immediate harm associated with the three outcomes. Since the difference between A) and B) is greater than the difference between B) and C) in terms of immediate harm, they judge that the former difference is greater in terms of badness as well. But even relatively minor tweaks can make more people focus on the long-term consequences of the outcomes, instead of the immediate harm. And those long-term consequences become the key consideration for most people, under the hypothesis that the future will be extraordinarily long and good.

A conclusion from our studies is thus that laypeople’s views on the badness of extinction may be relatively unstable. Though such effects of relatively minor tweaks and re-framings are ubiquitous in psychology, they may be especially large when it comes to questions about human extinction and the long-term future. That may partly be because of the intrinsic difficulty of those questions, and partly because most people haven’t thought a lot about them previously.

In spite of the increased focus on existential risk and the long-term future, there has been relatively little research on how people think about those questions. There are several reasons why such research could be valuable. For instance, it might allow us to get a better sense of how much people will want to invest in safeguarding our long-term future. It might also inform us of potential biases to correct for.

The specific issues which deserve more attention include people’s empirical estimates of whether humanity will survive and what will happen if we do, as well as their moral judgments about how valuable different possible futures (e.g., involving different population sizes and levels of well-being) would be. Another important issue is whether we think about the long-term future with another frame of mind because of the great “psychological distance” (cf. Trope and Liberman, 2010). We expect the psychology of longtermism and existential risk to be a growing field in the coming years.

** Equal contribution.

FLI Podcast: Feeding Everyone in a Global Catastrophe with Dave Denkenberger & Joshua Pearce

Most of us working on catastrophic and existential threats focus on trying to prevent them — not on figuring out how to survive the aftermath. But what if, despite everyone’s best efforts, humanity does undergo such a catastrophe? This month’s podcast is all about what we can do in the present to ensure humanity’s survival in a future worst-case scenario. Ariel is joined by Dave Denkenberger and Joshua Pearce, co-authors of the book Feeding Everyone No Matter What, who explain what would constitute a catastrophic event, what it would take to feed the global population, and how their research could help address world hunger today. They also discuss infrastructural preparations, appropriate technology, and why it’s worth investing in these efforts.

Topics discussed include:

  • Causes of global catastrophe
  • Planning for catastrophic events
  • Getting governments onboard
  • Application to current crises
  • Alternative food sources
  • Historical precedent for societal collapse
  • Appropriate technology
  • Hardwired optimism
  • Surprising things that could save lives
  • Climate change and adaptation
  • Moral hazards
  • Why it’s in the best interest of the global wealthy to make food more available

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

Ariel Conn: In a world of people who worry about catastrophic threats to humanity, most efforts are geared toward preventing catastrophic threats. But what happens if something does go catastrophically wrong? How can we ensure that things don’t spiral out of control, but instead, humanity is set up to save as many lives as possible, and return to a stable, thriving state, as soon as possible? I’m Ariel Conn, and on this month’s episode of the FLI podcast, I’m speaking with Dave Denkenberger and Joshua Pearce.

Dave and Joshua want to make sure that if a catastrophic event occurs, then at the very least, all of the survivors around the planet will be able to continue eating. Dave got his Master’s from Princeton in mechanical and aerospace engineering, and his PhD from the University of Colorado at Boulder in building engineering. His dissertation was on his patented heat exchanger. He is an assistant professor at University of Alaska Fairbanks in mechanical engineering. He co-founded and directs the Alliance to Feed the Earth in Disasters, also known as ALLFED, and he donates half his income to that. He received the National Science Foundation Graduate Research Fellowship. He is a Penn State distinguished alumnus and he is a registered professional engineer. He has authored 56 publications with over 1600 citations and over 50,000 downloads — including the book Feeding Everyone No Matter What, which he co-authored with Joshua — and his work has been featured in over 20 countries, over 200 articles, including Science.

Joshua received his PhD in materials engineering from the Pennsylvania State University. He then developed the first sustainability program in the Pennsylvania State system of higher education and helped develop the Applied Sustainability Graduate Engineering Program while at Queens University Canada. He is currently the Richard Witte Professor of Materials Science and Engineering and a professor cross-appointed in the Department of Materials Science and Engineering, and he’s in the Department of Electrical and Computer Engineering at the Michigan Technological University where he runs the Open Sustainability Technology research group. He was a Fulbright-Aalto University Distinguished Chair last year and remains a visiting professor of photovoltaics and Nano-engineering at Aalto University. He’s also a visiting professor at the University of Lorraine in France. His research concentrates on the use of open source appropriate technology to find collaborative solutions to problems in sustainability and poverty reduction. He has authored over 250 publications, which have earned more than 11,000 citations. You can find his work on appropedia.org, and his research is regularly covered by the international and national press and continually ranks in the top 0.1% on academia.edu. He helped found the field of alternative food for global catastrophes with Dave, and again he was co-author on the book Feeding Everyone No Matter What.

So Dave and Joshua, thank you so much for joining us this month.

Dave Denkenberger: Thank you.

Joshua Pearce: Thank you for having us.

Ariel Conn: My first question for the two of you is a two-part question. First, why did you decide to consider how to survive a disaster, rather than focusing on prevention, as so many other people do? And second, how did you two start working together on this topic?

Joshua Pearce: So, I’ll take a first crack at this. Both of us have worked in the area of prevention, particularly in regards to alternative energy sources in order to be able to mitigate climate destabilization from fossil fuel burning. But what we both came to realize is that many of the disasters that we look at that could actually wipe out humanity aren’t things that we can necessarily do anything to avoid. The ones that we can do something about — climate change and nuclear winter — we’ve even worked on together.

So for example, we did a study where we looked at how many nuclear weapons a state should have if they would continue to be rational. And by rational I mean even if everything were to go your way, if you shot all of your nuclear weapons, they all hit their targets, the people you were aiming at weren’t firing back at you, at what point would just the effects of firing that many weapons hurt your own society, possibly kill many of your own people, or destroy your own nation?

The answer to that turned out to be a really remarkably low number. The answer was 100. And many of the nuclear power states currently have more weapons than that. And so it’s clear at least from our current political system that we’re not behaving rationally and that there’s a real need to have a backup plan for humanity in case something does go wrong — whether it’s our fault, or whether it’s just something that happens in nature that we can’t control like a super volcano or an asteroid impact.

Dave Denkenberger: Even though there is more focus on preventing a catastrophe than there is on resilience to the catastrophe, overall the field is highly neglected. As someone pointed out, there are still more publications on dung beetles than there are on preventing or dealing with global catastrophic risks. But I would say that the particular sub-field of resilience to the catastrophes is even more neglected. That’s why I think it’s a high priority to investigate.

Joshua Pearce: We actually met way back as undergraduate students at Penn State. I was a chemistry and physics double major and one of my friends a year above said, “You have to take an engineering science class before you leave.” It changed his life. I signed up for this class taught by the man that eventually became my advisor, Christopher Wronski, and it was a brutal class — very difficult conceptually and mathematically. And I remember when one of my first tests came back, there was this bimodal distribution where there were two students who scored A’s and everybody else failed. Turned out that the two students were Dave and I, so we started working together then just on homework assignments, and then continued collaborating through all different areas of technical experiments and theory for years and years. And then Dave had this very interesting idea about what do we do in the event of a global catastrophe? How can we feed everybody? And to attack it as an engineering problem, rather than a social problem. We started working on it very aggressively.

Dave Denkenberger: So it’s been, I guess, 18 years now that we’ve been working together: a very fruitful collaboration.

Ariel Conn: Before I get any farther into the interview, let’s quickly define what a catastrophic event is and the types of catastrophic events that you both look at most.

Dave Denkenberger: The original focus was on the catastrophes that could collapse global agriculture. These would include nuclear winter from a full-scale nuclear war like US-Russia, causing burning of cities and blocking of the sun with smoke, but it could also mean a super volcanic eruption like the one that happened about 74,000 years ago that many think nearly wiped out the human species. And then there could also be a large asteroid impact similar to the one that wiped out the dinosaurs about 66 million years ago.

And in those cases, it’s very clear we need to have some other alternative source of food, but we also look at what I call the 10% global shortfalls. These are things like the volcanic eruption that caused the year without a summer in 1816, which might have reduced the food supply by about 10% and caused widespread famine, including in Europe and almost in the US. Then it could be a slightly smaller asteroid, or a regional nuclear war, and actually many other catastrophes such as a super weed, a plant that could out-compete crops. If this happened naturally, it probably would be slow enough that we could respond, but if it were part of a coordinated terrorist attack, that could be catastrophic. Even though technically we waste more than 10% of our food and we feed more than 10% of our food to animals, I think realistically, if we had a 10% food shortfall, the price of food would go so high that hundreds of millions of people could starve.

Joshua Pearce: Something that’s really important to understand about the way that we analyze these risks is that currently, even with the agricultural system completely working fine, we’ve got somewhere on the order of 800 million people without enough food to eat, because of waste and inefficiencies. And so anything that starts to cut into our ability for our agricultural system to continue, especially if all of plant life no longer works for a number of years because of the sun being blocked, we have to have some method to provide alternative foods to feed the bulk of the human population.

Ariel Conn: I think that ties in to the next question then, and that is what does it mean to feed everyone no matter what, as you say in the title of your book?

Dave Denkenberger: As Joshua pointed out, we are still not feeding everyone adequately right now. The idea of feeding everyone no matter what is an aspirational goal, and it’s showing that if we cooperated, we could actually feed everyone, even if the sun is blocked. Of course, it might not work out exactly like that, but we think that we can do much better than if we were not prepared for one of these catastrophes.

Joshua Pearce: Right. Today, roughly one in nine people go to bed hungry every night, and somewhere on the order of 25,000 people starve to death or die from hunger-related disease [per day]. And so one of the inspiring things from our initial analysis drawn up in the book is that even in the worst-case scenarios where something major happens, like a comet strike of the kind that wiped out the dinosaurs, humans don’t need to be wiped out: we could provide for ourselves. And the embarrassing thing is that today, even with the agricultural system working fine, we’re not able to do that. And so what I’m at least hoping is that some of our work on these alternative foods provides another mechanism to provide low-cost calories for the people that need it, even today when there is no catastrophe.

Dave Denkenberger: One of the technologies that we think could be useful even now comes from a company called Comet Bio, which is turning agricultural residues like leaves and stalks into edible sugar, and they think that’s actually going to be able to compete with sugar cane. It has the advantage of not taking up lots of land that we might be cutting the rainforest down for, so it has environmental benefits as well as humanitarian benefits. Another area that I think would be relevant is smaller disasters, such as an earthquake or a hurricane. Generally the cheapest solution is just shipping in grain from outside, but if transportation is disrupted, it might make sense to be able to produce some food locally — like if a hurricane blows all the crops down and you’re not going to be able to get any normal harvest from them, you can actually grind up those leaves, wheat leaves for example, and squeeze out the liquid, boil the liquid, and then you get a protein concentrate, and people can eat that.

Ariel Conn: So that’s definitely a question that I had, and that is to what extent can we start implementing some of the plans today during a disaster? This is a pre-recorded podcast; Dorian has just struck the Bahamas. Can the stuff that you are working on now help people who are still stuck on an island after it’s been ravaged by a hurricane?

Dave Denkenberger: I think there is potential for that, the getting food from leaves. There’s actually a non-profit organization called Leaf for Life that has been doing this in less developed countries for decades now. Some other possibilities would be some mushrooms can mature in just a few weeks, and they can grow on waste, basically.

Joshua Pearce: The ones that would be good for an immediate catastrophe are the in-between foods that we’re working on: foods for the period between the time that you run out of stored food and the time that you can ramp up the full-scale alternative foods.

Ariel Conn: Can you elaborate on that a little bit more and explain what that process would look like? What does happen between when the disaster strikes? And what does it look like to start ramping up food development in a couple weeks or a couple months or however long that takes?

Joshua Pearce: In the book we develop 10 primary pathways for alternative food sources that could feed the entire global population. But the big challenge is that it’s not just whether there are enough calories — you have to have enough calories at the right time.

If, say, a comet strikes tomorrow and throws up a huge amount of earth and ash and covers the sun, we’d have roughly six months of stored food in grocery stores and pantries that we could eat. But then for most of the major sources of alternative food, it would take around a year to ramp them up, to take these processes that might not even exist now and get them to industrial scale to feed billions of people. So the most challenging period is that six-month-to-one-year window, and for that we would be using the alternative foods that Dave talked about, the mushrooms that can grow really fast, and leaves. And for the leaf one, part of those leaves can come from agricultural residues, things that we already know are safe.

The much larger biomass that we might be able to use is just normal killed tree leaves. The only problem with that is that there hasn’t been really any research into whether or not that’s safe. We don’t know, for example, if you can eat maple or oak leaf concentrate. The studies haven’t been done yet. And that’s one of the areas that we’re really focusing on now, is to take some of these ideas that are promising and prove that they’re actually technically feasible and safe for people to use in the event of a serious catastrophe, a minor one, or just being able to feed people that for whatever reason don’t have enough food.

Dave Denkenberger: I would add that even though we might have six months of stored food, that would be a best-case scenario when we’ve just had the harvest in the northern hemisphere; We could only have two or three months of stored food. But in many of these catastrophes, even a pretty severe nuclear winter, there’s likely to be some sunlight still coming down to the earth, and so a recent project we’ve been working on is growing seaweed. This has a lot of advantages because seaweed can tolerate low light levels, the ocean would not cool as fast as on the land, and it grows very quickly. So we’ve actually been applying seaweed growth models to the conditions of nuclear winter.
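
For readers who want a concrete sense of what “applying seaweed growth models to the conditions of nuclear winter” might involve, here is a minimal sketch of a light-limited logistic growth model. It is not the model Dave’s team actually uses; the function names, the saturating light response, the logistic form, and every parameter value are hypothetical assumptions chosen purely for illustration.

```python
# Minimal, illustrative seaweed growth sketch: logistic biomass growth whose
# rate is scaled down by reduced surface light (as in a nuclear winter).
# All numbers are hypothetical assumptions, not values from published work.

def daily_growth_rate(max_rate, light_fraction, half_saturation=0.2):
    """Scale a maximum relative daily growth rate by light availability,
    using a simple saturating light response."""
    return max_rate * light_fraction / (light_fraction + half_saturation)

def simulate_biomass(initial_tonnes, days, light_fraction,
                     max_rate=0.10, carrying_capacity=1e9):
    """Grow seaweed biomass (tonnes) logistically for `days`,
    with growth slowed when sunlight is reduced."""
    biomass = initial_tonnes
    rate = daily_growth_rate(max_rate, light_fraction)
    for _ in range(days):
        biomass += rate * biomass * (1 - biomass / carrying_capacity)
    return biomass

if __name__ == "__main__":
    # Compare 90 days of growth under full sunlight vs. 40% of normal light.
    for light in (1.0, 0.4):
        print(f"light fraction {light}: {simulate_biomass(1e5, 90, light):.3g} tonnes")
```

Under these made-up numbers, substantial growth still occurs at 40% of normal light, which is the qualitative point Dave is making about seaweed tolerating low light; a real analysis would replace these assumptions with measured growth rates, light, temperature, and nutrient data for nuclear winter conditions.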

Ariel Conn: You talk about the food that we have stored being able to last for two to six months. How much transportation is involved in that? And how much transportation would we have, given different scenarios? I’ve heard that the town I’m in now, if it gets blocked off by a big snow storm, we have about two weeks of food. So I’m curious: How does that apply elsewhere? And are we worried about transportation being cut off, or do we think that transportation will still be possible?

Dave Denkenberger: Certainly there will be destruction of infrastructure regionally, whether it’s nuclear war or a super volcano or asteroid impact. So in those affected countries, transportation of food is going to be very challenging, but most of the people would not be in those countries. That’s why we think that there’s still going to be a lot of infrastructure functioning. There are still going to be chemical factories that we can retrofit to turn leaves into sugar; another one of the technologies is turning natural gas into single-cell protein.

Ariel Conn: There’s the issue of developing agriculture if the sun is blocked, which is one of the things that you guys are working on, and that can happen with nuclear war leading to nuclear winter; It can happen with the super volcano, with the asteroid. Let’s go a little more in depth and into what happens with these catastrophic events that block the sun. What happens with them? Why are they so devastating?

Joshua Pearce: All the past literature on what would happen if, say, we lost agriculture for a number of years is pretty grim. The base assumption is that everyone would simply starve to death, and there might be some fighting before that happens. When you look at what would happen based on previous knowledge of generating food in traditional ways, those were the right answers. And so, what we’re calling catastrophic events includes not only the most extreme ones, the sun-killing ideas, but also ones that are maybe a little less tragic but still very detrimental to the agricultural system: something like a planned series of terrorist events to wipe out the major bread baskets of the world. Again, the idea is the same: you’re impacting the number of calories available to the entire population, and our work is trying to ensure that we can still feed everyone.

Dave Denkenberger: We wrote a paper on a scenario in which chaos did not break out, but there was still trade between countries, sharing of information, and a global price of food — in that case, with stored food, there might be around 10% of people surviving. It could be much worse though. As Joshua pointed out, if the food were distributed equally, then everyone would starve. Also, people have pointed out, well, in civilization we have food storage, so some people could survive — but if there’s a loss of civilization through the catastrophe and we have to go back to being hunter-gatherers: first, the hunter-gatherers that we still have now generally don’t have food storage, so they would not survive; and then there’s a recent book called The Secret of Our Success that argues that it might not be as easy as we think to go back to being hunter-gatherers.

So that is another failure mode whereby it could actually cause human extinction. But then even if we don’t have extinction, if we have a collapse of civilization, there are many reasons why we might not be able to recover civilization. We’ve had a stable climate for the last 10,000 years; that might not continue. We’ve already used up the easily accessible fossil fuels, so we wouldn’t have them to rebuild industrial civilization. Just thinking about the original definition of civilization, about being able to cooperate with people who are not related to you, like outside your tribe — maybe the trauma of the catastrophe could make the remaining humans less open to trusting people, and maybe we would not recover that civilization. And then I would say even if we don’t lose civilization, the trauma of the catastrophe could make other catastrophes more likely.

One that people are concerned about is global totalitarianism. We’ve had totalitarian states in the past, but they’ve generally been out-competed by other, freer societies. But if it were a global totalitarianism, then there would be no competition, and that might be a stable state that we could be stuck in. And then even if we don’t go that route, the trauma from the catastrophe could cause worse values to end up in artificial intelligence that could define our future. And I would say even with these catastrophes that are slightly less extreme, the 10% food shortfalls, we don’t know what would happen after that. Tensions would be high; this could end up in full-scale nuclear war, and then in some of these really extreme scenarios occurring.

Ariel Conn: What’s the historical precedent that we’ve got to work with in terms of trying to figure out how humanity would respond?

Dave Denkenberger: There have been localized collapses of society, and Jared Diamond has cataloged a lot of these in his book Collapse, but you can argue that there have even been more global collapse scenarios. Jeffrey Ladish has been looking at some collapses and catastrophes historically — the Black Death, for example, had very high mortality but did not result in a collapse of economic production in Europe; other collapses actually have occurred. There’s enough uncertainty to say that collapse is possible and that we might not recover from it.

Ariel Conn: A lot of this is about food production, but I think you guys have also done work on instances in which maybe it’s easier to produce food but other resources have been destroyed. So for example, a solar flare, a solar storm knocks out our electric grid. How do we address that?

Joshua Pearce: In the event that a solar flare wipes out the electricity grid and most non-shielded electrical devices, that would be another scenario where we might legitimately lose civilization. There’s been a lot of work in the electrical engineering community on how we might shield things and harden them, but one of the things that we can absolutely do, at least on the electricity side, is start to go from our centralized grid infrastructure into a more decentralized method of producing and consuming electricity. The idea here would be that the grid would break down into a federation of micro-grids, and the micro-grids could be as small as even your own house, where you, say, have solar panels on your roof producing electricity that would charge a small battery, and then when those two sources of power don’t provide enough, you have a backup generator, a co-generation system.

And a lot of the work my group has done has shown that in the United States, those types of systems are already economic. Pretty much everywhere in the US now, if you have exposure to sunshine, you can produce electricity less expensively than you buy it from the grid. If you add in the backup generator, the backup co-gen — in many places, particularly in the northern part of the US, that’s necessary in order to provide yourself with power — that again makes you more secure. And in the event of some of these catastrophes that we’re looking at, now the ones that block the sun, the solar won’t be particularly useful, but what solar does do is preserve our fossil fuels for use in the event of a catastrophe. And if you are truly insular, in that you’re able to produce all of your own power, then you have a backup generator of some kind and fuel storage onsite.

In the context of providing some resiliency for the overall civilization, many of the technical paths that we’re on now, at least electrically, are moving us in that direction anyway. Solar and wind power are both the fastest growing sources of electricity generation both in the US and globally, and their costs now are so competitive that we’re seeing that accelerate much faster than anyone predicted.
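
To make the household-level dispatch Joshua describes more concrete (meet the load with solar first, then the battery, then the backup generator), here is a small illustrative sketch. The function, the hourly profiles, and all the numbers are hypothetical assumptions, not figures from the speakers’ research.

```python
# Toy dispatch for a single-house micro-grid: meet each hour's load from solar
# first, then the battery, and only then the backup generator. All values are
# hypothetical; surplus solar beyond the battery's capacity is simply curtailed.

def dispatch_day(load_kwh, solar_kwh, battery_capacity_kwh=5.0):
    """Return the kWh the backup generator must supply over one day,
    given hourly load and solar production lists of equal length."""
    battery = battery_capacity_kwh  # assume the battery starts full
    generator_kwh = 0.0
    for load, solar in zip(load_kwh, solar_kwh):
        net = load - solar
        if net <= 0:
            # Surplus solar recharges the battery, up to its capacity.
            battery = min(battery_capacity_kwh, battery - net)
        else:
            from_battery = min(battery, net)
            battery -= from_battery
            generator_kwh += net - from_battery
    return generator_kwh

if __name__ == "__main__":
    load = [1.0] * 24                          # flat 1 kWh per hour
    solar = [0.0] * 9 + [3.0] * 6 + [0.0] * 9  # six sunny midday hours
    print(f"{dispatch_day(load, solar):.1f} kWh from the backup generator")
```

In a sun-blocking catastrophe the solar profile drops toward zero and almost everything falls to the generator, which illustrates Joshua’s point that solar is less useful in those scenarios but meanwhile preserves fossil fuels for exactly that situation.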

Dave Denkenberger: It is true that a solar flare would generally only affect the large grid systems. In 1859 there was the Carrington event that basically destroyed our telegraph systems, which was all we had at the time. But then we also had a near miss with a solar flare in 2012, so the world almost did end in 2012. But then there’s evidence that in the first millennium AD there were even larger solar storms that could disrupt electricity globally. But there are other ways that electricity could be disrupted. One of those is the high-altitude detonation of a nuclear weapon, producing an electromagnetic pulse, or EMP. If this were done in multiple places around the world, that could disrupt electricity globally, and the problem with that is it could affect even smaller systems. Then there’s also a coordinated cyber attack, which could be led by a narrow artificial intelligence computer virus, and then anything connected to the internet would be vulnerable, basically.

In these scenarios, at least the sun would still be shining. But we wouldn't have our tractors, because basically everything — including pulling fossil fuels out of the ground — is dependent on electricity, and we also wouldn't have our industrial fertilizers. And so the assumption is that most people would die, because the reason we can feed more than seven billion people is the industry we've developed. People have also talked about hardening the grid to EMP, but that would cost something like $100 billion.

So what we’ve been looking at are, what are inexpensive ways of getting prepared if there is a loss of electricity? One of those is can we make quickly farming implements that would work by hand or by animal power? And even though a very small percent of our total land area is being plowed by draft animals, we still actually have a lot of cows left for food, not for draft animals. It would actually be feasible to do that. 

But if we lost electricity, we’d lose communications. We have a short wave radio, or ham radio, expert on our team who’s been doing this for 58 years, and he’s estimated that for something like five million dollars, we could actually have a backup communication system, and then we would also need to have a backup power system, which would likely be solar cells. But we would need to have this system not plugged into the grid, because if it’s plugged in, it would likely get destroyed by the EMP.

Joshua Pearce: And this gets into that area of appropriate technology and open source appropriate technology that we've done a lot of work on. The idea basically is that the plans for something like a solar powered ham radio station that would be used as a backup communication system need to be developed now and shared globally, so that everyone, no matter where they happen to be, can start to implement these basic safety precautions now. We're trying to do that for all the tools that we're implementing, sharing them on sites like Appropedia.org, an appropriate technology wiki that is already trying to help small-scale farmers in the developing world lift themselves out of poverty by applying science and technologies that we already know about — generally small-scale, low-cost, and not terribly sophisticated. There are many things that, as an overall global society, we now understand much better how to do, and if you just share a little bit of information in the right way, you can help people — both today and in the event of a catastrophe.

Dave Denkenberger: And I think that’s critical: that if one of these catastrophes happened and people realized that most people were going to die, I’m very worried that there would be chaos, potentially within countries, and then also between countries. But if people realized that we could actually feed everyone if we cooperated, then I think we have a much better chance of cooperating, so you could think of this actually as a peace project.

Ariel Conn: One of the criticisms that I've heard — and honestly I think it's a little strange — is the idea that we don't need to worry about alternative foods now, because if a catastrophe strikes, then we'll be motivated to develop these alternative food systems.

I was curious if you guys have estimates of how much of a time difference you think would exist between us having a plan for how we would feed people if these disasters do strike versus us realizing the disaster has struck and now we need to figure something out, and how long it would take us to figure something out? That second part of the question is both in situations where people are cooperating and also in situations where people are not cooperating.

Dave Denkenberger: I think that if you don't have chaos, the big problem is that yes, people would be able to put lots of money into developing food sources, but there are some things that take a certain amount of calendar time, like testing out different diets for animals or building pilot factories for food production. You generally need to test these things out before you build the large factories. I don't have a quantitative estimate, but I do think it would delay us by many months; and as we said, we only have a few months of food storage, so I do think that a delay would cost many lives and could result in a collapse of civilization that could have been prevented if we had actually prepared ahead of time.

Joshua Pearce: I think the Boy Scouts are right on this: you should always be prepared. If you think about something like the number of types of leaves that would need to be tested — getting a head start on determining their toxicity as well as the nutrients that could come from them — we'll be much, much better off in the event of a catastrophe, whether or not we're working together. And in the cases where we're not working together, having this knowledge built up within the population and spread out makes it much more likely that humanity overall will survive.

Ariel Conn: What, roughly, does it cost to plan ahead: to do this research and to get systems and organization in place so that we can feed people if a disaster strikes?

Dave Denkenberger: On the order of $100 million. We think that would fund a lot of research to figure out which food sources are most promising, along with interventions for handling the loss of electricity and industry; development of the most promising food sources at actual pilot scale; funding a backup communications system; and then also working with countries, corporations, and international organizations to have response plans for how we would respond quickly in a catastrophe. It's really a very small amount of money compared to the benefit, in terms of how many lives we could save and preserving civilization.

Joshua Pearce: All this money doesn’t have to come at once, and some of the issues of alternative foods are being funded in other ways. There already are, for example, chemical engineering plants being looked at to be turned into food supply factories. That work is already ongoing. What Dave is talking about is combining all the efforts that are already existing and what ALLFED is trying to do, in order to be able to provide a very good, solid backup plan for society.

Ariel Conn: So Joshua, you mentioned ALLFED, and I think now is a good time to transition to that. Can you guys explain what ALLFED is?

Dave Denkenberger: The Alliance to Feed the Earth in Disasters, or ALLFED, is a non-profit organization that I helped to co-found, and our goal is to build an alliance with interested stakeholders to do this research on alternate food sources, develop the sources, and then also develop these response plans.

Ariel Conn: I’ll also add a quick disclosure that I also do work with ALLFED, so I don’t know if people will care, but there that is. So what are some of the challenges you’ve faced so far in trying to implement these solutions?

Dave Denkenberger: I would say a big challenge — a surprise to me — is that when we've started talking to international organizations and countries, no one appears to have a plan for what would happen. Of course you hear about continuity-of-government plans and bunkers, but there doesn't seem to be a plan for actually keeping most people alive. And this doesn't apply just to the sun-blocking catastrophes; it also applies to the 10% shortfalls.

There was a UK government study that estimated that extreme weather on multiple continents — like flooding and droughts — severe enough to reduce the food supply by 10% has something like an 80% chance of happening this century. And yet no one has a plan for how they would react. It's been a challenge to get people to actually take this seriously.

Joshua Pearce: I think that goes back to the devaluation of human life. We're not taking seriously the thousands of people who starve to death today, and we're not actively trying to solve that problem, even though from a financial standpoint it's trivial relative to the total economic output of the globe, and from a technical standpoint it's ridiculously easy. We just don't have the social infrastructure in place to feed everyone now and meet the basic needs of humanity. So what we're proposing — preparing for a catastrophe in order to be able to feed everybody — actually is pretty radical.

Initially, when we got started, overcoming the view that this was a radical departure from the types of research that would normally get funded — that was challenging. But existential risk as a field is growing and maturing, and because many of the technologies in the alternative food sector that we've looked at have direct applications today, it's being seen as less and less radical — although, in the popular media, for example, they'd be happier for us to talk about how we could turn rotting wood into beetles and then eat the beetles than to actually look at concrete plans and do the research that needs to be done to make sure that that is the right path.

Ariel Conn: Do you think people also struggle with the idea that these disasters will even happen? That there’s that issue of people not being able to recognize the risks?

Joshua Pearce: It’s very hard to comprehend. You may have your family and your friends; It’s hard to imagine a really large catastrophe. But these have happened throughout history, both at the global scale but even just something like a world war has happened multiple times in the last century. We’re, I think, hardwired to be a little bit optimistic about these things, and no one wants to see any of this happen, but that doesn’t mean that it’s a good idea to put our head in the sand. And even though it’s a relatively low probability event, say the case of an all-out nuclear war, something on the order of one percent, it still is there. And as we’ve seen in recent history, even some of the countries that we think of as stable aren’t really necessarily stable.

And so currently we have thousands of nuclear warheads, and it only takes a tiny fraction of them to push us into one of these global catastrophic scenarios. Whether that's an accident, one crazy government actor, or a legitimate small-scale war — say between India and Pakistan, where they pull out the nuclear weapons — these are things that we should be preparing for.

In the beginning it was a little bit more difficult to get people to consider these scenarios, but now it's becoming more and more mainstream. Many of our own and ALLFED's publications, and those of our collaborators, are pushing into the mainstream of the literature.

Dave Denkenberger: I would say even though the probability each year is relatively low, it certainly adds up over time, and we're eventually going to have at least some natural disaster like a volcanic eruption. But people have said, "Well, it might not occur in my lifetime, so if I work on this or if I donate to it, my money might be wasted" — and I ask, "Well, if you pay for insurance and don't get anything out of it in a year, do you consider your money wasted?" "No." So basically I think of this as an insurance policy for civilization.
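
As a rough numerical illustration of the "it adds up over time" point (my arithmetic, not the speakers'): if you treat a catastrophe as an independent event with roughly a 1% chance per year — an assumption that combines the one-percent figure and the per-year framing mentioned above — the chance of seeing at least one such event over a lifetime or a century is far from negligible.

```python
# Illustrative only: cumulative probability of at least one catastrophe,
# assuming an independent 1% chance per year (an order-of-magnitude
# assumption based on figures mentioned in the conversation).

p_per_year = 0.01

for years in (10, 30, 70, 100):
    p_at_least_one = 1 - (1 - p_per_year) ** years
    print(f"over {years:3d} years: {p_at_least_one:.0%} chance of at least one event")

# Output: roughly 10%, 26%, 51%, and 63% respectively -- which is why an
# annual "insurance premium" for civilization can be worth paying even if
# any single year looks safe.
```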

Ariel Conn: In your own research, what are some of the interesting things you found — things you hadn't expected — that you think could actually save a lot of lives?

Dave Denkenberger: I think one particularly promising one is the turning of natural gas into single-cell protein, and fortunately, there are actually two companies that are doing this right now. They are focusing on stranded natural gas, which means too far away from a market, and they’re actually producing this as fish food and other animal feed.

Joshua Pearce: For me, living up here in the Upper Peninsula of Michigan where we're surrounded by trees, I can't help but look out my window at all the potential biomass that could actually be a food source. If it turns out that we can get even a small fraction of that into human-edible food, I think that could really shift the balance in providing food, both now and in the case of a disaster.

Dave Denkenberger: One interesting thing about coming to Alaska is that I've learned about the Aleutian Islands, which stick out into the Pacific. They are very cloudy. It is so cool in the summer that they cannot even grow trees. They also don't get very much rain. The conditions there are actually fairly similar to nuclear winter in the tropics — and yet, they can grow potatoes. So lately I've become more optimistic that we might be able to do some agriculture near the equator, where it would not freeze even in nuclear winter.

Ariel Conn: I want to switch gears a little bit. We’ve been talking about disasters that would be relatively immediate, but one of the threats that we’re trying to figure out how to deal with now is climate change. And I was wondering how efforts that you’re both putting into alternative foods could help as we try to figure out how to adapt to climate change.

Joshua Pearce: I think a lot of the work that we're doing has a dual use. We are trying to squeeze every last calorie we can out of primarily fossil fuel sources and trees and leaves, and by using those same techniques in the ongoing disaster of climate change, we can hopefully feed more people. And so that's things like growing mushrooms on partially decomposed wood, eating the mushrooms, but then feeding the leftovers to, say, ruminants or chickens, and then eating those. There's a lot of industrial ecology practices we can apply to the agricultural food system so that we can get every last calorie out of our primary inputs. So that is something we can focus on now and push forward regardless of the speed of the catastrophe.

Dave Denkenberger: I would also say that in addition to the extreme weather on multiple continents that is made more likely by climate change, there's also abrupt climate change in the ice core record. We've had an 18 degree Fahrenheit drop in just one decade over a continent. That could be another scenario of a 10% food shortfall globally. And another one people have talked about is what's called extreme climate change that would still be slow. This is sometimes called tail risk, where instead of the expected or median climate change of a few degrees Celsius, maybe there would be five or even 10 degrees Celsius — up to 18 degrees Fahrenheit — over a century or two. We might not be able to have agriculture at all in the tropics, so it would be very valuable to have some food backup plan for that.

Ariel Conn: I wanted to get into concerns about moral hazards with this research. I've heard some criticism that if you present a solution to, say, surviving nuclear winter, maybe people will think nuclear war is more feasible. How do you address concerns like that — that if we give people a means of not starving, they'll do something stupid?

Dave Denkenberger: I think you’ve actually summarized this succinctly by saying, this would be like saying we shouldn’t have the jaws of life because that would cause people to drive recklessly. But the longer answer would be: there is evidence that the awareness of nuclear winter in the 80s was a reason that Gorbachev and Reagan worked towards reducing the nuclear stockpile. However, we still have enough nuclear weapons to potentially cause nuclear winter, and I doubt that the decision in the heat of the moment to go to nuclear war is actually going to take into account the non-target countries. I also think that there’s a significant cost of nuclear war directly, independent of nuclear winter. I would also say that this backup plan helps up with catastrophes that we don’t have control over, like a volcanic eruption. Overall, I think we’re much better off with a backup plan.

Joshua Pearce: I of course completely agree. It's insane not to have a backup plan. The idea that the irrational behavior that's currently displayed in any country with more than 100 nuclear weapons is somehow going to get worse because they now know that a larger fraction of their population won't starve to death if they use them — I think that's crazy.

Ariel Conn: As you’ve mentioned, there are quite a few governments — in fact, as far as I can tell, all governments don’t really have a backup plan. How surprised have you been by this? And also how optimistic are you that you can convince governments to start implementing some sort of plan to feed people if disaster happens?

Dave Denkenberger: As I said, I certainly have been surprised by the lack of plans. I think that as we develop the research further and are able to show examples of companies already doing very similar things, and more detailed analysis of which current factories could be retrofitted quickly to produce food — that's actually an active area of research that we're doing right now — then I am optimistic that governments will eventually come around to the value of planning for these catastrophes.

Joshua Pearce: I think it’s slightly depressing when you look around the globe and all the hundreds of countries, and how poorly most of them care for their own citizens. It’s sort of a commentary on how evolved or how much of a civilization we really are, so instead of comparing number of Olympic medals or how much economic output your country does, I think we should look at the poorest citizens in each country. And if you can’t feed the people that are in your country, you should be embarrassed to be a world leader. And for whatever reason, world leaders show their faces every day while their constituents, the citizens of their countries, are starving to death today, let alone in the event of a catastrophe.

If you look at what I'll call the more civilized countries — I've been spending some time in Europe, where rational, science-based approaches to governing are much more mature than what I've been used to — it gives me quite a bit of optimism. Seeing ideas of sustainability and long-term planning taken seriously, trying to move civilization into a state where it's not doing significant harm to the environment or to our own health, now or in the future — that gives me a lot of cause for hope. Hopefully as all the different countries throughout the world mature and grow up as governments, they can start taking the health and welfare of their own populations much more seriously.

Dave Denkenberger: And even though I'm personally very motivated by the long-term future of human civilization, I think that because what we're proposing is so cost effective, even if an individual government doesn't put very much weight on people outside its borders, or on future generations even within the country, it's still cost effective. And we actually wrote a paper from the US perspective showing how cheaply the US could get prepared and how many lives it could save just within its own borders.

Ariel Conn: What do you think is most important for people to understand about both ALLFED and the other research you’re doing? And is there anything, especially that you think we didn’t get into, that is important to mention?

Dave Denkenberger: I would say that thanks to recent grants from the Berkeley Existential Risk Initiative, the Effective Altruism Lottery, and the Center for Effective Altruism, we've been able to do a lot of new research this year, including, as I mentioned, retrofitting factories to produce food. We're also looking at whether we can construct factories quickly, like having construction crews work around the clock, and we're investigating seaweed. But I would still say that there's much more work to do. We have been building our alliance, and we have many researchers and volunteers who are ready to do more work with additional funding, so we estimate that in the next 12 months we could effectively use approximately $1.5 million.

Joshua Pearce: A lot of the areas of research that are needed to provide a strong backup plan for humanity are relatively greenfield; these aren't areas that people have done a lot of research in before. And so for other academics, or small companies whose work slightly overlaps the alternative food ecosystem, there are a lot of opportunities to get involved, either in direct collaboration with ALLFED or just by bringing these types of ideas into your own subfield. We're always looking out for collaborators, and we're happy to talk to anybody who's interested in this area and would like to move the ball forward.

Dave Denkenberger: We have a list of theses that undergraduates or graduates could do on the website called Effective Thesis. We’ve gotten a number of volunteers through that.

I would also say another thing that surprised me: when we were looking at scenarios where the world cooperated but only had stored food, the amount of money people would spend on that stored food was tremendous — something like $90 trillion — and even with that huge expenditure, only 10% of people survived. But if instead we could produce alternate foods — our goal is around a dollar per dry pound of food, and one pound of dry food can feed a person for a day — then more like 97% of people would be able to afford food with their current incomes. And yet, even though we'd feed so many more people, the total expenditure on food would be less. You could argue that even if you are among the global wealthy who could potentially survive one of these catastrophes if chaos didn't break out, it would still be in your interest to get prepared for alternate foods, because you'd have to pay less money for your food.
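
As a back-of-the-envelope check on the affordability point above — an illustrative sketch, not ALLFED's analysis — the figures mentioned in the conversation (roughly a dollar per dry pound, one dry pound per person per day, about seven billion people, and the $90 trillion stored-food estimate) imply the following simple arithmetic:

```python
# Rough arithmetic behind the alternate-foods affordability claim.
# Figures from the conversation: ~$1 per dry pound, ~1 dry pound per person
# per day, ~7 billion people; everything else is straightforward arithmetic.

people = 7e9
cost_per_person_per_day = 1.0        # dollars, at $1 per dry pound
days = 365

daily_cost = people * cost_per_person_per_day
annual_cost = daily_cost * days

print(f"feeding everyone: ~${daily_cost/1e9:.0f} billion per day")
print(f"              or: ~${annual_cost/1e12:.1f} trillion per year")

# Compare with the ~$90 trillion that people would reportedly bid for the
# limited stored food in the cooperation-but-no-alternate-foods scenario --
# and in that scenario only ~10% survive, versus ~97% able to afford food here.
```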

Ariel Conn: And that’s all with a research funding request of 1.5 million? Is that correct?

Dave Denkenberger: The full plan is more like $100 million.

Joshua Pearce: It’s what we could use as the current team now, effectively.

Ariel Conn: Okay. Well, even the 100 million still seems reasonable.

Joshua Pearce: It’s still a bargain. One of the things we’ve been primarily assuming during all of our core scenarios is that there would be human cooperation, and that things would break down into fighting, but as we know historically, that’s an extremely optimistic way to look at it. And so even if you’re one of the global wealthy, in the top 10% globally in terms of financial means and capital, even if you would be able to feed yourself in one of these relatively modest reductions in overall agricultural supply, it is not realistic to assume that the poor people are just going to lay down and starve to death. They’re going to be storming your mansion. And so if you can provide them with food with a relatively low upfront capital investment, it makes a lot of sense, again, for you personally, because you’re not fighting them off at your door.

Dave Denkenberger: One other thing that surprised me: we did a real worst case scenario where the sun is mostly blocked, say by nuclear winter, but then we also had a loss of electricity and industry globally, say because there were multiple EMPs around the world. Going into it, I was not too optimistic that we'd be able to feed everyone. But we actually have a paper saying that it's technically feasible, so I think it really comes down to getting prepared and getting that message to the decision makers at the right time, such that they realize it's in their interest to cooperate.

Another issue that surprised me: when we were writing the book, I thought about seaweed, but then I looked at how much seaweed for sushi costs, and it was just tremendously expensive per calorie, so I didn't pursue it. But I found out later that we actually already produce a lot of seaweed at a reasonable price. And so now I think that we might be able to scale up seaweed as a food source in just a few months.

Ariel Conn: How quickly does seaweed grow, and how abundantly?

Dave Denkenberger: It depends on the species, but we put one edible species into the scenario of nuclear winter. One thing to note is that as the upper layers of the ocean cool, they sink, and the lower layers come to the surface, which brings nutrients to the surface. We found that in pretty big areas of the ocean, the seaweed could actually grow more than 10% per day. With that exponential growth, you quickly scale up to feeding a lot of people. Now of course we would need to scale up the infrastructure — the ropes that it grows on — but that's what we're working out.
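
To illustrate what "more than 10% per day" compounding looks like: only the growth rate comes from the conversation; the starting stock and the scale-up target in the sketch below are hypothetical placeholders chosen for illustration.

```python
# Illustrative compounding at 10% growth per day.
# Only the 10%/day rate comes from the conversation; the starting stock and
# target below are hypothetical placeholders.
import math

growth_per_day = 0.10
doubling_time = math.log(2) / math.log(1 + growth_per_day)
print(f"doubling time: ~{doubling_time:.1f} days")   # about 7.3 days

start_tonnes = 1_000        # hypothetical initial seaweed stock
target_tonnes = 1_000_000   # hypothetical target (1000x the start)

days = math.log(target_tonnes / start_tonnes) / math.log(1 + growth_per_day)
print(f"time to grow 1000x: ~{days:.0f} days")       # about 72 days
```

A thousandfold scale-up in a couple of months is the kind of arithmetic behind the "just a few months" estimate, assuming the infrastructure can keep pace.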

The other thing I would add is that in these catastrophes, if many people are starving, then I think not only will people not care about saving other species, but they may actively eat other species to extinction. And it turns out that feeding seven billion people is a lot more food than keeping, say, 500 individuals of many different species alive. And so I think we could actually use this to save a lot of species. And if it were a natural catastrophe, well some species would go extinct naturally — so maybe for the first time, humans could actually be increasing biodiversity.

Joshua Pearce: That’s a nice optimistic way to end this.

Ariel Conn: Yeah, that’s what I was just thinking. Anything else?

Dave Denkenberger: I think that’s it.

Joshua Pearce: We’re all good.

Ariel Conn: All right. This has been a really interesting conversation. Thank you so much for joining us.

Dave Denkenberger: Thank you.

Joshua Pearce: Thank you for having us.

 

Podcast: Martin Rees on the Prospects for Humanity: AI, Biotech, Climate Change, Overpopulation, Cryogenics, and More

How can humanity survive the next century of climate change, a growing population, and emerging technological threats? Where do we stand now, and what steps can we take to cooperate and address our greatest existential risks?

In this special podcast episode, Ariel speaks with Martin Rees about his new book, On the Future: Prospects for Humanity, which discusses humanity’s existential risks and the role that technology plays in determining our collective future. Martin is a cosmologist and space scientist based at the University of Cambridge. He has been director of the Institute of Astronomy and Master of Trinity College, and he was president of The Royal Society, which is the UK’s Academy of Science, from 2005 to 2010. In 2005 he was also appointed to the UK’s House of Lords.

Topics discussed in this episode include:

  • Why Martin remains a technical optimist even as he focuses on existential risks
  • The economics and ethics of climate change
  • How AI and automation will make it harder for Africa and the Middle East to economically develop
  • How high expectations for health care and quality of life also put society at risk
  • Why growing inequality could be our most underappreciated global risk
  • Martin’s view that biotechnology poses greater risk than AI
  • Earth’s carrying capacity and the dangers of overpopulation
  • Space travel and why Martin is skeptical of Elon Musk’s plan to colonize Mars
  • The ethics of artificial meat, life extension, and cryogenics
  • How intelligent life could expand into the galaxy
  • Why humans might be unable to answer fundamental questions about the universe

Books and resources discussed in this episode include Martin’s new book, On the Future: Prospects for Humanity.

You can listen to the podcast above and read the full transcript below. Check out our previous podcast episodes on SoundCloud, iTunes, Google Play, and Stitcher.

Ariel: Hello, I am Ariel Conn with The Future of Life Institute. Now, our podcasts lately have dealt with artificial intelligence in one way or another, with a few focusing on nuclear weapons, but FLI is really an organization about existential risks — and especially x-risks that are the result of human action. These cover a much broader field than just artificial intelligence.

I’m excited to be hosting a special segment of the FLI podcast with Martin Rees, who has just come out with a book that looks at the ways technology and science could impact our future both for good and bad. Martin is a cosmologist and space scientist. His research interests include galaxy formation, active galactic nuclei, black holes, gamma ray bursts, and more speculative aspects of cosmology. He’s based in Cambridge where he has been director of The Institute of Astronomy, and Master of Trinity College. He was president of The Royal Society, which is the UK’s Academy of Science, from 2005 to 2010. In 2005 he was also appointed to the UK’s House of Lords. He holds the honorary title of Astronomer Royal. He has received many international awards for his research and belongs to numerous academies, including The National Academy of Sciences, the Russian Academy, the Japan Academy, and the Pontifical Academy.

He’s on the board of The Princeton Institute for Advanced Study, and has served on many bodies connected with international collaboration and science, especially threats stemming from humanity’s ever heavier footprint on the planet and the runaway consequences of ever more powerful technologies. He’s written seven books for the general public, and his most recent book is about these threats. It’s the reason that I’ve asked him to join us today. First, Martin thank you so much for talking with me today.

Martin: Good to be in touch.

Ariel: Your new book is called On the Future: Prospects for Humanity. In his endorsement of the book Neil deGrasse Tyson says, “From climate change, to biotech, to artificial intelligence, science sits at the center of nearly all decisions that civilization confronts to assure its own survival.”

I really liked this quote, because I felt like it sums up what your book is about. Basically science and the future are too intertwined to really look at one without the other. And whether the future turns out well, or whether it turns out to be the destruction of humanity, science and technology will likely have had some role to play. First, do you agree with that sentiment? Am I accurate in that description?

Martin: No, I certainly agree, and that’s truer of this century than ever before, because of the greater scientific knowledge we have and the greater power to use it for good or ill — these tremendously advanced technologies could be misused by a small number of people.

Ariel: You’ve written in the past about how you think we have essentially a 50/50 chance of some sort of existential risk. One of the things that I noticed about this most recent book is you talk a lot about the threats, but to me it felt still like an optimistic book. I was wondering if you could talk a little bit about, this might be jumping ahead a bit, but maybe what the overall message you’re hoping that people take away is?

Martin: Well, I describe myself as a technical optimist, but political pessimist because it is clear that we couldn’t be living such good lives today with seven and a half billion people on the planet if we didn’t have the technology which has been developed in the last 100 years, and clearly there’s a tremendous prospect of better technology in the future. But on the other hand what is depressing is the very big gap between the way the world could be, and the way the world actually is. In particular, even though we have the power to give everyone a decent life, the lot of the bottom billion people in the world is pretty miserable and could be alleviated a lot simply by the money owned by the 1,000 richest people in the world.

We have a very unjust society, and the politics is not optimizing the way technology is used for human benefit. My view is that it’s the politics which is an impediment to the best use of technology, and the reason this is important is that as time goes on we’re going to have a growing population which is ever more demanding of energy and resources, putting more pressure on the planet and its environment and its climate, but we are also going to have to deal with this if we are to allow people to survive and avoid some serious tipping points being crossed.

That’s the problem of the collective effect of us on the planet, but there’s another effect, which is that these new technologies, especially bio, cyber, and AI allow small groups of even individuals to have an effect by error or by design, which could cascade very broadly, even globally. This, I think, makes our society very brittle. We’re very interdependent, and on the other hand it’s easy for there to be a breakdown. That’s what depresses me, the gap between the way things could be, and the downsides if we collectively overreach ourselves, or if individuals cause disruption.

Ariel: You mentioned actually quite a few things that I’m hoping to touch on as we continue to talk. I’m almost inclined, before we get too far into some of the specific topics, to bring up an issue that I personally have. It’s connected to a comment that you make in the book. I think you were talking about climate change at the time, and you say that if we heard that there was a 10% chance that an asteroid would strike in 2100, people would do something about it.

We wouldn’t say, “Oh, technology will be better in the future so let’s not worry about it now.” Apparently I’m very cynical, because I think that’s exactly what we would do. And I’m curious, what makes you feel more hopeful that even with something really specific like that, we would actually do something and not just constantly postpone the problem to some future generation?

Martin: Well, I agree. We might not act even in that case, but the reason I gave that as a contrast to our response to climate change is that there you could imagine a really sudden catastrophe happening if the asteroid does hit, whereas the problem with climate change is, first of all, that the effect is mainly going to be several decades in the future. It’s started to happen, but the really severe consequences are decades away. But also there’s an uncertainty, and it’s not the sort of sudden event we can easily visualize. It’s not at all clear, therefore, how we are actually going to do something about it.

In the case of the asteroid, it would be clear what the strategy would be to try and deal with it, whereas in the case of climate there are lots of ways, and the problem is that the consequences are decades away, and they’re global. Most of the political focus obviously is on short-term worry, short-term problems, and on national or more local problems. Anything we do about climate change will have an effect which is mainly for the benefit of people in quite different parts of the world 50 years from now, and it’s hard to keep those issues up the agenda when there are so many urgent things to worry about.

I think you’re maybe right that even if there was a threat of an asteroid, there may be the same sort of torpor, and we’d fail to deal with it, but I thought that’s an example of something where it would be easier to appreciate that it would really be a disaster. In the case of the climate it’s not so obviously going to be a catastrophe that people are motivated now to start thinking about it.

Ariel: I’ve heard it go both ways that either climate change is yes, obviously going to be bad but it’s not an existential risk so therefore those of us who are worried about existential risk don’t need to worry about it, but then I’ve also heard people say, “No, this could absolutely be an existential risk if we don’t prevent runaway climate change.” I was wondering if you could talk a bit about what worries you most regarding climate.

Martin: First of all, I don’t think it is an existential risk, but it’s something we should worry about. One point I make in my book is that I think the debate, which makes it hard to have an agreed policy on climate change, stems not so much from differences about the science — although of course there are some complete deniers — but differences about ethics and economics. There’s some people of course who completely deny the science, but most people accept that CO2 is warming the planet, and most people accept there’s quite a big uncertainty, matter of fact a true uncertainty about how much warmer you get for a given increase in CO2.

But even among those who accept the IPCC projections of climate change, and the uncertainties therein, I think there’s a big debate, and the debate is really between people who apply a standard economic discount rate — where you discount the future at a rate of, say, 5% — and those who think we shouldn’t do that in this context. If you apply a 5% discount rate, as you would if you were deciding whether it’s worth putting up an office building or something like that, then of course you don’t give any weight to what happens after about, say, 2050.

As Bjorn Lomborg, the well-known environmentalist, argues, we should therefore give a lower priority to dealing with climate change than to helping the world’s poor in other, more immediate ways. He is consistent, given his assumptions about the discount rate. But many of us would say that in this context we should not discount the future so heavily. We should care about the life chances of a baby born today as much as we care about the life chances of those of us who are now middle-aged and won’t be alive at the end of the century. We should also be prepared to pay an insurance premium now in order to remove or reduce the risk of the worst-case climate scenarios.

I think the debate about what to do about climate change is essentially about ethics. Do we want to discriminate on grounds of date of birth and not care about the life chances of those who are now babies, or are we prepared to make some sacrifices now in order to reduce a risk which they might encounter in later life?
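
As a quick numerical illustration of why the discount rate matters so much here (my arithmetic, not Martin’s): at a standard 5% rate, benefits that arrive decades from now are worth only a small fraction of their face value today, whereas at a near-zero rate they count almost in full.

```python
# Present value of $1 of climate benefit received N years from now,
# under different discount rates. Purely illustrative arithmetic.

def present_value(years, rate):
    return 1.0 / (1.0 + rate) ** years

for years in (30, 50, 80):
    pv_5pct = present_value(years, 0.05)
    pv_1pct = present_value(years, 0.01)
    print(f"{years} years out: $1 is worth ${pv_5pct:.2f} at 5%, ${pv_1pct:.2f} at 1%")

# At 5%, a dollar of benefit 80 years out is worth about 2 cents today, which
# is why a standard discount rate effectively ignores end-of-century damages,
# while a low discount rate treats future generations nearly on a par with us.
```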

Ariel: Do you think the risks are only going to be showing up that much later? We are already seeing these really heavy storms striking. We’ve got Florence in North Carolina right now. A super typhoon hit southern China and the Philippines. We had Maria, and I’m losing track of all the huge hurricanes that we’ve had over the last couple of years. We saw California and much of the west coast of the US just in flames this year. Do you think we really need to wait that long?

Martin: I think it’s generally agreed that extreme weather is now happening more often as a consequence of climate change and the warming of the ocean, and that this will become a more serious trend, but by the end of the century of course it could be very serious indeed. And the main threat is of course to people in the disadvantaged parts of the world. If you take these recent events, it’s been far worse in the Philippines than in the United States because they’re not prepared for it. Their houses are more fragile, etc.

Ariel: I don’t suppose you have any thoughts on how we get people to care more about others? Because it does seem to be in general that sort of worrying about myself versus worrying about other people. The richer countries are the ones who are causing more of the climate change, and it’s the poorer countries who seem to be suffering more. Then of course there’s the issue of the people who are alive now versus the people in the future.

Martin: That’s right, yes. Well, I think most people do care about their children and grandchildren, and so to that extent they do care about what things will be like at the end of the century, but as you say, the extra-political problem is that the cause of the CO2 emissions is mainly what’s happened in the advanced countries, and the downside is going to be more seriously felt by those in remote parts of the world. It’s easy to overlook them, and hard to persuade people that we ought to make a sacrifice which will be mainly for their benefit.

I think incidentally that’s one of the other things that we have to ensure happens, is a narrowing of the gap between the lifestyles and the economic advantages in the advanced and the less advanced parts of the world. I think that’s going to be in everyone’s interest because if there continues to be great inequality, not only will the poorer people be more subject to threats like climate change, but I think there’s going to be massive and well-justified discontent, because unlike in the earlier generations, they’re aware of what they’re missing. They all have mobile phones, they all know what it’s like, and I think there’s going to be embitterment leading to conflict if we don’t narrow this gap, and this requires I think a sacrifice on the part of the wealthy nations to subsidize developments in these poorer countries, especially in Africa.

Ariel: That sort of ties into another question that I had for you: what do you think is the most underappreciated threat, something that maybe isn’t quite as obvious? You mentioned the fact that we have these people in poorer countries who are able to more easily see what they’re missing out on. Inequality is a problem in and of itself, but the fact that people are more aware of the inequality also seems like a threat that we might not be as aware of. Are there others that you think are underappreciated?

Martin: Yes. Just to go back, that threat is of course very serious because by the end of the century there might be 10 times as many people in Africa as in Europe, and of course they would then have every justification in migrating towards Europe with the result of huge disruption. We do have to care about those sorts of issues. I think there are all kinds of reasons apart from straight ethics why we should ensure that the less developed countries, especially in Africa, do have a chance to close the gap.

Incidentally, one thing which is a handicap for them is that they won’t have the route to prosperity followed by the so called “Asian tigers,” which were able to have high economic growth by undercutting the labor cost in the west. Now what’s happening is that with robotics it’s possible to, as it were, re-shore lots of manufacturing industry back to wealthy countries, and so Africa and the Middle East won’t have the same opportunity the far eastern countries did to catch up by undercutting the cost of production in the west.

This is another reason why it’s going to be a big challenge. That’s something which I think we don’t worry about enough, and need to worry about, because if the inequalities persist when everyone is able to move easily and knows exactly what they’re missing, then that’s a recipe for a very dangerous and disruptive world. I would say that is an underappreciated threat.

Another thing I would count as important is that we are, as a society, very brittle and very unstable because of high expectations. Let me give you an example. Suppose there were to be a pandemic, not necessarily a genetically engineered terrorist one, but a natural one. Contrast that with what happened in the 14th century, when the Bubonic Plague, the Black Death, killed nearly half the people in certain towns and the rest went on fatalistically. If we had some sort of plague which affected even 1% of the population of the United States, there’d be complete social breakdown, because it would overwhelm the capacity of hospitals, and people, unless they were wealthy, would feel they weren’t getting their entitlement of healthcare. If that were a matter of life and death, that’s a recipe for social breakdown. I think that given the high expectations of people in the developed world, we are far more vulnerable to the consequences of these breakdowns, and pandemics, and the failures of electricity grids, et cetera, than in the past, when people were more robust and more fatalistic.

Ariel: That’s really interesting. Is it essentially because we expect to be leading these better lifestyles, just that expectation could be our downfall if something goes wrong?

Martin: That’s right. And of course, if we know that there are cures available to some disease and there’s not the hospital capacity to offer it to all the people who are afflicted with the disease, then naturally that’s a matter of life and death, and that is going to promote social breakdown. This is a new threat which is of course a downside of the fact that we can at least cure some people.

Ariel: There’s two directions that I want to go with this. I’m going to start with just transitioning now to biotechnology. I want to come back to issues of overpopulation and improving healthcare in a little bit, but first I want to touch on biotech threats.

One of the things that’s been a little bit interesting for me is that when I first started at FLI three years ago we were very concerned about biotechnology. CRISPR was really big. It had just sort of exploded onto the scene. Now, three years later I’m not hearing quite as much about the biotech threats, and I’m not sure if that’s because something has actually changed, or if it’s just because at FLI I’ve become more focused on AI and therefore stuff is happening but I’m not keeping up with it. I was wondering if you could talk a bit about what some of the risks you see today are with respect to biotech?

Martin: Well, let me say I think we should worry far more about bio threats than about AI in my opinion. I think as far as the bio threats are concerned, then there are these new techniques. CRISPR, of course, is a very benign technique if it’s used to remove a single damaging gene that gives you a particular disease, and also it’s less objectionable than traditional GM because it doesn’t cross the species barrier in the same way, but it does allow things like a gene drive where you make a species extinct by making it sterile.

That’s good if you’re wiping out a mosquito that carries a deadly virus, but there’s a risk of some effect which distorts the ecology and has a cascading consequence. There are risks of that kind, but more important I think there is a risk of the misuse of these techniques, and not just CRISPR, but for instance the the gain of function techniques that we used in 2011 in Wisconsin and in Holland to make influenza virus both more virulent and more transmissible, things like that which can be done in a more advanced way now I’m sure.

These are clearly potentially dangerous: even if experimenters have a good motive, the viruses might escape, and of course these are the kinds of things which could be misused. There have, of course, been lots of meetings — you have been at some — to discuss among scientists what the guidelines should be. How can we ensure responsible innovation in these technologies? These are modeled on the famous conference at Asilomar in the 1970s, when recombinant DNA was first being discussed, and the academics who worked in that area agreed on a sort of cautious stance, and a moratorium on some kinds of experiments.

But now they’re trying to do the same thing, and there’s a big difference. One is that these scientists are now more global. It’s not just a few people in North America and Europe. They’re global, and there is strong commercial pressures, and they’re far more widely understood. Bio-hacking is almost a student recreation. This means, in my view, that there’s a big danger, because even if we have regulations about certain things that can’t be done because they’re dangerous, enforcing those regulations globally is going to be as hopeless as it is now to enforce the drug laws, or to enforce the tax laws globally. Something which can be done will be done by someone somewhere, whatever the regulations say, and I think this is very scary. Consequences could cascade globally.

Ariel: Do you think that the threat is more likely to come from something happening accidentally, or intentionally?

Martin: I don’t know. I think it could be either. Certainly it could be something accidental from gene drive, or releasing some dangerous virus, but I think if we can imagine it happening intentionally, then we’ve got to ask what sort of people might do it? Governments don’t use biological weapons because you can’t predict how they will spread and who they’d actually kill, and that would be an inhibiting factor for any terrorist group that had well-defined aims.

But my worst nightmare is some person, and there are some, who think that there are too many human beings on the planet, and if they combine that view with the mindset of extreme animal rights people, etc, they might think it would be a good thing for Gaia, for Mother Earth, to get rid of a lot of human beings. They’re the kind of people who, with access to this technology, might have no compunction in releasing a dangerous pathogen. This is the kind of thing that worries me.

Ariel: I find that interesting because it ties into the other question that I wanted to ask you about, and that is the idea of overpopulation. I’ve read it both ways, that overpopulation is in and of itself something of an existential risk, or a catastrophic risk, because we just don’t have enough resources on the planet. You actually made an interesting point, I thought, in your book where you point out that we’ve been thinking that there aren’t enough resources for a long time, and yet we keep getting more people and we still have plenty of resources. I thought that was sort of interesting and reassuring.

But I do think at some point that does become an issue. And then at the same time we’re seeing this huge push, understandably, for improved healthcare, and expanding life spans, and trying to save as many lives as possible and make those lives last as long as possible. How do you resolve those two sides of the issue?

Martin: It’s true, of course, as you imply, that the population has risen double in the last 50 years, and there were doomsters who in the 1960s and ’70s thought that mass starvation by now, and there hasn’t been because food production has more than kept pace. If there are famines today, as of course there are, it’s not because of overall food shortages. It’s because of wars, or mal-distribution of money to buy the food. Up until now things have gone fairly well, but clearly there are limits to the food that can be produced on the earth.

All I would say is that we can’t really say what the carrying capacity of the earth is, because it depends so much on the lifestyle of people. As I say in the book, the world couldn’t sustainably have 2 billion people if they all lived like present day Americans, using as much energy, and burning as much fossil fuels, and eating as much beef. On the other hand you could imagine lifestyles which are very sort of austere, where the earth could carry 10, or even 20 billion people. We can’t set an upper limit, but all we can say is that given that it’s fairly clear that the population is going to rise to about 9 billion by 2050, and it may go on rising still more after that, we’ve got to ensure that the way in which the average person lives is less profligate in terms of energy and resources, otherwise there will be problems.

I think we should also do what we can to ensure that after 2050 the population turns around and goes down. The baseline scenario is that it goes on rising, as it may if people choose to have large families even when they have the choice. That could happen, and of course, as you say, life extension is going to have an effect on society generally, and obviously on the overall population too. I think it would be more benign if the population of 9 billion in 2050 were a peak and it started going down after that.

And it’s not hopeless, because the actual number of births per year has already started going down. The reason the population is still going up is because more babies survive, and most of the people in the developing world are still young, and if they live as long as people in advanced countries do, then of course that’s going to increase the population even for a steady birth rate. That’s why, unless there’s a real disaster, we can’t avoid the population rising to about 9 billion.

But I think policies can have an effect on what happens after that. I think we do have to try to make people realize that having large numbers of children has negative externalities, as it were, in economic jargon: it is going to put extra pressure on the world and affect our environment in a detrimental way.

Ariel: As I was reading this, especially your section about space travel, I wanted to ask for your take on whether we can just start sending people to Mars or something like that to address issues of overpopulation. As I was reading that section, news came out that Elon Musk and SpaceX had their first passenger for a trip around the moon, which is now scheduled for 2023, and the timing was just entertaining to me, because like I said, you have a section in your book about why you don’t actually agree with Elon Musk’s plan for some of this stuff.

Martin: That’s right.

Ariel: I was hoping you could talk a little bit about why you’re not as big a fan of space tourism, and what you think of humanity expanding into the rest of the solar system and universe?

Martin: Well, let me say that I think it’s a dangerous delusion to think we can solve the earth’s problems by escaping to Mars or elsewhere. Mass emigration is not feasible. There’s nowhere in the solar system which is as comfortable to live in as the top of Everest or the South Pole. The idea of mass emigration promulgated by Elon Musk and Stephen Hawking is, I think, a dangerous delusion. The world’s problems have to be solved here; dealing with climate change is a dawdle compared to terraforming Mars. So I don’t think that’s true.

Now, two other things about space. The first is that the practical need for sending people into space is getting less as robots get more advanced. Everyone has seen pictures of the Curiosity Probe trundling across the surface of Mars, and maybe missing things that a geologist would notice, but future robots will be able to do much of what a human will do, and to manufacture large structures in space, et cetera, so the practical need to send people to space is going down.

On the other hand, some people may want to go simply as an adventure. It’s not really tourism, because tourism implies it’s safe and routine. It’ll be an adventure like Steve Fossett’s, or like the guy who fell supersonically from a high-altitude balloon. It’d be crazy people like that — and maybe this Japanese tourist is in the same mold — who want to have a thrill, and I think we should cheer them on.

I think it would be good to imagine that there are a few people living on Mars, but it’s never going to be as comfortable as our Earth, and we should just cheer on people like this.

And I personally think it should be left to private money. If I were an American, I would not support the NASA space program. It’s very expensive, and it could be undercut by private companies, which can afford to take higher risks than NASA could inflict on publicly funded civilians. I don’t think NASA should be doing manned space flight at all. Of course, some people would say, “Well, it’s a national aspiration, a national goal to show superpower pre-eminence by a massive space project.” That was, of course, what drove the Apollo program, and the Apollo program cost about 4% of the US federal budget. Now NASA has 0.6% or thereabouts. I’m old enough to remember the Apollo moon landings, and of course if you had asked me back then, I would have expected that there might be people on Mars within 10 or 15 years.

There would have been, had the program been funded, but of course there was no motive, because the Apollo program was driven by superpower rivalry. And having beaten the Russians, it wasn’t pursued with the same intensity. It could be that the Chinese will, for prestige reasons, want to have a big national space program, and leapfrog what the Americans did by going to Mars. That could happen. Otherwise I think the only manned space flight will, and indeed should, be privately funded by adventurers prepared to go on cut price and very risky missions.

But we should cheer them on. The reason we should cheer them on is that if in fact a few of them do provide some sort of settlement on Mars, then they will be important for life’s long-term future, because whereas we are, as humans, fairly well adapted to the earth, they will be in a place, Mars, or an asteroid, or somewhere, for which they are badly adapted. Therefore they would have every incentive to use all the techniques of genetic modification, and cyber technology to adapt to this hostile environment.

A new species, perhaps quite different from humans, may emerge as progeny of those pioneers within two or three centuries. I think this is quite possible. They, of course, may download themselves to be electronic. We don’t know how it’ll happen. We all know about the possibilities of advanced intelligence in electronic form. But I think this’ll happen on Mars, or in space, and of course if we think about going further and exploring beyond our solar system, then that’s not really a human enterprise, because human lifetimes are limited — but it is a goal that would be feasible if you were a near-immortal electronic entity. That’s a way in which our remote descendants will perhaps penetrate beyond our solar system.

Ariel: As you’re looking towards these longer term futures, what are you hopeful that we’ll be able to achieve?

Martin: You say we. I think we humans will mainly want to stay on the earth, but I think intelligent life, even if it’s not out there already in space, could spread through the galaxy as a consequence of what happens when a few people go into space, away from the regulators, and adapt themselves to that environment. Of course, one thing which is very important is to be aware of different time scales.

Sometimes you hear people talk about humans watching the death of the sun in five billion years. That’s nonsense, because the timescale for biological evolution by Darwinian selection is about a million years, thousands of times shorter than the lifetime of the sun, but more importantly the time scale for this new kind of intelligent design, when we can redesign humans and make new species, that time scale is a technological time scale. It could be only a century.

It would only take one, or two, or three centuries before we have entities which are very different from human beings if they are created by genetic modification, or downloading to electronic entities. They won’t be normal humans. I think this will happen, and this of course will be a very important stage in the evolution of complexity in our universe, because we will go from the kind of complexity which has emerged by Darwinian selection, to something quite new. This century is very special: it is the century where we might be triggering or jump-starting a new kind of technological evolution, which could spread from our solar system far beyond, on a timescale very short compared to the timescale for Darwinian evolution and the timescale for astronomical evolution.

Ariel: All right. In the book you spend a lot of time also talking about current physics theories and how those could evolve. You spend a little bit of time talking about multiverses. I was hoping you could talk a little bit about why you think understanding that is important for ensuring this hopefully better future?

Martin: Well, it’s only peripherally linked to it. I put that in the book because I was thinking about, what are the challenges, not just challenges of a practical kind, but intellectual challenges? One point I make is that there are some scientific challenges which we are now confronting which may be beyond human capacity to solve, because there’s no particular reason to think that the capacity of our brains is matched to understanding all aspects of reality any more than a monkey can understand quantum theory.

It’s possible that there are some fundamental aspects of nature that humans will never understand, and they will be a challenge for post-humans. I think those challenges are perhaps more likely to be in the realm of complexity, understanding the brain for instance, than in the context of cosmology, although there are challenges in cosmology, one of which is to understand the very early universe, where we may need a new theory like string theory with extra dimensions, et cetera, and we need a theory like that in order to decide whether our big bang was the only one, or whether there were other big bangs and a kind of multiverse.

It’s possible that 50 years from now we will have such a theory and we’ll know the answers to those questions. But it could be that there is such a theory and it’s just too hard for anyone to actually understand and make predictions from. I think these issues are relevant to the intellectual constraints on humans.

Ariel: Is that something that you think, or hope, that things like more advanced artificial intelligence, or however we evolve in the future, will allow “us” to understand some of these more complex ideas?

Martin: Well, I think it’s certainly possible that machines could actually, in a sense, create entities based on physics which we can’t understand. This is perfectly possible, because obviously we know they can vastly out-compute us at the moment, so it could very well be, for instance, that there is a variant of string theory which is correct, and it’s just too difficult for any human mathematician to work out. But it could be that computers could work it out, so we get some answers.

But of course, you then come up against a more philosophical question about whether competence implies comprehension, whether a computer with superhuman capabilities is necessarily going to be self-aware and conscious, or whether it is going to be just a zombie. That’s a separate question which may not affect what it can actually do, but I think it does affect how we react to the possibility that the far future will be dominated by such things.

I remember when I wrote an article in a newspaper about these possibilities, the reaction was bimodal. Some people thought, “Isn’t it great there’ll be these even deeper intellects than human beings out there,” but others who thought these might just be zombies thought it was very sad if there was no entity which could actually appreciate the beauties and wonders of nature in the way we can. It does matter, in a sense, to our perception of this far future, if we think that these entities which may be electronic rather than organic, will be conscious and will have the kind of awareness that we have and which makes us wonder at the beauty of the environment in which we’ve emerged. I think that’s a very important question.

Ariel: I want to pull things back to the shorter term, I guess, but still considering this idea of how technology will evolve. You mentioned that you don’t think it’s a good idea to count on going to Mars as a solution to our problems on Earth, because all of our problems on Earth are still going to be easier to solve here than it is to populate Mars. I think in general we have this tendency to say, “Oh, well in the future we’ll have technology that can fix whatever issue we’re dealing with now, so we don’t need to worry about it.”

I was wondering if you could sort of comment on that approach. To what extent can we say, “Well, most likely technology will have improved and can help us solve these problems,” and to what extent is that a dangerous approach to take?

Martin: Well, clearly technology has allowed us to live much better, more complex lives than we could in the past, and on the whole the net benefits outweigh the downsides, but of course there are downsides, and they stem from the fact that we have some people who are disruptive, and some people who can’t be trusted. If we had a world where everyone could trust everyone else, we could get rid of about a third of the economy I would guess, but I think the main point is that we are very vulnerable.

We have huge advances, clearly, in networking via the Internet, and computers, et cetera, and we may have the Internet of Things within a decade, but of course people worry that this opens up a new kind of even more catastrophic potential for cyber terrorism. That’s just one example, and ditto for biotech which may allow the development of pathogens which kill people of particular races, or have other effects.

There are these technologies which are developing fast, and they can be used to great benefit, but they can be misused in ways that will provide new kinds of horrors that were not available in the past. It’s by no means obvious which way things will go. Will there be a continued net benefit of technology, as I think there has been up till now despite nuclear weapons, et cetera, or will at some stage the downside run ahead of the benefits?

I do worry about the latter being a possibility, particularly because of this amplification factor, the fact that it only takes a few people in order to cause disruption that could cascade globally. The world is so interconnected that we can’t really have a disaster in one region without its affecting the whole world. Jared Diamond has this book called Collapse where he discusses five collapses of particular civilizations while other parts of the world were unaffected.

I think if we really had some catastrophe, it would affect the whole world. It wouldn’t just affect parts. That’s something which is a new downside. The stakes are getting higher as technology advances, and my book is really aimed to say that these developments are very exciting, but they pose new challenges, and I think particularly they pose challenges because a few dissidents can cause more trouble, and I think it’ll make the world harder to govern. It’ll make cities and countries harder to govern, and it’ll create a stronger tension between three things we want to achieve: security, privacy, and liberty. I think that’s going to be a challenge for all future governments.

Ariel: Reading your book I very much got the impression that it was essentially a call to action to address these issues that you just mentioned. I was curious: what do you hope that people will do after reading the book, or learning more about these issues in general?

Martin: Well, first of all I hope that people can be persuaded to think long term. I mentioned that religious groups, for instance, tend to think long term, and the papal encyclical in 2015 I think had a very important effect on the opinion in Latin America, Africa, and East Asia in the lead up to the Paris Climate Conference, for instance. That’s an example where someone from outside traditional politics would have an effect.

What’s very important is that politicians will only respond to an issue if it’s prominent in the press, and prominent in their inbox, and so we’ve got to ensure that people are concerned about this. Of course, I ended the book saying, “What are the special responsibilities of scientists,” because scientists clearly have a special responsibility to ensure that their work is safe, and that the public and politicians are made aware of the implications of any discovery they make.

I think that’s important, even though they should be mindful that their expertise doesn’t extend beyond their special area. That’s a reason why scientific understanding, in a general sense, is something which really has to be universal. This is important for education, because if we want to have a proper democracy where debate about these issues rises above the level of tabloid slogans, then, given that the important issues that we have to discuss involve health, energy, the environment, climate, et cetera, which have scientific aspects, everyone has to have enough feel for those aspects to participate in a debate, and also enough feel for probabilities and statistics to be not easily bamboozled by political arguments.

I think an educated population is essential for proper democracy. Obviously that’s a platitude. But the education needs to include, to a greater extent, an understanding of the scope and limits of science and technology. I make this point at the end and hope that it will lead to a greater awareness of these issues, and of course for people in universities, we have a responsibility because we can influence the younger generation. It’s certainly the case that students and people under 30, who may be alive towards the end of the century, are more mindful of these concerns than the middle-aged and old.

It’s very important that these activities like the Effective Altruism movement, 80,000 Hours, and these other movements among students should be encouraged, because they are going to be important in spreading an awareness of long-term concerns. Public opinion can be changed. We can see the change in attitudes to drunk driving and things like that, which have happened over a few decades, and I think perhaps we can develop a greater environmental sensitivity, so that it comes to be regarded as rather naff or tacky to waste energy and to be extravagant in consumption.

I’m hopeful that attitudes will change in a positive way, but I’m concerned simply because the politics is getting very difficult, because with social media, panic and rumor can spread at the speed of light, and small groups can have a global effect. This makes it very, very hard to ensure that we can keep things stable given that only a few people are needed to cause massive disruption. That’s something which is new, and I think is becoming more and more serious.

Ariel: We’ve been talking a lot about things that we should be worrying about. Do you think there are things that we are currently worrying about that we probably can just let go of, that aren’t as big of risks?

Martin: Well, I think we need to ensure responsible innovation in all new technologies. We’ve talked a lot about bio, and we are very concerned about the misuse of cyber technology. As regards AI, of course there are a whole lot of concerns to be had. I personally think that a takeover by AI would be rather slower than many of the evangelists suspect, but of course we do have to ensure that humans are not victimized by some algorithm which they can’t have explained to them.

I think there is an awareness of this, and I think that what’s being done by your colleagues at MIT has been very important in raising awareness of the need for responsible innovation and ethical application of AI, and also what your group has recognized is that the order in which things happen is very important. If some computer is developed and goes rogue, that’s bad news, whereas if we have a powerful computer which is under our control, then it may help us to deal with these other problems, the problems of the misuse of biotech, et cetera.

The order in which things happen is going to be very important, but I must say I don’t completely share these concerns about machines running away and taking over, because I think there’s a difference: in biological evolution there’s been a drive toward intelligence being favored, but so has aggression. In the case of computers, they may drive towards greater intelligence, but it’s not obvious that that is going to be combined with aggression, because they are going to be evolving by intelligent design, not the survival of the fittest, which is the way that we evolved.

Ariel: What about concerns regarding AI just in terms of being mis-programmed, and AI just being extremely competent? Poor design on our part, poor intelligent design?

Martin: Well, I think in the short term obviously there are concerns about AI making decisions that affect people, and I think most of us would say that we shouldn’t be deprived of our credit rating, or put in prison on the basis of some AI algorithm which can’t be explained to us. We are entitled to have an explanation if something is done to us against our will. That is why it is worrying if too much is going to be delegated to AI.

I also think that the development of self-driving cars, and things of that kind, is going to be constrained by the fact that they are vulnerable to hacking of various kinds. I think it’ll be a long time before we will accept a driverless car on an ordinary road. Controlled environments, yes. In particular lanes on highways, yes. On an ordinary road in a traditional city, it’s not clear that we will ever accept a driverless car. I think I’m frankly less bullish than maybe some of your colleagues about the speed at which the machines will really take over and be accepted, and at which we can trust ourselves to them.

Ariel: As I mentioned at the start, and as you mentioned at the start, you are a techno optimist, for as much as the book is about things that could go wrong it did feel to me like it was also sort of an optimistic look at the future. What are you most optimistic about? What are you most hopeful for looking at both short term and long term, however you feel like answering that?

Martin: I’m hopeful that biotech will have huge benefits for health, will perhaps extend human life spans a bit, but that’s something about which we should feel a bit ambivalent. So, I think health, and also food. If you asked me, what is one of the most benign technologies, it’s to make artificial meat, for instance. It’s clear that we can more easily feed a population of 9 billion on a vegetarian diet than on a traditional diet like Americans consume today.

To take one benign technology, I would say artificial meat is one, and more intensive farming so that we can feed people without encroaching too much on the natural part of the world. I’m optimistic about that. If we think about very long-term trends, then life extension is something which, if it happens too quickly, is obviously going to be hugely disruptive: multi-generation families, et cetera.

Also, even though we will have the capability within a century to change human beings, I think we should constrain that on earth and just let that be done by the few crazy pioneers who go away into space. But if this does happen, then as I say in the introduction to my book, it will be a real game changer in a sense. I make the point that one thing that hasn’t changed over most of human history is human character. Evidence for this is that we can read the literature written by the Greeks and Romans more than 2,000 years ago and resonate with the people, and their characters, and their attitudes and emotions.

It’s not at all clear that in some scenarios, people 200 years from now will resonate in anything other than an algorithmic sense with the attitudes we have as humans today. That will be a fundamental, and very fast, change in the nature of humanity. The question is, can we do something to at least constrain the rate at which that happens, or at least constrain the way in which it happens? But it is going to be almost certainly possible to completely change human mentality, and maybe even human physique, over that time scale. One has only to listen to people like George Church to realize that it’s not crazy to imagine this happening.

Ariel: You mentioned in the book that there’s lots of people who are interested in cryogenics, but you also talked briefly about how there are some negative effects of cryogenics, and the burden that it puts on the future. I was wondering if you could talk really quickly about that?

Martin: There are some people, I know some, who have a medallion around their neck which is an injunction that, if they drop dead, they should be immediately frozen, their blood drained and replaced by liquid nitrogen, and that they should then be stored — there’s a company called Alcor in Arizona that does this — and allegedly revived at some stage when technology has advanced. I find it hard to take this seriously, but they say that, well, the chance may be small, but if they don’t invest this way then the chance of a resurrection is zero.

But I actually think that even if it worked, even if the company didn’t go bust, and sincerely maintained them for centuries and they could then be revived, I still think that what they’re doing is selfish, because they’d be revived into a world that was very different. They’d be refugees from the past, and they’d therefore be imposing an obligation on the future.

We obviously feel an obligation to look after some asylum seeker or refugee, and we might feel the same if someone had been driven out of their home in the Amazonian forest for instance, and had to find a new home, but these refugees from the past, as it were, they’re imposing a burden on future generations. I’m not sure that what they’re doing is ethical. I think it’s rather selfish.

Ariel: I hadn’t thought of that aspect of it. I’m a little bit skeptical of our ability to come back.

Martin: I agree. I think the chances are almost zero. Even if they were stored, et cetera, one would like to see this technology tried on some animal first, to see if you could freeze animals at liquid nitrogen temperatures and then revive them. I think it’s pretty crazy. Then of course, the number of people doing it is fairly small, and some of the companies doing it (there’s one in Russia) are real ripoffs, I think, and won’t survive. But as I say, even if these companies did keep going for a couple of centuries, or however long is necessary, then it’s not clear to me that it’s doing good. I also quoted this nice statement: “What happens if we clone and create a Neanderthal? Do we put him in a zoo or send him to Harvard?” said the professor from Stanford.

Ariel: Those are ethical considerations that I don’t see very often. We’re so focused on what we can do that sometimes we forget. “Okay, once we’ve done this, what happens next?”

I appreciate you being here today. Those were my questions. Was there anything else that you wanted to mention that we didn’t get into?

Martin: One thing we didn’t discuss, which is a serious issue, is the limits of medical treatment, because you can make extraordinary efforts to keep people alive well beyond the point at which they would have died naturally, and to keep alive babies that will never live a normal life, et cetera. Well, I certainly feel that that’s gone too far at both ends of life.

One should not devote so much effort to extremely premature babies, and should allow people to die more naturally. Actually, if you asked me about predictions I’d make about the next 30 or 40 years: first, more vegetarianism; second, more euthanasia.

Ariel: I support both, vegetarianism, and I think euthanasia should be allowed. I think it’s a little bit barbaric that it’s not.

Martin: Yes.

I think we’ve covered quite a lot, haven’t we?

Ariel: I tried to.

Martin: I’d just like to mention that my book does touch a lot of bases for a fairly short book. I hope it will be read not just by scientists. It’s not really a science book, although it emphasizes how scientific ideas are what’s going to determine how our civilization evolves. I’d also like to say that for those in universities, we know students are only with us for an interim period, but universities like MIT, and my University of Cambridge, have convening power to gather people together to address these questions.

I think the value of the centers which we have in Cambridge, and you have at MIT, is that they are groups which are trying to address these very, very big issues, these threats and opportunities. The stakes are so high that if our efforts can really reduce the risk of a disaster by one part in 10,000, we’ve more than earned our keep. I’m very supportive of our Centre for the Study of Existential Risk in Cambridge, and also the Future of Life Institute which you have at MIT.

Given the huge numbers of people who are thinking about small risks like which foods are carcinogenic, and the threats of low radiation doses, et cetera, it’s not at all inappropriate that there should be some groups who are focusing on the more extreme, albeit perhaps rather improbable threats which could affect the whole future of humanity. I think it’s very important that these groups should be encouraged and fostered, and I’m privileged to be part of them.

Ariel: All right. Again, the book is On the Future: Prospects for Humanity by Martin Rees. I do want to add, I agree with what you just said. I think this is a really nice introduction to a lot of the risks that we face. I started taking notes about the different topics that you covered, and I don’t think I got all of them, but there’s climate change, nuclear war, nuclear winter, biodiversity loss, overpopulation, synthetic biology, genome editing, bioterrorism, biological errors, artificial intelligence, cyber technology, cryogenics, and the various topics in physics, and as you mentioned the role that scientists need to play in ensuring a safe future.

I highly recommend the book as a really great introduction to the potential risks that science and technology pose for the future, and to the hopefully much greater benefits they can bring. Martin, thank you again for joining me today.

Martin: Thank you, Ariel, for talking to me.

[end of recorded material]

Doomsday Clock: Two and a Half Minutes to Midnight

Is the world more dangerous than ever?

Today in Washington, D.C., the Bulletin of the Atomic Scientists announced its decision to move the infamous Doomsday Clock thirty seconds closer to doom: It is now two and a half minutes to midnight.

Each year since 1947, the Bulletin of the Atomic Scientists has publicized the symbol of the Doomsday Clock to convey how close we are to destroying our civilization with dangerous technologies of our own making. As the Bulletin perceives our existential threats to grow, the minute hand inches closer to midnight.

For the past two years the Doomsday Clock has been set at three minutes to midnight.

But now, in the face of an increasingly unstable political climate, the Doomsday Clock is the closest to midnight it has been since 1953.

The clock struck two minutes to midnight in 1953 at the start of the nuclear arms race, but what makes 2017 uniquely dangerous for humanity is the variety of threats we face. Not only is there growing uncertainty with nuclear weapons and the leaders that control them, but the existential threats of climate change, artificial intelligence, cybersecurity, and biotechnology continue to grow.

As the Bulletin notes, “The challenge remains whether societies can develop and apply powerful technologies for our welfare without also bringing about our own destruction through misapplication, madness, or accident.”

Rachel Bronson, the Executive Director and publisher of the Bulletin of the Atomic Scientists, said: “This year’s Clock deliberations felt more urgent than usual. In addition to the existential threats posed by nuclear weapons and climate change, new global realities emerged, as trusted sources of information came under attack, fake news was on the rise, and words were used by a President-elect of the United States in cavalier and often reckless ways to address the twin threats of nuclear weapons and climate change.”

Lawrence Krauss, a Chair on the Board of Sponsors, warned viewers that “technological innovation is occurring at a speed that challenges society’s ability to keep pace.” While these technologies offer unprecedented opportunities for humanity to thrive, they have proven difficult to control and thus demand responsible leadership.

Given the difficulty of controlling these increasingly capable technologies, Krauss discussed the importance of science for informing policy. Scientists and groups like the Bulletin don’t seek to make policy, but their research and evidence must support and inform policy. “Facts are stubborn things,” Krauss explained, “and they must be taken into account if the future of humanity is to be preserved. Nuclear weapons and climate change are precisely the sort of complex existential threats that cannot be properly managed without access to and reliance on expert knowledge.”

The Bulletin ended their public statement today with a strong message: “It is two and a half minutes to midnight, the Clock is ticking, global danger looms. Wise public officials should act immediately, guiding humanity away from the brink. If they do not, wise citizens must step forward and lead the way.”

You can read the Bulletin of the Atomic Scientists’ full report here.

Podcast: FLI 2016 – A Year In Review

For FLI, 2016 was a great year, full of our own success, but also great achievements from so many of the organizations we work with. Max, Meia, Anthony, Victoria, Richard, Lucas, David, and Ariel discuss what they were most excited to see in 2016 and what they’re looking forward to in 2017.

AGUIRRE: I’m Anthony Aguirre. I am a professor of physics at UC Santa Cruz, and I’m one of the founders of the Future of Life Institute.

STANLEY: I’m David Stanley, and I’m currently working with FLI as a Project Coordinator/Volunteer Coordinator.

PERRY: My name is Lucas Perry, and I’m a Project Coordinator with the Future of Life Institute.

TEGMARK: I’m Max Tegmark, and I have the fortune to be the President of the Future of Life Institute.

CHITA-TEGMARK: I’m Meia Chita-Tegmark, and I am a co-founder of the Future of Life Institute.

MALLAH: Hi, I’m Richard Mallah. I’m the Director of AI Projects at the Future of Life Institute.

KRAKOVNA: Hi everyone, I am Victoria Krakovna, and I am one of the co-founders of FLI. I’ve recently taken up a position at Google DeepMind working on AI safety.

CONN: And I’m Ariel Conn, the Director of Media and Communications for FLI. 2016 has certainly had its ups and downs, and so at FLI, we count ourselves especially lucky to have had such a successful year. We’ve continued to progress with the field of AI safety research, we’ve made incredible headway with our nuclear weapons efforts, and we’ve worked closely with many amazing groups and individuals. On that last note, much of what we’ve been most excited about throughout 2016 is the great work these other groups in our fields have also accomplished.

Over the last couple of weeks, I’ve sat down with our founders and core team to rehash their highlights from 2016 and also to learn what they’re all most looking forward to as we move into 2017.

To start things off, Max gave a summary of the work that FLI does and why 2016 was such a success.

TEGMARK: What I was most excited by in 2016 was the overall sense that people are taking seriously this idea – that we really need to win this race between the growing power of our technology and the wisdom with which we manage it. Every single way in which 2016 is better than the Stone Age is because of technology, and I’m optimistic that we can create a fantastic future with tech as long as we win this race. But in the past, the way we’ve kept one step ahead is always by learning from mistakes. We invented fire, messed up a bunch of times, and then invented the fire extinguisher. We at the Future of Life Institute feel that that strategy of learning from mistakes is a terrible idea for more powerful tech, like nuclear weapons, artificial intelligence, and things that can really alter the climate of our globe.

Now, in 2016 we saw multiple examples of people trying to plan ahead and to avoid problems with technology instead of just stumbling into them. In April, we had world leaders getting together and signing the Paris Climate Accords. In November, the United Nations General Assembly voted to start negotiations about nuclear weapons next year. The question is whether they should actually ultimately be phased out; whether the nations that don’t have nukes should work towards stigmatizing building more of them – with the idea that 14,000 is way more than anyone needs for deterrence. And – just the other day – the United Nations also decided to start negotiations on the possibility of banning lethal autonomous weapons, which is another arms race that could be very, very destabilizing. And if we keep this positive momentum, I think there’s really good hope that all of these technologies will end up having mainly beneficial uses.

Today, we think of our biologist friends as mainly responsible for the fact that we live longer and healthier lives, and not as those guys who make the bioweapons. We think of chemists as providing us with better materials and new ways of making medicines, not as the people who built chemical weapons and are all responsible for global warming. We think of AI scientists as – I hope, when we look back on them in the future – as people who helped make the world better, rather than the ones who just brought on the AI arms race. And it’s very encouraging to me that people in general – but also the scientists in all these fields – are really stepping up and saying, “Hey, we’re not just going to invent this technology, and then let it be misused. We’re going to take responsibility for making sure that the technology is used beneficially.”

CONN: And beneficial AI is what FLI is primarily known for. So what did the other members have to say about AI safety in 2016? We’ll hear from Anthony first.

AGUIRRE: I would say that what has been great to see over the last year or so is the AI safety and beneficiality research field really growing into an actual research field. When we ran our first conference a couple of years ago, there were these tiny communities who had been thinking about the impact of artificial intelligence in the future and in the long-term future. They weren’t really talking to each other; they weren’t really doing much actual research – there wasn’t funding for it. So, in the last few years we’ve seen that transform into something where it takes a massive effort to keep track of all the stuff that’s being done in this space now. All the papers that are coming out, the research groups – you sort of used to be able to just find them all, easily identified. Now, there’s this huge worldwide effort and long lists, and it’s difficult to keep track of. And that’s an awesome problem to have.

As someone who’s not in the field, but sort of watching the dynamics of the research community, that’s what’s been so great to see. A research community that wasn’t there before really has started, and I think in the past year we’re seeing the actual results of that research start to come in. You know, it’s still early days. But it’s starting to come in, and we’re starting to see papers that have been basically created using these research talents and the funding that’s come through the Future of Life Institute. It’s been super gratifying. And it’s a fairly large amount of money – though fairly small compared to the total amount of research funding in artificial intelligence or other fields – yet because the field was so funding-starved and talent-starved before, it’s just made an enormous impact. And that’s been nice to see.

CONN: Not surprisingly, Richard was equally excited to see AI safety becoming a field of ever-increasing interest for many AI groups.

MALLAH: I’m most excited by the continued mainstreaming of AI safety research. There are more and more publications coming out by places like DeepMind and Google Brain that have really lent additional credibility to the space, as well as a continued uptake of more and more professors, and postdocs, and grad students from a wide variety of universities entering this space. And, of course, OpenAI has come out with a number of useful papers and resources.

I’m also excited that governments have really realized that this is an important issue. So, while the White House reports have come out recently focusing more on near-term AI safety research, they did note that longer-term concerns like superintelligence are not necessarily unreasonable for later this century. And that they do support – right now – funding safety work that can scale toward the future, which is really exciting. We really need more funding coming into the community for that type of research. Likewise, other governments – like the U.K., Japan, and Germany, and others around the world – have all made very positive statements about AI safety in one form or another.

CONN: In addition to seeing so many other groups get involved in AI safety, Victoria was also pleased to see FLI taking part in so many large AI conferences.

KRAKOVNA: I think I’ve been pretty excited to see us involved in these AI safety workshops at major conferences. So on the one hand, our conference in Puerto Rico that we organized ourselves was very influential and helped to kick-start making AI safety more mainstream in the AI community. On the other hand, it felt really good in 2016 to complement that with having events that are actually part of major conferences that were co-organized by a lot of mainstream AI researchers. I think that really was an integral part of the mainstreaming of the field. For example, I was really excited about the Reliable Machine Learning workshop at ICML that we helped to make happen. I think that was something that was quite positively received at the conference, and there was a lot of good AI safety material there.

CONN: And of course, Victoria was also pretty excited about some of the papers that were published this year connected to AI safety, many of which received at least partial funding from FLI.

KRAKOVNA: There were several excellent papers in AI safety this year, addressing core problems in safety for machine learning systems. For example, there was a paper from Stuart Russell’s lab published at NIPS, on cooperative inverse reinforcement learning (IRL). This is about teaching AI what humans want – how to train an RL algorithm to learn the right reward function that reflects what humans want it to do. DeepMind and FHI published a paper at UAI on safely interruptible agents, which formalizes what it means for an RL agent not to have incentives to avoid shutdown. MIRI made an impressive breakthrough with their paper on logical inductors. I’m super excited about all these great papers coming out, and that our grant program contributed to these results.
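
For readers who want a concrete picture of the reward-learning idea Victoria describes, here is a minimal, hypothetical sketch in Python of an agent inferring which reward function a human is optimizing by watching the human’s choices. It is not code from any of the papers mentioned above; it simply assumes a Boltzmann-rational model of human behavior, and every candidate reward, name, and number in it is illustrative.

```python
# Toy illustration (not from the papers above): inferring which of several
# candidate reward functions a human is optimizing, from observed choices.
# Assumes a Boltzmann-rational human: P(action | reward) is proportional
# to exp(beta * reward(action)).

import math

# Hypothetical candidate reward functions over three possible actions.
CANDIDATE_REWARDS = {
    "prefers_A": {"A": 1.0, "B": 0.0, "C": 0.0},
    "prefers_B": {"A": 0.0, "B": 1.0, "C": 0.0},
    "prefers_C": {"A": 0.0, "B": 0.0, "C": 1.0},
}

BETA = 2.0  # rationality coefficient: higher means the human picks the best action more reliably


def action_likelihood(action, reward):
    """Probability the human picks `action` if they are optimizing `reward`."""
    weights = {a: math.exp(BETA * r) for a, r in reward.items()}
    return weights[action] / sum(weights.values())


def posterior_over_rewards(observed_actions):
    """Bayesian update over candidate rewards given a list of observed human actions."""
    # Start from a uniform prior over the candidates.
    posterior = {name: 1.0 / len(CANDIDATE_REWARDS) for name in CANDIDATE_REWARDS}
    for action in observed_actions:
        for name, reward in CANDIDATE_REWARDS.items():
            posterior[name] *= action_likelihood(action, reward)
        # Renormalize so the beliefs sum to one.
        norm = sum(posterior.values())
        posterior = {name: p / norm for name, p in posterior.items()}
    return posterior


if __name__ == "__main__":
    # After watching the human choose "B" three times, the agent's belief
    # concentrates on the candidate reward that values B most.
    print(posterior_over_rewards(["B", "B", "B"]))
```

Running the sketch prints a posterior that concentrates on the reward function favoring the observed choice; that basic inference pattern, inferring rewards from behavior, is what work such as cooperative IRL builds on in far richer interactive settings.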

CONN: For Meia, the excitement about AI safety went beyond just the technical aspects of artificial intelligence.

CHITA-TEGMARK: I am very excited about the dialogue that FLI has catalyzed – and also engaged in – throughout 2016, and especially regarding the impact of technology on society. My training is in psychology; I’m a psychologist. So I’m very interested in the human aspect of technology development. I’m very excited about questions like, how are new technologies changing us? How ready are we to embrace new technologies? Or how our psychological biases may be clouding our judgement about what we’re creating and the technologies that we’re putting out there. Are these technologies beneficial for our psychological well-being, or are they not?

So it has been extremely interesting for me to see that these questions are being asked more and more, especially by artificial intelligence developers and also researchers. I think it’s so exciting to be creating technologies that really force us to grapple with some of the most fundamental aspects, I would say, of our own psychological makeup. For example, our ethical values, our sense of purpose, our well-being, maybe our biases and shortsightedness and shortcomings as biological human beings. So I’m definitely very excited about how the conversation regarding technology – and especially artificial intelligence – has evolved over the last year. I like the way it has expanded to capture this human element, which I find so important. But I’m also so happy to feel that FLI has been an important contributor to this conversation.

CONN: Meanwhile, as Max described earlier, FLI has also gotten much more involved in decreasing the risk of nuclear weapons, and Lucas helped spearhead one of our greatest accomplishments of the year.

PERRY: One of the things that I was most excited about was our success with our divestment campaign. After a few months, we had great success in our own local Boston area with helping the City of Cambridge to divest its $1 billion portfolio from nuclear weapon producing companies. And we see this as a really big and important victory within our campaign to help institutions, persons, and universities to divest from nuclear weapons producing companies.

CONN: And in order to truly be effective we need to reach an international audience, which is something Dave has been happy to see grow this year.

STANLEY: I’m mainly excited about – at least, in my work – the increasing involvement and response we’ve had from the international community in terms of reaching out about these issues. I think it’s pretty important that we engage the international community more, and not just academics. Because these issues – things like nuclear weapons and the increasing capabilities of artificial intelligence – really will affect everybody. And they seem to be really underrepresented in mainstream media coverage as well.

So far, we’ve had pretty good responses just in terms of volunteers from many different countries around the world being interested in getting involved to help raise awareness in their respective communities, whether through helping develop apps for us, or translation, or just promoting these ideas through social media in their own communities.

CONN: Many FLI members also participated in both local and global events and projects, like the following we’re about to hear from Victoria, Richard, Lucas, and Meia.

KRAKOVNA: The EAGX Oxford Conference was a fairly large conference. It was very well organized, and we had a panel there with Demis Hassabis, Nate Soares from MIRI, Murray Shanahan from Imperial, Toby Ord from FHI, and myself. I feel like overall, that conference did a good job of, for example, connecting the local EA community with the people at DeepMind, who are really thinking about AI safety concerns like Demis and also Sean Legassick, who also gave a talk about the ethics and impacts side of things. So I feel like that conference overall did a good job of connecting people who are thinking about these sorts of issues, which I think is always a great thing.  

MALLAH: I was involved in this endeavor with IEEE regarding autonomy and ethics in autonomous systems, sort of representing FLI’s positions on things like autonomous weapons and long-term AI safety. One thing that came out this year – just a few days ago, actually, due to this work from IEEE – is that the UN actually took the report pretty seriously, and it may have influenced their decision to take up the issue of autonomous weapons formally next year. That’s kind of heartening.

PERRY: A few different things that I really enjoyed doing were giving a few different talks at Duke and Boston College, and a local effective altruism conference. I’m also really excited about all the progress we’re making on our nuclear divestment application. So this is an application that will allow anyone to search their mutual fund and see whether or not their mutual funds have direct or indirect holdings in nuclear weapons-producing companies.

CHITA-TEGMARK:  So, a wonderful moment for me was at the conference organized by Yann LeCun in New York at NYU, when Daniel Kahneman, one of my thinker-heroes, asked a very important question that really left the whole audience in silence. He asked, “Does this make you happy? Would AI make you happy? Would the development of a human-level artificial intelligence make you happy?” I think that was one of the defining moments, and I was very happy to participate in this conference.

Later on, David Chalmers, another one of my thinker-heroes – this time, not the psychologist but the philosopher – organized another conference, again at NYU, trying to bring philosophers into this very important conversation about the development of artificial intelligence. And again, I felt there too, that FLI was able to contribute and bring in this perspective of the social sciences on this issue.

CONN: Now, with 2016 coming to an end, it’s time to turn our sights to 2017, and FLI is excited for this new year to be even more productive and beneficial.

TEGMARK: We at the Future of Life Institute are planning to focus primarily on artificial intelligence, and on reducing the risk of accidental nuclear war in various ways. We’re kicking off by having an international conference on artificial intelligence, and then we want to continue throughout the year providing really high-quality and easily accessible information on all these key topics, to help inform on what happens with climate change, with nuclear weapons, with lethal autonomous weapons, and so on.

And looking ahead here, I think it’s important right now – especially since a lot of people are very stressed out about the political situation in the world, about terrorism, and so on – to not ignore the positive trends and the glimmers of hope we can see as well.

CONN: As optimistic as FLI members are about 2017, we’re all also especially hopeful and curious to see what will happen with continued AI safety research.

AGUIRRE: I would say I’m looking forward to seeing in the next year more of the research that comes out, and really sort of delving into it myself, and understanding how the field of artificial intelligence and artificial intelligence safety is developing. And I’m very interested in this from the forecast and prediction standpoint.

I’m interested in trying to draw some of the AI community into really understanding how artificial intelligence is unfolding – in the short term and the medium term – as a way to understand, how long do we have? Is it, you know, if it’s really infinity, then let’s not worry about that so much, and spend a little bit more on nuclear weapons and global warming and biotech, because those are definitely happening. If human-level AI were 8 years away… honestly, I think we should be freaking out right now. And most people don’t believe that, I think most people are in the middle it seems, of thirty years or fifty years or something, which feels kind of comfortable. Although it’s not that long, really, on the big scheme of things. But I think it’s quite important to know now, which is it? How fast are these things, how long do we really have to think about all of the issues that FLI has been thinking about in AI? How long do we have before most jobs in industry and manufacturing are replaceable by a robot being slotted in for a human? That may be 5 years, it may be fifteen… It’s probably not fifty years at all. And having a good forecast on those good short-term questions I think also tells us what sort of things we have to be thinking about now.

And I’m interested in seeing how this massive AI safety community that’s started develops. It’s amazing to see centers kind of popping up like mushrooms after a rain all over and thinking about artificial intelligence safety. This partnership on AI between Google and Facebook and a number of other large companies getting started. So to see how those different individual centers will develop and how they interact with each other. Is there an overall consensus on where things should go? Or is it a bunch of different organizations doing their own thing? Where will governments come in on all of this? I think it will be interesting times. So I look forward to seeing what happens, and I will reserve judgement in terms of my optimism.

KRAKOVNA: I’m really looking forward to AI safety becoming even more mainstream, and even more of the really good researchers in AI giving it serious thought. Something that happened in the past year that I was really excited about, that I think is also pointing in this direction, is the research agenda that came out of Google Brain called “Concrete Problems in AI Safety.” And I think I’m looking forward to more things like that happening, where AI safety becomes sufficiently mainstream that people who are working in AI just feel inspired to do things like that and just think from their own perspectives: what are the important problems to solve in AI safety? And work on them.

I’m a believer in the portfolio approach with regards to AI safety research, where I think we need a lot of different research teams approaching the problems from different angles and making different assumptions, and hopefully some of them will make the right assumption. I think we are really moving in the direction in terms of more people working on these problems, and coming up with different ideas. And I look forward to seeing more of that in 2017. I think FLI can also help continue to make this happen.

MALLAH: So, we’re in the process of fostering additional collaboration among people in the AI safety space. And we will have more announcements about this early next year. We’re also working on resources to help people better visualize and better understand the space of AI safety work, and the opportunities there and the work that has been done. Because it’s actually quite a lot.

I’m also pretty excited about fostering continued theoretical work and practical work in making AI more robust and beneficial. The work in value alignment, for instance, is not something we see supported in mainstream AI research. And this is something that is pretty crucial to the way that advanced AIs will need to function. It won’t be very explicit instructions to them; they’ll have to be making decisions based on what they think is right. And what is right? Even structuring the way to think about what is right requires some more research.

STANLEY: We’ve had pretty good success at FLI in the past few years helping to legitimize the field of AI safety. And I think it’s going to be important because AI is playing a large role in industry and there’s a lot of companies working on this, and not just in the US. So I think increasing international awareness about AI safety is going to be really important.

CHITA-TEGMARK: I believe that the AI community has raised some very important questions in 2016 regarding the impact of AI on society. I feel like 2017 should be the year to make progress on these questions, and actually research them and have some answers to them. For this, I think we need more social scientists – along with people from other disciplines – to join this effort of really systematically investigating what would be the optimal impact of AI on people. I hope that in 2017 we will have more research initiatives, and that we will attempt to systematically study other burning questions regarding the impact of AI on society. Some examples are: how can we ensure people’s psychological well-being while AI creates lots of displacement on the job market, as many people predict? How do we optimize engagement with technology, and withdrawal from it also? Will some people be left behind, like the elderly or the economically disadvantaged? How will this affect them, and how will this affect society at large?

What about withdrawal from technology? What about satisfying our need for privacy? Will we be able to do that, or is the price of having more and more customized technologies and more and more personalization of the technologies we engage with… will that mean that we will have no privacy anymore, or that our expectations of privacy will be very seriously violated? I think these are some very important questions that I would love to get some answers to. And my wish, and also my resolution, for 2017 is to see more progress on these questions, and to hopefully also be part of this work and answering them.

PERRY: In 2017 I’m very interested in pursuing the landscape of different policy and principle recommendations from different groups regarding artificial intelligence. I’m also looking forward to expanding our nuclear divestment campaign by trying to introduce divestment to new universities, institutions, communities, and cities.

CONN: In fact, some experts believe nuclear weapons pose a greater threat now than at any time during our history.

TEGMARK: I personally feel that the greatest threat to the world in 2017 is one that the newspapers almost never write about. It’s not terrorist attacks, for example. It’s the small but horrible risk that the U.S. and Russia for some stupid reason get into an accidental nuclear war against each other. We have 14,000 nuclear weapons, and this war has almost happened many, many times. So, actually what’s quite remarkable and really gives a glimmer of hope is that – however people may feel about Putin and Trump – the fact is they are both signaling strongly that they are eager to get along better. And if that actually pans out and they manage to make some serious progress in nuclear arms reduction, that would make 2017 the best year for nuclear weapons we’ve had in a long, long time, reversing this trend of ever greater risks with ever more lethal weapons.

CONN: Some FLI members are also looking beyond nuclear weapons and artificial intelligence, as I learned when I asked Dave about other goals he hopes to accomplish with FLI this year.

STANLEY: Definitely having the volunteer team – particularly the international volunteers – continue to grow, and then scale things up. Right now, we have a fairly committed core of people who are helping out, and we think that they can start recruiting more people to help out in their little communities, and really making this stuff accessible. Not just to academics, but to everybody. And that’s also reflected in the types of people we have working for us as volunteers. They’re not just academics. We have programmers, linguists, people with just high school degrees all the way up to Ph.D.s, so I think it’s pretty good that this varied group of people can get involved and contribute, and also reach out to other people they can relate to.

CONN: In addition to getting more people involved, Meia also pointed out that one of the best ways we can help ensure a positive future is to continue to offer people more informative content.

CHITA-TEGMARK: Another thing that I’m very excited about regarding our work here at the Future of Life Institute is this mission of empowering people with information. I think information is very powerful and can change the way people approach things: they can change their beliefs, their attitudes, and their behaviors as well. And by creating ways in which information can be readily distributed to the people, and with which they can engage very easily, I hope that we can create changes. For example, we’ve had a series of different apps regarding nuclear weapons that I think have contributed a lot to people’s knowledge and have brought this issue to the forefront of their thinking.

CONN: Yet as important as it is to highlight the existential risks we must address to keep humanity safe, perhaps it’s equally important to draw attention to the incredible hope we have for the future if we can solve these problems. Which is something both Richard and Lucas brought up for 2017.

MALLAH: I’m excited about trying to foster more positive visions of the future, so focusing on the existential hope aspects of the future, which are kind of the flip side of existential risks. So we’re looking at various ways of getting people to be creative about understanding some of the possibilities, and how to differentiate the paths between the risks and the benefits.

PERRY: Yeah, I’m also interested in creating and generating a lot more content that has to do with existential hope. Given the current global political climate, it’s all the more important to focus on how we can make the world better.

CONN: And on that note, I want to mention one of the most amazing things I discovered this past year. It had nothing to do with technology, and everything to do with people. Since starting at FLI, I’ve met countless individuals who are dedicating their lives to trying to make the world a better place. We may have a lot of problems to solve, but with so many groups focusing solely on solving them, I’m far more hopeful for the future. There are truly too many individuals that I’ve met this year to name them all, so instead, I’d like to provide a rather long list of groups and organizations I’ve had the pleasure to work with this year. A link to each group can be found at futureoflife.org/2016, and I encourage you to visit them all to learn more about the wonderful work they’re doing. In no particular order, they are:

Machine Intelligence Research Institute

Future of Humanity Institute

Global Catastrophic Risk Institute

Center for the Study of Existential Risk

Ploughshares Fund

Bulletin of the Atomic Scientists

Open Philanthropy Project

Union of Concerned Scientists

The William Perry Project

ReThink Media

Don’t Bank on the Bomb

Federation of American Scientists

Massachusetts Peace Action

IEEE (Institute of Electrical and Electronics Engineers)

Center for Human-Compatible Artificial Intelligence

Center for Effective Altruism

Center for Applied Rationality

Foresight Institute

Leverhulme Center for the Future of Intelligence

Global Priorities Project

Association for the Advancement of Artificial Intelligence

International Joint Conference on Artificial Intelligence

Partnership on AI

The White House Office of Science and Technology Policy

The Future Society at Harvard Kennedy School

 

I couldn’t be more excited to see what 2017 holds in store for us, and all of us at FLI look forward to doing all we can to help create a safe and beneficial future for everyone. But to end on an even more optimistic note, I turn back to Max.

TEGMARK: Finally, I’d like – because I spend a lot of my time thinking about our universe – to remind everybody that we shouldn’t just be focused on the next election cycle. We have not decades, but billions of years of potentially awesome future for life, on Earth and far beyond. And it’s so important to not let ourselves get so distracted by our everyday little frustrations that we lose sight of these incredible opportunities that we all stand to gain from if we can get along, and focus, and collaborate, and use technology for good.

Effective Altruism and Existential Risks: a talk with Lucas Perry

What are the greatest problems of our time? And how can we best address them?

FLI’s Lucas Perry recently spoke at Duke University and Boston College to address these questions. Perry presented two major ideas in these talks – effective altruism and existential risk – and explained how they work together.

As Perry explained to his audiences, effective altruism is a movement in philanthropy that seeks to use evidence, analysis, and reason to take actions that will do the greatest good in the world. Since each person has limited resources, effective altruists argue it is essential to focus resources where they can do the most good. As such, effective altruists tend to focus on neglected, large-scale problems where their efforts can yield the greatest positive change.

Through various organizations, effective altruists focus on issues including poverty alleviation, animal suffering, and global health. Nonprofits such as 80,000 Hours help people find high-impact careers, and charity evaluators such as GiveWell investigate and rank the most effective ways to donate money. These groups and many others are all dedicated to using evidence to address neglected problems that cause, or threaten to cause, immense suffering.

Some of these neglected problems happen to be existential risks – they represent threats that could permanently and drastically harm intelligent life on Earth. Since existential risks, by definition, put our very existence at risk, and have the potential to create immense suffering, effective altruists consider these risks extremely important to address.

Perry explained to his audiences that the greatest existential risks arise from humanity’s ability to manipulate the world through technology. These include risks from artificial intelligence, nuclear war, and synthetic biology. But Perry also cautioned that some of the greatest existential threats might remain unknown. As such, he and other effective altruists believe the topic deserves more attention.

Perry learned about these issues while he was in college, which helped redirect his own career goals, and he wants to share this opportunity with other students. He explains, “In order for effective altruism to spread and the study of existential risks to be taken seriously, it’s critical that the next generation of thought leaders are in touch with their importance.”

College students often want to do more to address humanity’s greatest threats, but many students are unsure where to go. Perry hopes that learning about effective altruism and existential risks might give them direction. Realizing the urgency of existential risks and how underfunded they are – academics spend more time on the dung fly than on existential risks – can motivate students to use their education where it can make a difference.

As such, Perry’s talks are a small effort to open the field to students who want to help the world and also crave a sense of purpose. He provided concrete strategies to show students where they can be most effective, whether they choose to donate money, directly work with issues, do research, or advocate.

By understanding the intersection between effective altruism and existential risks, these students can do their part to ensure that humanity continues to prosper in the face of our greatest threats yet.

As Perry explains, “When we consider what existential risks represent for the future of intelligent life, it becomes clear that working to mitigate them is an essential part of being an effective altruist.”

Elon Musk’s Plan to Colonize Mars

In an announcement to the International Astronautical Congress on Tuesday, Elon Musk unveiled his Interplanetary Transport System (ITS). His goal: enable humans to establish a city on Mars within the next 50 to 100 years.

Speaking to an energetic crowd in Guadalajara, Mexico, Musk explained that the alternative to staying on Earth, which is at risk of a “doomsday event,” is to “become a spacefaring civilization and a multi-planet species.” As he told Aeon magazine in 2014, “I think there is a strong humanitarian argument for making life multi-planetary in order to safeguard the existence of humanity in the event that something catastrophic were to happen.” Colonizing Mars, he believes, is one of our best options.

In his speech, Musk discussed the details of his transport system. The ITS, developed by SpaceX, would use the most powerful rocket ever built, and at 400 feet tall, it would also be the largest spaceflight system ever created. The spaceship would fit 100-200 people and would feature movie theaters, lecture halls, restaurants, and other amenities to make the approximately three-month journey enjoyable. “You’ll have a great time,” said Musk.

Musk explained four key issues that must be addressed to make colonization of Mars possible: the rockets need to be fully reusable, they need to be able to refuel in orbit, there must be a way to harness energy on Mars, and we must figure out more efficient ways of traveling. If SpaceX succeeds in meeting these requirements, the rockets could travel to Mars and return to Earth to pick up more colonists for the journey. Musk explained that the same rockets could be used up to a dozen times, bringing more and more people to colonize the Red Planet.

Despite his enthusiasm for the ITS, Musk was careful to acknowledge that there are still many difficulties and obstacles in reaching this goal. Currently, getting to Mars would cost roughly $10 billion per person, far beyond what anyone could reasonably afford. However, Musk thinks that the reusable rocket technology could significantly decrease this cost. “If we can get the cost of moving to Mars to the cost of a median house price in the U.S., which is around $200,000, then I think the probability of establishing a self-sustaining civilization is very high,” Musk noted.
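For scale, a rough back-of-the-envelope calculation (derived from the figures above, not a number quoted directly from the speech) shows the size of the cost reduction Musk is targeting:

$$
\frac{\$10{,}000{,}000{,}000 \text{ per person today}}{\$200{,}000 \text{ per person target}} \approx 50{,}000\times \text{ reduction in cost per person}
$$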

But this viability requires significant investment from both the government and the private sector. Musk explained, “I know there’s a lot of people in the private sector who are interested in helping fund a base on Mars and then perhaps there will be interest on the government sector side to also do that. Ultimately, this is going to be a huge public-private partnership.” This speech, and the attention it has garnered, could help make such investment and cooperation possible.

Many questions remain about how to sustain human life on Mars and whether or not SpaceX can make this technology viable, as even Musk admits. He explained, “This is a huge amount of risk, will cost a lot, and there’s a good chance we don’t succeed. But we’re going to try and do our best. […] What I really want to do here is to make Mars seem possible — make it seem as though it’s something that we could do in our lifetimes, and that you can go.”

Musk’s full speech can be found here.

Op-ed: Education for the Future – Curriculum Redesign

“Adequately preparing for the future means actively creating it: the future is not inevitable, nor something we are pulled into.”

What Should Students Learn for the 21st Century?

At the heart of ensuring the best possible future lies education. Experts may argue over what exactly the future will bring, but most agree that the job market, the economy, and society as a whole are about to see major changes.

Automation and artificial intelligence are on the rise, interactions are increasingly global, and technology is rapidly changing the landscape. Many worry that the education system is increasingly outdated and unable to prepare students for the world they’ll graduate into – for life and employability.

Will students have the skills and character necessary to compete for new jobs? Will they easily adapt to new technologies?

Charles Fadel, founder of the Center for Curriculum Redesign, considers six factors – three human and three technological – that will require a diverse set of individual abilities and competencies, plus an increased collaboration among cultures. In the following article, Fadel explains these factors and why today’s curriculum may not be sufficient to prepare students for the future.

 

Human Factors

First, there are three human factors affecting our future: (1) increased human longevity, (2) global connectivity, and (3) environmental stresses.

Increased Human Longevity

The average human lifespan is lengthening, which will produce collective changes in societal dynamics, including better institutional memory and more intergenerational interactions. It will also bring increased resistance to change. It may have economic implications as well, such as multiple careers over one’s lifespan and conflicts over resource allocation between younger and older generations. Such a context will require intergenerational sensitivity and a collective systems mindset in which each person balances his or her personal and societal needs.

Global Connectivity

The rapid increase in the world’s interconnectedness has had many compounding effects, including an exponential increase in the speed at which information and ideas spread and more complex interactions on a global scale. Information processing has already had profound effects on how we work and think. It also brings increased concerns about data ownership and trust, and it demands attention to, and reorganization of, present societal structures. Thriving in this context will require tolerance of a diversity of cultures, practices, and world views, as well as the ability to leverage this connectedness.

Environmental Stresses

Along with our many unprecedented technological advances, human society is using up our environment at an unprecedented rate, consuming more of it and throwing more of it away. So far, our technologies have wrung from nature an extraordinary bounty of food, oil, and materials. Scientists calculate that humans use approximately “40 percent of potential terrestrial [plant] production” for themselves (Global Change, 2008). What’s more, we have been mining the remains of plants and animals from hundreds of millions of years ago in the form of fossil fuels in the relatively short period of a few centuries. Without technology, we would have no chance of supporting a population of one billion people, much less seven billion and climbing.

Changing dynamics and demographics will, by necessity, require greater cooperation and sensitivity among nations and cultures. Such needs suggest reframing notions of happiness beyond a country’s gross domestic product (a key factor used in analyses of cultural or national quality of life) (Revkin, 2005) and expanding business models to include collaboration with a shared spirit of humanity for collective well-being. They also demand that organizations be able to pursue science with an ethical approach to societal solutions.

Technology Factors

Three technology factors will also condition our future: (1) the rise of smart machines and systems, (2) the explosive growth of data and new media, and (3) the possibility of amplified humans.

The Rise of Smart Machines and Systems

While the creation of new technologies always leads to changes in a society, the increasing development and diffusion of smart machines—that is, technologies that can perform tasks once considered executable only by humans—has led to increased automation and ‘offshorability’ of jobs and production of goods. In turn, this shift creates dramatic changes in the workforce and greater economic instability, with uneven employment. At the same time, it pushes us toward overdependence on technology—potentially decreasing individual resourcefulness. These shifts have placed an emphasis on non-automatable skills (such as synthesis and creativity), along with a move toward a do-it-yourself maker economy and a proactive human-technology balance (that is, one that permits us to choose what, when, and how to rely on technology).

The Explosive Growth of Data and New Media

The influx of digital technologies and new media has allowed for the generation of “big data,” which brings with it tremendous advantages and concerns. Massive data sets generated by millions of individuals give us the ability to leverage those data to create simulations and models, allowing for a deeper understanding of human behavioral patterns and, ultimately, for evidence-based decision making.

At the same time, however, such big data production and practices open the door to privacy concerns and abuses. Harnessing these advantages while mitigating the potential negative outcomes will require better collective awareness of data, along with skeptical inquiry and watchfulness for potential commercial or governmental abuses.

The Possibility of Amplified Humans

Advances in prosthetic, genetic, and pharmacological supports are redefining human capabilities while blurring the lines between disability and enhancement. These changes have the potential to create “amplified humans.” At the same time, increasing innovation in virtual reality may lead to confusion about what is real versus virtual and what can be trusted. Such a merging of the natural and the technological requires us to reconceptualize what it means to be human with technological augmentations and to refocus on the real world, not just the digital one.

Conclusion

Curricula worldwide have often been tweaked, but they have never been completely redesigned for the comprehensive education of knowledge, skills, character, and meta-learning.

In a rapidly changing world, it is easy to get focused on current requirements, needs, and demands. Yet adequately preparing for the future means actively creating it: the future is not inevitable, nor something we are pulled into. There is a feedback loop between what the future could be and what we want it to be, and we have to deliberately choose to construct the reality we wish to experience. We may see global trends and their effects creating the ever-present future on the horizon, but it is up to us to actively engage in co-constructing that future.

For more analysis of the question and implications for education, please see: http://curriculumredesign.org/our-work/four-dimensional-21st-century-education-learning-competencies-future-2030/

 

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we post op-eds that we believe will help spur discussion within our community. Op-eds do not necessarily represent FLI’s opinions or views.

Effective Altruism 2016

The Effective Altruism Movement

Edit: The following article has been updated to include more highlights as well as links to videos of the talks.

How can we more effectively make the world a better place? Over 1,000 concerned altruists converged at the Effective Altruism Global conference this month in Berkeley, CA to address this very question. For two and a half days, participants milled around the Berkeley campus, attending talks, discussions, and workshops to learn more about efforts currently underway to improve our ability to not just do good in the world, but to do the most good.

Those who arrived on the afternoon of Friday, August 5 had the opportunity to mingle with other altruists and attend various workshops geared toward finding the best careers, improving communication, and developing greater self-understanding and self-awareness.

But the conference really kicked off on Saturday, August 6, with talks by Will MacAskill and Toby Ord, who both helped found the modern effective altruism movement. Ord gave the audience a brief overview of the centuries of science and philosophy that laid the foundation for effective altruism. “Effective altruism is to the pursuit of good as the scientific revolution is to the pursuit of truth,” he explained. Yet, as he pointed out, effective altruism has only been a real “thing” for five years.

Will MacAskill introduced the conference and spoke of the success the EA movement has had in the last year.

Toby Ord spoke about the history of effective altruism.

 

MacAskill took the stage after Ord to highlight the movement’s successes over the past year, including coverage by such papers as the New York Times and the Washington Post. More importantly, he talked about the significant increase in membership the movement saw this year, as well as in donations to worthwhile causes. But he also reminded the audience that a big part of the movement is the process of effective altruism. He said:

“We don’t know what the best way to do good is. We need to figure that out.”

For the rest of the two days, participants considered past charitable actions that had been most effective, problems and challenges altruists face today, and how the movement can continue to grow. There were too many events to attend them all, but there were many highlights.

Highlights From the Conference

When FLI cofounder Jaan Tallinn was asked why he chose to focus on issues such as artificial intelligence, which may or may not be a problem in the future, rather than mosquito nets, which could save lives today, he compared philanthropy to investing. Higher-risk investments have the potential for a greater payoff later. Similarly, while AI may not seem like much of a threat to many people now, ensuring it remains safe could save billions of lives in the future. Tallinn spoke as part of a discussion on Philanthropy and Technology.

Jaan Tallinn speaking remotely about his work with EA efforts.

Martin Rees, a member of FLI’s Science Advisory Board, argued that we are in denial about the seriousness of the risks we face. At the same time, he said that minimizing the risks associated with technological advances can only be done “with great difficulty.” He encouraged EA participants to figure out which threats can be dismissed as science fiction and which are legitimate, and he urged scientists to become more socially engaged.

As if taking up that call to action, Kevin Esvelt talked about his own attempts to ensure gene drive research in the wild is accepted and welcomed by local communities. Gene drives could be used to eradicate such diseases as malaria, schistosomiasis, Zika, and many others, but fears of genetic modification could slow research efforts. He discussed his focus on keeping his work as open and accessible as possible, engaging with the public to allow anyone who might be affected by his research to have as much input as they want. “Closed door science,” he added, “is more dangerous because we have no way of knowing what other people are doing.”  A single misstep with this early research in his field could imperil all future efforts for gene drives.

Kevin Esvelt talks about his work with CRISPR and gene drives.

That same afternoon, Cari Tuna, President of the Open Philanthropy Project, sat down with Will MacAskill for an interview titled “Doing Philosophy Better,” which focused on her work with OPP and effective altruism and how she envisions her future as a philanthropist. She highlighted some of the grants she’s most excited about, including grants to GiveDirectly, the Center for Global Development, and the Alliance for Safety and Justice. When asked how she thought EA could improve, she emphasized, “We consider ourselves a part of the Effective Altruism community, and we’re excited to help it grow.” But she also said, “I think there is a tendency toward overconfidence in the EA community that sometimes undermines our credibility.” She mentioned that one of the reasons she trusted GiveWell was because of their self-reflection. “They’re always asking, ‘how could we be wrong?’” she explained, and then added, “I would really love to see self-reflection become more of a core value of the effective altruism community.”

Cari Tuna interviewed by Will MacAskill (photo from the Center for Effective Altruism).

The next day, FLI President Max Tegmark highlighted the top nine myths of AI safety and discussed how important it is to dispel these myths so researchers can focus on the areas necessary to keep AI beneficial. Some of the most distracting myths include arguments over when artificial general intelligence could be created, whether or not it could be “evil,” and misconceptions about machine goals. Tegmark also added that the best thing people can do is volunteer for EA groups.

During the discussion about the risks and benefits of advanced artificial intelligence, Dileep George, cofounder of Vicarious, reminded the audience why this work is so important. “The goal of the future is full unemployment so we can all play,” he said. Dario Amodei of OpenAI emphasized that having curiosity and trying to understand how technology is evolving can go a long way toward safety. And though he often mentioned the risks of advanced AI, Toby Ord, a philosopher and research fellow with the Future of Humanity Institute, also added, “I think it’s more likely than not that AI will contribute to a fabulous outcome.” Later in the day, Chris Olah, an AI researcher at Google Brain and one of the lead authors of the paper, Concrete Problems in AI Safety, explained his work as trying to build a bridge to futuristic problems by doing empirical research today.

Moderator Riva-Melissa Tez, Dario Amodei, Dileep George, and Toby Ord at the Risks and Benefits of Advanced AI discussion. (Not pictured, Daniel Dewey)

FLI’s Richard Mallah gave a talk on mapping the landscape of AI safety research threads. He showed how there are many meaningful dimensions along which such research can be organized, how harmonizing the various research agendas into a common space allows us to reason about different kinds of synergies and dependencies, and how consideration of the white space in such representations can help us find both unknown knowns and unknown unknowns about the space.

Tara MacAulay, COO at the Centre for Effective Altruism, spoke during the discussion on “The Past, Present, and Future of EA.” She talked about finding the common values in the movement and coordinating across skill sets rather than splintering into cause areas or picking apart who is and who is not in the movement. She said, “The opposite of effective altruism isn’t ineffective altruism. The opposite of effective altruism is apathy, looking at the world and not caring, not doing anything about it . . . It’s helplessness. . . . throwing up our hands and saying this is all too hard.”

MacAulay also moderated a panel discussion called Aggregating Knowledge, which was significant not only for its thoughtful content about accessing, understanding, and communicating all of the knowledge available today, but also because it was an all-woman panel. The panel included Sarah Constantin, Amanda Askell, Julia Galef, and Heidi McAnnaly, who discussed various questions and problems the EA community faces when trying to assess which actions will be most effective. MacAulay summarized the discussion at the end when she said, “Figuring out what to do is really difficult but we do have a lot of tools available.” She concluded with a challenge to the audience to spend five minutes researching some belief they’ve always held about the world to learn what the evidence actually says about it.

Sarah Constantin, Amanda Askell, Julia Galef, Heidi McAnnaly, and Tara MacAulay (photo from the Center for Effective Altruism).

Prominent government leaders also took to the stage to discuss how work with federal agencies can help shape and impact the future. Tom Kalil, Deputy Director for Technology and Innovation, highlighted how much of today’s technology, from cell phones to the Internet, got its start in government labs. Then Jason Matheny, Director of IARPA, talked about how delays in technology can actually cost millions of lives. He explained that technology can make it less costly to enhance moral development, and that “ensuring that we have a future counts a lot.”

Tom Kalil speaks about the history of government research and its impact on technology.

Jason Matheny talks about how employment with government agencies can help advance beneficial technologies.

Robin Hanson, author of The Age of Em, talked about his book and what the future could hold if we continue down our current economic path while the ability to create brain emulations (“ems”) is developed. He said that if creating ems becomes cheaper than paying humans to do work, “that would change everything.” Ems would completely take over the job market and humans would be pushed aside. He explained that some people might benefit from this new economy, but, just as today, the gains would be uneven, with many more people suffering in poverty and far fewer gaining wealth.

Robin Hanson talks to a group about how brain emulations might take over the economy and what their world will look like.

 

Applying EA to Real Life

Lucas Perry, also with FLI, was especially impressed by the career workshops offered by 80,000 Hours during the conference. He said:

“The 80,000 Hours workshops were just amazing for giving new context and perspective to work. 80,000 Hours gave me the tools and information necessary to reevaluate my current trajectory and see if it really is the best of all possible paths for me and the world.

In the end, I walked away from the conference realizing I had been missing out on something so important for most of my life. I found myself wishing that effective altruism, and organizations like 80,000 Hours, had been a part of my fundamental education. I think it would have helped immensely with providing direction and meaning to my life. I’m sure it will do the same for others.”

In total, 150 people spoke over the course of those two and a half days. MacAskill concluded the conference with a final call to focus on the process of effective altruism, saying:

“Constant self-reflection, constant learning, that’s how we’re going to be able to do the most good.”

 

View from the conference.