Posts

FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

Toby Ord’s “The Precipice: Existential Risk and the Future of Humanity” has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative analysis and risk estimations, deep dives into the risks themselves, and tangible steps for mitigation. “The Precipice” thus serves as both a tremendous introduction to the topic and a rich source of further learning for existential risk veterans. Toby joins us on this episode of the Future of Life Institute Podcast to discuss this definitive work on what may be the most important topic of our time.

Topics discussed in this episode include:

  • An overview of Toby’s new book
  • What it means to be standing at the precipice and how we got here
  • Useful arguments for why existential risk matters
  • The risks themselves and their likelihoods
  • What we can do to safeguard humanity’s potential

Timestamps: 

0:00 Intro 

03:35 What the book is about 

05:17 What does it mean for us to be standing at the precipice? 

06:22 Historical cases of global catastrophic and existential risk in the real world

10:38 The development of humanity’s wisdom and power over time  

15:53 Reaching existential escape velocity and humanity’s continued evolution

22:30 On effective altruism and writing the book for a general audience 

25:53 Defining “existential risk” 

28:19 What is compelling or important about humanity’s potential or future persons?

32:43 Various and broadly appealing arguments for why existential risk matters

50:46 Short overview of natural existential risks

54:33 Anthropogenic risks

58:35 The risks of engineered pandemics 

01:02:43 Suggestions for working to mitigate x-risk and safeguard the potential of humanity 

01:09:43 How and where to follow Toby and pick up his book

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. This episode is with Toby Ord and covers his new book “The Precipice: Existential Risk and the Future of Humanity.” This is a new cornerstone piece in the field of existential risk and I highly recommend this book for all persons of our day and age. I feel this work is absolutely critical reading for living an informed, reflective, and engaged life in our time. And I think even those well acquainted with this topic area will find much that is both useful and new in this book. Toby offers a plethora of historical and academic context to the problem, tons of citations and endnotes, useful definitions, central arguments for why existential risk matters that can be really helpful for speaking to new people about this issue, and also novel quantitative analysis and risk estimations, as well as what we can actually do to help mitigate these risks. So, if you’re a regular listener to this podcast, I’d say this is a must-add to your science, technology, and existential risk bookshelf.

The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate. If you support any other content creators via services like Patreon, consider viewing a regular subscription to FLI in the same light. You can also follow us on your preferred listening platform, like on Apple Podcasts or Spotify, by searching for us directly or following the links on the page for this podcast found in the description.

Toby Ord is a Senior Research Fellow in Philosophy at Oxford University. His work focuses on the big picture questions facing humanity. What are the most important issues of our time? How can we best address them?

Toby’s earlier work explored the ethics of global health and global poverty, demonstrating that aid has been highly successful on average and has the potential to be even more successful if we were to improve our priority setting. This led him to create an international society called Giving What We Can, whose members have pledged over $1.5 billion to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement, encouraging thousands of people to use reason and evidence to help others as much as possible.

His current research is on the long-term future of humanity,  and the risks which threaten to destroy our entire potential.

Finally, the Future of Life Institute podcasts have never had a central place for conversation and discussion about the episodes and related content. In order to facilitate such conversation, I’ll be posting the episodes to the LessWrong forum at Lesswrong.com where you’ll be able to comment and discuss the episodes if you so wish. The episodes more relevant to AI alignment will be crossposted from LessWrong to the Alignment Forum as well at alignmentforum.org.  

And so with that, I’m happy to present Toby Ord on his new book “The Precipice.”

Lucas Perry: We’re here today to discuss your new book, The Precipice: Existential Risk and the Future of Humanity. Tell us a little bit about what the book is about.

Toby Ord: The future of humanity, that’s the guiding idea, and I try to think about how good our future could be. That’s what really motivates me. I’m really optimistic about the future we could have if only we survive the risks that we face. There have been various natural risks that we have faced for as long as humanity’s been around, 200,000 years of Homo sapiens or you might include an even broader definition of humanity that’s even longer. That’s 2000 centuries and we know that those natural risks can’t be that high or else we wouldn’t have been able to survive so long. It’s quite easy to show that the risks should be lower than about 1 in 1000 per century.
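
To make the shape of that track-record argument concrete, here is a minimal sketch (the figures below are purely illustrative, not a calculation from the book): if the per-century chance of extinction from natural causes were some fixed value r, the probability of surviving 2,000 centuries in a row would be (1 - r)^2000, which becomes very small once r climbs much above 1 in 1,000.

```python
# Illustrative sketch only (not from the book): the survival-record argument.
# If the per-century natural extinction risk were r, the chance of Homo sapiens
# surviving 2,000 centuries in a row would be (1 - r) ** 2000.
for r in (1 / 100, 1 / 1_000, 1 / 10_000):
    p_survive = (1 - r) ** 2000
    print(f"risk {r:.4%} per century -> P(survive 2,000 centuries) = {p_survive:.3f}")

# Roughly: 1 in 100 gives ~0.000 (our long track record would be astonishing),
# 1 in 1,000 gives ~0.135, and 1 in 10,000 gives ~0.819.
```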

But then, with humanity’s increasing power over that time, the exponential increases in technological power, we reached this point last century, with the development of nuclear weapons, where we pose a risk to our own survival, and I think that the risks have only increased since then. We’re in this new period where the risk is substantially higher than these background risks and I call this time the precipice. I think that this is a really crucial time in the history and the future of humanity, perhaps the most crucial time, these few centuries around now. And I think that if we survive, and people in the future look back on the history of humanity, schoolchildren will be taught about this time. I think that this will be really more important than other times that you’ve heard of, such as the industrial revolution or even the agricultural revolution. I think this is a major turning point for humanity. And what we do now will define the whole future.

Lucas Perry: In the title of your book, and also in the contents of it, you developed this image of humanity to be standing at the precipice, could you unpack this a little bit more? What does it mean for us to be standing at the precipice?

Toby Ord: I sometimes think of humanity as being on this grand journey through the wilderness, with dark times at various points, but also moments of sudden progress and heady views of the path ahead and what the future might hold. And I think that this point in time is the most dangerous time that we’ve ever encountered, and perhaps the most dangerous time that there will ever be. So I see it in this central metaphor of the book: humanity coming through this high mountain pass where the only path onwards is this narrow ledge along a cliff side, with this steep and deep precipice at the side, and we’re kind of inching our way along. But we can see that if we can get past this point, there are ultimately almost no limits to what we could achieve. Even if we can’t precisely estimate the risks that we face, we know that this is the most dangerous time so far. There’s every chance that we don’t make it through.

Lucas Perry: Let’s talk a little bit then about how we got to this precipice and our part in this path. Can you provide some examples or a story of global catastrophic risks that have happened and near misses of possible existential risks that have occurred so far?

Toby Ord: It depends on your definition of global catastrophe. One of the definitions that’s on offer is 10% or more of all people on the earth at that time being killed in a single disaster. There is at least one time where it looks like we may have reached that threshold, which was the Black Death, which killed between a quarter and a half of people in Europe and may have killed many people in South Asia, East Asia, and the Middle East as well. It may have killed one in 10 people across the whole world, although because our world was less connected than it is today, it didn’t reach every continent. In contrast, the Spanish Flu of 1918 reached almost everywhere across the globe, and killed a few percent of people.

But in terms of existential risk, none of those really posed an existential risk. We saw, for example, that despite something like a third of people in Europe dying, there wasn’t a collapse of civilization. It seems like we’re more robust than some give us credit for. But there have also been times where there wasn’t an actual catastrophe, yet there were near misses in terms of the chances.

There are many cases actually connected to the Cuban Missile Crisis, a time of immensely high tensions during the Cold War in 1962. I think that the closest we have come is perhaps the events on a submarine that, unknown to the U.S., was carrying a secret nuclear weapon. The U.S. patrol boats tried to force it to surface by dropping what they called practice depth charges, but the submarine’s crew thought that these were real explosives aimed at hurting them. The submarine was made for the Arctic and so it was overheating in the Caribbean. People were dropping unconscious from the heat and the lack of oxygen as they tried to hide deep down in the water. And during that time the captain, Captain Savitsky, ordered that this nuclear weapon be fired, and the political officer gave his consent as well.

On any of the other submarines in this flotilla, this would have been enough to launch this torpedo, which would then have been a tactical nuclear weapon exploding and destroying the fleet that was oppressing them. But on this one, it was lucky that the flotilla commander, Captain Vasili Arkhipov, was also on board this submarine, and so he overruled this and talked Savitsky down from it. So this was a situation at the height of this tension where a nuclear weapon would have been used. And we’re not quite sure, maybe Savitsky would have decided on his own not to do it, maybe he would have backed down. There’s a lot that’s not known about this particular case. It’s very dramatic.

But Kennedy had made it very clear that any use of nuclear weapons against U.S. Armed Forces would lead to an all-out full scale attack on the Soviet Union, so they hadn’t anticipated that tactical weapons might be used. They assumed it would be a strategic weapon, but it was their policy to respond with a full scale nuclear retaliation and it looks likely that that would have happened. So that’s the case where ultimately zero people were killed in that event. The submarine eventually surfaced and surrendered and then returned to Moscow where people were disciplined, but it brought us very close to this full scale nuclear war.

I don’t mean to imply that that would have been the end of humanity. We don’t know whether humanity would survive the full scale nuclear war. My guess is that we would survive, but that’s its own story and it’s not clear.

Lucas Perry: Yeah. The story to me has always felt a little bit unreal. It’s hard to believe we came so close to something so bad. For listeners who are not aware, the Future of Life Institute gives out a $50,000 award each year, called the Future of Life Award to unsung heroes who have contributed greatly to the existential security of humanity. We actually have awarded Vasili Arkhipov’s family with the Future of Life Award, as well as Stanislav Petrov and Matthew Meselson. So if you’re interested, you can check those out on our website and see their particular contributions.

And related to nuclear weapons risk, we also have a webpage on nuclear close calls and near misses where there were accidents with nuclear weapons which could have led to escalation or some sort of catastrophe. Is there anything else here you’d like to add in terms of the relevant historical context and this story about the development of our wisdom and power over time?

Toby Ord: Yeah, that framing, which I used in the book, comes from Carl Sagan in the ’80s, when he was one of the people who developed the understanding of nuclear winter and realized that this could pose a risk to humanity as a whole. The way he thought about it is that we’ve had this massive development over the hundred billion human lives that have come before us: this succession of innovations that have accumulated, building up this modern world around us.

If I look around me, I can see almost nothing that wasn’t created by human hands, and this, as we all know, has been accelerating, often exponentially when you try to measure improvements in technology over time, leading to the situation where we have the power to radically reshape the Earth’s surface, both, say, through our agriculture, but also perhaps in a moment through nuclear war. This increasing power has put us in a situation where we hold our entire future in the balance. A few people’s actions over a few minutes could actually potentially threaten that entire future.

In contrast, humanity’s wisdom has grown only falteringly, if at all. Many people would suggest that it’s not even growing. And by wisdom here, I mean our ability to make wise decisions for humanity’s future. I talk about this in the book under the idea of civilizational virtues. So if you think of humanity as a group agent, in the same way that we think of, say, nation states as group agents, where we talk about whether it is in America’s interest to promote this trade policy or something like that, then we can think of what’s in humanity’s interests, and we find that if we think about it this way, humanity is crazily impatient and imprudent.

If you think about the expected lifespan of humanity: a typical species lives for about a million years, and humanity is about 200,000 years old. We have something like 800,000 or a million or more years ahead of us if we play our cards right and don’t bring about our own destruction. The analogy would be that we’re 20% of the way through our life, like an adolescent who’s just coming into his or her own power, but doesn’t have the wisdom or the patience to really pay any attention to this possible whole future ahead of them, and so they’re just powerful enough to get themselves in trouble, but not yet wise enough to avoid that.

If you continue this analogy, it is often hard for humanity at the moment to think more than a couple of election cycles ahead at best, but that would correspond, say eight years, to just the next eight hours within this person’s life. For the sake of short-term interests during the rest of the day, they put the whole rest of their future at risk. And so I think that that helps to see what this lack of wisdom looks like. It’s not just a highfalutin term of some sort; you can kind of see that what’s going on is that the person is incredibly imprudent and impatient. And I think that many other virtues or vices that we think of in an individual human’s life can be applied in this context and are actually illuminating about where we’re going wrong.

Lucas Perry: Wonderful. Part of the dynamic here in this wisdom versus power race seems to be that one of the solutions, slowing down the growth of power, seems untenable or just wouldn’t work. So it seems more like we have to focus on amplifying wisdom. Is this also how you view the dynamic?

Toby Ord: Yeah, that is. I think that if humanity was more coordinated, if we were able to make decisions in a unified manner better than we actually can, so if you imagine this was a single player game, I don’t think it would be that hard. You could just be more careful with your development of power and make sure that you invest a lot in institutions, and in really thinking carefully about things. I mean, I think that the game is ours to lose, but unfortunately, we’re less coherent than that, and if one country decides to hold off on developing things, then other countries might run ahead and produce a similar amount of risk.

There’s this kind of tragedy of the commons at this higher level, and so I think that it’s extremely difficult in practice for humanity to go slow on progress of technology. And I don’t recommend that we try. In particular, there’s at the moment only a small number of people who really care about these issues and are really thinking about the long-term future and what we could do to protect it. And if those people were to spend their time arguing against progress of technology, I think that it would be a really poor use of their energies and would probably just annoy and alienate the people they were trying to convince. And so instead, I think that the only real way forward is to focus on improving wisdom.

I don’t think that’s impossible. I think that humanity’s wisdom, as you could see from my comment before about how we’re kind of disunified, partly involves being able to think better about things as individuals, but it also involves being able to think better collectively. And so I think that institutions for overcoming some of these tragedies of the commons or prisoner’s dilemmas at the international level are an example of the type of thing that will help humanity make wiser decisions in our collective interest.

Lucas Perry: It seemed that you said, by analogy, that humanity’s lifespan would be something like a million years, as compared with other species.

Toby Ord: Mm-hmm (affirmative).

Lucas Perry: That is likely illustrative for most people. I think there are two facets of this that I wonder about, in your book and in general. The first is this idea of reaching existential escape velocity, where it would seem unlikely that we would have a reason to end in a million years should we get through the time of the precipice. And the second is that I’m wondering about your perspective on what Nick Bostrom calls the thing that matters here in the existential condition: Earth-originating intelligent life. It would seem curious to suspect that even if humanity’s existential condition were secure, we would still be recognizable as humanity in some 10,000, 100,000, or 1 million years’ time and not something else. So, I’m curious how the framing here functions in general for the public audience, and then also about being realistic about how evolution has not ceased to take place.

Toby Ord: Yeah, both good points. I think that the one million years is indicative of how long species last when they’re dealing with natural risks. It’s I think a useful number to try to show why there are some very well-grounded scientific reasons for thinking that a million years is entirely in the ballpark of what we’d expect if we look at other species. And even if you look at mammals or other hominid species, a million years still seems fairly typical, so it’s useful in some sense for setting more of a lower bound. There are species which have survived relatively unchanged for much longer than that. One example is the horseshoe crab, which is about 450 million years old whereas complex life is only about 540 million years old. So that’s something where it really does seem like it is possible to last for a very long period of time.

If you look beyond that, the Earth should remain habitable for something in the order of 500 million or a billion years for complex life, before it becomes too hot due to the continued brightening of our sun. If we took actions to limit that brightening, which look almost achievable with today’s technology, we would only need to shade the earth from about 1% of the energy coming at it, and increase that by another 1%, I think it’s every billion years, and we would be able to survive as long as the sun does, for about 7 billion more years. And I think that ultimately, we could survive much longer than that if we could reach our nearest stars and set up some new self-sustaining settlement there. And then if that could spread out to some of the nearest stars to it and so on, then so long as we can reach about seven light years in one hop, we’d be able to settle the entire galaxy. There are stars in the galaxy that will still be burning about 10 trillion years from now, and there’ll be new stars for millions of times as long as that.

We could have this absolutely immense future in terms of duration, with technologies that are beyond our current reach. And if you look at the energy requirements to reach nearby stars, they’re high, but they’re not that high compared to, say, the output of the sun over millions of years. And if we’re talking about a scenario where we’d last millions of years anyway, it’s unclear why it would be difficult to reach them with the technology we would have by then. It seems like the biggest challenge would be lasting that long in the first place, not getting to the nearest star using technology from millions of years into the future with millions of years of stored energy reserves.

So that’s the kind of big picture question about the timing there, but then you also asked whether it would be humanity. One way to answer that is that, unless we go to a lot of effort to preserve Homo sapiens as we are now, then it wouldn’t be Homo sapiens. We might go to that effort if we decide that it’s really important that it be Homo sapiens and that we’d lose something absolutely terrible if we were to change. We could make that choice, but if we decide that it would be better to actually allow evolution to continue, or perhaps to direct it by changing who we are with genetic engineering and so forth, then we could make that choice as well. I think that that is a really critically important choice for the future, and I hope that we make it in a very deliberate and careful manner rather than just going gung-ho and letting people do whatever they want, but I do think that we will develop into something else.

But in the book, my focus is often on humanity in this kind of broad sense. Earth-originating intelligent life would kind of be a gloss on it, but that has the issue that, suppose humanity did go extinct and suppose we got lucky and some other intelligent life started off again, I don’t want to count that in what I’m talking about, even though it would technically fit into Earth-originating intelligent life. Sometimes I put it in the book as humanity or our rightful heirs, something like that. Maybe we would create digital beings to replace us, artificial intelligences of some sort. So long as they were the kinds of beings that could actually fulfill the potential that we have, that could realize one of the best trajectories that we could possibly reach, then I would count them. It could also be that we create something that succeeds us but has very little value, in which case I wouldn’t count it.

So yeah, I do think that we may be greatly changed in the future. I don’t want that to distract the reader, if they’re not used to thinking about things like that because they might then think, “Well, who cares about that future because it will be some other things having the future.” And I want to stress that there will only be some other things having the future if we want it to be, if we make that choice. If that is a catastrophic choice, then it’s another existential risk that we have to deal with in the future and which we could prevent. And if it is a good choice and we’re like the caterpillar that really should become a butterfly in order to fulfill its potential, then we need to make that choice. So I think that is something that we can leave to future generations that it is important that they make the right choice.

Lucas Perry: One of the things that I really appreciate about your book is that it tries to make this more accessible for a general audience. So, I actually do like it when you use lower bounds on humanity’s existential condition. I think talking about billions upon billions of years can seem a little bit far out there and maybe costs some weirdness points, and as much as I like the concept of Earth-originating intelligent life, I also think it costs some weirdness points.

And it seems like you’ve taken some effort to sort of make the language not so ostracizing by decoupling it some with effective altruism jargon and the kind of language that we might use in effective altruism circles. I appreciate that and find it to be an important step. The same thing I feel feeds in here in terms of talking about descendant scenarios. It seems like making things simple and leveraging human self-interest is maybe important here.

Toby Ord: Thanks. When I was writing the book, I tried really hard to think about these things, both in terms of communications, but also in terms of trying to understand what we have been talking about for all of these years when we’ve been talking about existential risk and similar ideas. Often in effective altruism, there’s a discussion about the different types of cause areas that effective altruists are interested in. There are people who really care about global poverty, because we can help others who are much poorer than ourselves so much more with our money, and also about helping animals, who are left out of the political calculus and the economic calculus, and we can see why it is that their interests are typically neglected, and so we look at factory farms and we can see how we could do so much good.

And then also there’s this third group of people, and then the conversation drifts off a bit: people who have this kind of idea about the future that’s kind of hard to describe and to wrap up together. So I’ve kind of seen that as one of my missions over the last few years, really trying to work out what is it that that third group of people are trying to do. My colleague, Will MacAskill, has been working on this a lot as well. And what we see is that this other group of effective altruists are this long-termist group.

The first group is thinking about this cosmopolitan aspect: it’s not just me, and it’s not just people in my country that matter, it’s people across the whole world, and some of those could be helped much more. And the second group is saying, it’s not just humans that could be helped. If we widen things up beyond the species boundary, then we can see that there’s so much more we could do for other conscious beings. And then this third group is saying, it’s not just our time that we can help; there’s so much we can do to help people perhaps across this entire future of millions of years, or further into the future. And so the difference there, the point of leverage, is that our present generation is perhaps just a tiny fraction of the entire future. And if we can do something that will help that entire future, then that’s where this could be really key in terms of doing something amazing with our resources and our lives.

Lucas Perry: Interesting. I actually had never thought of it that way, and I think it puts really succinctly the differences between the different groups: people focused on global poverty are reducing spatial or proximity bias in people’s focus on ethics or doing good, the focus on animal farming is a kind of anti-speciesism, broadening our moral circle of compassion to other species, and then the long-termism is about reducing time-based ethical bias. I think that’s quite good.

Toby Ord: Yeah, that’s right. In all these cases, you have to confront additional questions. It’s not just enough to make this point and then it follows that things are really important. You need to know, for example, that there really are ways that people can help others in distant countries and that the money won’t be squandered. And in fact, for most of human history, there weren’t ways that we could easily help people in other countries just by writing out a check to the right place.

When it comes to animals, there’s a whole lot of challenging questions there about what are the effects of changing your diet, or the effects of donating to a group that prioritizes animals in campaigns against factory farming or similar. And when it comes to the long-term future, there’s this real question of “Well, why isn’t it that people in the future would be just as able to protect themselves as we are? Why wouldn’t they be even more well-situated to attend to their own needs?” Given the history of economic growth and this kind of increasing power of humanity, one would expect them to be more empowered than us, so it does require an explanation.

And I think that the strongest type of explanation is around existential risk. Existential risks are things that would be an irrevocable loss. So, as I define them, which is a simplification, I think of it as the destruction of humanity’s long-term potential. So I think of our long term potential as you could think of this set of all possible futures that we could instantiate. If you think about all the different collective actions of humans that we could take across all time, this kind of sets out this huge kind of cloud of trajectories that humanity could go in and I think that this is absolutely vast. I think that there are ways if we play our cards right of lasting for millions of years or billions or trillions and affecting billions of different worlds across the cosmos, and then doing all kinds of amazing things with all of that future. So, we’ve got this huge range of possibilities at the moment and I think that some of those possibilities are extraordinarily good.

If we were to go extinct, though, that would collapse this set of possibilities to a much smaller set, which contains much worse possibilities. If we went extinct, there would be just one future, whatever it is that would happen without humans, because there’d be no more choices that humans could make. If we had an irrevocable collapse of civilization, something from which we could never recover, then that would similarly reduce it to a very small set of very meager options. And it’s possible as well that we could end up locked into some dystopian future, perhaps through economic or political systems, where we end up stuck in some very bad corner of this possibility space. So that’s our potential. Our potential is currently the value of the best realistically realizable worlds available to us.

If we fail in an existential catastrophe, that’s the destruction of almost all of this value, and it’s something that you can never get back, because it’s our very potential that would be destroyed. That then gives an explanation as to why people in the future wouldn’t be better able to solve their own problems: because we’re talking about things that could fail now, which helps explain why there’s room for us to make such a contribution.

Lucas Perry: So if we were to very succinctly put the recommended definition or framing on existential risk that listeners might be interested in using in the future when explaining this to new people, what is the sentence that you would use?

Toby Ord: An existential catastrophe is the destruction of humanity’s long-term potential, and an existential risk is the risk of such a catastrophe.

Lucas Perry: Okay, so on this long-termism point, can you articulate a little bit more about what is so compelling or important about humanity’s potential into the deep future, and which arguments are most compelling to you? With a little bit of a framing here on the question of whether or not the long-termist perspective is compelling or motivating for the average person: like, why should I care about people who are far away in time from me?

Toby Ord: So, I think that a lot of people, if pressed and asked “does it matter equally much if a child suffers 100 years in the future as a child suffering at some other point in time?”, would say, “Yeah, it matters just as much.” But that’s not how we normally think of things when we think about what charity to donate to or what policies to implement. Still, I do think that it’s not that foreign of an idea. In fact, the weird thing would be for people to matter different amounts in virtue of the fact that they live at different times.

A simple example of that would be: suppose you do think that things further into the future matter less intrinsically. Economists sometimes represent this by a pure rate of time preference. It’s a component of a discount rate which is just to do with things mattering less in the future, whereas most of the discount rate is actually to do with the fact that money is more important to have earlier, which is actually a pretty solid reason, but that component doesn’t affect any of these arguments. It’s only this little extra aspect about things mattering less just because they’re in the future. Suppose you have a 1% discount rate of that form. That means that someone’s older brother matters more than their younger brother, that a life which is equally long and has the same kinds of experiences is fundamentally more important for the older child than for the younger child, things like that. This just seems kind of crazy to most people, I think.

And similarly, if you have these exponential discount rates, which are typically the only kind that economists consider, they have the consequence that what happens in 10,000 years is way more important than what happens in 11,000 years. People don’t have any intuition like that at all, really. Maybe we don’t think that much about what happens in 10,000 years, but 11,000 is pretty much the same as 10,000 to our intuition, whereas these other views say, “Wow. No, it’s totally different. It’s just like the difference between what happens next year and what happens in a thousand years.”
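
As a rough illustration of the arithmetic behind this point (the numbers below are illustrative, not Toby’s), a pure time-preference rate of 1% per year compounds into enormous differences across millennia:

```python
# Illustrative only: how a 1% pure rate of time preference weights future events.
rate = 0.01

def weight(years: float) -> float:
    """Relative moral weight assigned to an event `years` into the future."""
    return (1 - rate) ** years

# Year 10,000 versus year 11,000: the earlier millennium gets ~23,000x the weight.
print(weight(10_000) / weight(11_000))   # ~2.3e4

# An older sibling born two years earlier would "count" about 2% more.
print(weight(0) / weight(2))             # ~1.02
```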

It generally just doesn’t capture our intuitions and I think that what’s going on is not so much that we have a kind of active intuition that things that happen further into the future matter less and in fact, much less because they would have to matter a lot less to dampen the fact that we can have millions of years of future. Instead, what’s going on is that we just aren’t thinking about it. We’re not really considering that our actions could have irrevocable effects over the long distant future. And when we do think about that, such as within environmentalism, it’s a very powerful idea. The idea that we shouldn’t sacrifice, we shouldn’t make irrevocable changes to the environment that could damage the entire future just for transient benefits to our time. And people think, “Oh, yeah, that is a powerful idea.”

So I think it’s more that they’re just not aware that there are a lot of situations like this. It’s not just the case of a particular ecosystem that could be an example of one of these important irrevocable losses, but there could be these irrevocable losses at this much grander scale affecting everything that we could ever achieve and do. I should also explain there that I do talk a lot about humanity in the book. And the reason I say this is not because I think that non-human animals don’t count or they don’t have intrinsic value, I do. It’s because instead, only humanity is responsive to reasons and to thinking about this. It’s not the case that chimpanzees will choose to save other species from extinction and will go out and work out how to safeguard them from natural disasters that could threaten their ecosystems or things like that.

We’re the only ones who are even in the game of considering moral choices. So in terms of the instrumental value, humanity has this massive instrumental value, because what we do could affect, for better or for worse, the intrinsic value of all of the other species. Other species are going to go extinct in about a billion years, basically, all of them when the earth becomes uninhabitable. Only humanity could actually extend that lifespan. So there’s this kind of thing where humanity ends up being key because we are the decision makers. We are the relevant agents or any other relevant agents will spring from us. That will be things that our descendants or things that we create and choose how they function. So, that’s the kind of role that we’re playing.

Lucas Perry: So if there are people who just simply care about the short term, if someone isn’t willing to buy into these arguments about the deep future or realizing the potential of humanity’s future, like “I don’t care so much about that, because I won’t be alive for that.” There’s also an argument here that these risks may be realized within their lifetime or within their children’s lifetime. Could you expand that a little bit?

Toby Ord: Yeah, in The Precipice, when I try to think about why this matters, I think the most obvious reasons are rooted in the present: the fact that it will be terrible for all of the people who are alive at the time when the catastrophe strikes. That needn’t be the case. You could imagine things that meet my definition of an existential catastrophe, in that they would cut off the future but not be bad for the people who were alive at that time, maybe we all painlessly disappear at the end of our natural lives or something. But in almost all realistic scenarios that we’re thinking about, it would be terrible for all of the people alive at that time; they would have their lives cut short and witness the downfall of everything that they’ve ever cared about and believed in.

That’s a very obvious natural reason, but the reason that moves me the most is thinking about our long-term future, and just how important that is: this huge scale of everything that we could ever become. And you could think of that in very numerical terms, or you could just think back over time and how far humanity has come over these 200,000 years, imagine that going forward, and how small a slice of things our own lives are, and you can come up with very intuitive arguments to see that as well. It doesn’t have to just be a multiply-things-out type of argument.

But then I also think that there are very strong arguments that you could have rooted in our past and in other things as well. Humanity has succeeded and has got to where we are because of this partnership of the generations, as Edmund Burke phrased it. It’s something where, if we couldn’t promulgate our ideas and innovations to the next generation, think of what our technological level would be like. It would be like it was in Paleolithic times; even a crude iron shovel would be forever beyond our reach. It was only through passing down these innovations and iteratively improving upon them that we could get billions of people working in cooperation over deep time to build this world around us.

If we think about the wealth and prosperity that we have, the fact that we live as long as we do: this is all because this rich world was created by our ancestors and handed on to us, and we’re the trustees of this vast inheritance. If we were to fail, if we’d be the first of 10,000 generations to fail to pass this on to our heirs, we would be the worst of all of these generations. We’d have failed in these very important duties, and these duties could be understood as some kind of reciprocal duty to those people in the past, or we could also consider them as duties to the future rooted in obligations to people in the past, because we can’t reciprocate to people who are no longer with us. The only kind of way you can get this to work is to pay it forward and have this system where we each help the next generation, with respect for the past generations.

So I think there’s another set of reasons, more deontological type reasons, for it. And you could also have the reasons I mentioned in terms of civilizational virtues, a kind of approach rooted in being a more virtuous civilization or species, and I think that that is a powerful way of seeing it as well, to see that we’re very impatient and imprudent and so forth and we need to become more wise. Or alternatively, Max Tegmark has talked about this, and Martin Rees, Carl Sagan and others have seen it as something based on the cosmic significance of humanity: that perhaps in all of the stars and all of the galaxies of the universe, this is the only place where there is life at all, or the only place where there’s intelligent life or consciousness. There are different versions of this, and that could make this an exceptionally important place, and a very rare thing that could be forever gone.

So I think that there’s a whole lot of different reasons here, and I think that previously, a lot of the discussion has been in a very technical version of the future-directed argument, where people have thought, well, even if there’s only a tiny chance of extinction, our future could have 10 to the power of 30 people in it, or something like that. There’s something about this argument that some people find compelling, but not very many. I personally always found it a bit like a trick. It is a little bit like an argument that zero equals one, where you don’t find it compelling, but if someone says point out the step where it goes wrong, you can’t see a step where the argument goes wrong, yet you still think, I’m not very convinced, there’s probably something wrong with this.

And then people who are not from the sciences, people from the humanities, find it an actively alarming argument, that anyone would make moral decisions on the grounds of an argument like that. What I’m trying to do is to show that actually, there’s this whole cluster of justifications rooted in all kinds of principles that many people find reasonable, and you don’t have to accept all of them by any means. The idea here is that if any one of these arguments works for you, then you can see why it is that you have reasons to care about not letting our future be destroyed in our time.

Lucas Perry: Awesome. So, there’s first this deontological argument about transgenerational duties to continue propagating the species and the projects and value which previous generations have cultivated. We inherit culture and art and literature and technology, so there is a duties-based argument to continue the stewardship and development of that. There is this cosmic significance based argument that says that consciousness may be extremely precious and rare, and that there is great value held in the balance here at the precipice on planet Earth and it’s important to guard and do the proper stewardship of that.

There is this short-term argument that says that there is some reasonable likelihood of these risks being realized soon, I think you put total existential risk for the next century at one in six, which we can discuss a little bit more later, so that would also be very bad for us and our children and short-term descendants should it be realized in the next century. Then there is this argument about the potential of humanity in deep time. So I think we’ve talked a bit here about there being potentially large numbers of human beings in the future, or our descendants, or other things that we might find valuable, but I don’t think that we’ve touched on the part about change in quality.

There are these arguments on quantity, but there’s also, and I really like how David Pearce puts it, where he says, “One day we may have thoughts as beautiful as sunsets.” So, could you expand a little bit here on this argument about quality that I think also feeds in? And then also, with regards to the digitalization aspect that may happen, there are arguments around subjective time dilation, which may lead to more and better experience into the deep future. So, this also seems to be another important aspect that’s motivating for some people.

Toby Ord: Yeah. Humanity has come a long way and various people have tried to catalog the improvements in our lives over time. Often in history, this is not talked about, partly because history is normally focused on something of the timescale of a human life and things don’t change that much on that timescale, but when people are thinking about much longer timescales, I think they really do. Sometimes this is written off in history as Whiggish history, but I think that that’s a mistake.

I think that if you were to summarize the history of humanity in say, one page, I think that the dramatic increases in our quality of life and our empowerment would have to be mentioned. It’s so important. You probably wouldn’t mention the Black Death, but you would mention this. Yet, it’s very rarely talked about within history, but there are people talking about it and there are people who have been measuring these improvements. And I think that you can see how, say in the last 200 years, lifespans have more than doubled and in fact, even in the poorest countries today, lifespans are longer than they were in the richest countries 200 years ago.

We can now almost all read whereas very few people could read 200 years ago. We’re vastly more wealthy. If you think about this threshold we currently use of extreme poverty, it used to be the case 200 years ago that almost everyone was below that threshold. People were desperately poor and now almost everyone is above that threshold. There’s still so much more that we could do, but there have been these really dramatic improvements.

Some people seem to think that that story of well-being in our lives getting better, increasing freedoms, increasing empowerment through education and health, somehow runs counter to their concern about existential risk, that one is an optimistic story and one’s a gloomy story. Ultimately, what I’m thinking is that it’s because these trends seem to point towards very optimistic futures that it’s all the more important to ensure that we survive to reach such futures. If all the trends suggested that the future was just going to inevitably move towards a very dreary thing that had hardly any value in it, then I wouldn’t be that concerned about existential risk, so I think these things actually do go together.

And it’s not just in terms of our own lives that things have been getting better. We’ve been making major institutional reforms, so while there is regrettably still slavery in the world today, there is much less than there was in the past and we have been making progress in a lot of ways in terms of having a more representative and more just and fair world and there’s a lot of room to continue in both those things. And even then, a world that’s kind of like the best lives lived today, a world that has very little injustice or suffering, that’s still only a lower bound on what we could achieve.

I think one useful way to think about this is in terms of your peak experiences. These moments of luminous joy or beauty, the moments that you’ve been happiest, whatever they may be and you think about how much better they are than the typical moments. My typical moments are by no means bad, but I would trade hundreds or maybe thousands for more of these peak experiences, and that’s something where there’s no fundamental reason why we couldn’t spend much more of our lives at these peaks and have lives which are vastly better than our lives are today and that’s assuming that we don’t find even higher peaks and new ways to have even better lives.

It’s not just about the well-being in people’s lives either. If you have any kind of conception about the types of value that humanity creates, so much of our lives will be in the future, so many of our achievements will be in the future, so many of our societies will be in the future. There’s every reason to expect that these greatest successes in all of these different ways will be in this long future as well. There’s also a host of other types of experiences that might become possible. We know that humanity only has some kind of very small sliver of the space of all possible experiences. We see in a set of colors, this three-dimensional color space.

We know that there are animals that see additional color pigments, that can see ultraviolet, can see parts of reality that we’re blind to. Animals with magnetic sense that can sense what direction north is and feel the magnetic fields. What’s it like to experience things like that? We could go so much further exploring this space. If we can guarantee our future and then we can start to use some of our peak experiences as signposts to what might be experienceable, I think that there’s so much further that we could go.

And then I guess you mentioned the possibilities of digital things as well. We don’t know exactly how consciousness works. In fact, we know very little about how it works. We think that there are some suggestive reasons to think that minds, including consciousness, are computational things, such that we might be able to realize them digitally, and then there’s all kinds of possibilities that would follow from that. You could slow yourself down, like slow down the rate at which you’re computed, in order to see progress zoom past you and kind of experience a dizzying rate of change in the things around you, or fast-forward through the boring bits and skip to the exciting bits of one’s life. If one was digital, one could potentially be immortal, have backup copies, and so forth.

You might even be able to branch into being two different people: have some choice coming up as to, say, whether to stay on earth or to go to this new settlement in the stars, and just split, with one copy going into this new life and one staying behind, or a whole lot of other possibilities. We don’t know if that stuff is really possible, but it’s just to kind of give a taste of how we might just be seeing this very tiny amount of what’s possible at the moment.

Lucas Perry: This is one of the most motivating arguments for me: the fact that the space of all possible minds is probably very large and deep, that the kinds of qualia that we have access to are very limited, and the possibility of well-being not being contingent upon the state of the external world, which is always in flux and always impermanent. If we were able to have a science of well-being that was sufficiently well-developed, such that well-being was information and decision sensitive but not contingent upon the state of the external world, that seems like a form of enlightenment in my opinion.

Toby Ord: Yeah. Some of these questions are things that you don’t often see discussed in academia, partly because there isn’t really a proper discipline that says that that’s the kind of thing you’re allowed to talk about in your day job, but it is the kind of thing that people are allowed to talk about in science fiction. Many science fiction authors have something more like space opera or something like that where the future is just an interesting setting to play out the dramas that we recognize.

But other people use the setting to explore radical what-if questions, many of which are very philosophical and some of which are very well done. I think that if you’re interested in these types of questions, I would recommend people read Diaspora by Greg Egan, which I think is the best and most radical exploration of this. At the start of the book, it’s set in a particular digital system, with digital minds substantially in the future from where we are now that have been running much faster than the external world. Their lives are lived thousands of times faster than those of the people who’ve remained flesh and blood, so culturally they’re vastly further on, and then you get to witness what it might be like to undergo various of these events in one’s life. And in the particular setting it’s in, it’s a world where physical violence is against the laws of physics.

So rather than creating utopia by working out how to make people better behaved, the longstanding project of trying to make us all act nicely and decently to each other, which is clearly part of what’s going on, there’s this extra possibility that most people hadn’t even thought about, where because it’s all digital, it’s kind of like being on a web forum or something like that, where if someone attempts to attack you, you can just make them disappear, so that they can no longer interfere with you at all. And it explores what life might be like in this kind of world where the laws of physics are consent based and you can just make it so that people have no impact on you if you’re not enjoying the kind of impact that they’re having. It’s a fascinating setting for exploring radically different ideas about the future, which very much may not come to pass.

But what I find exciting about these types of things is not so much that they’re projections of where the future will be, but that if you take a whole lot of examples like this, they span a space that’s much broader than you were initially thinking about for your probability distribution over where the future might go, and they help you realize that there are radically different ways that it could go. It’s for this kind of expansion of your understanding about the space of possibilities, which is where I think it’s best, as opposed to as a direct prediction, that I would strongly recommend some Greg Egan for anyone who wants to get really into that stuff.

Lucas Perry: You sold me. I’m interested in reading it now. I’m also becoming mindful of our time here and have a bunch more questions I would like to get through, but before we do that, I also want to just throw something out here. I’ve had a bunch of conversations recently on the question of identity, and open individualism, closed individualism, and empty individualism are some of the views here.

From the long-termist perspective, I think that personal identity is deeply informative for how much or how little one may care about the deep future, or digital minds, or our descendants in a million years, or humans that are around a million years later. I think many people who won’t be motivated by these arguments will basically just feel like it’s not me, so who cares? And so I feel like these questions on personal identity really help tug and push and subvert many of our commonly held intuitions about identity. So, sort of going off of your point about the potential of the future and how it’s quite beautiful and motivating.

A little funny quip or thought there is that I’ve sprung into Lucas consciousness and I’m quite excited, whatever “I” means, for there to be, like, an awakening into Dyson sphere consciousness in Andromeda or something. Maybe that’s a bit of a wacky or weird idea for most people, but thinking more and more endlessly about the nature of personal identity makes thoughts like these more easily entertainable.

Toby Ord: Yeah, that’s interesting. I haven’t done much research on personal identity. In fact, the types of questions I’ve been thinking about when it comes to the book are more on how radical change would be needed before it’s no longer humanity, so kind of like the identity of humanity across time as opposed to the identity for a particular individual across time. And because I’m already motivated by helping others and I’m kind of thinking more about the question of why just help others in our own time as opposed to helping others across time. How do you direct your altruism, your altruistic impulses?

But you’re right that they could also be possibilities to do with individuals lasting into the future. There’s various ideas about how long we can last with lifespans extending very rapidly. It might be that some of the people who are alive now actually do directly experience some of this long-term future. Maybe there are things that could happen where their identity wouldn’t be preserved, because it’d be too radical a break. You’d become two different kinds of being and you wouldn’t really be the same person, but if being the same person is important to you, then maybe you could make smaller changes. I’ve barely looked into this at all. I know Nick Bostrom has thought about it more. There’s probably lots of interesting questions there.

Lucas Perry: Awesome. So could you give a short overview of natural or non-anthropogenic risks over the next century and why they’re not so important?

Toby Ord: Yeah. Okay, so the main natural risks I think we're facing are probably asteroid or comet impacts and supervolcanic eruptions. In the book, I also looked at stellar explosions like supernovae and gamma ray bursts, although since I estimate the chance of us being wiped out by one of those in the next 100 years to be one in a billion, we don't really need to worry about them.

But with asteroids, it does appear that the dinosaurs were destroyed 65 million years ago by a major asteroid impact. It's something that's been very well studied scientifically. I think the main reasons to think about it are, A, that it's very well understood scientifically and, B, that humanity has actually done a pretty good job on it. We only worked out 40 years ago that the dinosaurs were destroyed by an asteroid and that asteroids could be capable of causing such a mass extinction. In fact, it was only in 1960, 60 years ago, that we even confirmed that craters on the Earth's surface were caused by asteroids. So we knew very little about this until recently.

And then we've massively scaled up our scanning of the skies. We think that in order to cause a global catastrophe, an asteroid would probably need to be bigger than a kilometer across. We've found about 95% of the asteroids between 1 and 10 kilometers across, and we think we've found all of the ones bigger than 10 kilometers across. Since none of the ones we've found are on a trajectory to hit us within the next 100 years, it looks like we're very safe from asteroids.

Whereas supervolcanic eruptions are much less well understood. My estimate of the chance that we could be destroyed by one in the next 100 years is about one in 10,000. In the case of asteroids, we have looked into it so carefully that we've managed to check whether any are coming towards us right now, whereas for supervolcanic eruptions it can be hard to get the probability further down until we know more, and that's why my estimate for them is where it is. The Toba eruption was some kind of global catastrophe a very long time ago, though the early theories that it might have caused a population bottleneck and almost destroyed humanity don't seem to hold up anymore. It is still an illuminating example of continent-scale destruction and global cooling.

Lucas Perry: And so what is your total estimation of natural risk in the next century?

Toby Ord: About one in 10,000. All of these estimates are order-of-magnitude estimates, but I think the total is about the same level as I put the supervolcanic eruptions, and the other known natural risks I would put as much smaller. One of the reasons that we can give these low numbers is that humanity has survived for 2,000 centuries so far, and related species such as Homo erectus have survived for even longer. So we just know that there can't be that many things that could destroy all humans on the whole planet through these natural risks.
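To make the track-record reasoning above concrete, here is a minimal sketch. It assumes a constant per-century risk, and the sample values are illustrative only, not figures from the book or this conversation.

```python
# Illustrative sketch: how 2,000 centuries of survival constrains a constant
# per-century natural extinction risk. Assumes the risk p has been roughly
# constant, so P(survive) = (1 - p) ** 2000.

def survival_probability(per_century_risk: float, centuries: int = 2000) -> float:
    """Chance of surviving the given number of centuries at a constant per-century risk."""
    return (1.0 - per_century_risk) ** centuries

for p in (1 / 100, 1 / 1_000, 1 / 10_000):
    print(f"risk {p:.4%} per century: P(survive 2000 centuries) = {survival_probability(p):.3f}")

# Approximate output:
#   1.0000% per century: 0.000  (about 2e-9, so we almost certainly wouldn't be here)
#   0.1000% per century: 0.135
#   0.0100% per century: 0.819
# A constant natural risk near 1% per century is very hard to square with our
# survival, which is part of why the natural-risk estimates quoted here are so low.
```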

Lucas Perry: Right, the natural conditions and environment haven't changed so much.

Toby Ord: Yeah, that's right. I mean, this argument only works if the risk has either been constant or expectably constant; it could be that it's going up and down, but so long as we don't know which, the argument still works. The problem is that if we have some pretty good reasons to think the risks could be going up over time, then our long track record is not so helpful. And that's what happens when it comes to what you could think of as natural pandemics, such as the coronavirus.

This is something that got into humanity through some kind of human action, so how it got into humanity in the first place is not exactly natural, and then its spread through humanity via airplanes, traveling to different continents very quickly, is also not natural and is a faster spread than you would have had over this long-term history of humanity. And thus these kinds of safety arguments don't count for as much as they would for things like asteroid impacts.

Lucas Perry: This class of risks, then, is risky, but less risky than the human-made risks, which are a result of technology; the fancy x-risk jargon for these is anthropogenic risks. Some of these are nuclear weapons, climate change, environmental damage, synthetic biology-induced or AI-enabled pandemics, unaligned artificial intelligence, dystopian scenarios, and other risks. Could you say a little bit about each of these and why you view unaligned artificial intelligence as the biggest risk?

Toby Ord: Sure. Some of these anthropogenic risks we already face. Nuclear war is an example. What is particularly concerning is a very large scale nuclear war, such as between the U.S. and Russia. Nuclear winter models have suggested that the soot from burning buildings could get lifted up into the stratosphere, which is high enough that it wouldn't get rained out, so it could stay in the upper atmosphere for a decade or more and cause widespread global cooling. That would then cause massive crop failures, because there's not enough time between frosts to get a proper crop, and thus could lead to massive starvation and a global catastrophe.

Carl Sagan suggested it could potentially lead to our extinction, but the current people working on this, while they are very concerned about it, don’t suggest that it could lead to human extinction. That’s not really a scenario that they find very likely. And so even though I think that there is substantial risk of nuclear war over the next century, either an accidental nuclear war being triggered soon or perhaps a new Cold War, leading to a new nuclear war, I would put the chance that humanity’s potential is destroyed through nuclear war at about one in 1000 over the next 100 years, which is about where I’d put it for climate change as well.

There is debate as to whether climate change could really cause human extinction or a permanent collapse of civilization. I think the answer is that we don't know, similar to nuclear war. But they're both such large changes to the world, such unprecedentedly rapid and severe changes, that it's hard to be more than 99% confident that if one of them happened we'd make it through, and so there is a residual risk that's difficult to eliminate.

In the book, I look at the very worst climate outcomes: how much carbon is there in the methane clathrates under the ocean and in the permafrost? What would happen if it was released? How much warming would there be? And then what would happen if you had very severe warming, such as 10 degrees? I try to sketch out what we know about those things, and it is difficult to find direct mechanisms suggesting that we would go extinct or that we would collapse our civilization in a way from which it could never be restarted, especially given that civilization arose five times independently in different parts of the world already, so we know it's not a fluke to get it started. So it's difficult to see the direct reasons why it could happen, but we don't know enough to be sure that it can't happen, and to my mind that means there is still existential risk there.

Then I also have a kind of catch-all for other types of environmental damage, all of these other pressures that we're putting on the planet. I think it would be too optimistic to be sure that none of those could cause a collapse from which we can never recover either. When I look at the particular examples that are suggested, such as the collapse of pollinating insects and so forth, it's hard to see how they could cause this, so it's not that I'm just seeing problems everywhere, but I do think there's something to the general style of argument that unknown effects of the stressors we're putting on the planet could be the end for us.

So I'd put all of those kinds of current risks at about one in 1,000 over the next 100 years, but it's the anthropogenic risks from technologies that are still on the horizon that scare me the most, and this is in keeping with the idea of humanity's continued exponential growth in power, where you'd expect the risks to be escalating every century. The ones that I'm most concerned about, in particular, are engineered pandemics and the risk of unaligned artificial intelligence.
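As a rough illustration of how per-century estimates like these combine, here is a small sketch. The independence assumption and the grouping of sources are mine, purely for the example, and are not how the book presents its totals.

```python
# Hypothetical illustration: combining rough per-century existential risk
# estimates under an assumed independence between sources.
# P(at least one occurs) = 1 - product over sources of (1 - p_i).

risks = {
    "natural (asteroids, supervolcanoes, etc.)": 1 / 10_000,
    "nuclear war": 1 / 1_000,
    "climate change": 1 / 1_000,
    "other environmental damage": 1 / 1_000,
}

p_none = 1.0
for p in risks.values():
    p_none *= 1.0 - p

print(f"Combined risk from these sources alone: {1 - p_none:.2%}")
# Roughly 0.31% per century, far below the overall one-in-six figure discussed
# later in the conversation, which is dominated by risks from emerging
# technologies such as engineered pandemics and unaligned AI.
```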

Lucas Perry: All right. I think listeners will be very familiar with many of the arguments around why unaligned artificial intelligence is dangerous, so I think we can skip some of the crucial considerations there. Could you touch, then, on the risks of engineered pandemics, which may be newer to people, and then give a little bit of your total risk estimate for this class of risks?

Toby Ord: Ultimately, we do have some kind of a safety argument from the historical record when it comes to naturally arising pandemics. There are ways that they could be more dangerous now than they could have been in the past, but there are also many ways in which they're less dangerous. We have antibiotics. We have the ability to detect these threats in real time, sequence the DNA of the things that are attacking us, and then use our knowledge of quarantine and medicine to fight them. So we have some reasons to expect safety on that front.

But there are cases of pandemic pathogens being created to be even more transmissible or even more deadly than those that arise naturally, because the natural ones are not being optimized to be deadly. Deadliness is only selected for if it's in service of the pathogen spreading and surviving, and normally killing your host is a big problem for that. So there's room for people to try to engineer things to be worse than the natural ones.

One case is scientists looking to fight disease. Ron Fouchier, working on the bird flu, deliberately made a more infectious version of it that could be transmitted directly from mammal to mammal. He did that because he was trying to help, but it was, I think, very risky and a very bad move, and most of the scientific community didn't think it was a good idea. He did it in a biosafety level three enhanced lab, which is not the highest level of biosecurity, that's BSL-4, and even at the highest level there has been an escape of a pathogen from a BSL-4 facility. So these labs aren't safe enough, I think, to be used to work on newly enhanced pathogens that are more dangerous than anything nature can create, in a world where so far the biggest catastrophes that we know of were caused by pandemics. So I think it's pretty crazy to be working on such things until we have labs from which nothing has ever escaped.

But that's not what really worries me. What worries me more is bioweapons programs, and there was a lot of development of bioweapons in the 20th century in particular. The Soviet Union reportedly had 20 tons of smallpox that they had manufactured, for example. They had an accidental release of smallpox which killed civilians in Russia, and they had an accidental release of anthrax, blowing it out across a whole city and killing many people, so we know from cases like this that they had a very large bioweapons program. And the Biological Weapons Convention, which is the leading international institution for prohibiting bioweapons, is chronically underfunded and understaffed. The entire budget of the BWC is less than that of a typical McDonald's.

So this is an area where humanity doesn't have its priorities in order. Countries need to work together to step that up and to give the BWC more responsibilities, to actually do inspections and make sure that none of them are using bioweapons. And then I'm also really concerned by the dark side of the democratization of biotechnology: the fact that rapid developments like gene drives and CRISPR, two huge breakthroughs, perhaps Nobel Prize worthy, were in both cases replicated within two years by university students in science competitions.

So we now have a situation where two years earlier there was maybe one person in the world who could do it, or no one, then one person, and then within a couple of years we have perhaps tens of thousands of people who could do it, soon millions. Eventually that pool of people may include people like those in the Aum Shinrikyo cult that was responsible for the sarin gas attack in the Tokyo subway, a group one of whose active goals was to destroy everyone in the world. Once enough people can do these things and could make engineered pathogens, you'll get someone with this terrible but very rare motivation, or perhaps even just a country like North Korea that wants a kind of blackmail policy to make sure that no one ever invades. That's why I'm worried: these rapid advances are empowering us to make really terrible weapons.

Lucas Perry: All right, so wrapping things up here: how do we then safeguard the potential of humanity and Earth-originating intelligent life? You give advice on high-level strategy, on policy, and at the individual level, and this is all contextualized within a grand plan for humanity: that we reach existential security by getting to a place where existential risk is decreasing every century; that we then enter a period of long reflection to contemplate and debate what is good and how we might explore the universe and optimize it to express that good; and that we then execute that and achieve our potential. So again, how do we achieve all this, how do we mitigate x-risk, how do we safeguard the potential of humanity?

Toby Ord: That's an easy question to end on. What I tried to do in the book is to treat this at a whole lot of different levels. You referred to the most abstract level, and to some extent the point of that abstract level is to show that we don't need to get ultimate success right now. We don't need to solve everything, we don't need to find out what the fundamental nature of goodness is and what worlds would be the best. We just need to make sure we don't end up in the ones which are clearly among the worst.

The point of looking further onwards with the strategy is just to see that we can set some things aside for later. Our task now is to reach what I call existential security, and that involves an idea that will be familiar to many people who think about existential risk, which is to look at particular risks, work out how to manage them, and avoid falling victim to them, perhaps by being more careful with technology development, perhaps by creating protective technologies. For example, better biosurveillance systems to detect whether bioweapons have been released into the environment, so that we could contain them much more quickly, or, say, better work on alignment in AI research.

But it also involves not just fighting fires, but trying to become the kind of society where we don’t keep lighting these fires. I don’t mean that we don’t develop the technologies, but that we build in the responsibility for making sure that they do not develop into existential risks as part of the cost of doing business. We want to get the fruits of all of these technologies, both for the long-term and also for the short-term, but we need to be aware that there’s this shadow cost when we develop new things, and we blaze forward with technology. There’s shadow cost in terms of risk, and that’s not normally priced in. We just kind of ignore that, but eventually it will come due. If we keep developing things that produce these risks, eventually, it’s going to get us.

So what we need to do is develop our wisdom, both in terms of changing our common-sense conception of morality to take this long-term future seriously, and our debts to our ancestors seriously, and we also need international institutions to help avoid some of these tragedies of the commons and so forth, to find the cases where we'd all be prepared to pay the cost to get the security if everyone else were doing it too, but where we're not prepared to do it unilaterally. We need to work out mechanisms where we can all go into it together.

There are questions there in terms of policy. We need more policy-minded people within the science and technology space, people with an eye to the governance of their own technologies. This can be done within professional societies, but we also need more technology-minded people in the policy space. We often bemoan the fact that a lot of people in government don't really know much about how the internet works or how various technologies work, but part of the problem is that the people who do know how these things work don't go into government. You can't just blame the people in government for not knowing about your field; people who know about the field, maybe some of them should actually work in policy.

So I think we need to build that bridge from both sides, and I suggest a lot of particular policy things that we could do. A good example, in terms of how concrete and simple it can get, is that we renew the New START disarmament treaty. This is due to expire next year, and as far as I understand, the U.S. government and Russia don't have plans to actually renew it, which is crazy, because it's one of the things most responsible for nuclear disarmament. Making sure that we sign that treaty again is a very actionable point that people can motivate around.

And I think that there's something for everyone to do. We might think that existential risk is too abstract and can't really motivate people in the way that some other causes can, but I think that would be a mistake. I'm trying to sketch a vision in this book that I think a larger movement can coalesce around. If we look back a bit, when it came to nuclear war, the largest protest in America's history at that time was against nuclear weapons, in Central Park in New York, and it was on the grounds that this could be the end of humanity. And the largest movement at the moment, in terms of standing up for a cause, is on climate change, and it's motivated by exactly these ideas about irrevocable destruction of our heritage. This really can motivate people if it's expressed the right way, and that actually fills me with hope that things can change.

And similarly, when I think about ethics, in the 1950s there was almost no consideration of the environment within people's conception of ethics. It was considered totally outside the domain of ethics or morality and not really thought about much at all. The same with animal welfare; it was scarcely considered to be an ethical question at all. And now these are both key things that people are taught in their moral education in school, and we have entire government ministries for the environment. Within 10 years of Silent Spring coming out, I think all but one English-speaking country had a cabinet-level position on the environment.

So I think that we really can have big changes in our ethical perspective, but we need to start an expansive conversation about this and start unifying these things together, not just the anti-nuclear movement and the climate movement each fighting a particular fire, but being aware that if we want to get out in front of these things preemptively, we need to expand that into this general conception of existential risk and safeguarding humanity's long-term potential. I'm optimistic that we can do that.

That's why my best guess is that there's a one in six chance that we don't make it through this century, or put the other way around, a five in six chance that we do make it through. If we really played our cards right, we could make it a 99% chance that we make it through this century. We're not hostages to fortune. We humans get to decide what the future of humanity will be like. There's not much risk from external forces that we can't deal with, such as the asteroids. Most of the risk is of our own doing, and we can't just sit here and bemoan the fact that we're in some difficult prisoner's dilemma with ourselves. We need to get out and solve these things, and I think we can.

Lucas Perry: Yeah. This point about moving from the particular motivation and excitement around climate change and nuclear weapons issues to a broader civilizational concern with existential risk seems to be a crucial step in developing the kind of wisdom that we talked about earlier. So yeah, thank you so much for coming on, and thanks for your contribution to the field of existential risk with this book. It's really wonderful and I recommend listeners read it. If listeners are interested, where's the best place to pick it up? How can they follow you?

Toby Ord: You could check out my website at tobyord.com. You could follow me on Twitter @tobyordoxford or I think the best thing is probably to find out more about the book at theprecipice.com. On that website, we also have links as to where you can buy it in your country, including at independent bookstores and so forth.

Lucas Perry: All right, wonderful. Thanks again, for coming on and also for writing this book. I think that it’s really important for helping to shape the conversation in the world and understanding around this issue and I hope we can keep nailing down the right arguments and helping to motivate people to care about these things. So yeah, thanks again for coming on.

Toby Ord: Well, thank you. It’s been great to be here.

FLI Podcast: On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark

Neither Yuval Noah Harari nor Max Tegmark needs much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity’s future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise — in physics, artificial intelligence, history, philosophy and anthropology — to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies and the role of myths and stories in fostering societal collaboration and meaning. We hope that you’ll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us.

Topics discussed include:

  • Max and Yuval’s views and intuitions about consciousness
  • How they ground and think about morality
  • Effective altruism and its cause areas of global health/poverty, animal suffering, and existential risk
  • The function of myths and stories in human society
  • How emerging science, technology, and global paradigms challenge the foundations of many of our stories
  • Technological risks of the 21st century

Timestamps:

0:00 Intro

3:14 Grounding morality and the need for a science of consciousness

11:45 The effective altruism community and its main cause areas

13:05 Global health

14:44 Animal suffering and factory farming

17:38 Existential risk and the ethics of the long-term future

23:07 Nuclear war as a neglected global risk

24:45 On the risks of near-term AI and of artificial general intelligence and superintelligence

28:37 On creating new stories for the challenges of the 21st century

32:33 The risks of big data and AI enabled human hacking and monitoring

47:40 What does it mean to be human and what should we want to want?

52:29 On positive global visions for the future

59:29 Goodbyes and appreciations

01:00:20 Outro and supporting the Future of Life Institute Podcast

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today, I’m excited to be bringing you a conversation between professor, philosopher, and historian Yuval Noah Harari and MIT physicist and AI researcher, as well as Future of Life Institute president, Max Tegmark. Yuval is the author of popular science best sellers Sapiens: A Brief History of Humankind, Homo Deus: A Brief History of Tomorrow, and 21 Lessons for the 21st Century. Max is the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence.

This episode covers a variety of topics related to the interests and work of both Max and Yuval. It requires some background knowledge for everything to make sense, and so I’ll try to provide some necessary information here in the intro for listeners unfamiliar with Max’s work in particular. If you already feel well acquainted with Max’s work, feel free to skip ahead a minute or use the timestamps in the description for the podcast.

Topics discussed in this episode include: morality, consciousness, the effective altruism community, animal suffering, existential risk, the function of myths and stories in our world, and the benefits and risks of emerging technology. For those new to the podcast or effective altruism, effective altruism or EA for short is a philosophical and social movement that uses evidence and reasoning to determine the most effective ways of benefiting and improving the lives of others. And existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, to kill large swaths of the global population and leave the survivors unable to rebuild society to current living standards. Advanced emerging technologies are the most likely source of existential risk in the 21st century, for example through unfortunate uses of synthetic biology, nuclear weapons, and powerful future artificial intelligence misaligned with human values and objectives.

The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate

These contributions make it possible for us to bring you conversations like these and to develop the podcast further. You can also follow us on your preferred listening platform by searching for us directly or following the links on the page for this podcast found in the description. 

And with that, here is our conversation between Max Tegmark and Yuval Noah Harari.

Max Tegmark: Maybe to start at a place where I think you and I both agree, even though it's controversial: I get the sense from reading your books that you feel that morality has to be grounded on experience, subjective experience, which is just what I like to call consciousness. I love the argument you've given, for example, to people who think consciousness is just bullshit and irrelevant: you challenge them to tell you what's wrong with torture if it's just a bunch of electrons and quarks moving around this way rather than that way.

Yuval Noah Harari: Yeah. I think that there is no morality without consciousness and without subjective experiences. At least for me, this is very, very obvious. One of my concerns, again, if I think about the potential rise of AI, is that AI will be superintelligent but completely non-conscious, which is something that we never had to deal with before. So much of the philosophical and theological discussion of what happens when there is a greater intelligence in the world, we've been having for thousands of years, with God of course as the object of discussion, but the assumption always was that this greater intelligence would be A) conscious in some sense, and B) good, infinitely good.

And therefore I think that the question we are facing today is completely different, and to a large extent I suspect that we are really facing philosophical bankruptcy: what we've done for thousands of years didn't really prepare us for the kind of challenge that we have now.

Max Tegmark: I certainly agree that we have a very urgent challenge there. I think there is an additional risk which comes from the fact that, and I'm embarrassed as a scientist to say this, we actually don't know for sure which kinds of information processing are conscious and which are not. For many, many years, I was told, for example, that it's okay to boil lobsters alive before we eat them because they don't feel any suffering. And then I guess some guy asked the lobster, does this hurt? And it didn't say anything, and it was a self-serving argument. But then there was a recent study out that showed that lobsters actually do feel pain, and they've now banned boiling lobsters alive in Switzerland.

I'm very nervous whenever we humans make these very self-serving arguments saying, don't worry about the slaves, it's okay, they don't feel, they don't have a soul, they won't suffer, or women don't have a soul, or animals can't suffer. I'm very nervous that we're going to make the same mistake with machines just because it's so convenient. Whereas I feel the honest truth is, yeah, maybe future superintelligent machines won't have any experience, but maybe they will. And I think we really have a moral imperative there to do the science to answer that question, because otherwise we might be creating enormous amounts of suffering that we don't even know exists.

Yuval Noah Harari: For this reason and for several other reasons, I think we need to invest as much time and energy in researching consciousness as we do in researching and developing intelligence. If we develop sophisticated artificial intelligence before we really understand consciousness, there are a lot of really big ethical problems that we just don't know how to solve. One of them is the potential existence of some kind of consciousness in these AI systems, but there are many, many others.

Max Tegmark: I’m so glad to hear you say this actually because I think we really need to distinguish between artificial intelligence and artificial consciousness. Some people just take for granted that they’re the same thing.

Yuval Noah Harari: Yeah, I'm really amazed by it. I've been having quite a lot of discussions about these issues in the last two or three years, and I'm repeatedly amazed that a lot of brilliant people just don't understand the difference between intelligence and consciousness. It comes up in discussions about animals, but it also comes up in discussions about computers and about AI. To some extent the confusion is understandable, because in humans and other mammals and other animals, consciousness and intelligence really do go together, but we can't assume that this is a law of nature and that it's always like that. In a very, very simple way, I would say that intelligence is the ability to solve problems. Consciousness is the ability to feel things like pain and pleasure and love and hate.

Now in humans and chimpanzees and dogs and maybe even lobsters, we solve problems by having feelings. For a lot of the problems we solve, who to mate with and where to invest our money and who to vote for in elections, we rely on our feelings to make these decisions, but computers make decisions in a completely different way. At least today, very few people would argue that computers are conscious, and still they can solve certain types of problems much, much better than we can.

They have high intelligence in a particular field without having any consciousness and maybe they will eventually reach superintelligence without ever developing consciousness. And we don’t know enough about these ideas of consciousness and superintelligence, but it’s at least feasible that you can solve all problems better than human beings and still have zero consciousness. You just do it in a different way. Just like airplanes fly much faster than birds without ever developing feathers.

Max Tegmark: Right. That's definitely one of the reasons why people are so confused. There are two other reasons I've noticed, even among very smart people, for why they are utterly confused on this. One is that there are so many different definitions of consciousness. Some people define consciousness in a way that's almost equivalent to intelligence, but if you define it the way you did, as the ability to feel things, as simply having subjective experience, then I think a lot of people get confused because they have always thought of subjective experience, and intelligence for that matter, as something mysterious that can only exist in biological organisms like us. Whereas what I think we're really learning from the last century of progress in science is that, no, intelligence and consciousness are all about information processing.

People fall prey to this carbon chauvinism idea that it's only carbon or meat that can have these traits, whereas in fact it really doesn't matter whether the information is processed by a carbon atom in a neuron in the brain or by a silicon atom in a computer.

Yuval Noah Harari: I’m not sure I completely agree. I mean, we still don’t have enough data on that. There doesn’t seem to be any reason that we know of that consciousness would be limited to carbon based life forms, but so far this is the case. So maybe we don’t know something. My hunch is that it could be possible to have non-organic consciousness, but until we have better evidence, there is an open possibility that maybe there is something about organic biochemistry, which is essential and we just don’t understand.

And the other open question is that we are not really sure that consciousness is just about information processing. I mean, at present, this is the dominant view in the life sciences, but we don't really know, because we don't understand consciousness. My personal hunch is that non-organic consciousness is possible, but I wouldn't say that we know that for certain. And the other point is that, if you think about it in the broadest sense possible, I think there is an entire potential universe of different conscious states and we know just a tiny, tiny bit of it.

Max Tegmark: Yeah.

Yuval Noah Harari: Again, thinking a little about different life forms: human beings are just one type of life form, and there are millions of other life forms that have existed and billions of potential life forms that never existed but might exist in the future. And it's a bit like that with consciousness. We really know just human consciousness; we don't understand even the consciousness of other animals, and beyond that there is potentially an infinite number of conscious states or traits that never existed and might exist in the future.

Max Tegmark: I agree with all of that. And I think if you can have nonorganic consciousness, artificial consciousness, which would be my guess, although we don’t know it, I think it’s quite clear then that the mind space of possible artificial consciousness is vastly larger than anything that evolution has given us, so we have to have a very open mind.

If we simply take away from this that we should understand which entities, biological and otherwise, are conscious and can experience suffering, pleasure and so on, and we try to base our morality on the idea that we want to create more positive experiences and eliminate suffering, then this leads straight into what I find very much at the core of the so-called effective altruism community, which we at the Future of Life Institute view ourselves as part of. The idea is that we want to do what we can to make a future that's good in that sense, with lots of positive experiences and not negative ones, and we want to do it effectively.

We want to put our limited time and money and so on into those efforts which will make the biggest difference. And the EA community has for a number of years been highlighting a top three list of issues that they feel are the ones that are most worth putting effort into in this sense. One of them is global health, which is very, very non-controversial. Another one is animal suffering and reducing it. And the third one is preventing life from going extinct by doing something stupid with technology.

I’m very curious whether you feel that the EA movement has basically picked out the correct three things to focus on or whether you have things you would subtract from that list or add to it. Global health, animal suffering, X-risk.

Yuval Noah Harari: Well, I think that nobody can do everything, so whether you're an individual or an organization, it's a good idea to pick a good cause and then focus on it and not spend too much time wondering about all the other things that you might do. I mean, these three causes are certainly some of the most important in the world. I would just say, about the first one, that it's not easy at all to determine what the goals are. As long as health means simply fighting illnesses and sicknesses and bringing people up to what is considered a normal level of health, then that's not very problematic.

But in the coming decades, I think that the healthcare industry will focus more and more not on fixing problems but rather on enhancing abilities, enhancing experiences, enhancing bodies and brains and minds and so forth. And that's much, much more complicated, both because of the potential issues of inequality and simply because we don't know what to aim for. One of the reasons that, when you first asked me about morality, I focused on suffering and not on happiness is that suffering is a much clearer concept than happiness. That's why, when you talk about healthcare, if you think about this image of the line of normal health, the baseline of what a healthy human being is, it's much easier to deal with things falling under that line than with things that are potentially above it. So even this first issue will become extremely complicated in the coming decades.

Max Tegmark: And then for the second issue, animal suffering, you've used some pretty strong words before. You've said that industrial farming is one of the worst crimes in history and you've called the fate of industrially farmed animals one of the most pressing ethical questions of our time. A lot of people would be quite shocked to hear you using such strong words about this, since they routinely eat factory farmed meat. How do you explain this to them?

Yuval Noah Harari: This is quite straightforward. I mean, we are talking about billions upon billions of animals. The majority of large animals today in the world are either humans or domesticated animals, cows and pigs and chickens and so forth. So we're talking about a lot of animals and we are talking about a lot of pain and misery. The industrially farmed cow and chicken are probably competing for the title of the most miserable creatures that ever existed. They are capable of experiencing a wide range of sensations and emotions, and in most of these industrial facilities they are experiencing the worst possible sensations and emotions.

Max Tegmark: In my case, you're preaching to the choir here. I find this so disgusting that my wife and I just decided to mostly be vegan. I don't go preach to other people about what they should do, but I just don't want to be a part of this. It reminds me so much of things you've written yourself, about how people used to justify having slaves by saying, "It's the white man's burden. We're helping the slaves. It's good for them." In much the same way now, we make these very self-serving arguments for why we should be doing this. What do you personally take away from this? Do you eat meat now, for example?

Yuval Noah Harari: Personally I define myself as vegan-ish. I mean, I'm not strictly vegan. I don't want to make a kind of religion out of it and start thinking in terms of purity and whatever. I try to limit as far as possible my involvement with industries that harm animals for no good reason, and it's not just meat and dairy and eggs, it can be other things as well. The chains of causality in the world today are so complicated that you cannot really extricate yourself completely. It's just impossible. So for me, and also what I tell other people, is just do your best. Again, don't make it into a kind of religious issue. If somebody comes and tells you, "I'm now thinking about this animal suffering and I've decided to have one day a week without meat," then don't start blaming that person for eating meat the other six days. Just congratulate them on making one step in the right direction.

Max Tegmark: Yeah, that sounds not just like good morality but also good psychology if you actually want to nudge things in the right direction. And then coming to the third one, existential risk. There, I love how Nick Bostrom asks us to compare two scenarios, one in which some calamity kills 99% of all people and another where it kills 100% of all people, and then asks how much worse the second one is. The point being, obviously, that if we kill everybody we might actually forfeit having billions or quadrillions or more future minds experiencing amazing things for billions of years. This is not something I've seen you talk as much about in your writing, so I'm very curious how you think about this morally. How do you weigh future experiences that could exist against the ones that we know exist now?

Yuval Noah Harari: I don't really know. I don't think that we understand consciousness and experience well enough to even start making such calculations. In general, my suspicion, at least based on our current knowledge, is that it's simply not a mathematical entity that can be calculated. We know all these philosophical riddles that people sometimes enjoy debating so much, about whether you have five people of this kind and a hundred people of that kind, and who should you save, and so forth and so on. It's all based on the assumption that experience is a mathematical entity that can be added and subtracted, and my suspicion is that it's just not like that.

To some extent, yes, we make these kinds of comparisons and calculations all the time, but on a deeper level, I think it's taking us in the wrong direction. At least at our present level of knowledge, it's not as if eating ice cream is one point of happiness and killing somebody is a million points of misery, so that if by killing somebody we could allow 1,000,001 people to enjoy ice cream, it would be worth it.

I think the problem here is not that we've given the wrong point values to the different experiences; it's that experience is not a mathematical entity in the first place. And again, I know that in some cases we have to do these kinds of calculations, but I would be extremely careful about it and I would definitely not use it as the basis for building entire moral and philosophical projects.

Max Tegmark: I certainly agree with you that it's an extremely difficult set of questions you get into if you try to trade off positives against negatives, like in the ice cream versus murder case you mentioned there. But I still feel that all in all, as a species, we tend to be a little bit too sloppy and flippant about the future, maybe partly because we haven't evolved to think much about what happens in billions of years anyway. Look at how reckless we've been with nuclear weapons, for example. I was recently involved with our organization giving an award to honor Vasily Arkhipov, who quite likely prevented nuclear war between the US and the Soviet Union, and most people hadn't even heard about that for 40 years. More people have heard of Justin Bieber than of Vasily Arkhipov, even though nuclear war would unambiguously have been a really, really bad thing, and we should celebrate people who do courageous acts that prevent it.

In the same spirit, I often feel concerned that there's so little attention paid even to risks that we drive ourselves extinct or cause giant catastrophes, compared to how much attention we pay to the Kardashians or whether we can get 1% less unemployment next year. So I'm curious if you have some sympathy for my angst here or whether you think I'm overreacting.

Yuval Noah Harari: I completely agree. I often put it as us now being kind of irresponsible gods. Certainly with regard to the other animals and the ecological system, and with regard to ourselves, we have really divine powers of creation and destruction, but we don't take our job seriously enough. We tend to be very irresponsible in our thinking and in our behavior. On the other hand, part of the problem is that the number of potential apocalypses has been growing exponentially over the last 50 years. And as a scholar and as a communicator, I think it's part of our job to be extremely careful in the way that we discuss these issues with the general public. And it's very important to focus the discussion on the more likely scenarios, because if we just go on bombarding people with all kinds of potential scenarios of complete destruction, very soon we just lose people's attention.

They become extremely pessimistic, feeling that everything is hopeless, so why worry about all that? So I think part of the job of the scientific community, and of people who deal with these kinds of issues, is to really identify the most likely scenarios and focus the discussion on them, even if there are some other scenarios which have a small chance of occurring and completely destroying all of humanity and maybe all of life. We just can't deal with everything at the same time.

Max Tegmark: I completely agree with that, and what you said is very much in the spirit of effective altruism: we want to focus on the things that really matter the most and not turn everybody into hypochondriacs, paranoid and worried about everything. The one caveat I would give is that we shouldn't just look at the probability of each bad thing happening; we should look at the expected damage it will do, the probability times how bad it is.

Yuval Noah Harari: I agree.

Max Tegmark: Take nuclear war, for example. Maybe the chance of having an accidental nuclear war between the US and Russia is only 1% per year, or 10% per year, or one in a thousand per year. But if you have the nuclear winter caused by that, with soot and smoke in the atmosphere blocking out the sun for years, that could easily kill 7 billion people, so most people on Earth, through mass starvation, because it would be about 20 degrees Celsius colder. That means that if it's a 1% chance per year, which seems small, you're still killing on average 70 million people a year. That's the number that matters, I think, and it means we should make it a much higher priority to reduce that risk.
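To spell out the expected-value arithmetic here, a minimal sketch follows; the death toll and per-year probabilities are the illustrative figures mentioned above, not precise estimates.

```python
# Expected annual deaths = P(accidental nuclear war in a given year) * deaths if it happens.
deaths_if_war = 7_000_000_000  # roughly "most people on Earth" in the nuclear-winter scenario

for p_per_year in (1 / 1_000, 1 / 100, 1 / 10):
    expected_deaths = p_per_year * deaths_if_war
    print(f"p = {p_per_year:.3f} per year: about {expected_deaths:,.0f} expected deaths per year")

# At 1% per year this is about 70,000,000 expected deaths per year, the figure
# argued above to be what should drive prioritization, rather than the
# small-looking probability on its own.
```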

Yuval Noah Harari: With nuclear war, I would say that we are not concerned enough. I mean, too many people, including politicians, have this weird impression that, "Well, nuclear war, that's history. That was the 60s and 70s, when people worried about it."

Max Tegmark: Exactly.

Yuval Noah Harari: "It's not a 21st century issue." This is ridiculous. I mean, we are now in even greater danger, at least in terms of the technology, than we were during the Cuban missile crisis. But you must remember the Stanley Kubrick film, Dr. Strangelove-

Max Tegmark: One of my favorite films of all time.

Yuval Noah Harari: Yeah. And the subtitle of the film is "How I Learned to Stop Worrying and Love the Bomb."

Max Tegmark: Exactly.

Yuval Noah Harari: And the funny thing is it actually happened. People stopped worrying about the bomb. Maybe they don't love it very much, but compared to the 50s and 60s, people just don't talk about it. Look at the Brexit debate in Britain: Britain is one of the leading nuclear powers in the world and it's not even mentioned. It's not part of the discussion anymore. And that's very problematic, because I think this is a very serious existential threat. But I'll take a counterexample, which is in the field of AI. I understand the philosophical importance of discussing the possibility of general AI emerging in the future and then rapidly taking over the world, with all the paperclip scenarios and so forth.

But I think that at the present moment it really distracts people's attention from the immediate dangers of the AI arms race, which has a far, far higher chance of materializing in the next, say, 10, 20, 30 years. And we need to focus people's minds on these short-term dangers. I know that there is a small chance that general AI will be upon us, say, in the next 30 years, but I think it's a very, very small chance, whereas the chance that this kind of primitive AI will completely disrupt the economy, the political system and human life in the next 30 years is about 100%. It's bound to happen.

Max Tegmark: Yeah.

Yuval Noah Harari: And I worry far more about what primitive AI will do to the job market, to the military, to people’s daily lives than about a general AI appearing in the more distant future.

Max Tegmark: Yeah, there are a few reactions to this. We can talk more about artificial general intelligence and superintelligence later if we get time. But there was a recent survey of AI researchers around the world asking what they thought and I was interested to note that actually most of them guessed that we will get artificial general intelligence within decades. So I wouldn’t say that the chance is small, but I would agree with you, that is certainly not going to happen tomorrow.

But if we eat our vitamins, you and I, and meditate and go to the gym, it's quite likely we will actually get to experience it. But more importantly, coming back to what you said earlier, I see all of these risks as really being one and the same risk, in the sense that what's happened is of course that science has kept getting ever more powerful, and science definitely gives us ever more powerful technology. And I love technology. I'm a nerd. I work at a university that has technology in its name, and I'm optimistic we can create an inspiring high-tech future for life if we win what I like to call the wisdom race.

That's the race between the growing power of the technology and the growing wisdom with which we manage it, or, putting it in the words you just used, whether we can learn to take more seriously our job as stewards of this planet. You can look at every science and see exactly the same thing happening. We physicists are kind of proud that we gave the world cell phones and computers and lasers, but our problem child has been nuclear energy obviously, nuclear weapons in particular. Chemists are proud that they gave the world all these great new materials, and their problem child is climate change. Biologists, in my book, have actually done the best so far. They got together in the 70s and persuaded leaders to ban biological weapons and draw a clear red line, more broadly, between acceptable and unacceptable uses of biology.

And that's why today most people think of biology as really a force for good, something that cures people or helps them live healthier lives. AI is right now lagging a little bit in time. It's finally getting to the point where AI researchers are starting to have an impact, and they're grappling with the same kind of question. They haven't had big disasters yet, so they're in the biology camp there, but they're trying to figure out where to draw the line between acceptable and unacceptable uses, so you don't get a crazy military AI arms race in lethal autonomous weapons, so you don't create very destabilizing income inequality, so that AI doesn't create 1984 on steroids, et cetera.

And I wanted to ask you what sort of new story you feel we need as a society in order to tackle these challenges. I've been very, very persuaded by your arguments that stories are central to society, to how we collaborate and accomplish things, but you've also made a really compelling case, I think, that the most popular recent stories are all getting less powerful or popular: communism, and now there's a lot of disappointment in liberalism. It feels like a lot of people are craving a new story, one that involves technology somehow and that can help us get our act together and also help us feel meaning and purpose in this world. But I've never seen in your books a clear answer to what you feel this new story should be.

Yuval Noah Harari: Because I don't know. If I knew the new story, I would tell it. I think we are now in a kind of double bind; we have to fight on two different fronts. On the one hand, we are witnessing in the last few years the collapse of the last big modern story, of liberal democracy and liberalism more generally, which has been, I would say, as a story, the best story humans ever came up with, and it did create the best world that humans ever enjoyed. I mean, the world of the late 20th century and early 21st century, with all its problems, is still better for humans, not for cows or chickens, but for humans, than at any previous moment in history.

There are many problems, but anybody who says that this was a bad idea, I would like to hear which year are you thinking about as a better year? Now in 2019, when was it better? In 1919, in 1719, in 1219? I mean, for me, it’s obvious this has been the best story we have come up with.

Max Tegmark: That’s so true. I have to just admit that whenever I read the news for too long, I start getting depressed. But then I always cheer myself up by reading history and reminding myself it was always worse in the past.

Yuval Noah Harari: That never fails. I mean, the last four years have been quite bad, things are deteriorating, but we are still better off than in any previous era. But people are losing faith in this story, and we are reaching really a situation of zero story. All the big stories of the 20th century have collapsed or are collapsing, and the vacuum is currently filled by nostalgic fantasies, nationalistic and religious fantasies, which simply don't offer any real solutions to the problems of the 21st century. So on the one hand we have the task of supporting or reviving the liberal democratic system, which is so far the only game in town. I keep listening to the critics, and they have a lot of valid criticism, but I'm waiting for the alternative, and the only thing I hear is completely unrealistic nostalgic fantasies about going back to some past golden era that, as a historian, I know was far, far worse. And even if it was not so far worse, you just can't go back there. You can't recreate the 19th century or the Middle Ages under the conditions of the 21st century. It's impossible.

So we have this one struggle to maintain what we have already achieved, but then at the same time, on a much deeper level, my suspicion is that the liberal story as we know it, at least, is really not up to the challenges of the 21st century, because it's built on foundations that the new science, and especially the new technologies of artificial intelligence and bioengineering, are just destroying: the belief we inherited in the autonomous individual, in free will, in all these basically liberal mythologies. They will become increasingly untenable in contact with new, powerful bioengineering and artificial intelligence.

To put it in a very, very concise way, I think we are entering the era of hacking human beings, not just hacking smartphones and bank accounts, but really hacking Homo sapiens, which was impossible before. AI gives us the necessary computing power and biology gives us the necessary biological knowledge, and when you combine the two you get the ability to hack human beings. If you continue to try to build society on the philosophical ideas of the 18th century about the individual and free will, in a world where it's technically feasible to hack millions of people systematically, it's just not going to work. We need an updated story, and, just to finish this thought, our problem is that we need to defend the old story from the nostalgic fantasies at the same time that we are replacing it with something else. And it's just very, very difficult.

When I began writing my books, like five years ago, I thought the real project was to really go down to the foundations of the liberal story, expose the difficulties, and build something new. And then you had all these nostalgic populist eruptions of the last four or five years, and I personally find myself more and more engaged in defending the old-fashioned liberal story instead of replacing it. Intellectually, it’s very frustrating, because I think the really important intellectual work is finding the new story, but politically it’s far more urgent. If we allow the emergence of some kind of populist authoritarian regimes, then whatever comes out of it will not be a better story.

Max Tegmark: Yeah, unfortunately I agree with your assessment here. I love to travel. I work in basically a United Nations-like environment at my university, with students from all around the world, and I have this very strong sense that people are feeling increasingly lost around the world today, because the stories that used to give them a sense of purpose and meaning and so on are sort of dissolving in front of their eyes. And of course, when we feel lost, we’re likely to jump on whatever branches are held out for us, and they are often just retrograde things: let’s go back to the good old days, and all sorts of other unrealistic things. But I agree with you that the rise in populism we’re seeing now is not the cause. It’s a symptom of people feeling lost.

So I think I was a little bit unfair to ask you to answer, in a few minutes, the toughest question of our time: what should our new story be? But maybe we could break it into pieces a little bit and ask what are at least some elements that we would like the new story to have. It should accomplish, of course, multiple things. It has to incorporate technology in a meaningful way, which our past stories did not, and it has to incorporate AI and progress in biotech, for example. And it also has to be a truly global story this time, I think, one which isn’t just a story about how America is going to get better off or China is going to get better off, but one about how we’re all going to get better off together.

And we can put up a whole bunch of other requirements. If we start maybe with this part about the global nature of the story: people disagree violently about so many things around the world, but are there any ingredients of the story at all, any principles or ideas, that you think people around the world would already agree to?

Yuval Noah Harari: Again, I don’t really know. I mean, I don’t know what the new story would look like. Historically, these kinds of really grand narratives aren’t created by two or three people having a discussion and thinking, okay, what new story should we tell? It’s far deeper and more powerful forces that come together to create these new stories. Even trying to say, okay, we don’t have the full view, but let’s try to put a few ingredients in place: the whole thing about a story is that the whole comes before the parts. The narrative is far more important than the individual facts that build it up.

So I’m not sure that we can start creating the story by just putting down the first few sentences, and who knows how it will continue. You write books, I write books; we know that the first few sentences are usually the last sentences that you write.

Max Tegmark: That’s right.

Yuval Noah Harari: Only when you know what the whole book is going to look like do you go back to the beginning and write the first few sentences.

Max Tegmark: Yeah. And sometimes the very last thing you write is the new title.

Yuval Noah Harari: So I agree that whatever the new story is going to be, it’s going to be global. The world is now too small and too interconnected to have a story for just one part of the world. It won’t work. And it will also have to take very seriously both the most updated science and the most updated technology, something that liberal democracy as we know it does not: it’s basically still in the 18th century, taking an 18th century story and simply following it to its logical conclusions. For me, maybe the most amazing thing about liberal democracy is that it has really completely disregarded all the discoveries of the life sciences over the last two centuries.

Max Tegmark: And of the technical sciences!

Yuval Noah Harari: I mean, as if Darwin never existed and we know nothing about evolution. You could basically meet these folks from the middle of the 18th century, whether it’s Rousseau, Jefferson, and all these guys, and they would be surprised by some of the conclusions we have drawn from the basis they provided us, but fundamentally nothing has changed. Darwin didn’t really change anything. Computers didn’t really change anything. And I think the next story won’t have that luxury of being able to ignore the discoveries of science and technology.

The number one thing it will have to take into account is how humans live in a world where there is somebody out there that knows you better than you know yourself, but that somebody isn’t God; that somebody is a technological system, which might not be a good system at all. That’s a question we never had to face before. We could always comfort ourselves with the idea that we are a kind of black box to the rest of humanity. Nobody can really understand me better than I understand myself. The king, the emperor, the church, they don’t really know what’s happening within me. Maybe God knows. So we had a lot of discussions about what to do with the existence of a God who knows us better than we know ourselves, but we never really had to deal with a non-divine system that can hack us.

And this system is emerging. I think it will be in place within our lifetime, in contrast to general artificial intelligence, which I’m skeptical whether I’ll see in my lifetime. I’m convinced we will see, if we live long enough, a system that knows us better than we know ourselves, and the basic premises of democracy, of free market capitalism, even of religion, just don’t work in such a world. How does democracy function in a world where somebody understands the voter better than the voter understands herself or himself? And the same with the free market: if the customer is not right, if the algorithm is right, then we need a completely different economic system. That’s the big question that I think we should be focusing on. I don’t have the answer, but whatever story will be relevant to the 21st century will have to answer this question.

Max Tegmark: I certainly agree with you that democracy has totally failed to adapt to the developments in the life sciences, and I would add the developments in the natural sciences too. I watched all of the debates between Trump and Clinton in the last election here in the US, and I don’t think artificial intelligence got mentioned even a single time, not even when they talked about jobs. And the voting system we have, with an electoral college here where it doesn’t even matter how people vote except in a few swing states, gives the voter so little influence over what actually happens, even though we now have blockchain and could easily implement technical solutions where people would be able to have much more influence. It just reflects that we basically declared victory on our democratic system hundreds of years ago and haven’t updated it.

And I’m very interested in how we can dramatically revamp it, if we believe in some form of democracy, so that we actually can have more influence on how our society is run as individuals, and so that we can have good reason to actually trust the system, if it is able to hack us, to actually work in our best interest. There’s a key tenet in religions that you’re supposed to be able to trust that God has your best interest in mind, and I think many people in the world today do not trust that their political leaders actually have their best interest in mind.

Yuval Noah Harari: Certainly, I mean that’s the issue: you give really divine powers to far-from-divine systems. But we shouldn’t be too pessimistic. I mean, the technology is not inherently evil either, and what history teaches us about technology is that technology is also never deterministic. You can use the same technologies to create very different kinds of societies. We saw that in the 20th century, when the same technologies were used to build communist dictatorships and liberal democracies. There was no real technological difference between the USSR and the USA; it was just people making different decisions about what to do with the same technology.

I don’t think that the new technology is inherently anti-democratic or inherently anti-liberal. It really is about the choices that people make, even in what kind of technological tools to develop. If I think about, again, AI and surveillance: at present we see all over the world that corporations and governments are developing AI tools to monitor individuals, but technically we can do exactly the opposite. We can create tools that monitor and survey governments and corporations in the service of individuals, for instance to fight corruption in the government. As an individual, it’s very difficult for me to, say, monitor nepotism, politicians appointing all kinds of family members to lucrative positions in the government or in the civil service, but it should be very easy to build an AI tool that goes over the immense amount of information involved. In the end you just get a simple application on your smartphone: you enter the name of a politician and you immediately see, within two seconds, who he or she appointed from their family and friends, and to what positions. It should be very easy to do. I don’t see the Chinese government creating such an application anytime soon, but people can create it.

Or if you think about the fake news epidemic: basically what’s happening is that corporations and governments are hacking us in their service, but the technology can work the other way around. We can develop an antivirus for the mind, the same way we developed antivirus for the computer: an AI system that serves me, and not a corporation or a government, and that gets to know my weaknesses in order to protect me against manipulation.

At present, what’s happening is that the hackers are hacking me. They get to know my weaknesses, and that’s how they are able to manipulate me, for instance with fake news. If they discover that I already have a bias against immigrants, they show me one fake news story, maybe about a group of immigrants raping local women, and I easily believe that because I already have this bias. My neighbor may have an opposite bias. She may think that anybody who opposes immigration is a fascist, and the same hackers will find that out and will show her a fake news story about, I don’t know, right wing extremists murdering immigrants, and she will believe that.

And then if I meet my neighbor, there is no way we can have a conversation about immigration. Now, we can and should develop an AI system that serves me and my neighbor and alerts us: look, somebody is trying to hack you, somebody is trying to manipulate you. And if we learn to trust this system, that it serves us and doesn’t serve any corporation or government, it becomes an important tool in protecting our minds from being manipulated. Another tool in the same field: we are now basically feeding enormous amounts of mental junk food to our minds.

We spend hours every day basically feeding our hatred, our fear, our anger, and that’s a terrible and stupid thing to do. The thing is that people discovered that the easiest way to grab our attention is by pressing the hate button in the mind or the fear button in the mind, and we are very vulnerable to that.

Now, just imagine that somebody develops a tool that shows you what’s happening to your brain or to your mind as you’re watching these YouTube clips. Maybe it doesn’t block you; it’s not Big Brother that blocks all these things. It’s just like when you buy a product and it shows you how many calories are in the product and how much saturated fat and how much sugar there is in the product, so at least in some cases you learn to make better decisions. Just imagine that you have this small window in your computer which tells you what’s happening to your brain as you’re watching this video, and what’s happening to your levels of hatred or fear or anger, and then you make your own decision. At least you are more aware of what kind of food you’re giving to your mind.

Max Tegmark: Yeah. This is something I am also very interested in seeing more of: AI systems that empower the individual in all the ways that you mentioned. We at the Future of Life Institute are actually very interested in supporting this kind of thing on the nerdy technical side, and I think this also drives home this very important fact that technology is not good or evil. Technology is an amoral tool that can be used both for good things and for bad things. That’s exactly why I feel it’s so important that we develop the wisdom to use it for good things rather than bad things. So in that sense, AI is no different than fire, which can be used for good things and for bad things, but we as a society have developed a lot of wisdom now in fire management. We educate our kids about it. We have fire extinguishers and fire trucks. And with artificial intelligence and other powerful tech, I feel we need to do better in similarly developing the wisdom that can steer the technology towards better uses.

Now we’re reaching the end of the hour here. I’d like to just finish with two more questions. One of them is about what we want it to ultimately mean to be human as we get ever more tech. You put it so beautifully, I think it was in Sapiens, that tech progress is gradually taking us beyond asking what we want to asking instead what we want to want, and I guess even more broadly how we want to brand ourselves, how we want to think about ourselves as humans in the high tech future.

I’m quite curious, first of all, about you personally: if you think about yourself in 30 years, 40 years, what do you want to want, and what sort of society would you like to live in, say in 2060, if you could have it your way?

Yuval Noah Harari: It’s a profound question. It’s a difficult question. My initial answer is that I would really like not just to know the truth about myself, but to want to know the truth about myself. Usually the main obstacle to knowing the truth about yourself is that you don’t want to know it. It’s always accessible to you. I mean, we’ve been told for thousands of years by all the big names in philosophy and religion, almost all saying the same thing: get to know yourself better. It’s maybe the most important thing in life. We haven’t really progressed much in the last thousands of years, and the reason is that yes, we keep getting this advice, but we don’t really want to do it.

Working on our motivation in this field I think would be very good for us. It would also protect us from all the naive utopias, which tend to draw far more of our attention. I mean, especially as technology gives us all, or at least some of us, more and more power, the temptations of naive utopias are going to be more and more irresistible, and I think the most powerful check on these naive utopias is really getting to know yourself better.

Max Tegmark: Would you like what it means to be Yuval in 2060 to be more on the hedonistic side, where you have all these blissful experiences and serene meditation and so on, or would you like there to be a lot of challenges in there that give you a sense of meaning or purpose? Would you like to be somehow upgraded with technology?

Yuval Noah Harari: None of the above, at least if I think deeply enough about these issues. Yes, I would like to be upgraded, but only in the right way, and I’m not sure what the right way is. I’m not a great believer in blissful experiences, in meditation or otherwise; they tend to be traps, in the sense that this is what we’ve been looking for all our lives. For millions of years all the animals have just constantly looked for blissful experiences, and after a couple of million years of evolution it doesn’t seem to bring us anywhere. And especially in meditation you learn that these kinds of blissful experiences can be the most deceptive, because you fall under the impression that this is the goal you should be aiming at.

This is a really good meditation, this is a really deep meditation, simply because you’re very pleased with yourself, and then you spend countless hours later on trying to get back there, or regretting that you are not there, and in the end it’s just another experience. What we experience right now, when we are talking on the phone to each other and I feel something in my stomach and you feel something in your head, this is as special and amazing as the most blissful experience of meditation. The only difference is that we’ve gotten used to it, so we are not amazed by it. But right now we are experiencing the most amazing thing in the universe and we just take it for granted, partly because we are distracted by this notion that out there, there is something really, really special that we should be experiencing. So I’m a bit suspicious of blissful experiences.

Again, I would just basically repeat that to really understand yourself also means to really understand the nature of these experiences, and if you really understand that, then so many of these big questions will be answered. Similarly with the question that we dealt with at the beginning, of how to evaluate different experiences and what kind of experiences we should be creating for humans or for artificial consciousness: for that you need to deeply understand the nature of experience. Otherwise, there are so many naive utopias that can tempt you. So I would focus on that.

When I say that I want to know the truth about myself, it also means to really understand the nature of these experiences.

Max Tegmark: On to my very last question, coming back to this story and ending on a positive, inspiring note. I’ve been thinking back to times when new stories led to very positive change, and I started thinking about a particular Swedish story. The year was 1945, and people were looking at each other all over Europe saying, “We screwed up again.” How about, instead of using all this technology to build ever more powerful weapons, people were saying then, we instead use it to create a society that benefits everybody, where we can have free health care, free university for everybody, free retirement, and build a real welfare state? And I’m sure there were a lot of curmudgeons around who said, “Aw, you know, that’s just hopeless naive dreamery, go smoke some weed and hug a tree, because it’s never going to work.” Right?

But this story, this optimistic vision, was sufficiently concrete, and sufficiently both bold and realistic-seeming, that it actually caught on. We did this in Sweden, and it actually conquered the world. Not like when the Vikings tried and failed to do it with swords, but this idea conquered the world, and now many rich countries have copied it. I keep wondering if there is another new vision or story like this, some sort of welfare 3.0, which incorporates all of the exciting new technology that has happened since ’45, on the biotech side, on the AI side, et cetera, to envision a society which is truly bold and sufficiently appealing that people around the world could rally around it.

I feel that a shared positive experience is something that, more than anything else, can really help foster collaboration around the world. And I’m curious what you would say in terms of a bold, positive vision for the planet, moving away now from what you spoke about earlier about yourself personally, getting to know yourself and so on.

Yuval Noah Harari: I think we can aim towards what you define as welfare 3.0, which is again based on a better understanding of humanity. The welfare state, which many countries have built over the last decades, has been an amazing human achievement, and it achieved many concrete results in fields where we knew what to aim for, like health care. So, okay, let’s vaccinate all the children in the country and let’s make sure everybody has enough to eat. We succeeded in doing that. A kind of welfare 3.0 program would try to expand that to other fields in which our achievements are far more moderate, simply because we don’t know what to aim for. We don’t know what we need to do.

If you think about mental health, it’s much more difficult than providing food to people, because we have a very poor understanding of the human mind and of what mental health is. Even if you think about food, one of the scandals of science is that we still don’t know what to eat. We have basically solved the problem of enough food; now we actually have the opposite problem, of people eating too much rather than too little. But beyond the question of quantity, I think it’s one of the biggest scandals of science that after centuries we still don’t know what we should eat, mainly because so many of these miracle diets are one-size-fits-all, as if everybody should eat the same thing, whereas obviously it should be tailored to individuals.

So if you harness the power of AI and big data and machine learning and biotechnology, you could create the best dietary system in the world, one that tells people individually what would be good for them to eat. And this would have enormous side benefits in reducing medical problems, in reducing waste of food and resources, helping with the climate crisis, and so forth. So this is just one example.

Max Tegmark: Yeah. Just on that example, I would argue that part of the problem goes beyond the fact that we just don’t know enough: there are actually a lot of lobbyists telling people what to eat, knowing full well that it’s bad for them, just because that way they’ll make more of a profit. Which gets back to your question of hacking, and how we can prevent ourselves from getting hacked by powerful forces that don’t have our best interest in mind. But the things you mentioned seem a little bit like a first world perspective, which is easy to get when we live in Israel or Sweden, but of course there are many people on the planet who still live in pretty miserable situations where we actually can quite easily articulate how to make things at least a bit better.

But then also in our societies: I mean, you touched on mental health. There’s a significant rise in depression in the United States. Life expectancy in the US has gone down three years in a row, which does not suggest that people are getting happier here. I’m wondering whether, in the positive vision of the future that we can hopefully end on here, you’d also want to throw in some ingredients about the sort of society where we don’t just have the lowest rungs of the Maslow pyramid taken care of, food and shelter and stuff, but where we also feel meaning and purpose and meaningful connections with our fellow lifeforms.

Yuval Noah Harari: I think it’s not just a first world issue. Again, even if you think about food, even in developing countries, more people today die from diabetes and diseases related to overeating or to being overweight than from starvation. And mental health issues are certainly not just a problem for the first world; people are suffering from them in all countries. Part of the issue is that mental health care is far, far more expensive, certainly if you think in terms of going to therapy once or twice a week, than just giving vaccinations or antibiotics. So it’s much more difficult to create a robust mental health system in poor countries, but we should aim there. It’s certainly not just for the first world. And if we really understand humans better, we can provide much better health care, both physical health and mental health, for everybody on the planet, not just for Americans or Israelis or Swedes.

Max Tegmark: In terms of physical health, it’s usually a lot cheaper and simpler not to treat diseases, but to instead prevent them from happening in the first place, by reducing smoking, reducing people eating extremely unhealthy foods, et cetera. And in the same way with mental health, presumably a key driver of a lot of the problems we have is that we have put ourselves in a human-made environment which is incredibly different from the environment that we evolved to flourish in. So I’m wondering whether, rather than just trying to develop new pills to help us live in this environment, which is often optimized for the ability to produce stuff rather than for human happiness, deliberately changing our environment to be more conducive to human happiness might improve our happiness a lot without having to treat mental health disorders.

Yuval Noah Harari: It will demand enormous amounts of resources and energy. But if you are looking for a big project for the 21st century, then yeah, that’s definitely a good project to undertake.

Max Tegmark: Okay. That’s probably a good challenge from you on which to end this conversation. I’m extremely grateful for having had this opportunity to talk with you about these things. These are ideas I will continue thinking about with great enthusiasm for a long time to come, and I very much hope we can stay in touch and actually meet in person before too long.

Yuval Noah Harari: Yeah. Thank you for hosting me.

Max Tegmark: I really can’t think of anyone on the planet who thinks more profoundly about the big picture of the human condition than you, and it’s such an honor.

Yuval Noah Harari: Thank you. It was a pleasure for me too. There are not a lot of opportunities to really go deep into these issues. I mean, usually you get pulled away to questions about the 2020 presidential elections and things like that, which are important, but we still have to give some time to the big picture as well.

Max Tegmark: Yeah. Wonderful. So once again, todah, thank you so much.

Lucas Perry: Thanks so much for tuning in and being a part of our final episode of 2019. Many well and warm wishes for a happy and healthy new year from myself and the rest of the Future of Life Institute team. This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team

As 2019 is coming to an end and the opportunities of 2020 begin to emerge, it’s a great time to reflect on the past year and our reasons for hope in the year to come. We spend much of our time on this podcast discussing risks that will possibly lead to the extinction or the permanent and drastic curtailing of the potential of Earth-originating intelligent life. While this is important and useful, much has been done at FLI and in the broader world to address these issues in service of the common good. It can be skillful to reflect on this progress to see how far we’ve come, to develop hope for the future, and to map out our path ahead. This podcast is a special end of the year episode focused on meeting and introducing the FLI team, discussing what we’ve accomplished and are working on, and sharing our feelings and reasons for existential hope going into 2020 and beyond.

Topics discussed include:

  • Introductions to the FLI team and our work
  • Motivations for our projects and existential risk mitigation efforts
  • The goals and outcomes of our work
  • Our favorite projects at FLI in 2019
  • Optimistic directions for projects in 2020
  • Reasons for existential hope going into 2020 and beyond

Timestamps:

0:00 Intro

1:30 Meeting the Future of Life Institute team

18:30 Motivations for our projects and work at FLI

30:04 What we hope will result from our work at FLI

44:44 Favorite accomplishments of FLI in 2019

01:06:20 Project directions we are most excited about for 2020

01:19:43 Reasons for existential hope in 2020 and beyond

01:38:30 Outro

 

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is a special end of the year episode structured as an interview with members of the FLI core team. The purpose of this episode is to introduce the members of our team and their roles, explore the projects and work we’ve been up to at FLI throughout the year, and discuss future project directions we are excited about for 2020. Some topics we explore are the motivations behind our work and projects, what we are hoping will result from them, favorite accomplishments at FLI in 2019, and general trends and reasons we see for existential hope going into 2020 and beyond.

If you find this podcast interesting and valuable, you can follow us on your preferred listening platform, like iTunes, SoundCloud, Google Play, Stitcher, and Spotify.

If you’re curious to learn more about the Future of Life Institute, our team, our projects, and our feelings about the state of and ongoing efforts related to existential risk mitigation, then I feel you’ll find this podcast valuable. So, to get things started, we’re going to have the team introduce ourselves and our roles at the Future of Life Institute.

Jared Brown: My name is Jared Brown, and I’m the Senior Advisor for Government Affairs at the Future of Life Institute. I help inform and execute FLI’s strategic advocacy work on governmental policy. It sounds a little bit behind the scenes because it is, but I primarily work in the U.S. and in global forums like the United Nations.

Kirsten Gronlund: My name is Kirsten and I am the Editorial Director for The Future of Life Institute. Basically, I run the website. I also create new content and manage the content that’s being created to help communicate the issues that FLI works on. I have been helping to produce a lot of our podcasts. I’ve been working on getting some new long form articles written; we just came out with one about CRISPR and gene drives. Right now I’m actually working on putting together a book list for recommended reading for things related to effective altruism and AI and existential risk. I also do social media, and write the newsletter, and a lot of things. I would say that my job is to figure out what is most important to communicate about what FLI does, and then to figure out how it’s best to communicate those things to our audience. Experimenting with different forms of content, experimenting with different messaging. Communication, basically, and writing and editing.

Meia Chita-Tegmark: I am Meia Chita-Tegmark. I am one of the co-founders of the Future of Life Institute. I am also the treasurer of the Institute, and recently I’ve been focusing many of my efforts on the Future of Life website and our outreach projects. For my day job, I am a postdoc in the human-robot interaction lab at Tufts University. My training is in social psychology, so my research actually focuses on the human end of the human-robot interaction. I mostly study uses of assistive robots in healthcare and I’m also very interested in ethical implications of using, or sometimes not using, these technologies. Now, with the Future of Life Institute, as a co-founder, I am obviously involved in a lot of the decision-making regarding the different projects that we are pursuing, but my main focus right now is the FLI website and our outreach efforts.

Tucker Davey: I’m Tucker Davey. I’ve been a member of the FLI core team for a few years. And for the past few months, I’ve been pivoting towards focusing on projects related to FLI’s AI communication strategy, various projects, especially related to advanced AI and artificial general intelligence, and considering how FLI can best message about these topics. Basically these projects are looking at what we believe about the existential risk of advanced AI, and we’re working to refine our core assumptions and adapt to a quickly changing public understanding of AI. In the past five years, there’s been much more money and hype going towards advanced AI, and people have new ideas in their heads about the risk and the hope from AI. And so, our communication strategy has to adapt to those changes. So that’s kind of a taste of the questions we’re working on, and it’s been really interesting to work with the policy team on these questions.

Jessica Cussins Newman: My name is Jessica Cussins Newman, and I am an AI policy specialist with the Future of Life Institute. I work on AI policy, governance, and ethics, primarily. Over the past year, there have been significant developments in all of these fields, and FLI continues to be a key stakeholder and contributor to numerous AI governance forums. So it’s been exciting to work on a team that’s helping to facilitate the development of safe and beneficial AI, both nationally and globally. To give an example of some of the initiatives that we’ve been involved with this year, we provided comments to the European Commission’s high level expert group on AI, to the Defense Innovation Board’s work on AI ethical principles, to the National Institute of Standards and Technology, or NIST, which developed a plan for federal engagement on technical AI standards.

We’re also continuing to participate in several multi-stakeholder initiatives, such as the Partnership on AI, the CNAS AI Task Force, and the UN Secretary General’s high level panel on digital cooperation, among others. I think all of this is helping to lay the groundwork for more trustworthy AI, and we’ve also engaged in direct policy work. Earlier this year we co-hosted an AI policy briefing at the California state legislature, and met with the White House Office of Science and Technology Policy. Lastly, on the educational side of this work, we maintain an online resource for global AI policy. This includes information about national AI strategies and provides background resources and policy recommendations around some of the key issues.

Ian Rusconi: My name is Ian Rusconi and I edit and produce these podcasts. Since FLI’s podcasts aren’t recorded in a controlled studio setting, the interviews often come with a host of technical issues, so some of what I do for these podcasts overlaps with forensic audio enhancement, removing noise from recordings; removing as much of the reverb as possible from recordings, which works better sometimes than others; removing clicks and pops and sampling errors and restoring the quality of clipping audio that was recorded too loudly. And then comes the actual editing, getting rid of all the breathing and lip smacking noises that people find off-putting, and cutting out all of the dead space and vocal dithering, um, uh, like, you know, because we aim for a tight final product that can sometimes end up as much as half the length of the original conversation even before any parts of the conversation are cut out.

Part of working in an audio only format is keeping things to the minimum amount of information required to get your point across, because there is nothing else that distracts the listener from what’s going on. When you’re working with video, you can see people’s body language, and that’s so much of communication. When it’s audio only, you can’t. So a lot of the time, if there is a divergent conversational thread that may be an interesting and related point, it doesn’t actually fit into the core of the information that we’re trying to access, and you can construct a more meaningful narrative by cutting out superfluous details.

Emilia Javorsky: My name’s Emilia Javorsky and at the Future of Life Institute, I work on the topic of lethal autonomous weapons, mainly focusing on issues of education and advocacy efforts. It’s an issue that I care very deeply about and I think is one of the more pressing ones of our time. I actually come from a slightly atypical background to be engaged in this issue. I’m a physician and a scientist by training, but what’s conserved there is a discussion of how we use AI in high stakes environments where life and death decisions are being made. And so when you are talking about the decision to prevent harm, which is my field of medicine, or, in the case of lethal autonomous weapons, the decision to enact lethal harm, there are just fundamentally different moral questions, and also system performance questions, that come up.

Key ones that I think about a lot are system reliability, accountability, transparency. But when it comes to thinking about lethal autonomous weapons in the context of the battlefield, there’s also this inherent scalability issue that arises. When you’re talking about scalable weapon systems, that quickly introduces unique security challenges in terms of proliferation and an ability to become what you could quite easily define as weapons of mass destruction. 

There’s also the broader moral questions at play here, and the question of whether we as a society want to delegate the decision to take a life to machines. And I personally believe that if we allow autonomous weapons to move forward and we don’t do something to really set a stake in the ground, it could set an irrecoverable precedent when we think about getting ever more powerful AI aligned with our values in the future. It is a very near term issue that requires action.

Anthony Aguirre: I’m Anthony Aguirre. I’m a professor of physics at the University of California at Santa Cruz, and I’m one of FLI’s founders, part of the core team, and probably work mostly on the policy related aspects of artificial intelligence and a few other topics. 

I’d say there are two major efforts that I’m heading up. One is the overall FLI artificial intelligence policy effort. That encompasses a little bit of our efforts on lethal autonomous weapons, but it’s mostly about wider issues of how artificial intelligence development should be thought about, how it should be governed, what kind of soft or hard regulations might we contemplate about it. Global efforts which are really ramping up now, both in the US and Europe and elsewhere, to think about how artificial intelligence should be rolled out in a way that’s kind of ethical, that keeps with the ideals of society, that’s safe and robust and in general is beneficial, rather than running into a whole bunch of negative side effects. That’s part of it.

And then the second thing is I’ve been thinking a lot about what sort of institutions and platforms and capabilities might be useful for society down the line that we can start to create, and nurture and grow now. So I’ve been doing a lot of thinking about… let’s imagine that we’re in some society 10 or 20 or 30 years from now that’s working well, how did it solve some of the problems that we see on the horizon? If we can come up with ways that this fictitious society in principle solved those problems, can we try to lay the groundwork for possibly actually solving those problems by creating new structures and institutions now that can grow into things that could help solve those problems in the future?

So an example of that is Metaculus. This is a prediction platform that I’ve been involved with in the last few years. So this is an effort to create a way to better predict what’s going to happen and make better decisions, both for individual organizations and FLI itself, but just for the world in general. This is kind of a capability that it would be good if the world had, making better predictions about all kinds of things and making better decisions. So that’s one example, but there are a few others that I’ve been contemplating and trying to get spun up.

Max Tegmark: Hi, I’m Max Tegmark, and I think of myself as having two jobs. During the day, I do artificial intelligence research at MIT, and on nights and weekends, I help lead the Future of Life Institute. My day job at MIT used to be focused on cosmology, because I was always drawn to the very biggest questions. The bigger the better, and studying our universe and its origins seemed to be kind of as big as it gets. But in recent years, I’ve felt increasingly fascinated that we have to understand more about how our own brains work, how our intelligence works, and building better artificial intelligence. Asking the question, how can we make sure that this technology, which I think is going to be the most powerful ever, actually becomes the best thing ever to happen to humanity, and not the worst.

Because all technology is really a double-edged sword. It’s not good or evil, it’s just a tool that we can do good or bad things with. If we think about some of the really horrible things that have happened because of AI systems, so far, it’s largely been not because of evil, but just because people didn’t understand how the system worked, and it did something really bad. So what my MIT research group is focused on is exactly tackling that. How can you take today’s AI systems, which are often very capable, but total black boxes… So that if you ask your system, “Why should this person be released on probation, but not this one?” You’re not going to get any better answer than, “I was trained on three terabytes of data and this is my answer. Beep, beep. Boop, boop.” Whereas, I feel we really have the potential to make systems that are just as capable, and much more intelligible. 

Trust should be earned and trust should be built based on us actually being able to peek inside the system and say, “Ah, this is why it works.” And the reason we have founded the Future of Life Institute was because all of us founders, we love technology, and we felt that the reason we would prefer living today rather than any time in the past, is all because of technology. But, for the first time in cosmic history, this technology is also on the verge of giving us the ability to actually self-destruct as a civilization. If we build AI, which can amplify human intelligence like never before, and eventually supersede it, then just imagine your least favorite leader on the planet, and imagine them having artificial general intelligence so they can impose their will on the rest of Earth.

How does that make you feel? It does not make me feel great, and I had a New Year’s resolution in 2014 that I was no longer allowed to complain about stuff if I didn’t actually put some real effort into doing something about it. This is why I put so much effort into FLI. The solution is not to try to stop technology, that just ain’t going to happen. The solution is instead to win what I like to call the wisdom race: make sure that the wisdom with which we manage our technology grows faster than the power of the technology.

Lucas Perry: Awesome, excellent. As for me, I’m Lucas Perry, and I’m the project manager for the Future of Life Institute. I’ve been with FLI for about four years now, and have focused on enabling and delivering projects having to do with existential risk mitigation. Beyond basic operations tasks at FLI that help keep things going, I’ve seen my work as having three cornerstones, these being supporting research on technical AI alignment, on advocacy relating to existential risks and related issues, and on direct work via our projects focused on existential risk. 

In terms of advocacy related work, you may know me as the host of the AI Alignment Podcast Series, and more recently the host of the Future of Life Institute Podcast. I see my work on the AI Alignment Podcast Series as promoting and broadening the discussion around AI alignment and AI safety to a diverse audience of both technical experts and persons interested in the issue.

There I am striving to include a diverse range of voices from many different disciplines, in so far as they can inform the AI alignment problem. The Future of Life Institute Podcast is a bit more general, though often dealing with related issues. There I strive to have conversations about avant garde subjects as they relate to technological risk, existential risk, and cultivating the wisdom with which to manage powerful and emerging technologies. For the AI Alignment Podcast, our most popular episode of all time so far is On Becoming a Moral Realist with Peter Singer, and a close second and third were On Consciousness, Qualia, and Meaning with Mike Johnson and Andres Gomez Emilsson, and An Overview of Technical AI Alignment with Rohin Shah. There are two parts to that podcast. These were really great episodes, and I suggest you check them out if they sound interesting to you. You can do that under the podcast tab on our site or by finding us on your preferred listening platform.

As for the main FLI Podcast Series, our most popular episodes have been an interview with FLI President Max Tegmark called Life 3.0: Being Human in the Age of Artificial intelligence. A podcast similar to this one last year, called Existential Hope in 2019 and Beyond was the second most listened to FLI podcast. And then the third is a more recent podcast called The Climate Crisis As An Existential Threat with Simon Beard and Hayden Belfield. 

In so far as the other avenue of my work, my support of research can be stated quite simply as fostering review of grant applications, and also reviewing interim reports for dispersing funds related to AGI safety grants. And then just touching again on my direct work around our projects, often if you see some project put out by the Future of Life Institute, I usually have at least some involvement with it from a logistics, operations, execution, or ideation standpoint related to it.

And moving into the next line of questioning here for the team, what would you all say motivates your interest in existential risk and the work that you do at FLI? Is there anything in particular that is motivating this work for you?

Ian Rusconi: What motivates my interest in existential risk in general I think is that it’s extraordinarily interdisciplinary. But my interest in what I do at FLI is mostly that I’m really happy to have a hand in producing content that I find compelling. But it isn’t just the subjects and the topics that we cover in these podcasts, it’s how you and Ariel have done so. One of the reasons I have so much respect for the work that you two have done and consequently enjoy working on it so much is the comprehensive approach that you take in your lines of questioning.

You aren’t afraid to get into the weeds with interviewees on very specific technical details, but still seek to clarify jargon and encapsulate explanations, and there’s always an eye towards painting a broader picture so we can contextualize a subject’s placement in a field as a whole. I think that FLI’s podcasts often do a tightrope act, walking the line between popular audience and field specialists in a way that doesn’t treat the former like children, and doesn’t bore the latter with a lack of substance. And that’s a really hard thing to do. And I think it’s a rare opportunity to be able to help create something like this.

Kirsten Gronlund: I guess really broadly, I feel like there’s sort of this sense generally that a lot of these technologies and things that we’re coming up with are going to fix a lot of issues on their own. Like new technology will help us feed more people, and help us end poverty, and I think that that’s not true. We already have the resources to deal with a lot of these problems, and we haven’t been. So I think, really, we need to figure out a way to use what is coming out and the things that we’re inventing to help people. Otherwise we’re going to end up with a lot of new technology making the top 1% way more wealthy, and everyone else potentially worse off.

So I think for me that’s really what it is, is to try to communicate to people that these technologies are not, on their own, the solution, and we need to all work together to figure out how to implement them, and how to restructure things in society more generally so that we can use these really amazing tools to make the world better.

Lucas Perry: Yeah. I’m just thinking about how technology enables abundance, and how it seems like there both are and are not limits to human greed. Human greed can potentially want infinite power, but there are also radically diminishing returns on one’s own happiness and wellbeing as one gains access to more abundance. It seems like there’s kind of a duality there.

Kirsten Gronlund: I agree. I mean, I think that’s a very effective altruist way to look at it. That those same resources, if everyone has some power and some money, people will on average be happier than if you have all of it and everyone else has less. But I feel like people, at least people who are in the position to accumulate way more money than they could ever use, tend to not think of it that way, which is unfortunate.

Tucker Davey: In general with working with FLI, I think I’m motivated by some mix of fear and hope. And I would say the general fear is that, if we as a species don’t figure out how to cooperate on advanced technology, and if we don’t agree to avoid certain dangerous paths, we’ll inevitably find some way to destroy ourselves, whether it’s through AI or nuclear weapons or synthetic biology. But then that’s also balanced by a hope that there’s so much potential for large scale cooperation to achieve our goals on these issues, and so many more people are working on these topics as opposed to five years ago. And I think there really is a lot of consensus on some broad shared goals. So I have a hope that through cooperation and better coordination we can better tackle some of these really big issues.

Emilia Javorsky: Part of the reason as a physician I went into the research side of it is this idea of wanting to help people at scale. I really love the idea of how do we use science and translational medicine, not just to help one person, but to help whole populations of people. And so for me, this issue of lethal autonomous weapons is the converse of that. This is something that really has the capacity to both destroy lives at scale in the near term, and also as we think towards questions like value alignment and longer term, more existential questions, it’s something that for me is just very motivating. 

Jared Brown: This is going to sound a little cheesy and maybe even a little selfish, but my main motivation is my kids. I know that they have a long life ahead of them, hopefully, and there are various different versions of the future that’ll be better or worse for them. And I know that emerging technology policy is going to be key to maximizing the benefit of their future and everybody else’s, and that’s ultimately what motivates me. I’ve been thinking about tech policy basically ever since I started researching and reading futurism books when my daughter was born about eight years ago, and that’s what really got me into the field and motivated to work on it full-time.

Meia Chita-Tegmark: I like to think of my work as being ultimately about people. I think that one of the most interesting aspects of this human drama is our relationship with technology, which recently has become evermore promising and also evermore dangerous. So, I want to study that, and I feel crazy lucky that there are universities willing to pay me to do it. And also to the best of my abilities, I want to try to nudge people in the technologies that they develop in more positive directions. I’d like to see a world where technology is used to save lives and not to take lives. I’d like to see technologies that are used for nurture and care rather than power and manipulation. 

Jessica Cussins Newman: I think the integration of machine intelligence into the world around us is one of the most impactful changes that we’ll experience in our lifetimes. I’m really excited about the beneficial uses of AI, but I worry about its impacts, and about the questions of not just what we can build, but what we should build. I worry that we could see these technologies being destabilizing, or that we won’t be sufficiently thoughtful about ensuring that the systems aren’t developed or used in ways that expose us to new vulnerabilities, or impose undue burdens on particular communities.

Anthony Aguirre: I would say it’s kind of a combination of things. Everybody looks at the world and sees that there are all kinds of problems and issues and negative directions that lots of things are going in, and it feels frustrating and depressing. And I feel that, given that I’ve got a particular day job that affords me a lot of freedom, given that I have this position at the Future of Life Institute, and given that there are a lot of talented people around whom I’m able to work with, there’s a huge opportunity, and a rare opportunity, to actually do something.

Who knows how effective it’ll actually be in the end, but I want to try to do something and to take advantage of the freedom, and standing, and relationships, and capabilities that I have available. I kind of see that as a duty in a sense: if you find yourself in a place where you have a certain set of capabilities, and resources, and flexibility, and safety, you kind of have a duty to make use of that for something beneficial. I feel that, and so I try to do so, but I also feel like it’s just super interesting thinking about the ways that you can create things that can be effective; it’s just a fun intellectual challenge.

There are certainly aspects of what I do at Future of Life Institute that are sort of, “Oh, yeah, this is important so I should do it, but I don’t really feel like it.” Those are occasionally there, but mostly it feels like, “Ooh, this is really interesting and exciting, I want to get this done and see what happens.” So in that sense it’s really gratifying in both ways, to feel like it’s both potentially important and positive, but also really fun and interesting.

Max Tegmark: What really motivates me is this optimistic realization that after 13.8 billion years of cosmic history, we have reached this fork in the road where we have these conscious entities on this little spinning ball in space here who, for the first time ever, have the future in their own hands. In the stone age, who cared what you did? Life was going to be more or less the same 200 years later regardless, right? Whereas now, we can either develop super powerful technology and use it to destroy life on earth completely, go extinct and so on. Or, we can create a future where, with the help of artificial intelligence amplifying our intelligence, we can help life flourish like never before. And I’m not talking just about the next election cycle, I’m talking about for billions of years. And not just here, but throughout much of our amazing universe. So I feel actually that we have a huge responsibility, and a very exciting one, to make sure we don’t squander this opportunity, don’t blow it. That’s what lights me on fire.

Lucas Perry: So I’m deeply motivated by the possibilities of the deep future. I often take cosmological or macroscopic perspectives when thinking about my current condition or the condition of life on earth. The universe is about 13.8 billion years old and our short lives of only a few decades are couched within the context of this ancient evolving system of which we are a part. As far as we know, consciousness has only really exploded and come onto the scene in the past few hundred million years, at least in our sector of space and time, and the fate of the universe is uncertain but it seems safe to say that we have at least billions upon billions of years left before the universe perishes in some way. That means there’s likely longer than the current lifetime of the universe for earth originating intelligent life to do and experience amazing and beautiful things beyond what we can even know or conceive of today.

It seems very likely to me that the peaks and depths of human consciousness, from the worst human misery to the greatest of joy, peace, euphoria, and love, represent only a very small portion of a much larger and higher dimensional space of possible conscious experiences. So given this, I’m deeply moved by the possibility of artificial intelligence being the next stage in the evolution of life and the capacities for that intelligence to solve existential risk, for that intelligence to explore the space of consciousness and optimize the world, for super-intelligent and astronomical degrees of the most meaningful and profound states of consciousness possible. So sometimes I ask myself, what’s a universe good for if not ever evolving into higher and more profound and intelligent states of conscious wellbeing? I’m not sure, and this is still an open question for sure, but this deeply motivates me as I feel that the future can be unimaginably good to degrees and kinds of wellbeing that we can’t even conceive of today. There’s a lot of capacity there for the future to be something that is really, really, really worth getting excited and motivated about.

And moving along here, this next question is again for the whole team: is there anything more specific that you hope results from your work, or is born of your work, at FLI?

Jared Brown: So, I have two primary objectives. The first is sort of minor but significant. A lot of what I do on a day-to-day basis is advocate for relatively minor changes to existing and future near-term policy on emerging technology. And some of these changes won’t make a world of difference unto themselves, but the small marginal benefits to the future can accumulate rather significantly over time. So, I look for as many small wins as possible in different policy-making environments, and try to achieve those on a regular basis.

And then more holistically in the long run, I really want to help destigmatize the discussion around global catastrophic and existential risk in traditional national security and international security policy-making. It’s still quite an obscure and weird thing to say to people, “I work on global catastrophic and existential risk,” and it really shouldn’t be. I should be able to talk to most policy-makers in security-related fields and have it not come off as a weird or odd thing to be working on. Because inherently what we’re talking about is the very worst of what could happen to you, or humanity, or even life as we know it on this planet. And there should be more people who work on these issues, both from an effective altruism perspective and from other perspectives, going forward.

Jessica Cussins Newman: I want to raise awareness about the impacts of AI and the kinds of levers that we have available to us today to help shape these trajectories. So from designing more robust machine learning models, to establishing the institutional procedures or processes that can track and monitor those design decisions and outcomes and impacts, to developing accountability and governance mechanisms to ensure that those AI systems are contributing to a better future. We’ve built a tool that can automate decision making, but we need to retain human control and decide collectively as a society where and how to implement these new abilities.

Max Tegmark: I feel that there’s a huge disconnect right now between our potential, as the human species, and the direction we’re actually heading in. We are spending most of our discussions in news media on total BS. You know, like country A and country B are squabbling about something which is quite minor, in the grand scheme of things, and people are often treating each other very badly in the misunderstanding that they’re in some kind of zero-sum game, where one person can only get better off if someone else gets worse off. Technology is not a zero-sum game. Everybody wins at the same time, ultimately, if you do it right. 

Why are we so much better off now than 50,000 years ago or 300 years ago? It’s because we have antibiotics so we don’t die of stupid diseases all the time. It’s because we have the means to produce food and keep ourselves warm, and so on, with technology, and this is nothing compared to what AI can do.

I’m very much hoping that this mindset that we all lose together or win together is something that can catch on a bit more as people gradually realize the power of this tech. It’s not the case that either China is going to win and the U.S. is going to lose, or vice versa. What’s going to happen is either we’re both going to lose because there’s going to be some horrible conflict and it’s going to ruin things for everybody, or we’re going to have a future where people in China are much better off, and people in the U.S. and elsewhere in the world are also much better off, and everybody feels that they won. There really is no third outcome that’s particularly likely.

Lucas Perry: So, in the short term, I’m hoping that all of the projects we’re engaging with help to nudge the trajectory of life on earth in a positive direction. I’m hopeful that we can mitigate an arms race in lethal autonomous weapons. I see that as being a crucial first step in coordination around AI issues such that, if that fails, it will likely be much harder to coordinate in the future on making sure that beneficial AI takes place. I am also hopeful that we can promote beneficial AI alignment and AI safety research further, and mainstream its objectives and its understanding of the risks posed by AI and of what it means to create beneficial AI. I’m hoping that we can maximize the wisdom with which we handle technology through projects and outreach which explicitly cultivate ethics, coordination, and governance in ways that help to direct and develop technologies beneficially.

I’m also hoping that we can promote and instantiate a culture of interest in existential risk issues and the technical, political, and philosophical problems associated with powerful emerging technologies like AI. It would be wonderful if the conversations that we have on the podcast and at FLI and in the surrounding community weren’t just something for us. These are issues that are deeply interesting and will only become more important as technology becomes more powerful. And so I’m really hoping that one day discussions about existential risk, and all the kinds of conversations that we have on the podcast, are much more mainstream and normal, that there are serious institutions in government and society which explore these issues, and that they become part of common discourse for our society and civilization.

Emilia Javorsky: In an ideal world, for all of FLI’s work in this area, a great outcome would be the realization of the Asilomar principle that an arms race in lethal autonomous weapons must be avoided. I hope that we do get there in the shorter term. I think the activities that we’re doing now, increasing awareness around this issue and better understanding and characterizing the unique risks that these systems pose across the board, from a national security perspective, a human rights perspective, and an AI governance perspective, are a really big win in my book.

Meia Chita-Tegmark: When I allow myself to unreservedly daydream about how I want my work to manifest itself into the world, I always conjure up fantasy utopias in which people are cared for and are truly inspired. For example, that’s why I am very committed to fighting against the development of lethal autonomous weapons. It’s precisely because a world with such technologies would be one in which human lives would be cheap, killing would be anonymous, our moral compass would likely be very damaged by this. I want to start work on using technology to help people, maybe to heal people. In my research, I tried to think of various disabilities and how technology can help with those, but that is just one tiny aspect of a wealth of possibilities for using technology, and in particular, AI for good.

Anthony Aguirre: I’ll be quite gratified if I can find that some of the things that I’ve done help society be better prepared and more ready to wisely deal with the challenges that are unfolding. There are a huge number of problems in society, but there’s a particular subset that are sort of exponentially growing problems, because they have to do with exponentially advancing technology. And the set of people who are actually thinking proactively about the problems that those technologies are going to create, rather than just creating the technologies or dealing with the problems when they arise, is quite small.

FLI is a pretty significant part of that tiny community of people who are thinking about that. But I also think it’s very important. Problems are better solved in advance, if possible. So I think anything that we can do to nudge things in the right direction, taking the relatively high point of leverage I think the Future of Life Institute has, will feel useful and worthwhile. Any of these projects being successful, I think will have a significant positive impact, and it’s just a question of buckling down and trying to get them to work.

Kirsten Gronlund: A big part of this field, not necessarily inherently, but just historically, has been that it’s very male, it’s very white, and in and of itself it’s a pretty privileged group of people. Something that I personally care about a lot is trying to expand some of these conversations around the future, what we want it to look like, and how we’re going to get there, and to involve more people, more diverse voices, and more perspectives.

It goes along with what I was saying: if we don’t figure out how to use these technologies in better ways, we’re just going to keep benefiting the people who have historically been benefiting from technology. So I think bringing some of the people who have historically not been benefiting from technology, and from the way that our society is structured, into these conversations can help us figure out how to make things better. I’ve definitely been trying, while we’re doing this book guide thing, to make sure that there’s a good balance of male and female authors, people of color, et cetera, and the same with our podcast guests and things like that. But yeah, I think there’s a lot more to be done, definitely, in that area.

Tucker Davey: So with the projects related to FLI’s AI communication strategy, I am hopeful that as an overall community, as an AI safety community, as an effective altruism community, existential risk community, we’ll be able to better understand what our core beliefs are about risks from advanced AI, and better understand how to communicate to different audiences, whether these are policymakers that we need to convince that AI is a problem worth considering, or whether it’s just the general public, or shareholders, or investors. Different audiences have different ideas of AI, and if we as a community want to be more effective at getting them to care about this issue and understand that it’s a big risk, we need to figure out better ways to communicate with them. And I’m hoping that a lot of this communications work will help the community as a whole, not just FLI, communicate with these different parties and help them understand the risks.

Ian Rusconi: Well, I can say that I’ve learned more since I started working on these podcasts about more disparate subjects than I had any idea about. Take lethal autonomous weapon systems, for example, I didn’t know anything about that subject when I started. These podcasts are extremely educational, but they’re conversational, and that makes them accessible, and I love that. And I hope that as our audience increases, other people find the same thing and keep coming back because we learn something new every time. I think that through podcasts, like the ones that we put out at FLI, we are enabling that sort of educational enrichment.

Lucas Perry: Cool. I feel the same way. So, you actually have listened to more FLI podcasts than perhaps anyone, since you’ve listened to all of them. Of all of these podcasts, do you have any specific projects, or a series that you have found particularly valuable? Any favorite podcasts, if you could mention a few, or whatever you found most valuable?

Ian Rusconi: Yeah, a couple of things. First, back in February, Ariel and Max Tegmark did a two-part conversation with Matthew Meselson in advance of FLI presenting him with the Future of Life Award in April, and I think that was probably the most fascinating and wide-ranging single conversation I’ve ever heard. Philosophy, science history, weapons development, geopolitics, the value of the humanities from a scientific standpoint, artificial intelligence, treaty development. It was just such an incredible amount of lived experience and informed perspective in that conversation. And, in general, when people ask me what kinds of things we cover on the FLI podcast, I point them to that episode.

Second, I’m really proud of the work that we did on Not Cool, A Climate Podcast. The amount of coordination and research Ariel and Kirsten put in to make that project happen was staggering. I think my favorite episodes from there were those dealing with the social ramifications of climate change, specifically human migration. It’s not my favorite topic to think about, for sure, but I think it’s something that we all desperately need to be aware of. I’m oversimplifying things here, but Kris Ebi’s explanations of how crop failure and malnutrition and vector-borne diseases can lead to migration, Cullen Hendrix touching on migration as it relates to the social changes and conflicts born of climate change, Lindsay Getschel’s discussion of climate change as a threat multiplier and the national security implications of migration.

Migration is happening all the time and it’s something that we keep proving we’re terrible at dealing with, and climate change is going to increase migration, period. And we need to figure out how to make it work, and we need to do it in a way that improves living standards and prevents extreme concentrated suffering. And there are questions about how to do this while preserving cultural identity and the social systems that we have put in place, and I know none of these are easy. But if instead we just take the question of, how do we reduce suffering? Well, we know how to do that and it’s not complicated per se: have compassion and act on it. We need compassionate government and governance. And that’s a thing that came up a few times, sometimes directly and sometimes obliquely, in Not Cool. The more I think about how to solve problems like these, the more I think the intelligent answer is compassion.

Lucas Perry: So, do you feel like you just learned a ton about climate change from the Not Cool podcast that you just had no idea about?

Ian Rusconi: Yeah, definitely. And that’s really something that I can say about all of FLI’s podcast series in general, is that there are so many subtopics on the things that we talk about that I always learn something new every time I’m putting together one of these episodes. 

Some of the most thought-provoking podcasts to me, actually, are the ones about the nature of intelligence and cognition, and what it means to experience something, and how we make decisions. Two of the AI Alignment Podcast episodes from this year stand out to me in particular. First was the one with Josh Green in February, which did an excellent job of explaining the symbol grounding problem and grounded cognition in an understandable and engaging way. And I’m also really interested in his lab’s work using the veil of ignorance. And second was the episode with Mike Johnson and Andres Gomez Emilsson of the Qualia Research Institute in May, where I particularly liked the discussion of electromagnetic harmony in the brain, and the interaction between the consonance and dissonance of its waves, and how you can basically think of music as a means by which we can hack our brains. Again, it gets back to the fabulously, extraordinarily interdisciplinary aspect of everything that we talk about here.

Lucas Perry: Kirsten, you’ve also been integral to the podcast process. What are your favorite things that you’ve done at FLI in 2019, and are there any podcasts in particular that stand out for you?

Kirsten Gronlund: The Women For The Future campaign was definitely one of my favorite things, which was basically just trying to highlight the work of women involved in existential risk, and through that try to get more women feeling like this is something that they can do and to introduce them to the field a little bit. And then also the Not Cool Podcast that Ariel and I did. I know climate isn’t the major focus of FLI, but it is such an important issue right now, and it was really just interesting for me because I was much more closely involved with picking the guests and stuff than I have been with some of the other podcasts. So it was just cool to learn about various people and their research and what’s going to happen to us if we don’t fix the climate. 

Lucas Perry: What were some of the most interesting things that you learned from the Not Cool podcast? 

Kirsten Gronlund: Geoengineering was really crazy. I didn’t really know at all what geoengineering was before working on this podcast, and I think it was Alan Robock in his interview who was saying that even just for people to learn that one of the solutions being considered for climate change right now is shooting a ton of crap into the atmosphere and basically creating a semi-nuclear winter would hopefully be enough to freak people out into thinking, “maybe we should try to fix this a different way.” So that was really crazy.

I also thought it was interesting just learning about some of the effects of climate change that you wouldn’t necessarily think of right away. They’ve shown links between increased temperature and upheaval in government, and links between increased temperature and generally bad mood, poor sleep, things like that. The quality of our crops is going to get worse, so we’re going to be eating less nutritious food.

Then some of the cool things, and I guess this ties in as well with artificial intelligence, are the ways that people are using technologies like AI and machine learning to try to come up with solutions. I thought that was really cool to learn about, because that’s kind of what I was saying earlier: if we can figure out how to use these technologies in productive ways, they are such powerful tools and can do so much good for us. So it was cool to see that in action, in the ways that people are implementing automated systems and machine learning to reduce emissions and help out with the climate.

Lucas Perry: From my end, I’m probably most proud of our large conference, Beneficial AGI 2019, which we held to further mainstream AGI safety thinking and research. The projects that resulted from conversations that took place there were also very exciting and encouraging. I’m also very happy about the growth and development of our podcast series. This year, we’ve had over 200,000 listens to our podcasts. So I’m optimistic about the continued growth and development of our outreach through this medium and our capacity to inform people about these crucial issues.

Everyone else, other than podcasts, what are some of your favorite things that you’ve done at FLI in 2019?

Tucker Davey: I would have to say the conferences. So the beneficial AGI conference was an amazing start to the year. We gathered such a great crowd in Puerto Rico, people from the machine learning side, from governance, from ethics, from psychology, and really getting a great group together to talk out some really big questions, specifically about the long-term future of AI, because there’s so many conferences nowadays about the near term impacts of AI, and very few are specifically dedicated to thinking about the long term. So it was really great to get a group together to talk about those questions and that set off a lot of good thinking for me personally. That was an excellent conference. 

And then a few months later, Anthony and a few others organized a conference called the Augmented Intelligence Summit, and that was another great collection of people from many different disciplines, basically thinking about a hopeful future with AI and trying to do world building exercises to figure out what that ideal world with AI would look like. These conferences and these events in these summits do a great job of bringing together people from different disciplines in different schools of thought to really tackle these hard questions, and everyone who attends them is really dedicated and motivated, so seeing all those faces is really inspiring.

Jessica Cussins Newman: I’ve really enjoyed the policy engagement that we’ve been able to have this year. You know, looking back to last year, we did see a lot of successes around the development of ethical principles for AI, and I think this past year, there’s been significant interest in actually putting those principles into practice. So we’re seeing many different governance forums, both within the U.S. and around the world, look to that next level, and I think one of my favorite things has just been seeing FLI become a trusted resource for so many of those governance and policy processes that I think will significantly shape the future of AI.

I think the thing that I continue to value significantly about FLI is its ability as an organization to just bring together an amazing network of AI researchers and scientists, and to be able to hold events, and networking and outreach activities, that can merge those communities with other people thinking about issues around governance or around ethics or other kinds of sectors and disciplines. We have been playing a key role in translating some of the technical challenges related to AI safety and security into academic and policy spheres. And so that continues to be one of my favorite things that FLI is really uniquely good at.

Jared Brown: A recent example here: the Future of Life Institute submitted some comments on a regulation that the Department of Housing and Urban Development put out in the U.S. The regulation is quite complicated, but essentially they were seeking comment about how to integrate artificial intelligence systems into the legal liability framework surrounding something called the Fair Housing Act, which is an old, very important piece of civil rights legislation that protects against discrimination in the housing market. And their proposal was essentially that users, such as a mortgage lender, a bank issuing loans, or even a landlord using an algorithm to decide who to rent a place to or who to give a loan, would be given liability protection if the algorithm met certain technical standards. And this stems from the growing use of AI in the housing market.

Now, in theory, there’s nothing wrong with using algorithmic systems so long as they’re not biased, and they’re accurate, and well thought out. However, if you grant blanket liability protection, like HUD wanted to, you’re essentially telling that bank officer or that landlord that they should only ever use those AI systems that have the liability protection. And if they see a problem in those AI systems, and they’ve got somebody sitting across from them and think, this person really should get a loan, or this person should be able to rent my apartment because I think they’re trustworthy, but the AI algorithm says no, they’re not going to dispute what the AI algorithm tells them, because to do that, they take on liability of their own and could potentially get sued. So, there’s a real danger here in moving too quickly in terms of how much legal protection we give these systems. And so the Future of Life Institute, as well as many other groups, commented on this proposal and pointed out these flaws to the Department of Housing and Urban Development. That’s an example of just one of many different things that the Future of Life Institute has done, and you can actually go online and see our public comments for yourself, if you want to.

Lucas Perry: Wonderful.

Jared Brown: Honestly, a lot of my favorite things are just these off the record type conversations that I have in countless formal and informal settings with different policymakers and people who influence policy. The policy-making world is an old-fashioned, face-to-face type business, and essentially you really have to be there, and to meet these people, and to have these conversations to really develop a level of trust, and a willingness to engage with them in order to be most effective. And thankfully I’ve had a huge range of those conversations throughout the year, especially on AI. And I’ve been really excited to see how well received Future of Life has been as an institution. Our reputation precedes us because of a lot of the great work we’ve done in the past with the Asilomar AI principles, and the AI safety grants. It’s really helped me get in the room for a lot of these conversations, and given us a lot of credibility as we discuss near-term AI policy.

In terms of bigger public projects, I also really enjoyed coordinating with some community partners across the space in our advocacy on the U.S. National Institute of Standards and Technology’s plan for engaging in the development of technical standards on AI. In the policy realm, it’s really hard to see some of the end benefit of your work, because you’re doing advocacy work, and it’s hard to get folks to really tell you why certain changes were made and whether you were able to persuade them. But in this circumstance, I happen to know for a fact that we had a real positive effect on the end products that they developed. I talked to the lead authors about it, and others, and can see the evidence of our changes in the final product.

In addition to our policy and advocacy work, I really, really like that FLI continues to interface with the AI technical expert community on a regular basis. And this isn’t just through our major conferences, but also informally throughout the entire year, through various different channels and personal relationships that we’ve developed. It’s really critical for anyone’s policy work to be grounded in the technical expertise on the topic that they’re covering. And I’ve been thankful for the number of opportunities I’ve been given throughout the year to really touch base with some of the leading minds in AI about what might work best, and what might not work best from a policy perspective, to help inform our own advocacy and thinking on various different issues.

I also really enjoy the educational and outreach work that FLI is doing. As with our advocacy work, it’s sometimes very difficult to see the end benefit of the work that we do with our podcasts, and our website, and our newsletter. But I know anecdotally, from various different people, that they are listened to, that they are read by leading policymakers and researchers in this space. And so, they have a real effect on developing a common understanding in the community and helping to network and develop collaboration on some key topics that are of interest to the Future of Life Institute and people like us.

Emilia Javorsky: 2019 was a great year at FLI. It’s my first year at FLI, so I’m really excited to be part of such an incredible team. There are two real highlights that come to mind. One was publishing an article in the British Medical Journal on this topic of engaging the medical community in the lethal autonomous weapons debate. In previous disarmament conversations, the medical community has always played an instrumental role in getting global action on these issues passed, whether you look at nuclear, landmines, biorisk… So that was something that I thought was a great contribution, because up until now, they hadn’t really been engaged in this discussion.

The other that comes to mind, which was really amazing, was a workshop that we hosted where we brought together AI researchers, roboticists, and lethal autonomous weapons experts with a very divergent range of views on the topic, to see if they could achieve consensus on something. Anything. We weren’t really optimistic about what that could be going into it, and the result was actually remarkably heartening. They came up with a roadmap that outlined four components for action on lethal autonomous weapons, including things like the potential role that a moratorium may play, research areas that need exploration, non-proliferation strategies, and ways to avoid unintentional escalation. They actually published this in IEEE Spectrum, which I really recommend reading, but it was just really exciting to see how much agreement and consensus can exist among people that you would normally think have very divergent views on the topic.

Max Tegmark: To make it maximally easy for them to get along, we actually did this workshop in our house, and we had lots of wine. And because they were in our house, also it was a bit easier to exert social pressure on them to make sure they were nice to each other, and have a constructive discussion. The task we gave them was simply: write down anything that they all agreed on that should be done to reduce the risk of terrorism or destabilizing events from this tech. And you might’ve expected a priori that they would come up with a blank piece of paper, because some of these people had been arguing very publicly that we need lethal autonomous weapons, and others had been arguing very vociferously that we should ban them. Instead, it was just so touching to see that when they actually met each other, often for the first time, they could actually listen directly to each other, rather than seeing weird quotes in the news about each other. 

Meia Chita-Tegmark: If I had to pick one thing, especially in terms of emotional intensity, it’s really been a while since I’ve been on such an emotional roller coaster as the one during the workshop related to lethal autonomous weapons. It was so inspirational to see how people that come with such diverging opinions could actually put their minds together, and work towards finding consensus. For me, that was such a hope inducing experience. It was a thrill.

Max Tegmark: They built a real camaraderie and respect for each other, and they wrote this report with five different sets of recommendations in different areas, including a moratorium on these things and all sorts of measures to reduce proliferation, and terrorism, and so on, and that made me feel more hopeful.

We got off to a great start, I feel, with our January 2019 Puerto Rico conference. This was the third one in a series where we brought together world-leading AI researchers from academia and industry, and other thinkers, to talk not about how to make AI more powerful, but how to make it beneficial. And what I was particularly excited about was that this was the first time we also had a lot of people from China. So it wasn’t just this little western club; it felt much more global. It was very heartening to see how well everybody got along and the shared visions people really, really had. And I hope that the people who are actually building this stuff, if they can all get along, can help spread this kind of constructive collaboration to the politicians and the political leaders in their various countries, and then we’ll all be much better off.

Anthony Aguirre: That felt really worthwhile in multiple aspects. One, it was just a great meeting, getting together with this small but really passionately positive, and smart, and well-intentioned, and friendly community. It’s so nice to get together with all those people; it’s very inspiring. But also, out of that meeting came a whole bunch of ideas for very interesting and important projects. And so some of the things that I’ve been working on are projects that came out of that meeting, and there’s a whole long list of others, some of which people are doing, some of which are just sitting, gathering dust, because there aren’t enough people to do them. That feels like really good news. It’s amazing when you get a group of smart people together to think in a way that hasn’t really been widely done before. Like, “Here’s the world 20 or 30 or 50 or 100 years from now, what are the things that we’re going to want to have happened in order for the world to be good then?”

Not many people sit around thinking that way very often. So to get 50 or 100 people who are really talented together thinking about that, it’s amazing how easy it is to come up with a set of really compelling things to do. Now actually getting those done, getting the people and the money and the time and the organization to get those done is a whole different thing. But that was really cool to see, because you can easily imagine things that have a big influence 10 or 15 years from now that were born right at that meeting.

Lucas Perry: Okay, so that hits on BAGI. So, were there any other policy-related things that you’ve done at FLI in 2019 that you’re really excited about?

Anthony Aguirre: It’s been really good to see, both at FLI and globally, the new and very serious attention being paid to AI policy and technology policy in general. We created the Asilomar principles back in 2017, and now, two years later, there are multiple other sets of principles, many of which are overlapping and some of which aren’t. And more importantly, there are now institutions coming into being, international groups like the OECD, the United Nations, the European Union, maybe someday the US government, actually taking seriously these sets of principles about how AI should be developed and deployed so as to be beneficial.

There’s kind of now too much going on to keep track of, multiple bodies, conferences practically every week, so the FLI policy team has been kept busy just keeping track of what’s going on, and working hard to positively influence all these efforts that are going on. Because of course while there’s a lot going on, it doesn’t necessarily mean that there’s a huge amount of expertise that is available to feed those efforts. AI is relatively new on the world’s stage, at least at the size that it’s assuming. AI and policy expertise, that intersection, there just aren’t a huge number of people who are ready to give useful advice on the policy side and the technical side and what the ramifications are and so on.

So I think the fact that FLI has been there from the early days of AI policy five years ago, means that we have a lot to offer to these various efforts that are going on. I feel like we’ve been able to really positively contribute here and there, taking opportunistic chances to lend our help and our expertise to all kinds of efforts that are going on and doing real serious policy work. So that’s been really interesting to see that unfold and how rapidly these various efforts are gearing up around the world. I think that’s something that FLI can really do, bringing the technical expertise to make those discussions and arguments more sophisticated, so that we can really take it to the next step and try to get something done.

Max Tegmark: Another one which was very uplifting is this tradition we have to celebrate unsung heroes. So three years ago we celebrated the guy who prevented the world from getting nuked in 1962, Vasili Arkhipov. Two years ago, we celebrated the man who probably helped us avoid getting nuked in 1983, Stanislav Petrov. And this year we celebrated an American who I think has done more than anyone else to prevent all sorts of horrible things happening with bioweapons, Matthew Meselson from Harvard, who ultimately persuaded Kissinger, who persuaded Brezhnev and everyone else that we should just ban them. 

We celebrated them all by giving them or their survivors a $50,000 award and having a ceremony where we honored them, to remind the world of how valuable it is when you can just draw a clear, moral line between the right thing to do and the wrong thing to do. Even though we call this the Future of Life award officially, informally, I like to think of this as our unsung hero award, because there really aren’t awards particularly for people who prevented shit from happening. Almost all awards are for someone causing something to happen. Yet, obviously we wouldn’t be having this conversation if there’d been a global thermonuclear war. And it’s so easy to think that just because something didn’t happen, there’s not much to think about it. I’m hoping this can help create both a greater appreciation of how vulnerable we are as a species and the value of not being too sloppy. And also, that it can help foster a tradition that if someone does something that future generations really value, we actually celebrate them and reward them. I want us to have a norm in the world where people know that if they sacrifice themselves by doing something courageous, that future generations will really value, then they will actually get appreciation. And if they’re dead, their loved ones will get appreciation.

We now feel incredibly grateful that our world isn’t radioactive rubble, or that we don’t have to read about bioterrorism attacks in the news every day. And we should show our gratitude, because this sends a signal to people today who can prevent tomorrow’s catastrophes. And the reason I think of this as an unsung hero award, and the reason these people have been unsung heroes, is because what they did was often going a little bit against what they were supposed to do at the time, according to the little system they were in, right? Arkhipov and Petrov, neither of them got any medals for averting nuclear war because their peers either were a little bit pissed at them for violating protocol, or a little bit embarrassed that we’d almost had a war by mistake. And we want to send the signal to the kids out there today that, if push comes to shove, you got to go with your own moral principles.

Lucas Perry: Beautiful. What project directions are you most excited about moving in, in 2020 and beyond?

Anthony Aguirre: Along with the ones that I’ve already mentioned, something I’ve been involved with is Metaculus, this prediction platform, and the idea there is that there are certain facts about the future world, and Metaculus is a way to predict probabilities for those facts being true about the future world. But there are also facts about the current world that we either don’t know whether they’re true or not, or disagree about whether they’re true or not. Something I’ve been thinking a lot about is how to extend the predictions of Metaculus into a general truth-seeking mechanism. If there’s something that’s contentious now, and people disagree about something that should be sort of a fact, can we come up with a reliable truth-seeking arbiter that people will believe, because it’s been right in the past and has a very clear, reliable track record for getting things right, in the same way that Metaculus has that record for getting predictions right?

So that’s something that interests me a lot, is kind of expanding that very strict level of accountability and track record creation from prediction to just truth-seeking. And I think that could be really valuable, because we’re entering this phase where people feel like they don’t know what’s true and facts are under contention. People simply don’t know what to believe. The institutions that they’re used to trusting to give them reliable information are either conflicting with each other or getting drowned in a sea of misinformation.

Lucas Perry: So, would this institution gain its credibility and epistemic status and respectability by taking positions on unresolved, yet concrete issues, which are likely to resolve in the short-term?

Anthony Aguirre: Or the not as short-term. But yeah, so just like in a prediction, where there might be disagreements as to what’s going to happen because nobody quite knows, and then at some point something happens and we all agree, “Oh, that happened, and some people were right and some people were wrong,” I think there are many propositions under contention now, but in a few years when the dust has settled and there’s not so much heat about them, everybody’s going to more or less agree on what the truth was.

And so I think, in a sense, this is about saying, “Here’s something that’s contentious now; let’s make a prediction about how it will turn out to be seen five or 10 or 15 years from now, when the dust has settled and people more or less agree on how this was.”

I think there’s only so long that people can go without feeling like they can actually rely on some source of information. I mean, I do think that there is a reality out there, and ultimately you have to pay a price if you are not acting in accordance with what is true about that reality. You can’t indefinitely win by just denying the truth of the way that the world is. People seem to do pretty well for a while, but I maintain my belief that eventually there will be a competitive advantage in understanding the way things actually are, rather than your fantasy of them.

We did have trusted institutions in the past that people generally listened to and felt like they were being told the basic truth by. Now, they weren’t always right, and there were lots of problems with those institutions, but we’ve lost something, in that almost nobody trusts anything anymore at some level, and we have to get that back. We will solve this problem, I think, in the sense that we sort of have to. What that solution will look like is unclear, and this is an effort to feel our way towards a potential solution to that.

Tucker Davey: I’m definitely excited to continue this work on our AI messaging and generally just continuing the discussion about advanced AI and artificial general intelligence within the FLI team and within the broader community, to get more consensus about what we believe and how we think we should approach these topics with different communities. And I’m also excited to see how our policy team continues to make more splashes across the world, because it’s really been exciting to watch how Jared and Jessica and Anthony have been able to talk with so many diverse stakeholders and help them make better decisions about AI.

Jessica Cussins Newman: I’m most excited to see the further development of some of these global AI policy forums in 2020. For example, the OECD is establishing an AI policy observatory, which we’ll see further development on early next year. And FLI is keen to support this initiative, and I think it may be a really meaningful forum for global coordination and cooperation on some of these key global AI challenges. So I’m really excited to see what they can achieve.

Jared Brown: I’m really looking forward to the opportunity the Future of Life has to lead the implementation of a recommendation related to artificial intelligence from the UN’s High-Level Panel on Digital Cooperation. This is a group that was led by Jack Ma and Melinda Gates, and they produced an extensive report that had many different recommendations on a range of digital or cyber issues, including one specifically on artificial intelligence. And because of our past work, we were invited to be a leader on the effort to implement and further refine the recommendation on artificial intelligence. And we’ll be able to do that with cooperation from the government of France, and Finland, and also with a UN agency called the UN Global Pulse. So I’m really excited about this opportunity to help lead a major project in the global governance arena, and to help actualize how some of these early soft law norms that have developed in AI policy can be developed for a better future.

I’m also excited about continuing to work with other civil society organizations, such as the Future of Humanity Institute, the Center for the Study of Existential Risk, other groups that are like-minded in their approach to tech issues. And helping to inform how we work on AI policy in a number of different governance spaces, including with the European Union, the OECD, and other environments where AI policy has suddenly become the topic du jour of interest to policy-makers.

Emilia Javorsky: Something that I’m really excited about is continuing to work on this issue of global engagement on the topic of lethal autonomous weapons, as I think this issue is heading in a very positive direction. By that I mean starting to move towards meaningful action. And really the only way we get to action on this issue is through education, because policymakers really need to understand what these systems are, what their risks are, and how AI differs from other traditional areas of technology that have really well-established existing governance frameworks. So that’s something I’m really excited about for the next year. And this has been especially in the context of engaging with states at the United Nations. So it’s really exciting to continue those efforts and continue to keep this issue on the radar.

Kirsten Gronlund: I’m super excited about our website redesign. I think that’s going to enable us to reach a lot more people and communicate more effectively, and obviously it will make my life a lot easier. So I think that’s going to be great.

Lucas Perry: I’m excited about that too. I think there’s a certain amount of a maintenance period that we need to kind of go through now, with regards to the website and a bunch of the pages, so that everything is refreshed and new and structured better. 

Kirsten Gronlund: Yeah, we just need like a little facelift. We are aware that the website right now is not super user-friendly, and we are doing an incredibly in-depth audit of the site to figure out, based on data, what’s working and what isn’t working, and how people would best be able to use the site to get the most out of the information that we have, because I think we have really great content, but the way that the site is organized is not super conducive to finding it or using it.

So anyone who likes our site and our content but has trouble navigating or searching or anything: hopefully that will be getting a lot easier.

Ian Rusconi: I think I’d be interested in more conversations about ethics overall, and how ethical decision making is something that we need more of, as opposed to just economic decision making, and reasons for that with actual concrete examples. It’s one of the things that I find is a very common thread throughout almost all of the conversations that we have, but it’s rarely explicitly connected from one episode to another. And I think that there is some value in creating a conversational narrative about that. If we look at, say, the Not Cool Project, there are episodes about finance, and episodes about how the effects of what we’ve been doing to create a global economy have created problems. And if we look at the AI Alignment Podcast, there are concerns about how systems will work in the future, and who they will work for, and who benefits from things. And if you look at FLI’s main podcast, there are concerns about denuclearization, and lethal autonomous weapons, and things like that, and there are major ethical considerations to be had in all of these.

And I think that there’s benefit in taking all of these ethical considerations and talking about them specifically, outside of the context of the fields that they are in, just as a way of getting more people to think about ethics. Not in opposition to thinking about, say, economics, but just to get people thinking about ethics as a stand-alone thing, before trying to introduce how it’s relevant to something. I think if more people thought about ethics, we would have a lot fewer problems than we do.

Lucas Perry: Yeah, I would be interested in that too. I would first want to know empirically how many of the decisions that the average human being makes in a day are actually informed by “ethical decision making,” which, I guess my intuition at the moment is, probably not that many?

Ian Rusconi: Yeah, I don’t know how much ethics plays into my autopilot-type decisions. I would assume. Probably not very much.

Lucas Perry: Yeah. We think about ethics explicitly a lot. I think that that definitely shapes my terminal values. But yeah, I don’t know, I feel confused about this. I don’t know how much of my moment to moment lived experience and decision making is directly born of ethical decision making. So I would be interested in that too, with that framing that I would first want to know the kinds of decision making faculties that we have, and how often each one is employed, and the extent to which improving explicit ethical decision making would help in making people more moral in general.

Ian Rusconi: Yeah, I could absolutely get behind that.

Max Tegmark: What I find also to be a concerning trend, and a predictable one, is that just like we had a lot of greenwashing in the corporate sector about environmental and climate issues, where people would pretend to care about the issues just so they didn’t really have to do much, we’re seeing a lot of what I like to call “ethics washing” now in AI, where people say, “Yeah, yeah. Okay, let’s talk about AI ethics now, like an ethics committee, and blah, blah, blah, but let’s not have any rules or regulations, or anything. We can handle this because we’re so ethical.” And interestingly, the very same people who talk the loudest about ethics are often among the ones who are the most dismissive about the bigger risks from human-level AI and beyond. And also the ones who don’t want to talk about malicious use of AI, right? They’ll be like, “Oh yeah, let’s just make sure that robots and AI systems are ethical and do exactly what they’re told,” but they don’t want to discuss what happens when some country, or some army, or some terrorist group has such systems, and tells them to do things that are horrible for other people. That’s an elephant in the room we are looking forward to helping draw more attention to, I think, in the coming year.

And what I also feel is absolutely crucial here is to avoid splintering the planet again, into basically an eastern and a western zone of dominance that just don’t talk to each other. Trade is down between China and the West. China has its great firewall, so they don’t see much of our internet, and we also don’t see much of their internet. It’s becoming harder and harder for students to come here from China because of visas, and there’s sort of a partitioning into two different spheres of influence. And as I said before, this is a technology which could easily make everybody a hundred times better or richer, and so on. You can imagine many futures where countries just really respect each other’s borders, and everybody can flourish. Yet, major political leaders are acting like this is some sort of zero-sum game. 

I feel that this is one of the most important things to help people understand that, no, it’s not like we have a fixed amount of money or resources to divvy up. If we can avoid very disruptive conflicts, we can all have the future of our dreams.

Lucas Perry: Wonderful. I think this is a good place to end on that point. So, what are reasons that you see for existential hope, going into 2020 and beyond?

Jessica Cussins Newman: I have hope for the future because I have seen this trend where it’s no longer a fringe issue to talk about technology ethics and governance. And I think that used to be the case not so long ago. So it’s heartening that so many people and institutions, from engineers all the way up to nation states, are really taking these issues seriously now. I think that momentum is growing, and I think we’ll see engagement from even more people and more countries in the future.

I would just add that it’s a joy to work with FLI, because it’s an incredibly passionate team, and everybody has a million things going on, and still gives their all to this work and these projects. I think what unites us is that we all think these are some of the most important issues of our time, and so it’s really a pleasure to work with such a dedicated team.

Lucas Perry:  Wonderful.

Jared Brown: As many of the listeners will probably realize, governments across the world have really woken up to this thing called artificial intelligence, and what it means for civil society, their governments, and the future of humanity. And I’ve been surprised, frankly, over the past year, by how many of the new national and international strategies, the new principles, and so forth are actually quite aware of both the potential benefits and the real safety risks associated with AI. Frankly, at this time last year, I wouldn’t have thought as many principles would have come out. There’s a lot of positive work in those principles, a lot of serious thought about the future of where this technology is going. And so, on the whole, I think the picture is much better than what most people might expect in terms of the level of high-level thinking that’s going on in policy-making about AI, its benefits, and its risks going forward. And so on that score, I’m quite hopeful that there are a lot of positive soft norms to work from. And hopefully we can work to implement those ideas and concepts going forward in real policy.

Lucas Perry: Awesome.

Emilia Javorsky: I am optimistic, and it comes from having had a lot of these conversations, specifically this past year, on lethal autonomous weapons, and speaking with people from a range of views and being able to sit down, coming together, having a rational and respectful discussion, and identifying actionable areas of consensus. That has been something that has been very heartening for me, because there is just so much positive potential for humanity waiting on the science and technology shelves of today, never mind what’s in the pipeline that’s coming up. And I think that despite all of this tribalism and hyperbole that we’re bombarded with in the media every day, there are ways to work together as a society, and as a global community, and just with each other to make sure that we realize all that positive potential, and I think that sometimes gets lost. I’m optimistic that we can make that happen and that we can find a path forward on restoring that kind of rational discourse and working together.

Tucker Davey: I think my main reasons for existential hope in 2020 and beyond are, first of all, seeing how many more people are getting involved in AI safety, in effective altruism, and existential risk mitigation. It’s really great to see the community growing, and I think just by having more people involved, that’s a huge step. As a broader existential hope, I am very interested in thinking about how we can better coordinate to collectively solve a lot of our civilizational problems, and to that end, I’m interested in ways where we can better communicate about our shared goals on certain issues, ways that we can more credibly commit to action on certain things. So these ideas of credible commitment mechanisms, whether that’s using advanced technology like blockchain or whether that’s just smarter ways to get people to commit to certain actions, I think there’s a lot of existential hope for bigger groups in society coming together and collectively coordinating to make systemic change happen.

I see a lot of potential for society to organize mass movements to address some of the biggest risks that we face. For example, I think it was last year that an AI researcher we’ve worked with, Toby Walsh, organized a boycott against a South Korean company that was working to develop autonomous weapons. Within a day or two, I think, he contacted a bunch of AI researchers and they signed a pledge to boycott this group until they decided to ditch the project. And the boycott succeeded basically within two days. And I think that’s one good example of the power of boycotts, and the power of coordination and cooperation to address our shared goals. So if we can learn lessons from Toby Walsh’s boycott, as well as from the fossil fuel and nuclear divestment movements, I think we can start to realize some of our potential to push these big industries in more beneficial directions.

So whether it’s the fossil fuel industry, the nuclear weapons industry, or the AI industry, as a collective, we have a lot of power to use stigma to push these companies in better directions. No company or industry wants bad press. And if we get a bunch of researchers together to agree that a company’s doing some sort of bad practice, and then we can credibly say that, “Look, you guys will get bad press if you guys don’t change your strategy,” many of these companies might start to change their strategy. And I think if we can better coordinate and organize certain movements and boycotts to get different companies and industries to change their practices, that’s a huge source of existential hope moving forward.

Lucas Perry: Yeah. I mean, it seems like the point that you’re trying to articulate is that there are particular instances like this thing that happened with Toby Walsh that show you the efficacy of collective action around our issues.

Tucker Davey: Yeah. I think there’s a lot more agreement on certain shared goals, such as: we don’t want banks investing in fossil fuels, or we don’t want AI companies developing weapons that can make targeted kill decisions without human intervention. And if we take some of these broad shared goals and then we develop some sort of plan to basically pressure these companies to change their ways or to adopt better safety measures, I think these sorts of collective action can be very effective. And I think as a broader community, especially with more people in the community, we have much more of a possibility to make this happen.

So I think I see a lot of existential hope from these collective movements to push industries in more beneficial directions, because they can really help us, as individuals, feel more of a sense of agency that we can actually do something to address these risks.

Kirsten Gronlund: I feel like there’s actually been a pretty marked difference in the way that people are reacting to… at least things like climate change, and I sort of feel like more generally, there’s sort of more awareness just of the precariousness of humanity, and the fact that our continued existence and success on this planet is not a given, and we have to actually work to make sure that those things happen. Which is scary, and kind of exhausting, but I think is ultimately a really good thing, the fact that people seem to be realizing that this is a moment where we actually have to act and we have to get our shit together. We have to work together and this isn’t about politics, this isn’t about, I mean it shouldn’t be about money. I think people are starting to figure that out, and it feels like that has really become more pronounced as of late. I think especially younger generations, like obviously there’s Greta Thunberg and the youth movement on these issues. It seems like the people who are growing up now are so much more aware of things than I certainly was at that age, and that’s been cool to see, I think. They’re better than we were, and hopefully things in general are getting better.

Lucas Perry: Awesome.

Ian Rusconi: I think it’s often easier for a lot of us to feel hopeless than it is to feel hopeful. Most of the news that we get comes in the form of warnings, or the existing problems, or the latest catastrophe, and it can be hard to find a sense of agency as an individual when talking about huge global issues like lethal autonomous weapons, or climate change, or runaway AI.

People frame little issues that add up to bigger ones as things like death by 1,000 bee stings, or the straw that broke the camel’s back, and things like that, but that concept works both ways. 1,000 individual steps in a positive direction can change things for the better. And working on these podcasts has shown me the number of people taking those steps. People working on AI safety, international weapons bans, climate change mitigation efforts. There are whole fields of work, absolutely critical work, that so many people, I think, probably know nothing about. Certainly that I knew nothing about. And sometimes, knowing that there are people pulling for us, that’s all we need to be hopeful. 

And beyond that, once you know that work exists and that people are doing it, nothing is stopping you from getting informed and helping to make a difference. 

Kirsten Gronlund: I had a conversation with somebody recently who is super interested in these issues, but was feeling like they just didn’t have particularly relevant knowledge or skills. And what I would say is “neither did I when I started working for FLI,” or at least I didn’t know a lot about these specific issues. But really anyone, if you care about these things, you can bring whatever skills you have to the table, because we need all the help we can get. So don’t be intimidated, and get involved.

Ian Rusconi: I guess I think that’s one of my goals for the podcast, is that it inspires people to do better, which I think it does. And that sort of thing gives me hope.

Lucas Perry: That’s great. I feel happy to hear that, in general.

Max Tegmark: Let me first give a more practical reason for hope, and then get a little philosophical. So on the practical side, there are a lot of really good ideas that the AI community is quite unanimous about, in terms of policy and things that need to happen, that basically aren’t happening because policy makers and political leaders don’t get it yet. And I’m optimistic that we can get a lot of that stuff implemented, even though policy makers won’t pay attention now. If we get AI researchers around the world to formulate and articulate really concrete proposals and plans for policies that should be enacted, and they get totally ignored for a while? That’s fine, because eventually some bad stuff is going to happen because people weren’t listening to their advice. And whenever those bad things do happen, then leaders will be forced to listen because people will be going, “Wait, what are you going to do about this?” And if at that point, there are broad international consensus plans worked out by experts about what should be done, that’s when they actually get implemented. So the hopeful message I have to anyone working in AI policy is: don’t despair if you’re being ignored right now, keep doing all the good work and flesh out the solutions, and start building consensus for it among the experts, and there will be a time people will listen to you. 

To just end on a more philosophical note, again, I think it’s really inspiring to think how much impact intelligence has had on life so far. We realize that we’ve already completely transformed our planet with intelligence. If we can use artificial intelligence to amplify our intelligence, it will empower us to solve all the problems that we’re stumped by thus far, including curing all the diseases that kill our near and dear today. And for those so minded, even help life spread into the cosmos. Not even the sky is the limit, and the decisions about how this is going to go are going to be made within the coming decades, so within the lifetime of most people who are listening to this. There’s never been a more exciting moment to think about grand, positive visions for the future. That’s why I’m so honored and excited to get to work with the Future of Life Institute.

Anthony Aguirre: Just like disasters, I think big positive changes can arise with relatively little warning and then seem inevitable in retrospect. I really believe that people are actually wanting and yearning for a society and a future that gives them fulfillment and meaning, and that functions and works for people.

There’s a lot of talk in the AI circles about how to define intelligence, and defining intelligence as the ability to achieve one’s goals. And I do kind of believe that for all its faults, humanity is relatively intelligent as a whole. We can be kind of foolish, but I think we’re not totally incompetent at getting what we are yearning for, and what we are yearning for is a kind of just and supportive and beneficial society that we can exist in. Although there are all these ways in which the dynamics of things that we’ve set up are going awry in all kinds of ways, and people’s own self-interest fighting it out with the self-interest of others is making things go terribly wrong, I do nonetheless see lots of people who are putting interesting, passionate effort forward toward making a better society. I don’t know that that’s going to turn out to be the force that prevails, I just hope that it is, and I think it’s not time to despair.

There’s a little bit of a selection effect in the people that you encounter through something like the Future of Life Institute, but there are a lot of people out there who genuinely are trying to work toward a vision of some better future, and that’s inspiring to see. It’s easy to focus on the differences in goals, because it seems like different factions that people want totally different things. But I think that belies the fact that there are lots of commonalities that we just kind of take for granted, and accept, and brush under the rug. Putting more focus on those and focusing the effort on, “given that we can all agree that we want these things and let’s have an actual discussion about what is the best way to get those things,” that’s something that there’s sort of an answer to, in the sense that we might disagree on what our preferences are, but once we have the set of preferences we agree on, there’s kind of the correct or more correct set of answers to how to get those preferences satisfied. We actually are probably getting better, we can get better, this is an intellectual problem in some sense and a technical problem that we can solve. There’s plenty of room for progress that we can all get behind.

Again, strong selection effect. But when I think about the people that I interact with regularly through the Future of Life Institute and other organizations that I work as a part of, they’re almost universally highly-effective, intelligent, careful-thinking, well-informed, helpful, easy to get along with, cooperative people. And it’s not impossible to create or imagine a society where that’s just a lot more widespread, right? It’s really enjoyable. There’s no reason that the world can’t be more or less dominated by such people.

As economic opportunity grows and education grows and everything, there’s no reason to see that that can’t grow also, in the same way that non-violence has grown. It used to be a part of everyday life for pretty much everybody, now many people I know go through many years without having any violence perpetrated on them or vice versa. We still live in a sort of overall, somewhat violent society, but nothing like what it used to be. And that’s largely because of the creation of wealth and institutions and all these things that make it unnecessary and impossible to have that as part of everybody’s everyday life.

And there’s no reason that can’t happen in most other domains, I think it is happening. I think almost anything is possible. It’s amazing how far we’ve come, and I see no reason to think that there’s some hard limit on how far we go.

Lucas Perry: So I’m hopeful for the new year simply because in areas that are important, I think things are on average getting better than they are getting worse. And it seems to be that much of what causes pessimism is the perception that things are getting worse, or that we have these strange nostalgias for past times that we believe to be better than the present moment.

This isn’t new thinking, and is much in line with what Steven Pinker has said, but I feel that when we look at the facts about things like poverty, or knowledge, or global health, or education, or even the conversation surrounding AI alignment and existential risk, that things really are getting better, and that generally the extent to which it seems like it isn’t or that things are getting worse can be seen in many cases as our trend towards more information causing the perception that things are getting worse. But really, we are shining a light on everything that is already bad or we are coming up with new solutions to problems which generate new problems in and of themselves. And I think that this trend towards elucidating all of the problems which already exist, or through which we develop technologies and come to new solutions, which generate their own novel problems, this can seem scary as all of these bad things continue to come up, it seems almost never ending.

But they seem to me more now like revealed opportunities for growth and evolution of human civilization to new heights. We are clearly not at the pinnacle of life or existence or wellbeing, so as we encounter and generate and uncover more and more issues, I find hope in the fact that we can rest assured that we are actively engaged in the process of self-growth as a species. Without encountering new problems about ourselves, we are surely stagnating and risk decline. However, it seems that as we continue to find suffering and confusion and evil in the world and to notice how our new technologies and skills may contribute to these things, we have an opportunity to act upon remedying them, and then we can know that we are still growing and that that is a good thing. And so I think that there’s hope in the fact that we’ve continued to encounter new problems, because it means that we continue to grow better. And that seems like a clearly good thing to me.

And with that, thanks so much for tuning into this Year In The Review Podcast on our activities and team as well as our feelings about existential hope moving forward. If you’re a regular listener, we want to share our deepest thanks for being a part of this conversation and thinking about these most fascinating and important of topics. And if you’re a new listener, we hope that you’ll continue to join us in our conversations about how to solve the world’s most pressing problems around existential risks and building a beautiful future for all. Many well and warm wishes for a happy and healthy end of the year for everyone listening from the Future of Life Institute team. If you find this podcast interesting, valuable, unique, or positive, consider sharing it with friends and following us on your preferred listening platform. You can find links for that on the pages for these podcasts found at futureoflife.org.

FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert

We could all be more altruistic and effective in our service of others, but what exactly is it that’s stopping us? What are the biases and cognitive failures that prevent us from properly acting in service of existential risks, statistically large numbers of people, and long-term future considerations? How can we become more effective altruists? Stefan Schubert, a researcher at the University of Oxford’s Social Behaviour and Ethics Lab, explores questions like these at the intersection of moral psychology and philosophy. This conversation explores the steps that researchers like Stefan are taking to better understand psychology in service of doing the most good we can.

Topics discussed include:

  • The psychology of existential risk, longtermism, effective altruism, and speciesism
  • Stefan’s study “The Psychology of Existential Risks: Moral Judgements about Human Extinction”
  • Various works and studies Stefan Schubert has co-authored in these spaces
  • How this enables us to be more altruistic

Timestamps:

0:00 Intro
2:31 Stefan’s academic and intellectual journey
5:20 How large is this field?
7:49 Why study the psychology of X-risk and EA?
16:54 What does a better understanding of psychology here enable?
21:10 What are the cognitive limitations psychology helps to elucidate?
23:12 Stefan’s study “The Psychology of Existential Risks: Moral Judgements about Human Extinction”
34:45 Messaging on existential risk
37:30 Further areas of study
43:29 Speciesism
49:18 Further studies and work by Stefan

Works Cited 

Understanding cause-neutrality

Considering Considerateness: Why communities of do-gooders should be exceptionally considerate

On Caring by Nate Soares

Against Empathy: The Case for Rational Compassion

Eliezer Yudkowsky’s Sequences

Whether and Where to Give

A Person-Centered Approach to Moral Judgment

Moral Aspirations and Psychological Limitations

Robin Hanson on Near and Far Mode 

Construal-Level Theory of Psychological Distance

The Puzzle of Ineffective Giving (Under Review) 

Impediments to Effective Altruism

The Many Obstacles to Effective Giving (Under Review) 

You can listen to the podcast above, or read the full transcript below. All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

Lucas Perry: Hello everyone and welcome to the Future of Life Institute Podcast. I’m Lucas Perry.  Today, we’re speaking with Stefan Schubert about the psychology of existential risk, longtermism, and effective altruism more broadly. This episode focuses on Stefan’s reasons for exploring psychology in this space, how large this space of study currently is, the usefulness of studying psychology as it pertains to these areas, the central questions which motivate his research, a recent publication that he co-authored which motivated this interview called The Psychology of Existential Risks: Moral Judgements about Human Extinction, as well as other related work of his. 

This podcast often ranks in the top 100 of technology podcasts on Apple Music. This is a big help for increasing our audience and informing the public about existential and technological risks, as well as what we can do about them. So, if this podcast is valuable to you, consider sharing it with friends and leaving us a good review. It really helps. 

Stefan Schubert is a researcher at the Social Behaviour and Ethics Lab at the University of Oxford, working at the intersection of moral psychology and philosophy. He focuses on psychological questions of relevance to effective altruism, such as why our altruistic actions are often ineffective, and why we don’t invest more in safeguarding our common future. He was previously a researcher at the Centre for Effective Altruism and a postdoc in philosophy at the London School of Economics.

We can all be more altruistic and effective in our service of others. Expanding our moral circles of compassion farther into space and deeper into time, as well as across species, and possibly even eventually to machines, while mitigating our own tendencies towards selfishness and myopia is no easy task and requires deep self-knowledge and far more advanced psychology than I believe we have today. 

This conversation explores the first steps that researchers like Stefan are taking to better understand this space in service of doing the most good we can. 

So, here is my conversation with Stefan Schubert.

Lucas Perry: Can you take us through your intellectual and academic journey in the space of EA and longtermism and in general, and how that brought you to what you’re working on now?

Stefan Schubert: I started out studying a range of different subjects. I guess I had a little bit of a hard time deciding what I wanted to do. So I got a masters in political science. But then in the end, I ended up doing a PhD in philosophy at Lund University in Sweden, specifically in epistemology, the theory of knowledge. And then I went to the London School of Economics to do a postdoc. And during that time, I discovered effective altruism and I got more and more involved with that.

So then I applied to the Centre for Effective Altruism, here in Oxford, to work as a researcher, and I worked there as a researcher for two years. At first, I did policy work, including reports on catastrophic risk and x-risk for a foundation and for a government. But then I also did some work of a more general, foundational or theoretical nature, including work on the notion of cause neutrality and how we should understand it, and also on how EAs should think about everyday norms like norms of friendliness and honesty.

And even though at the time I didn’t do psychological empirical research, that work relates to my current work on psychology, because for the last two years, I’ve worked on the psychology of effective altruism at the Social Behaviour and Ethics Lab here at Oxford. This lab is headed by Nadira Faber and I also work closely with Lucius Caviola, who did his PhD here at Oxford and recently moved to Harvard to do a postdoc.

So we have three strands of research. The first one is sort of the psychology of effective altruism in general. So why is it that people aren’t effectively altruistic? This is a bit of a puzzle, because generally people are at least somewhat effective when they’re working in their own interest. To be sure, they are not maximally effective, but when they try to buy a home or save for retirement, they do some research and sort of try to find good value for money.

But they don’t seem to do the same when they donate to charity. They aren’t as concerned with effectiveness. So this is a bit of a puzzle. And then there are two strands of research, which have to do with specific EA causes. So one is the psychology of longtermism and existential risk, and the other is the psychology of speciesism, human-animal relations. So out of these three strands of research, I focused the most on the psychology of effective altruism in general and the psychology of longtermism and existential risk.

Lucas Perry: How large is the body of work regarding the psychology of existential risk and effective altruism in general? How many people are working on this? Can you give us more insight into the state of the field and the amount of interest there?

Stefan Schubert: It’s somewhat difficult to answer, because it sort of depends on how you define these domains. There’s research which is of some relevance to ineffective altruism, but it’s not exactly on that. But I would say that there may be around 10 researchers or so who are sort of EAs and work on these topics for EA reasons, so you definitely want to count them. And then when we’re thinking about non-EA researchers, like other academics, there hasn’t been that much research, I would say, on the psychology of X-risk and longtermism.

There’s research on the psychology of climate change; that’s a fairly large topic. But more specifically on X-risk and longtermism, there’s less. Effective altruism in general, that’s a fairly large topic. There’s lots of research on biases like the identifiable victim effect: people’s tendency to donate to identifiable victims over larger numbers of unknown, unidentifiable statistical victims. Maybe on the order of a few hundred papers.

And then the last topic, speciesism and human-animal relations: that’s fairly large. I know less of that literature, but my impression is that it’s fairly large.

Lucas Perry: Going back into the 20th century, much of what philosophers like Peter Singer have done is construct thought experiments which isolate the morally relevant aspects of a situation, and which are intended, in the end, to subvert psychological issues and biases in people.

So I guess I’m just reflecting here on how philosophical thought experiments are sort of the beginnings of elucidating a project of the psychology of EA or existential risk or whatever else.

Stefan Schubert: The vast majority of these papers are not directly inspired by philosophical thought experiments. It’s more like psychologists who run some experiments because there’s some theory that some other psychologist has devised. Most don’t look that much at philosophy I would say. But I think effective altruism and the fact that people are ineffectively altruistic, that’s fairly theoretically interesting for psychologists, and also for economists.

Lucas Perry: So why study psychological questions as they relate to effective altruism, and as they pertain to longtermism and longterm future considerations?

Stefan Schubert: It’s maybe easiest to answer that question in the context of effective altruism in general. I should also mention that when we study this topic of effectively altruistic actions in general, what we concretely study is effective and ineffective giving. And that is because, firstly, that’s what other people have studied, so it’s easier to put our research into context.

The other thing is that it’s quite easy to study in a lab setting, right? So you might ask people, where would you donate to the effective or the ineffective charity? You might think that career choice is actually more important than giving, or some people would argue that, but that seems more difficult to study in a lab setting. So with regards to what motivates our research on effective altruism in general and effective giving, what ultimately motivates our research is that we want to make people improve their decisions. We want to make them donate more effectively, be more effectively altruistic in general.

So how can you then do that? Well, I want to make one distinction here, which I think might be important to think about. And that is the distinction between what I call a behavioral strategy and an intellectual strategy. And the behavioral strategy is that you come up with certain framings or setups to decision problems, such that people behave in a more desirable way. So there’s literature on nudging for instance, where you sort of want to nudge people into desirable options.

So for instance, in a cafeteria where you have healthier foods at eye level and the unhealthy food is harder to reach, people will eat healthier than if it’s the other way round. You could come up with interventions that similarly make people donate more effectively. So for instance, the default option could be an effective charity. We know that in general, people tend often to go with the default option because of some kind of cognitive inertia. So that might lead to more effective donations.

I think it has some limitations. For instance, nudging might be interesting for the government because the government has a lot of power, right? It might frame the decision on whether you want to donate your organs after you’re dead. The other thing is that just creating and implementing these kinds of behavioral interventions can often be very time consuming and costly.

So one might think that this sort of intellectual strategy should be emphasized and shouldn’t be forgotten. With respect to the intellectual strategy, you’re not solely trying to change people’s behavior; you are trying to do that as well, but you’re also trying to change their underlying way of thinking. So in a sense it has a lot in common with philosophical argumentation. But the difference is that you start with descriptions of people’s default way of thinking.

You describe how the default way of thinking leads you to prioritize an identifiable victim over larger numbers of statistical victims. And then you sort of provide an argument that that’s wrong: statistical victims are just as real individuals as the identifiable victims. So you get people to accept that their own default way of thinking about identifiable versus statistical victims is wrong, and that they shouldn’t trust the default way of thinking but instead think in a different way.

I think that this strategy is actually often used, but we don’t often think about it as a strategy. So for instance, Nate Soares has this blog post “On Caring” where he argues that we shouldn’t trust our internal care-o-meter. This is because how much we feel about people dying can’t scale with the number of people that die, or with the badness of those increasing numbers. So it’s sort of an intellectual argument that takes psychological insight as a starting point, and other people have done this as well.

So the psychologist Paul Bloom has this book Against Empathy where he argues for similar conclusions. And I think Eliezer Yudkowsky uses this strategy a lot in his sequences. I think it’s often an effective strategy that should be used more.

Lucas Perry: So there’s the extent to which we can know about underlying, problematic cognition in persons, and we can then change the world in certain ways. As you said, this is framed as nudging, where you sort of manipulate the environment in such a way, without explicitly changing their cognition, as to produce desired behaviors. Now, my initial reaction to this is, how are you going to deal with the problem when they find out that you’re doing this to them?

Now the second one here is the extent to which we can use our insights from psychological analysis and studies to change implicit and explicit models and cognition in order to be better decision makers. If a million deaths is a statistic and a dozen deaths is a tragedy, then there is some kind of failure of empathy and compassion in the human mind. We’re not evolved or set up to deal with these kinds of moral calculations.

So maybe you could do nudging by setting up the world in such a way that people are more likely to donate to charities that are likely to help out statistically large, difficult-to-empathize-with numbers of people, or you can teach them how to think better and to better act on behalf of statistically large numbers of people.

Stefan Schubert: That’s a good analysis actually. On the second approach: what I call the intellectual strategy, you are sort of teaching them to think differently. Whereas on this behavioral or nudging approach, you’re changing the world. I also think that this comment about “they might not like the way you nudged them” is a good comment. Yes, that has been discussed. I guess in some cases of nudging, it might be sort of cases of weakness of will. People might not actually want the chocolate but they fall prey to their impulses. And the same might be true with saving for retirement.

So whereas with ineffective giving, yeah, there it’s much less clear. Is it really the case that people really want to donate effectively, and therefore sort of are happy to be nudged in this way? That doesn’t seem clear at all. So that’s absolutely a reason against that approach.

And then with respect to arguing for certain conclusions: in the sense that it is argument or argumentation, it’s more akin to philosophical argumentation. But it’s different from standard analytic philosophical argumentation in that it discusses human psychology. You discuss at length how our psychological dispositions mislead us, and that’s not how analytic philosophers normally do it. And of course you can argue for effective giving, for instance, in the standard philosophical vein.

And some people have done that, like the EA philosopher Theron Pummer; he has an interesting paper called Whether and Where to Give on the question of whether it is an obligation to donate effectively. So I think that’s interesting, but one worries that there might not be that much to say about these issues, because, everything else equal, it’s maybe sort of trivial that the more effectiveness the better. Of course everything isn’t always equal. But in general, there might not be too much interesting stuff you can say about that from a normative or philosophical point of view.

But there are tons of interesting psychological things you can say because there are tons of ways in which people aren’t effective. The other related issue is that this form of psychology might have a substantial readership. So it seems to me based on the success of Kahneman and Haidt and others, that people love to read about how their own and others’ thoughts by default go wrong. Whereas in contrast, standard analytic philosophy, it’s not as widely read, even among the educated public.

So for those reasons, I think that the sort of more psychology-based argumentation may in some respects be more promising than purely abstract philosophical arguments for why we should be effectively altruistic.

Lucas Perry: My view or insight here is that the analytic philosopher is more so trying on the many different perspectives in his or her own head, whereas the psychologist is empirically studying what is happening in the heads of many different people. So clarifying what a perfected science of psychology in this field would look like is useful for illustrating the end goals and what we’re attempting to do here. This isn’t to say that this will necessarily happen in our lifetimes or anything like that, but what does a full understanding of psychology as it relates to existential risk and longtermism and effective altruism enable for human beings?

Stefan Schubert: One thing I might want to say is that psychological insights might help us to formulate a vision of how we ought to behave, what mindset we ought to have, and what we ought to be like as people, which is not only normatively valid, which is what philosophers talk about, but also sort of persuasive. So one idea there that Lucius and I have discussed quite extensively recently is that some moral psychologists suggest that when we think about morality, we think to a large degree not in terms of whether a particular act was good or bad, but rather about whether the person who performed that act is good or bad, or whether they are virtuous or vicious.

So this is called the person-centered approach to moral judgment. Based on that idea, we’ve been thinking about what list of virtues people would need in order to make the world better, more effectively. And ideally these should be virtues that both are appealing to common sense, or which can at least be made appealing to common sense, and which also make the world better when applied.

So we’ve been thinking about which such virtues one would want to have on such a list. We’re not sure exactly what we’ll include, but some examples might be prioritization: that you need to make sure that you prioritize the best ways of helping. And then we have another which we call science: that you do proper research on how to help effectively, or that you rely on others who do. And then collaboration: that you’re willing to collaborate on moral issues, potentially even with your moral opponents.

So the details of these virtues aren’t too important, but the idea is that it hopefully should seem like a moral ideal to some people to be a person who lives these virtues. I think that to many people, philosophical arguments about the importance of being more effective and putting more emphasis on consequences, if you read them in a book of analytic philosophy, might seem pretty uninspiring. So people don’t read that and think “that’s what I would want to be like.”

But hopefully, they could read about these kinds of virtues and think, “that’s what I would want to be like.” So to return to your question, ideally we could use psychology to sort of create such visions of some kind of moral ideal that would not just be normatively correct, but also sort of appealing and persuasive.

Lucas Perry: It’s like a science which is attempting to contribute to the project of human and personal growth and evolution and enlightenment, insofar as that is possible.

Stefan Schubert: We see this as part of the larger EA project of using evidence and reason and research to make the world a better place. EA has this prioritization research where you try to find the best ways of doing good. I gave this talk at EAGx Nordics earlier this year on “Moral Aspirations and Psychological Limitations.” And in that talk I said, well, what EAs normally do when they prioritize ways of doing good is, as it were, to look into the world and think: what ways of doing good are there? What different causes are there? What sort of levers can we pull to make the world better?

So should we reduce existential risk from specific sources like advanced AI or bio risk, or is rather global poverty or animal welfare the best thing to work on? But then the other approach is to rather sort of look inside yourself and think, well I am not perfectly effectively altruistic, and that is because of my psychological limitations. So then we want to find out which of those psychological limitations are most impactful to work on because, for instance, they are more tractable or because it makes a bigger difference if we remove them. That’s one way of thinking about this research, that we sort of take this prioritization research and turn it inwards.

Lucas Perry: Can you clarify the kinds of things that psychology is really pointing out about the human mind? Part of this is clearly about biases and poor aspects of human thinking, but what does it mean for human beings to have these bugs in human cognition? What are the kinds of things that we’re discovering about the person and how he or she thinks that fail to be in alignment with the truth?

Stefan Schubert: I mean, there are many different sources of error, one might say. One thing that some people have discussed is that people are not that interested in being effectively altruistic. Why is that? Some people say that’s just because they get more warm glow out of giving to someone who’s suffering more saliently, and then the question arises, why do they get more warm glow out of that? Maybe that’s because they just want to signal their empathy. That’s sort of one perspective, which is maybe a bit cynical: that the ultimate source of lots of ineffectiveness is just this preference for signaling and maybe a lack of genuine altruism.

Another approach would be to just say, the world is very complex and it’s very difficult to understand it and we’re just computationally constrained, so we’re not good enough at understanding it. Another approach would be to say that because the world is so complex, we evolved various broad-brushed heuristics, which generally work not too badly, but then, when we are put in some evolutionarily novel context and so on, they don’t guide us too well. That might be another source of error. In general, what I would want to emphasize is that there are likely many different sources of human errors.

Lucas Perry: You’ve discussed here how you focus and work on these problems. You mentioned that you are primarily interested in the psychology of effective altruism in so far as we can become better effective givers and understand why people are not effective givers. And then, there is the psychology of longtermism. Can you enumerate some central questions that are motivating you and your research?

Stefan Schubert: To some extent, we need more research just in order to figure out what further research we and others should do so I would say that we’re in a pre-paradigmatic stage with respect to that. There are numerous questions one can discuss with respect to psychology of longtermism and existential risks. One is just people’s empirical beliefs on how good the future will be if we don’t go extinct, what the risk of extinction is and so on. This could potentially be useful when presenting arguments for the importance of work on existential risks. Maybe it turns out that people underestimate the risk of extinction and the potential quality of the future and so on. Another issue which is interesting is moral judgments, people’s moral judgements about how bad extinction would be, and the value of a good future, and so on.

Moral judgements about human extinction, that’s exactly what we studied in a recent paper that we published, which is called “The Psychology of Existential Risks: Moral Judgements about Human Extinction.” In that paper, we test a thought experiment by the philosopher Derek Parfit. He has this thought experiment where he discusses three different outcomes: first, peace; second, a nuclear war that kills 99% of the world’s existing population; and third, a nuclear war that kills everyone. Parfit says, then, that a war that kills everyone is the worst outcome, near-extinction is the next worst, and peace is the best. Maybe no surprises there, but the more interesting part of the discussion concerns the relative differences between these outcomes in terms of badness. Parfit effectively made an empirical prediction, saying that most people would find the difference in terms of badness between peace and near-extinction to be the greater one, but he himself thought that the difference between near-extinction and extinction is the greater difference. That’s because only extinction would lead to the future forever being lost, and Parfit thought that if humanity didn’t go extinct, the future could be very long and good, and therefore it would be a unique disaster if the future was lost.

On this view, extinction is uniquely bad, as we put it. It’s not just bad because it would mean that many people would die, but also because it would mean that we would lose a potentially long and grand future. We tested this hypothesis in the paper. First, we had a preliminary study, which didn’t actually pertain directly to Parfit’s hypothesis. We just studied whether people would find extinction a very bad event in the first place, and we found that, yes, they do, and that they think the government should invest substantially to prevent it.

Then, we moved on to the main topic, which was Parfit’s hypothesis. We made some slight changes. In the middle outcome, Parfit had 99% dying; we reduced that number to 80%. We also talked about catastrophes in general rather than nuclear wars, and we didn’t want to talk about peace, because we thought that people might have an emotional association with the word “peace,” so we just talked about no catastrophe instead. Using this paradigm, we found that Parfit was right. First, most people, just like him, thought that extinction was the worst outcome, near-extinction the next, and no catastrophe the best. But second, we found that most people find the difference in terms of badness between no one dying and 80% dying to be greater than the difference between 80% dying and 100% dying.

Our interpretation, then, is that this is presumably because they focus most on the immediate harm that the catastrophes cause, and in terms of the immediate harm, the difference between no one dying and 80% dying is obviously greater than that between 80% dying and 100% dying. That was a control condition in some of our experiments, but we also had other conditions where we would slightly tweak the question. We had one condition which we call the salience condition, where we made the longterm consequences of the three outcomes salient. We told participants to remember the longterm consequences of the outcomes. Here, we didn’t actually add any information that they don’t have access to, we just made some information more salient, and that made significantly more participants find the difference between 80% dying and 100% dying the greater one.

Then, we had yet another condition which we call the utopia condition, where we told participants that if humanity doesn’t go extinct, then the future will be extremely long and extremely good, and it was said that if 80% die, then obviously, at first, things are not so good, but after a recovery period, we would go on to this rosy future. We included this condition partly because such scenarios have been discussed to some extent by futurists, but partly also because we wanted to know, if we ramp up this goodness of the future to the maximum and maximize the opportunity costs of extinction, how many people would then find the difference between near-extinction and extinction the greater one. Indeed, we found that, given such a scenario, a large majority found the difference between 80% dying and 100% dying the larger one, so they did find extinction uniquely bad given this enormous opportunity cost of a utopian future.
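
For readers who want the logic behind these judgments spelled out, here is a minimal toy calculation. It is not taken from the study itself: the population figure and the value assigned to the long-term future are made-up assumptions purely for illustration. It shows how the answer to “which difference is greater?” flips once the lost future is included in the badness of extinction.

```python
# Toy sketch with made-up numbers (not from the study): how the "which gap is
# bigger?" judgment flips once the lost long-term future is counted.

POPULATION = 8e9       # assumed number of people alive today (rough figure)
FUTURE_VALUE = 1e12    # hypothetical stand-in for the value of a long, good future

def badness(fraction_dying, future_lost, count_future):
    """Badness = immediate deaths, plus the lost future if we choose to count it."""
    harm = fraction_dying * POPULATION
    if count_future and future_lost:
        harm += FUTURE_VALUE
    return harm

for count_future in (False, True):
    none = badness(0.0, future_lost=False, count_future=count_future)
    near = badness(0.8, future_lost=False, count_future=count_future)  # humanity recovers
    ext = badness(1.0, future_lost=True, count_future=count_future)    # future forever lost
    print(f"counting the future: {count_future}, "
          f"gap(none -> 80%) = {near - none:.2g}, "
          f"gap(80% -> 100%) = {ext - near:.2g}")

# Prints:
# counting the future: False, gap(none -> 80%) = 6.4e+09, gap(80% -> 100%) = 1.6e+09
# counting the future: True, gap(none -> 80%) = 6.4e+09, gap(80% -> 100%) = 1e+12
```

When only immediate harm is counted, the first gap dominates, matching the default judgments in the control condition; once the long-term future is counted, the second gap dominates, matching the pattern seen in the salience and utopia conditions.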

Lucas Perry: What’s going on in my head right now is that we were discussing earlier the role or not of these philosophical thought experiments in psychological analysis. You’ve done a great study here that helps to empirically concretize the biases, and remedies for the issues, that Derek Parfit had exposed and pointed to in his initial thought experiment. That thought experiment was popularized by Nick Bostrom, and it’s one of the key thought experiments for much of the existential risk community and people committed to longtermism, because it helps to elucidate the deep and rich amount of value in the deep future and how we don’t normally consider that. Your discussion here just seems to be opening up for me tons of possibilities in terms of how far and deep this can go in general. The point of Peter Singer’s child drowning in a shallow pond was to isolate the bias of proximity, and Derek Parfit’s thought experiment isolates the bias of familiarity, temporal bias. And continuing into the future, it’s making me think we also have biases about identity.

Derek Parfit also has thought experiments about identity, like with his teleportation machine where, say, you stepped into a teleportation machine and it annihilated all of your atoms but before it did so, it scanned all of your information and once it scanned you, it destroyed you and then re-assembled you on the other side of the room, or you can change the thought experiment and say on the other side of the universe. Is that really you? What does it mean to die? Those are the kinds of questions that are elicited. Listening to what you’ve developed and learned and reflecting on the possibilities here, it seems like you’re at the beginning of a potentially extremely important and meaningful field that helps to inform decision-making on these morally crucial and philosophically interesting questions and points of view. How do you feel about that or what I’m saying?

Stefan Schubert: Okay, thank you very much and thank you also for putting this Parfit thought experiment a bit in context. What you’re saying is absolutely right, that this has been used a lot, including by Nick Bostrom and others in the longtermist community and that was indeed one reason why we wanted to test it. I also agree that there are tons of interesting philosophical thought experiments there and they should be tested more. There’s also this other field of experimental philosophy where philosophers test philosophical thought experiments themselves, but in general, I think there’s absolutely more room for empirical testing of them.

With respect to temporal bias, I guess it depends a bit on what one means by that, because we actually did get an effect from just mentioning that they should consider the longterm consequences. So I might think that to some extent it’s not only that people are biased in favor of the present, but it’s also that they don’t really consider the longterm future. They sort of neglect it, and it’s not something that’s generally discussed among most people. I think this is also something that Parfit’s thought experiment highlights. You have to think about the really longterm consequences here, and if you do think about them, then your intuitions about this thought experiment should reverse.

Lucas Perry: People’s cognitive time horizons are really short.

Stefan Schubert: Yes.

Lucas Perry: People probably have the opposite discounting of future persons that I do. Just because I think that the kinds of experiences that Earth-originating intelligent life forms will be having in the near to 100 to 200 years will be much more deep and profound than what humans are capable of, that I would value them more than I value persons today. Most people don’t think about that. They probably just think there’ll be more humans and short of their bias towards present day humans, they don’t even consider a time horizon long enough to really have the bias kick in, is what you’re saying?

Stefan Schubert: Yeah, exactly. Thanks for that, also, for mentioning that. First of all, my view is that people don’t even think so much about the longterm future unless prompted to do so. Second, in this first study I mentioned, which was sort of a pre-study, we asked, “How good do you think that the future’s going to be?” On average, I think they said, “It’s going to be slightly better than the present,” and that would be very different from your view, then, that the future’s going to be much better. You could argue that this view that the future is going to be about as good as the present is somewhat unlikely. I think it’s going to be much better, or maybe it’s going to be much worse. There are several different biases or errors that are present here.

Merely making the longterm consequences of the three outcomes salient already makes people more inclined to find the difference between 80% dying and 100% dying the greater one, so there you don’t add any information. Also, specifying that the longterm outcomes are going to be extremely good makes a further difference that leads most people to find the difference between 80% dying and 100% dying the greater one.

Lucas Perry: I’m sure you and I, and listeners as well, have the hilarious problem of trying to explain this stuff to friends or family members or people that you meet that are curious about it and the difficulty of communicating it and imparting the moral saliency. I’m just curious to know if you have explicit messaging recommendations that you have extracted or learned from the study that you’ve done.

Stefan Schubert: You want to make the future more salient if you want people to care more about existential risk. With respect to explicit messaging more generally, like I said, there haven’t been that many studies on this topic, so I can’t refer to any specific study that says this is how you should work with the messaging on this topic. But just thinking more generally, one thing I’ve been thinking about is that maybe, with many of these issues, it just takes a while for people to get habituated to them. At first, if someone hears a very surprising statement that has very far-reaching conclusions, they might be intuitively a bit skeptical about it, independently of how reasonable that argument would be for someone who was completely unbiased. Their prior is that, probably, this is not right, and to some extent, this might even be reasonable. Maybe people should be a bit skeptical of people who say such things.

But then, what happens is that most such people, who make claims that seem to people very weird and very far-reaching, get discarded after some time, because people poke holes in their arguments and so on. But a small subset of all such people actually stick around and get more and more recognition, and you could argue that that’s what’s now happening with people who work on longtermism and X-risk. And then, people slowly get habituated to this and they say, “Well, maybe there is something to it.” It’s not a fully rational process. I think this doesn’t just relate to longtermism and X-risk but maybe also specifically to AI risk, where it takes time for people to accept that message.

I’m sure there are some things that you can do to speed up that process and some of them would be fairly obvious like have smart, prestigious, reasonable people talk about this stuff and not people who don’t seem as credible.

Lucas Perry: What are further areas of the psychology of longtermism or existential risk that you think would be valuable to study? And let’s also touch upon other interesting areas for effective altruism as well.

Stefan Schubert: I mentioned previously people’s empirical beliefs; those could be valuable to study. One thing I should mention there is that I think people’s empirical beliefs about the distant future are massively affected by framing effects, so depending on how you ask these questions, you are going to get very different answers. So it’s important to remember that it’s not like people have these stable beliefs that they will always report. The other thing I mentioned is moral judgments, and I said we studied moral judgements about human extinction, but there’s a lot of other stuff to do, like people’s views on population ethics, which could obviously be useful: views on whether creating happy people is morally valuable, whether it’s more valuable to bring a large number of people whose lives are barely worth living into existence than to bring a small number of very happy people into existence, and so on.

Those questions obviously have relevance for the moral value of the future. One thing I would want to say is that if you’re rational, then, obviously, your view on what and how much we should do to affect the distant future, that should arguably be a function of your moral views, including on population ethics, on the one hand, and also your empirical views of how the future’s likely to pan out. But then, I also think that people obviously aren’t completely rational and I think, in practice, their views on the longterm future will also be influenced by other factors. I think that their view on whether helping the longterm future seems like an inspiring project, that might depend massively on how the issue is framed. I think these aspects could be worth studying because if we find these kinds of aspects, then we might want to emphasize the positive aspects and we might want to adjust our behavior to avoid the negative. The goal should be to formulate a vision of longtermism that feels inspiring to people, including to people who haven’t put a lot of thought into, for instance, population ethics and related matters.

There are also some other specific issues which I think could be useful to study. One is the psychology of predictions about the distant future, and the implications of so-called construal level theory for the psychology of the longterm future. Many effective altruists would know construal level theory under another name: near mode and far mode. This is Robin Hanson’s terminology. Construal level theory is a theory about psychological distance and how it relates to how abstractly we construe things. It says, first, that we conceive of different forms of distance (spatial, temporal, social) similarly. The second claim is that we conceive of items and events at greater psychological distance more abstractly: we focus more on big picture features and less on details. Robin Hanson has discussed this theory very extensively, including with respect to the longterm future. And he argues that the great psychological distance to the distant future causes us to reason in overly abstract ways, to be overconfident, and to have poor epistemics in general about the distant future.

I find this very interesting, and these kinds of ideas are mentioned a lot in EA and the X-risk community. But, to my knowledge there hasn’t been that much research which applies construal level theory specifically to the psychology of the distant future.

It’s more like people look at these general studies of construal level theory, and then they noticed that, well, the temporal distance to the distant future is obviously extremely great. Hence, these general findings should apply to a very great extent. But, to my knowledge, this hasn’t been studied so much. And given how much people discuss near or far mode in this case, it seems that there should be some empirical research.

I should also mention that I find construal level theory a very interesting and rich psychological theory in general. I could see that it could illuminate the psychology of the distant future in numerous ways. Maybe it could be some kind of theoretical framework that I could use for many studies about the distant future. So, I recommend the key paper from 2010 by Trope and Liberman on construal level theory.

Lucas Perry: I think that just hearing you say this right now, it’s sort of opening my mind up to the wide spectrum of possible applications of psychology in this area.

You mentioned population ethics. That makes me think of, in the context of EA and longtermism and life in general, the extent to which psychological study and analysis can find ethical biases, root them out, and correct for them, either by nudging or by changing the explicit methods by which humans cognize about such ethics. There’s also the extent to which psychology can better inform our epistemics, that is, the extent to which we can be more rational.

And I’m reflecting now on how quantum physics subverts many of our Newtonian and classical mechanics intuitions about the world. And there’s the extent to which psychology can also illuminate the way in which our social and experiential lives condition the way that we think about the world, and the extent to which that sets us astray in trying to understand the fundamental nature of reality, or thinking about the longterm future, or thinking about ethics, or anything else. It seems like you’re at the beginning stages of debugging humans on some of the most important problems that exist.

Stefan Schubert: Okay. That’s a nice way of putting it. I certainly think that there is room for way more research on the psychology of longtermism and X-risk.

Lucas Perry: Can you speak a little bit now here about speciesism? This is both an epistemic thing and an ethical thing in the sense that we’ve invented these categories of species to describe the way that evolutionary histories of beings bifurcate. And then, there’s the psychological side of the ethics of it where we unnecessarily devalue the life of other species given that they fit that other category.

Stefan Schubert: So, we have one paper under review, which is called “Why People Prioritize Humans Over Animals: A Framework for Moral Anthropocentrism.”

To give you a bit of context, there’s been a lot of research on speciesism and on humans prioritizing humans over animals. So, in this paper we try to take a bit more of a systematic approach, pit these different hypotheses for why humans prioritize humans over animals against each other, and look at their relative strengths as well.

And what we find is that there is truth to several of these hypotheses of why humans prioritize humans over animals. One contributing factor is just that they value individuals with greater mental capacities, and most humans have greater mental capacities than most animals.

However, that explains only part of the effect we find. We also find that people think that humans should be prioritized over animals even if they have the same mental capacity. And here, we find that this is for two different reasons.

First, according to our findings, people are what we call species relativists. By that, we mean that they think that members of a species, including non-human species, should prioritize other members of their own species.

So, for instance, humans should prioritize other humans, and an elephant should prioritize other elephants. And that means that because humans are the ones calling the shots in the world, we have a right, according to this species relativist view, to prioritize our own species. But other species would have the same right if they were in power. At least, that’s the implication of what the participants say, if you take them at face value. That’s species relativism.

But then, there is also the fact that people exhibit an absolute preference for humans over animals, meaning that even if we control for the mental capacities of humans and animals, and even if we control for the species relativist factor by controlling for who the individual doing the helping is, there remains a difference which can’t be explained by those other factors.

So, there’s an absolute speciesist preference for humans which can’t be explained by any further factor. So, that’s an absolute speciesist preference as opposed to this species relativist view.

In total, there’s a bunch of factors that together explain why humans prioritize humans over animals, and these factors may also influence each other. So, we present some evidence that if people have a speciesist preference for humans over animals, that might, in turn, lead them to believe that animals have less advanced mental capacities than they actually have. And because they hold the view that individuals with lower mental capacities are less morally valuable, that leads them to further deprioritize animals.

So, these three different factors sort of interact with each other in intricate ways. Our paper gives an overview of these different factors, which together contribute to humans prioritizing humans over animals.

Lucas Perry: This helps make clear to me that a successful psychological study, at least with regard to ethical biases, will isolate the salient variables, the knobs that tweak the moral salience of one thing relative to another.

Now, you said mental capacities there. You guys aren’t bringing consciousness or sentience into this?

Stefan Schubert: We discussed different formulations at length, and we went for a somewhat generic formulation.

Lucas Perry: I think people have beliefs about the ability to reason about and understand the world, and about how that may or may not be correlated with consciousness, beliefs that most people don’t make explicit. It seems like there are some variables to unpack underneath cognitive capacity.

Stefan Schubert: I agree. This is still fairly broad-brush. The other thing to say is that sometimes we tell participants that a particular human has mental capacities as advanced as those of the animals in question. In that case, they have no reason to believe that the human has a more sophisticated sentience or is more conscious or anything like that.

Lucas Perry: Our species membership tells me that we probably have more consciousness. My bedrock concern is how much a being can suffer, not how well it can model the world, though those things are probably highly correlated with one another. I don’t think I would be a speciesist just for believing that human beings are currently the most important beings on the planet.

Stefan Schubert: You’re a speciesist if you prioritize humans over animals purely because of species membership. But, if you prioritize one species over another for some other reason which is morally relevant, then you would not be seen as a speciesist.

Lucas Perry: Yeah, I’m excited to see what comes of that. Just as we work on overcoming racism and misogyny and other prejudices, I think that overcoming speciesism, temporal biases, and physical proximity biases are some of the next stages in human moral evolution that have to come. So, I think it’s honestly terrific that you’re working on these issues.

Is there anything you would like to say or that you feel that we haven’t covered?

Stefan Schubert: We have one paper which is called “The Puzzle of Ineffective Giving,” where we study a misconception that people have, which is that they think the difference in effectiveness between charities is much smaller than it actually is. Experts think that the most effective charities are vastly more effective than the average charity, and most people don’t know that.

That seems to suggest that beliefs play a role in ineffective giving. But there was one interesting paper called “Impediments to Effective Altruism” where they show that even if you tell people that a cancer charity is less effective than an arthritis charity, they still donate to the cancer charity.

So, then we have this other paper called “The Many Obstacles to Effective Giving.” It’s a bit similar to the speciesism paper, I guess, in that we pit different competing hypotheses that people have studied against each other. We give people different tasks, for instance, tasks which involve identifiable victims and tasks which involve ineffective but low-overhead charities.

And then, we asked, well, what if we tell them how to be effective? Does that change how they behave? What’s the role of the pure belief factor? What’s the role of preferences? The result is a bit of a mix. Both beliefs and preferences contribute to ineffective giving.

In the real world, it’s likely that several beliefs and preferences that obstruct effective giving are present simultaneously. For instance, people might fail to donate to the most effective charity because, first, it’s not a disaster charity and they have a preference for disaster charities. It might also have high overhead, and they might falsely believe that high overhead entails low effectiveness. And it might not highlight identifiable victims, whereas they have a preference for donating to identifiable victims.

Several of these obstacles are present at the same time, and in that sense, ineffective giving is overdetermined. So, fixing one specific obstacle may not make as much of a difference as one would have wanted. That might support the view that what we need is not primarily behavioral interventions that address individual obstacles, but rather a broader mindset change that can motivate people to proactively seek out the most effective ways of doing good.

Lucas Perry: One other thing that’s coming to my mind is the proximity of a cause to someone’s attention and the degree to which it allows them to be celebrated in their community for the good that they have done.

Are you suggesting that the way to remedy this is to help instill a curiosity and something resembling the EA mindset that would allow people to do the cognitive exploration and work necessary to transcend these limitations that bind them to ineffective giving, or is that unrealistic?

Stefan Schubert: First of all, let me just say that with respect to this proximity issue, that was actually another task that we had. I didn’t mention all the tasks. We told people that they could either help a local charity or a charity that was, I think, in India. And then, we told them that the Indian charity was more effective and asked, “Where would you want to donate?”

So, you’re absolutely right. That’s another obstacle to effective giving: people sometimes have preferences for local charities, or beliefs that local charities are more effective, even when that’s not the case. One donor I talked to said, “Learning how to donate effectively is actually fairly complicated, and there are lots of different things to think about.”

So, just fixing the overhead myth or something like that may not take you very far, especially if you think that the very best charities are vastly more effective than the average charity. What’s important is not going from an average charity to a somewhat more effective charity, but actually finding the very best charities.

And to do that, we may need to address many psychological obstacles, because the most effective charities might seem very weird, concerned with the long-term future or what-not. So, I do think that a mindset where people seek out effective charities, or defer to others who do, might be necessary. It’s not super easy to make people adopt that mindset, definitely not.

Lucas Perry: We have charity evaluators, right? These are institutions which are intended to be reputable enough that they can tell you which are the most effective charities to donate to. But it wouldn’t even be enough to just market those really hard. People would say, “Okay, that’s cool, but I’m still going to donate my money to seeing-eye dogs because blindness runs in my family and is experientially and morally salient for me.”

Is the way that we fix the world really about just getting people to give more, and what is the extent to which the institutions which exist, which require people to give, need to be corrected and fixed? There’s that tension there between just the mission of getting people to give more, and then the question of, well, why do we need to get everyone to give so much in the first place?

Stefan Schubert: This insight that ineffective giving is overdetermined and there are lots of things that stand in a way of effective giving, one thing I like about it is that it seems to sort of go well with this observation that it is actually, in the real world, very difficult to make people donate effectively.

I might relate that a bit to what you mentioned about the importance of giving more. We could distinguish between different kinds of psychological limitations. First, there are limitations that relate to how much we give. We’re selfish, so we don’t necessarily give as much of our monetary or other resources as we should. There are limits to altruism.

But then, there are also limits to effectiveness. We are ineffective for various reasons that we’ve discussed. And then, there’s also the fact that we can have the wrong moral goals. Maybe we work towards short-term goals, but on careful reflection we would realize that we should work towards long-term goals.

And then, I was thinking, “Well, which of these obstacles should you prioritize if you turn this prioritization framework inwards?” You might think that, at least with respect to the amount given, it might be difficult to increase what you give by more than ten times. Americans, for instance, already donate several percent of their income. We know from historical experience that it might be hard for people to sustain very high levels of altruism, so maybe it’s difficult to ramp up this altruism factor to an extreme degree.

But then, with effectiveness, if this story about heavy-tailed distributions of effectiveness is right, then you could increase the effectiveness of your donations a lot. And arguably, the sort of psychological price for that is lower. It’s very demanding to give up a huge proportion of your income for others, but I would say that it’s less demanding to redirect your donations to a more effective cause, even if you feel more strongly for the ineffective cause.

I think it’s difficult to really internalize how enormously important it is to go for the most effective option. And then there’s also, of course, the third factor: changing your moral goals if necessary. If people reduced their donations by 99%, they would reduce their impact by 99%, and many people would feel guilty about that.

But if they reduce their impact by 99% by choosing a charity that is 99% less effective, they don’t feel similarly guilty. This is similar to Nate Soares’ idea of a care-o-meter: our feelings aren’t adjusted for these things, so we don’t feel as strongly about ineffectiveness as we do about altruistic sacrifice. That might lead us not to focus enough on effectiveness, and we should really think carefully about going that extra mile for the sake of effectiveness.
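
To make the multiplicative point above concrete, here is a minimal sketch in Python. All of the numbers are hypothetical and purely illustrative; none of them come from the papers or estimates discussed in this episode. Impact is modeled simply as the amount donated times the charity’s cost-effectiveness, so cutting either factor by 99% reduces impact equally, while redirecting a fixed donation to a far more effective charity can multiply impact without any extra sacrifice.

```python
# Minimal illustration of "impact = amount donated x effectiveness".
# All figures are hypothetical, for illustration only.

def impact(donation_amount: float, effectiveness: float) -> float:
    """Impact modeled as the amount donated times cost-effectiveness."""
    return donation_amount * effectiveness

baseline = impact(donation_amount=1_000, effectiveness=1)         # an average charity
more_giving = impact(donation_amount=10_000, effectiveness=1)     # 10x the personal sacrifice
better_target = impact(donation_amount=1_000, effectiveness=100)  # a top charity in a heavy-tailed distribution

print(baseline)       # 1000
print(more_giving)    # 10000  -> 10x the impact, at a high personal cost
print(better_target)  # 100000 -> 100x the impact, with no extra sacrifice

# Cutting effectiveness by 99% hurts impact exactly as much as cutting the donation by 99%,
# even though, as noted above, only the latter tends to trigger guilt.
print(impact(1_000, 0.01) == impact(10, 1))  # True
```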

Lucas Perry: Wonderful. I feel like you’ve given me a lot of concepts and tools that are just very helpful for reinvigorating an introspective mindfulness about altruism in my own life and how that can be nurtured and developed.

So, thank you so much. I’ve really enjoyed this conversation for the reasons I just said. I think this is a very important new research stream in this space, and it seems small now, but I really hope that it grows. And thank you to you and your colleagues for seeding this field and doing the initial work in it.

Stefan Schubert: Thank you very much. Thank you for having me. It was a pleasure.