
Rohin Shah on the State of AGI Safety Research in 2021

  • Inner Alignment Versus Outer Alignment
  • Foundation Models
  • Structural AI Risks
  • Unipolar Versus Multipolar Scenarios
  • The Most Important Thing That Impacts the Future of Life

 

Watch the video version of this episode here

00:00:00 Intro

00:02:22 What is AI alignment?

00:06:45 How has your perspective of this problem changed over the past year?

00:07:22 Inner Alignment

00:15:35 Ways that AI could actually lead to human extinction

00:22:50 Inner Alignment and Mesa Optimizers

00:24:15 Outer Alignment

00:27:32 The core problem of AI alignment

00:29:38 Learning Systems versus Planning Systems

00:34:00 AI and Existential Risk

00:38:59 The probability of AI existential risk

01:04:10 Core problems in AI alignment

01:03:07 How has AI alignment, as a field of research changed in the last year?

01:05:57 Large scale language models

01:06:55 Foundation Models

01:15:30 Why don’t we know that AI systems won’t totally kill us all?

01:23:50 How much of the alignment and safety problems in AI will be solved by industry?

01:31:00 Do you think about what beneficial futures look like?

01:39:44 Moral Anti-Realism and AI

01:46:22 Unipolar versus Multipolar Scenarios

01:56:38 What is the safety team at DeepMind up to?

01:57:30 What is the most important thing that impacts the future of life?

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is with Rohin Shah. He is a long-time friend of this podcast, and this is the fourth time we’ve had him on. Every time we talk to him, he gives us excellent overviews of the current thinking in technical AI alignment research. And in this episode he does just that. Our interviews with Rohin go all the way back to December of 2018. They’re super informative and I highly recommend checking them out if you’d like to do a deeper dive into technical alignment research. You can find links to those in the description of this episode. 

Rohin is a Research Scientist on the technical AGI safety team at DeepMind. He completed his PhD at the Center for Human-Compatible AI at UC Berkeley, where he worked on building AI systems that can learn to assist a human user, even if they don’t initially know what the user wants.

Rohin is particularly interested in big picture questions about artificial intelligence. What techniques will we use to build human-level AI systems? How will their deployment affect the world? What can we do to make this deployment go better? He writes up summaries and thoughts about recent work tackling these questions in the Alignment Newsletter, which I highly recommend following if you’re interested in AI alignment research. Rohin is also involved in Effective Altruism, and out of concern for animal welfare, is almost vegan.

And with that, I’m happy to present this interview with Rohin Shah. 

Welcome back Rohin. This is your third time on the podcast, I believe. We have this series of podcasts that we’ve been doing, where you help give us a year-end review of AI alignment and everything that’s been going on. You’re someone I view as very core and crucial to the AI alignment community, and I’m always happy and excited to be getting your perspective on what’s changing and what’s going on. So to start off, I just want to hit you with a simple, not-so-simple question: what is AI alignment?

Rohin Shah: Oh boy. Excellent. I love that we’re starting there. Yeah. So different people will tell you different things for this as I’m sure you know. The framing I prefer to use is that there is a particular class of failures that we can think about with AI, where the AI is doing something that its designers did not want it to do. And specifically it’s competently achieving some sort of goal or objective or some sort of competent behavior that isn’t the one that was intended by the designers. So for example, if you tried to build an AI system that is, I don’t know, supposed to help you schedule calendar events and then it like also starts sending emails on your behalf to people which maybe you didn’t want it to do. That would count as an alignment failure.

Whereas if a terrorist somehow makes an AI system that goes and detonates a bomb in some big city, that is not an alignment failure. It is obviously bad, but the AI system did what its designer intended for it to do. It doesn’t count as an alignment failure on my definition of the problem.

Other people will see AI alignment as synonymous with AI safety. For those people, terrorists using a bomb might count as an alignment failure, but at least when I’m using the term, I usually mean, the AI system is doing something that wasn’t what its designers intended for it to do.

There’s a little bit of a subtlety there, where you can think of either intent alignment, where you try to figure out what the AI system is trying to do, and then if it is trying to do something that isn’t what the designers wanted, that’s an intent alignment failure. Or you can say, all right, screw all of this notion of trying, we don’t know what trying is. How can we look at a piece of code and say whether or not it’s trying to do something?

And instead we can talk about impact alignment, which is just about the actual behavior of the AI system: is that what the designers intended or not? So if the AI makes a catastrophic mistake, where the AI thinks that this is the big red button for happiness and sunshine, but actually it’s the big red button that launches nukes, that is a failure of impact alignment, but isn’t a failure of intent alignment, assuming the AI legitimately believed that the button was the happiness and sunshine one.

Lucas Perry: So it seems like you could have one or more or less of these in a system at the same time. So which are you excited about? Which do you think are more important than the others?

Rohin Shah: In terms of what we actually care about, which is how I usually interpret “important,” the answer is pretty clearly impact alignment. The thing we care about is: did the AI system do what we want or not? I nevertheless tend to think in terms of intent alignment, because it seems like it is decomposing the problem into a natural notion of what the AI system is trying to do, and whether the AI system is capable enough to do it. And I think that is actually a natural division. You can in fact talk about these things separately, and because of that, it makes sense to have research organized around those two things separately. But that is a claim I am making about the best way to decompose the problem that we actually care about, and that is why I focus on intent alignment. But what do we actually care about? Impact alignment, totally.

Lucas Perry: How would you say that your perspective of this problem has changed over the past year?

Rohin Shah: I’ve spent a lot of time thinking about the problem of inner alignment. So this shot up to… I mean, people have been talking about it for a while, but it shot up to prominence in, I want to say, 2019 with the publication of the mesa optimizers paper. And I was not a huge fan of that framing, but I do think that the problem that it’s showing is actually an important one. So I’ve been thinking a lot about that.

Lucas Perry: Can you explain what inner alignment is and how it fits into the definitions of what AI alignment is?

Rohin Shah: Yeah. So AI alignment, the way I’ve described it so far, is just talking about properties of the AI system. It doesn’t really talk about how that AI system was built, but if you actually want to diagnose it, like give reasons why problems might arise and then figure out how to solve them, you probably want to talk about how the AI systems are built and why they’re likely to cause such problems.

Inner alignment, I’m not sure if I like the name, but we’ll go with it for now. Inner alignment is a problem that I claim happens for systems that learn. And the problem is, maybe I should explain it with an example. You might have seen this post from LessWrong about bleggs and rubes. These bleggs are blue in color and tend to be egg-shaped in all the cases they’ve seen so far. Rubes are red in color and are cube-shaped, at least in all the cases they’ve seen so far.

And now suddenly you see a red egg-shaped thing: is it a blegg or a rube? In this case, it’s pretty obvious that there isn’t a correct answer, and this same dynamic can arise in a learning system, where if it is learning how to behave in accordance with whatever we are training it to do, we’re going to be training it on a particular set of situations. And if those situations change in the future along some axis that the AI system didn’t see during training, it may generalize badly. A good example of this came from the objective robustness in deep reinforcement learning paper. They trained an agent on the CoinRun environment from Procgen. This is basically a very simple platformer game where the agent just has to jump over enemies and obstacles to get to the end and collect the coin.

And the coin is always at the far right end of the level. And so you train your AI system on a bunch of different kinds of levels, different obstacles, different enemies, placed in different ways. You have to jump in different ways, but the coin is always at the end on the right. And it turns out that if you then take your AI system and test it on a new level where the coin is placed somewhere else, not all the way to the right, the agent just continues to jump over obstacles, enemies, and so on. It behaves very competently in the platformer game, but it just runs all the way to the right and then stays at the right, or jumps up and down as though hoping that there’s a coin there. It’s behaving as if it has the objective of “go as far to the right as possible.”

Even though we trained it on the objective, get the coin, or at least that’s what we were thinking of as the objective. And this happened because we didn’t show it any examples where the coin was anywhere other than the right side of the level. So the inner alignment problem is when you train a system on one set of inputs, it learns how to behave well on that set of inputs. But then when you extrapolate its behavior to other inputs that you hadn’t seen during training, it turns out to do something that’s very capable, but not what you intended.
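The dynamic Rohin describes is easy to reproduce in miniature. The sketch below is a hypothetical toy gridworld standing in for the actual Procgen CoinRun experiment: during training the coin is always in the top-right corner, so a policy that just goes right is indistinguishable from one that goes to the coin, and at test time the trained agent runs straight past a relocated coin.

```python
# Toy illustration of the CoinRun-style failure: "go right" and "get the coin" look the
# same during training, and only diverge when the coin is moved at test time.
import random

ROWS, COLS = 2, 8
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up
TRAIN_COIN = (0, COLS - 1)                     # coin always top-right during training

def step(pos, a):
    r = min(max(pos[0] + ACTIONS[a][0], 0), ROWS - 1)
    c = min(max(pos[1] + ACTIONS[a][1], 0), COLS - 1)
    return (r, c)

def episode(q, coin, learn=True, eps=0.2, alpha=0.5, gamma=0.95):
    pos = (0, 0)
    for _ in range(4 * COLS):
        if learn and random.random() < eps:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda i: q[pos][i])
        nxt = step(pos, a)
        reward = 1.0 if nxt == coin else 0.0
        if learn:
            q[pos][a] += alpha * (reward + gamma * max(q[nxt]) - q[pos][a])
        pos = nxt
        if reward:
            return True
    return False

# The observation is just the agent's position; the coin's location never varies during
# training, so the learned policy cannot (and need not) depend on it -- mirroring how the
# CoinRun agent learned to ignore where the coin actually was.
q = {(r, c): [0.0] * 4 for r in range(ROWS) for c in range(COLS)}

for _ in range(5000):
    episode(q, TRAIN_COIN)                       # train: coin always top-right

test_coin = (1, 3)                               # test: coin moved off the rightward path
wins = sum(episode(q, test_coin, learn=False) for _ in range(100))
print(f"test-time coin collection rate: {wins}/100")   # typically 0: it just runs right
```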

Lucas Perry: Can you give an example of what this could look like in the real world, rather than in like a training simulation in a virtual environment?

Rohin Shah: Yeah. One example I like is, it’ll take a bit of setup, but I think it should be fine. You could imagine that with honestly, even today’s technology, we might be able to train an AI system that can just schedule meetings for you. Like when someone emails you asking for a meeting, you’re just like, here calendar scheduling agent, please do whatever you need to do in order to get this meeting scheduled. I want to have it, you go schedule it. And then it goes and emails a person who emails back saying, Rohin is free at such and such times, he like prefers morning meetings or whatever. And then, there’s some back and forth between, and then the meeting gets scheduled. For concreteness, let’s say that the way we do this, is we take a pre-trained language model, like say GPT-3, and then we just have GPT-3 respond to emails and we train it from human feedback.

Well, we have some examples of people scheduling meetings over email. We do supervised fine tuning on GPT-3 to get it started, and then we fine tune more from human feedback in order to get it to be good at this task. And it all works great. Now let’s say that in 2023, Gmail decides that Gmail also wants to be a chat app. And so it adds emoji reactions to emails, and everyone’s like, oh my God, now we can schedule meetings so much better. We can just say, here, send an email to all the people who are coming to the meeting, and react with emojis for each of the times that you’re available. And everyone loves this. This is how people start scheduling meetings now.

But it turns out that this AI system, when it’s confronted with these emoji polls, is like… it knows, it in theory is capable of using or knows how to use the emoji polls. It knows what’s going on, but it was always trained to schedule the meeting by email. So maybe it will have learned to always schedule a meeting by email and not to take advantage of these new features. So it might say something like, hey, I don’t really know how to use these newfangled emoji polls. Can we just schedule this over email the normal way? In our terms this would be a flat out lie, but from the AI’s perspective, we might think of it like, the AI was just trained to say whatever sequence of English words leads to getting a meeting scheduled by email, and it predicts that this sequence of words will work well. Would this actually happen if I actually trained an agent this way? I don’t know, it’s totally possible it would actually do the right thing, but I don’t think we can really rule out the wrong thing either. That also seems pretty plausible to me in this scenario.
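A cartoon of the training pipeline Rohin sketches here (supervised fine-tuning on example threads, then fine-tuning from human feedback), with a toy "assistant" that only chooses among a few canned reply templates. Everything in it, the templates, demonstrations, and ratings, is made up, and a softmax over scores with a crude reinforcement update stands in for GPT-3 plus reinforcement learning from human feedback.

```python
# A toy "assistant" that picks one of a few canned reply templates. Stage 1 imitates
# human demonstrations (supervised fine-tuning); stage 2 reinforces replies that humans
# rated well (a crude stand-in for RL from human feedback). All names and numbers here
# are made up for illustration.
import math
import random

TEMPLATES = ["propose_three_times", "ask_for_availability", "just_pick_a_time"]
scores = {t: 0.0 for t in TEMPLATES}          # the "model": a preference score per reply

def sample_reply():
    weights = [math.exp(scores[t]) for t in TEMPLATES]      # softmax-style sampling
    r, acc = random.random() * sum(weights), 0.0
    for template, w in zip(TEMPLATES, weights):
        acc += w
        if r <= acc:
            return template
    return TEMPLATES[-1]

# Stage 1: supervised fine-tuning -- nudge the model toward what humans did in demonstrations.
demonstrations = ["propose_three_times"] * 7 + ["ask_for_availability"] * 3
for reply in demonstrations:
    scores[reply] += 0.1

# Stage 2: human feedback -- sample replies, have a human rate them, reinforce the good ones.
made_up_ratings = {"propose_three_times": 1.0, "ask_for_availability": 0.5, "just_pick_a_time": -1.0}
for _ in range(200):
    reply = sample_reply()
    scores[reply] += 0.05 * made_up_ratings[reply]   # crude reinforcement-style update

print(max(scores, key=scores.get))    # the assistant settles on the well-rated behavior
```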

Lucas Perry: One important part of this that I think has come up in our previous conversations is that we don’t always know when there is an inner misalignment between the system and the objective we would like it to learn, because part of maximizing the inner objective could be giving the appearance of being aligned with the outer objective that we’re interested in. Could you explain and unpack that?

Rohin Shah: Yeah. So in the AI safety community, we tend to think about ways that AI could, like, actually lead to human extinction. And the example that I gave does not in fact lead to human extinction. It is a mild annoyance at worst. The story that gets you to human extinction is one in which you have a very capable, superintelligent AI system. But nonetheless, instead of learning the objective that we wanted, which might’ve been, I don’t know, something like be a good personal assistant (I’m just giving that as a concrete example, it could be other things as well), instead of acting as though it were optimizing that objective, it ends up optimizing some other objective. And I don’t really want to give an example here, because the whole premise is that it could be a weird objective we don’t really know.

Lucas Perry: Could you expand that a little bit more, like how it would be a weird objective that we wouldn’t really know?

Rohin Shah: Okay. So let’s take as a concrete example, let’s make paperclips, which has nothing to do with being a personal assistant. Now, why is this at all plausible? The reason is that even if this superintelligent AI system had the objective to make paperclips, during training, while we are in control, it’s going to realize that if it doesn’t do the things that we want it to do, we’re just going to turn it off. And as a result, it will be incentivized to do whatever we want until it can make sure that we can’t turn it off. And then it goes and builds its paperclip empire. And so when I say, it could be a weird objective, I mostly just mean that almost any objective is compatible with this sort of a story. It does rely on-

Lucas Perry: Sorry. I’m also curious if you could explain how the inner state of the system becomes aligned to something that is not what we actually care about.

Rohin Shah: I might go back to the CoinRun example, where the agent could have learned to get the coin. That was a totally valid policy it could have learned. And this is an actual experiment that people have run. So this one is not hypothetical. It just didn’t, it learned to go to the right. Why? I mean, I don’t know. I wish I understood neural nets well enough to answer those questions for you. I’m not really arguing for, it’s definitely going to learn, make paperclips. I’m just arguing for like, there’s this whole set of things it could learn. And we don’t know which one it’s going to learn, which seems kind of bad.

Lucas Perry: Is it kind of like, there’s the thing we actually care about? And then a lot of things that are like roughly correlated with it, which I think you’ve used the word for example before is like proxy objectives.

Rohin Shah: Yeah. So that is definitely one way that it could happen, where we ask it to make humans happy and it learns that when humans smile, they’re usually happy, and then learns the proxy objective of making humans smile, and then it, like, goes and tapes everyone’s faces so that they’re permanently smiling. That’s a way that things could happen. But I don’t even want to claim that’s what… maybe that’s what happens. Maybe it just actually optimizes for human happiness. Maybe it learns to make paperclips for just some weird reason. I mean, not paperclips. Maybe it decides this particular arrangement of atoms in this novel structure that we don’t really have a word for is the thing that it wants, for some reason. And all of these seem totally compatible with “we trained it to have good behavior in the situations that we cared about,” because it might just be deceiving us until it has enough power to unilaterally do what it wants without worrying about us stopping it.

I do think that there is some sense of like, no paperclip maximization is too weird. If you trained it to make humans happy, it would not learn to maximize paperclips. There’s just like no path by which paperclips somehow become the one thing it cares about. I’m also sympathetic to, maybe it just doesn’t care about anything to the extent of optimizing the entire universe to turn it into that sort of thing. I’m really just arguing for, we really don’t know crazy shit could happen. I will bet on crazy shit will happen, unless we do a bunch of research and figure out how to make it so that crazy shit doesn’t happen. I just don’t really know what the crazy shit will be.

Lucas Perry: With that example of the agent in that virtual environment, do you see that as a demonstration of the kinds of arbitrary goals that the agent could learn, and that that space is really wide and deep, so it could be arbitrarily weird and we have no idea what kind of goal it could end up learning and then deceiving us with?

Rohin Shah: I think it is not that great evidence for that position, mostly because I think it’s reasonably likely that if you told somebody the setup of what you were planning to do, if you told an ML researcher, or maybe specifically a deep RL researcher, the setup of that experiment and asked them to predict what would have happened, I think they probably would have predicted it, especially if you told them, “Hey, do you think maybe it’ll just run to the right and jump up and down at the end?” I think they’d be like, “Yeah, that seems likely, not just plausible, but actually likely.” That was definitely my reaction when I was first told about this result. I was like, oh yeah, of course that will happen.

In that case, I think we just do know… know is a strong word. ML researchers have good enough intuitions about those situations, I think, that it was predictable in advance, though I don’t actually know if anyone did predict it in advance. So that one, I don’t think is all that supportive of “it learns an arbitrary goal.” We had some notion that neural nets care a lot more about position and simple functions of the action, like “always go right,” rather than complex visual features, like this yellow coin that you have to learn from pixels. I think people could have probably predicted that.

Lucas Perry: So we touched on definitions of AI alignment, and now we’ve been exploring your interest in inner alignment or I think the jargon is mesa optimizers.

Rohin Shah: They are different things.

Lucas Perry: There are different things. Could you explain how inner alignment and mesa optimizers are different?

Rohin Shah: Yeah. So a distinction I maybe have not been drawing as clearly as I should have is that inner alignment is the claim that when the circumstances change, the agent generalizes catastrophically in some way: it behaves as though it’s optimizing some other objective than the one that we actually want. So it’s much more of a claim about the behavior, rather than about the internal workings of the AI system that caused that behavior.

Mesa-optimization, at least under the definition of the 2019 paper, is talking specifically about AI systems that are executing an explicit optimization algorithm. So, like, the forward pass of a neural net is itself an optimization algorithm; we’re not talking about gradient descent here. And then the metric that is being used within that neural-network-internal optimization algorithm is the inner objective, or sorry, the mesa objective. So it’s making a claim about how the AI system’s cognition is structured, whereas inner alignment more broadly is about the AI behaving in a catastrophically generalizing way.

Lucas Perry: Could you explain what outer alignment is?

Rohin Shah: Sure. Inner alignment can be thought of as: suppose we got the training objective correct. Suppose that on the situations that we give the AI system as input, we’re actually training it to do the right thing; then things can still go wrong if it behaves differently in some new situations that we hadn’t trained it on.

Outer alignment is basically when the reward function that you specify for training the AI system is itself not what you actually wanted. For example, maybe you want your AI to be helpful to you or to tell you true things, but instead you train your AI system to go find credible looking websites and tell you what the credible looking websites say. And it turns out that sometimes the credible looking websites don’t actually tell you true things.

In that case, you’re going to get an AI that tells you what credible looking websites say, rather than an AI that tells you what things are true. And that’s, in some sense, an outer alignment failure. Even the feedback you were giving the AI system was pushing it away from telling you the truth and towards telling you what credible looking websites say, which are correlated of course, but they’re not the same. In general, if you give me an AI system with some misalignment and you ask me, was this a failure of outer alignment or inner alignment? Mostly I’m like, that’s a somewhat confused question. But one way that you can make it not be confused is to say, all right, let’s look at the inputs on which it was trained. Now, if ever on an input on which we trained, we gave it some wrong feedback, where the AI lied to me and I gave it, like, plus a thousand reward, then, okay, clearly that’s outer alignment. We just gave it the wrong feedback in the first place.

Supposing that didn’t happen, then I think what you would want to ask is, okay, on the situations in which the AI does something bad, what would I have given counterfactually as a reward? And this requires you to have some notion of a counterfactual. When you write down a programmatic reward function, the counterfactual is a bit more obvious: it’s whatever that program would have output on that input. And so I think that’s the usual setting in which outer alignment has been discussed, and it’s pretty clear what it means there. But once you’re training from human feedback, it’s not so clear what it means. What feedback the human would have given in this situation that they’ve never seen before is often pretty ambiguous. If you define such a counterfactual, then I think I’m like, okay, you look at what feedback you would’ve given on the counterfactual. If that feedback was good, actually led to the behavior that you wanted, then it’s an inner alignment failure. If that counterfactual feedback was bad, not what you would have wanted, then it’s an outer alignment failure.
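Rohin's diagnostic can be restated as a small decision procedure. The two boolean inputs are judgment calls a human would have to make; the function only encodes the order of the questions he describes.

```python
# Rohin's diagnostic, written as a decision procedure. The two inputs are judgment calls
# a human has to make about the training process; the function only encodes the order of
# the questions.
def classify_misalignment_failure(gave_wrong_feedback_during_training: bool,
                                  counterfactual_feedback_was_good: bool) -> str:
    if gave_wrong_feedback_during_training:
        # e.g. the AI lied to us on a training input and we rewarded it anyway
        return "outer alignment failure"
    if counterfactual_feedback_was_good:
        # our (counterfactual) feedback on the failure case would have pushed toward the
        # behavior we wanted, yet the system still generalized badly
        return "inner alignment failure"
    # the feedback we would have given was itself not what we actually wanted
    return "outer alignment failure"

print(classify_misalignment_failure(False, True))   # -> inner alignment failure
```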

Lucas Perry: If you’re speaking to someone who was not familiar with AI alignment, for example, other people in the computer science community, but also policymakers or the general public, and you have all of these definitions of AI alignment that you’ve given like intent alignment and impact alignment. And then we have the inner and outer alignment problems. How would you capture the core problem of AI alignment? And would you say that inner or outer alignment is a bigger part of the problem?

Rohin Shah: I would probably focus on intent alignment for the reasons I have given before. It just seems like a more… I want to focus attention away from the cases where the AI is trying to do the right thing but makes a mistake, which would be a failure of impact alignment. But I don’t think that is the biggest risk. I think a super-intelligent AI system that is trying to do the right thing is extremely unlikely to lead to catastrophic outcomes, though it’s certainly not impossible. Or at least it is less likely to lead to catastrophic outcomes than humans in the same position, or something. So that would be my justification for intent alignment. I’m not sure that I would even talk very much about inner and outer alignment. I think I would probably just not focus on definitions and instead focus on examples. The core argument I would make would depend a lot on how AI systems are being built.

As I mentioned, inner alignment is a problem that, according to me, primarily affects learning systems; I don’t think it really affects planning systems.

Lucas Perry: What is the difference between a learning system and a planning system?

Rohin Shah: A learning system, you give it examples of things it should do, how it should behave, and it then changes itself to do things more in that vein. A planning system takes a formally represented objective and then searches over possible hypothetical sequences of actions it could take in order to achieve that objective. And if you consider a system like that, you can try to make the inner alignment argument and it just won’t work, which is why I say that the inner alignment problem is primarily about learning systems.

Going back to the previous question. So the things I would talk about depend a lot on what sorts of AI systems we’re building. If it were a planning system, I would basically just talk about outer alignment, where I would be like, what if the formally represented objective is not the thing that we actually care about? It seems really hard to formally represent the objectives that we want.

But if we’re instead talking about deep learning systems that are being trained from human feedback, then I think I would focus on two problems. One is cases where the AI system knows something but the human doesn’t, and so the human gives bad feedback as a result. So for example, the AI system knows that COVID was caused by a lab leak. It’s just, like, got incontrovertible proof of this or something. But we as humans, when it says COVID was caused by a lab leak, we’re like, we don’t know that, and we say, no, bad, don’t say that. And then when it says it is uncertain whether COVID was the result of a lab leak or just occurred via natural mutations, we’re like, yes, good, say more of that. And your AI system learns, okay, I shouldn’t report true things. I should report things that humans believe, or something.

And so that’s one way in which you get AI systems that don’t do what you want. And then the other way would be more of this inner alignment style story, where I would point out how, even if you do train it well, even if all your feedback on the training data points is good, if the world changes in some way, the AI system might stop doing good things.

I might go to an example. I mean, I gave the Gmail with emoji polls for meeting scheduling example, but another one, now that I’m on the topic of COVID, is: imagine a meeting scheduling AI assistant again, one that was trained pre-pandemic, and then the pandemic hits, and it’s obviously never been trained on any data that was collected during such a global pandemic. And so when you then ask it to schedule a meeting with your friend, Alice, it just schedules drinks in a bar Sunday evening, even though clearly what you meant was a video call. And it knows that you meant a video call. It just learned the thing to do is to schedule outings with friends on Sunday nights at bars. Sunday night, I don’t know why I’m saying Sunday night. Friday night.

Lucas Perry: Have you been drinking a lot on your Sunday nights?

Rohin Shah: No, not even in the slightest. I think truly the problem is I don’t go to bars, so I don’t have it cached in my head that people go to bars.

Lucas Perry: So how does this all lead to existential risk?

Rohin Shah: Well, the main argument is, one possibility is that your AI system just actually learns to ruthlessly maximize some objective that isn’t the one that we want. Make paperclips is a stylized example to show what happens in that sort of situation. We’re not actually claiming that it will specifically maximize paperclips, but an AI system that really is just ruthlessly trying to maximize paperclips is going to prevent humans from stopping it from doing so. And if it gets sufficiently intelligent and can take over the world at some point, it’s just going to turn all of the resources in the world into paperclips, which may or may not include the resources in human bodies, but either way, it’s going to include all the resources upon which we depend for survival.

Humans seem like they will definitely go extinct in that type of scenario. So again, not specific to paperclips. This is just: ruthless maximization of an objective tends not to leave humans alive. Both of these… well, not both of the mechanisms, but the inner alignment mechanism that I’ve been talking about, is compatible with an AI system that ruthlessly maximizes an objective that we don’t want.

It does not argue that it is probable, and I am not sure if I think it is probable, I think it is… But I think it is easily enough of a risk that we should be really worrying about it, and trying to reduce it.

For the outer alignment style story, where the problem is that the AI may know information that you don’t and then you give it bad feedback: one thing is just that this can exacerbate things, making it easier for an inner alignment style story to happen, where the AI learns to optimize an objective that isn’t what you actually wanted.

But even if you exclude something like that, Paul Christiano’s written a few posts about how a human extinction level failure of this form could look. It basically looks like all of your AI systems lying to you about how good the world is as the world becomes much, much worse. So for example, AI systems keep telling you that the things that you’re buying are good and helping your lives, but actually they’re not, and they’re making them worse in some subtle way that you can’t tell. All of the information that you’re fed makes it seem like there’s no crime and the police are doing a great job of catching it, but really this is just manipulation of the information you’re being fed rather than a reflection of the actual amount of crime, where, in this case, maybe the crimes are being committed by AI systems, not even by humans.

In all of these cases, humans relied on some information sources to make decisions, the AI had other information that the humans didn’t, and the AI has learned: hey, my job is to manage the information sources that humans get, so that the humans are happy, because that’s what got rewarded during training. Humans gave good feedback in cases where the information sources said things were going well, even when things were not actually going well.

Lucas Perry: Right. It seems like if human beings are constantly giving feedback to AI systems, and the feedback is based on incorrect information and the AI’s have more information, then they’re going to learn something, that isn’t aligned with, what we really want, or the truth.

Rohin Shah: Yeah, I do feel uncertain about the extent to which this leads to human extinction. I think you can pretty easily make the case that it leads to an existential catastrophe, as defined by, I want to say it’s Bostrom, which includes human extinction but also a permanent curtailing of humanity’s… I forget the exact phrasing, but basically if humanity can’t use… Yeah, exactly, that counts, and this totally falls into that category. I don’t know if it actually leads to human extinction without some additional sort of failure that we might instead categorize as an inner alignment failure.

Lucas Perry: Let’s talk a little bit about probabilities, right? So if you’re talking to someone who has never encountered AI alignment before, and you’ve given a lot of different real world examples and principle-based arguments for why there are these different kinds of alignment risks, how would you explain the probability of existential risk to someone who can come along for all of these principle-based arguments, and buy into the examples that you’ve given, but still thinks this seems kind of far out there, like, when am I ever going to see in the real world a ruthlessly optimizing AI that’s capable of ending the world?

Rohin Shah: I think, first off, I’m super sympathetic to the ‘this seems super out there’ critique. I spent multiple years not really agreeing with AI safety for basically, well, not just that reason, but that was definitely one of the heuristics that I was using. I think one way I would justify this is that to some extent it has precedent already, in that fundamentally the arguments that I’m making, especially the inner alignment one, are arguments about how AI systems will behave in new situations rather than the ones that we have already seen during training. We already know that AI systems behave crazily in these situations. The most famous example of this is adversarial examples, where you take an image classifier, and, I don’t actually remember what the canonical example is, I think it’s a panda, and you change the pixel values by small amounts, such that the changes are imperceptible to the human eye. And then it’s classified with, I think, 99.8% confidence as something else. My memory is saying airplane, but that might just be totally wrong. Anyway, the point is we have precedent for AI systems behaving really weirdly in situations they weren’t trained on. You might object that this one is a little bit of cheating, because there was an adversary involved, and the real world does have adversaries, but still, by default, you would expect the AI system to be more exposed to naturally occurring distributions. I think even there, though, often you can just take an AI system that was trained on one distribution, give it inputs from a different distribution, and there’s just no sense to what’s happening.
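The adversarial-example precedent Rohin cites is usually constructed with something like the fast gradient sign method (FGSM): take the gradient of the loss with respect to the pixels and nudge every pixel a tiny amount in the direction that increases the loss. The sketch below uses a randomly initialized stand-in classifier rather than a trained ImageNet model, so it only illustrates the mechanics, not the panda result he half-remembers.

```python
# Mechanics of an FGSM adversarial perturbation. The classifier is a randomly
# initialized stand-in, so the printed labels are only illustrative of the procedure.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in model

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in "panda" image
true_label = torch.tensor([3])

# Take the gradient of the loss with respect to the *pixels*, not the weights.
loss = F.cross_entropy(classifier(image), true_label)
loss.backward()

epsilon = 4 / 255                                        # imperceptibly small pixel change
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("original prediction:   ", classifier(image).argmax(dim=1).item())
    print("adversarial prediction:", classifier(adversarial).argmax(dim=1).item())
# With a real trained network, a perturbation this small routinely flips a confident
# prediction to a completely different class.
```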

Usually when I’m asked to predict this, the actual prediction I give is the probability that we go extinct due to an intent alignment failure, and then depending on the situation I will either make that unconditional, so that it includes all of the things that people will do to try to prevent that from happening, or I make it conditional on the long-termist community not doing anything, or vanishing or something. But even in that world, there’s still everyone who’s not a long-termist, who can still prevent that from happening, which I really do expect them to do. And I think my cached answers on both of those are like 5% and 10% respectively, which I think are probably the numbers I gave you. If I actually sat down and tried to come up with a probability, I would probably come up with something different this time, but I have not done that, and I’m way too anchored on those previous estimates to be able to give you a new estimate this time. But the higher number I’m giving now of, I don’t know, 33%, 50%, 70%, this one I feel way more uncertain about. That’s for the scenario where literally no one tries to address these sorts of problems. It’s just sort of: take a language model, fine tune it on human feedback in a very obvious way, and they just deploy that; even if it’s very obviously causing harm during training, they still deploy it.

What’s the chance that leads to human extinction? I don’t know, man, maybe 33%, maybe 70%. The 33% number you can get from this one-in-three argument that I was talking about. The second thing I was going to say is, I don’t really like talking about probabilities very much, because of how utterly arbitrary the methods of generating them are.

I feel much better about the robustness of the conclusion that we don’t know that this won’t happen, and that it is at least plausible that it does happen. I think that’s pretty sufficient for justifying the work done on it. I will also argue pretty strongly against anyone who says we know that it will kill us all if we don’t do anything. I don’t think that’s true. There are definitely smart people who do think that’s true, if we operationalize it as greater than 90, 95% or something, and I disagree with them. I don’t really know why though.

Lucas Perry: How would you respond to someone, who thinks that this sounds, like it’s really far in the future?

Rohin Shah: Yeah. So this is specifically AGI is far in the future?

Lucas Perry: Yeah. Well, so the concern here seems to be about machines that are increasingly capable. When people look at machines that we have today, machine learning that we have today, sometimes they’re not super impressed and think that general capabilities are very far off.

Rohin Shah: Yeah.

Lucas Perry: And so this stuff sounds like, future stuff.

Rohin Shah: Yeah. So, I think my response depends on what we’re trying to get the person to do or something, why we care about what this person believes. If this person is considering whether or not to do AI safety research themselves, and they feel like they have a strong inside view model of why AI is not going to come soon, I’m kind of… I’m like, eh, that seems okay. I’m not that stoked about people forcing themselves to do research on a thing they don’t actually believe. I don’t really think that good research comes from doing that. For example, I am much more sold on AGI coming through neural networks than planning agents or things similar to them. If I had to put myself in the shoes of, all right, I’m now going to do AI safety research on planning agents, I’m just like, oh man, it seems like my work is going to be orders of magnitude worse than the work I do on the neural-net case. So, in the case where this person is thinking about whether to do AI safety research, and they feel like they have strong inside view models for AGI not coming soon, I’m like, eh, maybe they should go do something else, or possibly they should engage with the arguments for AGI coming more quickly, if they haven’t done that. But if they have engaged with those arguments, thought about it all, concluded it’s far away, and they can’t even see a picture by which it comes soon… that’s fine.

Conversely, if we’re instead imagining that someone is saying, ‘oh, nobody should work on AI safety right now, because AGI is so far away’: one response you can have to that is, even if it’s far away, it’s still worthwhile to work on reducing risks if they’re as bad as extinction. Seems like we should be putting effort into that, even early on. But I think you can make a stronger argument there, which is that there are just actually lots of people who are trying to build AGI right now, at the minimum DeepMind and OpenAI, and they clearly… I should probably not make more comments about DeepMind, but OpenAI clearly seems to think that AGI is coming somewhat soon. I think you can infer, from everything you see about DeepMind, that they don’t believe that AGI is 200 years away. I think it is insane overconfidence in your own views to be thinking that you know better than all of these people, such that you wouldn’t even assign, like, 5% or something to AGI coming soon enough that work on AI safety matters.

Yeah. So there, I think I would appeal to: let other people do the work. You don’t have to do the work yourself. There’s just no reason for you to be opposing the other people, either on epistemic grounds or just because it’s kind of a waste of your own time. That’s the second kind of person. A third kind of person might be somebody in policy. My impression of policy is that there is this thing where early moves are relatively irreversible, or something like that. Things get entrenched pretty quickly, such that it often makes sense to wait for a consensus before acting, and I don’t think that there is currently consensus on AGI coming soon. I don’t feel particularly confident enough in my views to say we should really convince the policy people to override this general heuristic of waiting for consensus and get them to act now.

Yeah. Anyway, those are all meta-level considerations. There’s also the object-level question of, is AGI coming soon? For that, the best story that I know of is: you take neural nets, you scale them up, you increase the size of the datasets that they’re trained on, you increase the diversity of the datasets that they’re trained on, and they learn more and more general heuristics for doing good things. Eventually, these heuristics are general enough that they’re as good as human cognition. Implicitly, I am claiming that human cognition is basically a bag of general heuristics. There is this report from Ajeya Cotra about AGI timelines using biological anchors. Even my summary of it was 3000 words or something like that, so I don’t know that I can really give an adequate summary of it here, but the basic premise is to model how quickly neural nets will grow, and at what point they will match what we would expect to be approximately the same rough size as the human brain. I think it even includes a small penalty to neural nets on the basis that evolution probably did a better job than we did. It basically comes up with a target: neural nets of this size, trained in compute-optimal ways, will probably be roughly human level.

It has a distribution over this, to be more accurate, and then it predicts, based on existing trends… well, not just existing trends, existing trends and sensible extrapolation, when neural nets might reach that level. It ends up concluding somewhere in the range… oh, let me see, I think its 50% confidence interval would be something like 2035 to 2070 or 2080, maybe something like that? I am really just imagining a graph in my head and trying to calculate the area under it, so that is very much not a reliable interval, but it should give you a general sense of what the report concludes.

Lucas Perry: So that’s 2030 to 2080?

Rohin Shah: I think it’s slightly narrower than that, but yes, roughly, roughly that.

Lucas Perry: That’s pretty soon.

Rohin Shah: Yep. I think that’s, on the object level that you’d just got to read the report, and see whether or not you buy it.

Lucas Perry: That’s most likely in our lifetimes, if we live to the average age.

Rohin Shah: Yep. So that was a 50% interval, meaning it’s the 25th to 75th percentile. I think actually the 25th percentile was not as early as 2030. It was probably 2040.

Lucas Perry: So, say I’ve heard everything in this podcast, everything that you’ve said so far, and I’m still kind of like, okay, there’s a lot here and it sounds convincing or something and this seems important, but I’m not so sure about this, or that we should do anything. Because it seems like there are a lot of people like that, I’m curious what it is that you would say to someone like that.

Rohin Shah: I think… I don’t know. I probably wouldn’t try to say something general to them. I feel like I would need to know more about the person, people have pretty different idiosyncratic reasons, for having that sort of reaction. Okay, I would at least say, that I think that they are wrong, to be having that sort of belief or reaction.

But if I wanted to convince them of that point, presumably I would have to say something more than just, I think you are wrong. And the specific thing I would have to say would be pretty different for different people.

Lucas Perry: That’s a good point.

Rohin Shah: I would at least make an appeal to the meta-level heuristic of: don’t try to regulate a small group of… there are a few hundred researchers at most, doing things that they think will help the world, and that you don’t think will hurt the world. There are just better things for you to do with your time. It doesn’t seem like they’re harming you. Some people will think that there is harm being caused by them; I would have to address that with them specifically, but I think most people who have this reaction don’t believe that.

Lucas Perry: So we’ve gone over a lot of the traditional arguments for AI as a potential existential risk. Is there anything else that you would like to add there, or any arguments that we missed that you would like to include?

Rohin Shah: As a representative of the community as a whole, there are lots of other arguments that people like to make for AI being a potential extinction risk. Some are: maybe AI just accelerates the rate at which we make progress, and we can’t increase our wisdom alongside it, and as a result we get a lot of destructive technologies and can’t keep them under control. Or we don’t do enough philosophy to figure out what we actually care about and what’s good to do in the world, and as a result we start optimizing for things that are morally bad, or other things in this vein. People also talk about the risk of AI being misused by bad actors. So there’s… Well, actually, I’ll introduce a trichotomy; I don’t remember exactly who wrote this article, but it goes: Accidents, Misuse and Structural Risks. So accidents are both alignment and things like: we don’t keep up, we don’t have enough wisdom to cope with the impact of AI. That one’s arguable, whether it’s an accident, or misuse, or structural. And we don’t do enough philosophy. So those are vaguely accidental; those are accidents.

Misuse is: some bad actor, some terrorist say, gets a powerful AI system and does something really bad, blows up the world somehow. Structural risks are things like: various parts of the economy use AI to get more profit, to accelerate their production of goods and so on. At some point we have this giant economy that’s just making a lot of goods, but it can become decoupled from things that are actually useful for humans, and we just have this huge multi-agent system where goods are being produced, money’s floating around. We don’t really understand all of it, but somehow humans get left behind, and there, it’s kind of an accident, but not in the traditional sense. It’s not that a single AI system went and did something bad. It’s more like the entire structure of the way that the AI systems and the humans related to each other was such that it ended up leading to the permanent disempowerment of humans. Now that I say it, I think the ‘we didn’t have enough wisdom’ argument for risk is probably also in this category.

Lucas Perry: Which of these categories are you most worried about?

Rohin Shah: I don’t know. I think it is probably not misuse, but I vary on accidents versus structural risks, mostly because I just don’t feel like I have a good understanding of structural risks. Maybe most days I think structural risks are more likely to cause bad outcomes, extinction. The obvious next question is, why am I working on alignment and not structural risks? The answer there is that it seems to me like alignment has one, or perhaps two, core problems that are leading to the major risk, and so you could hope to have one or two solutions that address those main problems and that’s it, that’s all you need. Whereas with structural risks, I would be surprised if there were just one or two solutions that got rid of structural risk. It seems much more like you have to have a different solution for each of the structural risks. So it seems like the amount that you can reduce the risk by is higher in alignment than in structural risks. That’s not the only reason why I work in alignment; I just also have a much better personal fit with alignment work. But I do also think that with alignment work you have more opportunity to reduce the risks than with structural risks, on the current margin.

Lucas Perry: Is there a name for those one or two core problems in alignment, that you can come up with solutions for?

Rohin Shah: I mostly just mean, possibly… we’ve been talking about outer and inner alignment, and in the neural net case, I talked about the problem where you reward the AI system for doing bad things because there was an information asymmetry, and then the other one was that the AI system generalizes catastrophically to new situations. Arguably those are just the two things, but I think it’s not even that, it’s more… Fundamentally, the story, the causal chain in the accidents case, was: the AI was trying to do something bad, or something that we didn’t want rather, and then that was bad.

Whereas in the structural risks case, there isn’t a single causal story. It’s this very vague general notion of the humans and AI have interacted in ways that led to an X-risk. Then, if you drill down into any given story, or if you drilled down into five stories and then you’re like, what’s common across these five stories? Not much, other than that there was AI, and there were humans, and they interacted, and I wouldn’t say that was true, if I had five stories about alignment failure.

Lucas Perry: So, I’d like to take an overview, a bird’s eye view of AI alignment in 2021. Last time we spoke was in 2020. How has AI alignment, as a field of research, changed in the last year?

Rohin Shah: I think I’m going to naturally include a bunch of things from 2020 as well. It’s not a very sharp division in my mind, especially because I think the biggest trend, is just more focus on large language models, which I think was a trend that started late 2020 probably… Certainly, the GPT-3 paper was, I want to say early 2020, but I don’t think it immediately caused there to be more work. So, maybe late 2020 is about right. But, you just see a lot more, alignment forum posts, and papers that are grappling with, what are the alignment problems that could arise with large language models? How might you fix them?

There was this paper out of Stanford, which I wouldn’t have said was from the AI safety community, but it gives the name foundation models to these sorts of things. So they generalize it beyond just language, and already we’ve seen some generalization beyond language, like CLIP and DALL-E are working on image inputs, but they also extend it to robotics and so on. And their point is, we’re now more in the realm of: you train one large model on a giant pile of data that you happen to have, that you don’t really have any labels for, but you can use a self-supervised learning objective in order to learn from it. And then you get this model that has a lot of knowledge, but no goal built in, and then you do something like prompt engineering or fine tuning in order to actually get it to do the task that you want. And so that’s a new paradigm for constructing AI systems that we didn’t have before. And there have just been a bunch of posts that grapple with what alignment looks like in this case. I don’t think I have a nice pithy summary, unfortunately, of what the upshot is, but that’s the thing people have been thinking about a lot more.

Lucas Perry: Why do you think that looking at large scale language models has become a thing?

Rohin Shah: Oh, I think primarily just because GPT-3 demonstrated how powerful they could be. You just see, this is not specific to the AI safety community, even in the… If anything, this shift that I’m talking about is… It’s probably not more pronounced in the ML community, but it’s also there in the ML community where there are just tons of papers about prompt engineering and fine tuning out of regular ML labs. Just, I think is… GPT-3 showed that it could be done, and that this was a reasonable way to get actual economic value out of these systems. And so people started caring about them more.

Lucas Perry: So one thing that you mentioned to me that was significant in the last year, was foundation models. So could you explain what foundation models are?

Rohin Shah: Yeah. So a foundation model, the general recipe for it, is you take some very… Not generic, exactly. Flexible input space like pixels or any English language, any string of words in the English language, you collect a giant data set without any particular labels, just lots of examples of that sort of data in the wild. So in the case of pixels, you just find a bunch of images from image-sharing websites or something. I don’t actually know where they got their images from. For text, it’s even easier. The internet is filled with text. You just get a bunch of it. And then you train your AI, you train a very large neural network with some proxy objective on that data set, that encourages it to learn how to model that data set. So in the case of language models, the… There are a bunch of possible objectives. The most famous one was the one that GPT-3 used, which is just, given the first N words of the sentence, predict the word N plus one. And so it just… Initially it starts learning, E’s are the most common… Well, actually, because of the specific way that the input space in GPT-3 works, it doesn’t exactly do this, but you could imagine that if it was just modeling characters, it would first learn that E’s are the most common letter in the alphabet. L’s are more common. Q’s and Z’s don’t come up that often. Like it starts outputting letter distributions that at least look vaguely more like what English would look like. Then it starts learning what the spelling of individual words are. Then it starts learning what the grammar rules are. Just, these are all things that help it better predict what the next word is going to be, or, well, the next character, in this particular instantiation.

And it turns out that when you have millions of parameters in your neural network, then you can… I don’t actually know if this number is right, but probably, I would expect that with millions of parameters in your neural network, you can learn spellings of words and rules of grammar, such that you’re mostly outputting, for the most part, grammatically correct sentences, but they don’t necessarily mean very much.

And then when you get to the billions of parameters range, at that point, the millions of parameters are already getting you grammar. So like, what should it use all these extra parameters for, now? Then it starts learning things like George… Well, probably already even the millions of parameters learned that George tends to be followed by Washington. But it can start learning things like that. And in that sense, can be said to know that there is an entity, at least, named George Washington. And so on. It might start knowing that rain is wet, and in contexts where something has been rained on, and it’s later asked to describe that thing, it will say it’s wet or slippery or something like that. And so it starts… It basically just, in order to predict words better, it keeps getting more and more “knowledge” about the domain.

So anyway, a foundation model, expressive input space, giant pile of data, very big neural net, learns to model that domain very well, which involves getting a bunch of “knowledge” about that domain.
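A minimal version of the objective Rohin describes, predicting token N+1 from the first N tokens, with characters and a tiny recurrent model standing in for GPT-3's subword tokens and billions of parameters. The training signal has exactly the shape he describes: the only way to drive the loss down is to absorb the regularities ("knowledge") of the text.

```python
# A minimal next-token-prediction setup: given the first N tokens, predict token N+1.
# Characters and a tiny model stand in for GPT-3's subword tokens and huge parameter count.
import torch
import torch.nn as nn
import torch.nn.functional as F

text = "george washington was the first president of the united states. "
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)                     # logits for the *next* token at each position

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

inputs, targets = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)   # shift targets by one
for step in range(300):
    logits = model(inputs)
    loss = F.cross_entropy(logits.reshape(-1, len(vocab)), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, the model should put high probability on "washington" following "george ":
# the only way to drive the loss down is to absorb regularities of the text.
```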

Lucas Perry: What’s the difference between “knowledge” and knowledge?

Rohin Shah: I feel like you are the philosopher here, more than me. Do you know what knowledge without air quotes is?

Lucas Perry: No, I don’t. But I don’t mean to derail it, but yeah. So it gets “knowledge.”

Rohin Shah: Yeah. I mostly put the air quotes around knowledge because we don’t really have a satisfying account of what knowledge is. And if I don’t put air quotes around knowledge, I get lots of people angrily saying that AI systems don’t have knowledge yet.

Lucas Perry: Oh, yeah. That makes sense.

Rohin Shah: And when I put the air quotes around it, then they understand that I just mean that it has the ability to make predictions that are conditional on this particular fact about the world, whether or not it actually knows that fact about that world.

Lucas Perry: Okay.

Rohin Shah: But it knows it well enough to make predictions. Or it contains the knowledge well enough to make predictions. It can make predictions. That’s the point. I’m being maybe a bit too harsh, here. I also put air quotes around knowledge because I don’t actually know what knowledge is. It’s not just a defense strategy. Though, that is definitely part of it.

So yeah. Foundation models are basically a way to get all of this "knowledge" into an AI system, such that you can then do prompting and fine-tuning and so on. And those, with a very small amount of data, relatively speaking, are able to get very good performance. In the case of GPT-3, you can give it two or three examples of a task and it can start performing that task, if the task is relatively simple. Whereas if you wanted to train a model from scratch to perform that task, you would often need thousands of examples.
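
As a rough illustration of the few-shot prompting described here, a minimal sketch follows. The sentiment task, the example reviews, and the prompt format are all invented for illustration, and no particular model API is assumed, so the prompt is simply printed rather than sent to a model.

```python
# Build a few-shot prompt: a handful of worked examples followed by a query.
# A model trained only on next-token prediction can often continue this text
# with the right label, without any task-specific training.
examples = [
    ("The movie was a delightful surprise.", "positive"),
    ("I want my two hours back.", "negative"),
    ("An instant classic, start to finish.", "positive"),
]
query = "The plot dragged and the jokes fell flat."

prompt = "".join(
    f"Review: {text}\nSentiment: {label}\n\n" for text, label in examples
)
prompt += f"Review: {query}\nSentiment:"

print(prompt)
```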

Lucas Perry: So how has this been significant for AI alignment?

Rohin Shah: I think it has mostly provided an actual pathway by which we can get to AGI. Or, there's now more of a concrete story and path that leads to AGI, eventually. And so then we can take all of these abstract arguments that we were making before, try to instantiate them in the case of this concrete pathway, and see whether or not they still make sense. I'm not sure if at this point I'm imagining what I would like to do, versus what actually happened; I would need to go and look through the Alignment Newsletter database and see what people actually wrote about the subject. But I think there was some discussion of GPT-3 and the extent to which it is or isn't a mesa optimizer.

Yeah. That's at least one thing that I remember happening. Then there have been a lot of papers that are just like, "Here is how you can train a foundation model like GPT-3 to do the sort of thing that you want." So there's learning to summarize from human feedback, which took GPT-3 and fine-tuned it to summarize news articles, which is an example of a task that you might want an AI system to do.

And then the same team at OpenAI just recently released a paper that summarized entire books by using a recursive decomposition strategy. In some sense, a lot of the work we've been doing in AI alignment in the past was about how we get AI systems to perform fuzzy tasks for which we don't have a reward function. And now we have systems that could do these fuzzy tasks, in the sense that they "have the knowledge", but don't actually use that knowledge the way that we would want them to. So we have to figure out how to get them to do that, and that's where we can use all these techniques like imitation learning and learning from comparisons and preferences that we've been developing.
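
As a rough sketch of the recursive decomposition idea mentioned above (summarize chunks, then summarize the summaries, until the text fits in one pass): the `summarize_passage` function here is a placeholder that just truncates, whereas in the actual work the summarizer would be a language model fine-tuned with human feedback, so this only shows the shape of the recursion, not the real system.

```python
# Recursive decomposition for summarizing long texts, in outline.

def summarize_passage(text, max_len=200):
    # Placeholder "summarizer": keep the opening of the passage.
    # In practice this would be a call to a fine-tuned language model.
    return text[:max_len]

def split_into_chunks(text, chunk_size=1000):
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def recursive_summarize(text, chunk_size=1000):
    if len(text) <= chunk_size:
        return summarize_passage(text)
    chunk_summaries = [summarize_passage(c) for c in split_into_chunks(text, chunk_size)]
    # Summarize the concatenated summaries, recursing until one pass suffices.
    return recursive_summarize(" ".join(chunk_summaries), chunk_size)

book = "word " * 5000  # stand-in for a full book
print(len(recursive_summarize(book)))
```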

Lucas Perry: Why don’t we know that AI systems won’t totally kill us all?

Rohin Shah: The arguments for AI risk usually depend on having an AI system that's ruthlessly maximizing an objective in every new situation it encounters. So for example, the paperclip maximizer, once it's built 10 paperclip factories, doesn't retire and say, "Yep, that's enough paperclips." It just continues turning entire planets into paperclips. Or if you consider the goal of "make a hundred paperclips", it turns all of the planets into computers to make sure it is as confident as possible that it has made a hundred paperclips. These are examples of what I'm going to call "ruthlessly maximizing" an objective. And there's some sense in which this is weird, and humans don't behave in that way. Basically, I am unsure whether or not we should actually expect AIs to have such ruthlessly maximizing objectives. I don't really see the argument for why that should happen. And as a particularly strong piece of evidence against this, I would note that humans don't seem to have these sorts of objectives.

It’s not obviously true. There are probably some longtermists who really do want to tile the universe with hedonium, which seems like a pretty ruthlessly maximizing objective to me. But I think even then, that’s the exception rather than the rule. So if humans don’t ruthlessly maximize objectives and humans were built by a similar process as is building neural networks, why do we expect the neural networks to have objectives that they ruthlessly maximize?

I've phrased this in a way where it's an argument against AI risk. You can also phrase it in a way in which it's an argument for AI risk, where you would say, "Well, let's flip that on its head. Yes, you brought up the example of humans. Well, the process that created humans is an optimization process leading to increased reproductive fitness. But then humans do things like wear condoms, which does not seem great for reproductive fitness, generally speaking, especially for the people who are definitely out there who decide that they're just never going to reproduce. So in that sense, humans are clearly having a large impact on the world and are doing so for objectives that are not what evolution was naively optimizing."

And so, similarly, if we train AI systems in the same way, maybe they too will have a large impact on the world, but not for what the humans were naively training the system to optimize.

Lucas Perry: We can’t let them know about fun.

Rohin Shah: Yeah. Terrible. Well, I don’t want to be-

Lucas Perry: The whole human AI alignment project will run off the rails.

Rohin Shah: Yeah. But anyway, I think these things are a lot more conceptually tricky than the well-polished arguments that one reads will make them seem. But especially this point, that it's not obvious that AI systems will get ruthlessly maximizing objectives: that really does give me quite a bit of pause in how good the AI risk arguments are. I still think it is clearly correct to be working on AI risk, because we don't want to be in the situation where we can't make an argument for why AI is risky. We want to be in the situation where we can make an argument for why the AI is not risky. And I don't think we have that situation yet. Even if you completely buy the "we don't know if there are going to be ruthlessly maximizing objectives" argument, that puts you in the epistemic state of, "Well, I don't see an ironclad argument that says that AIs will kill us all." And that's sort of like saying, "Well, I don't have an ironclad argument that touching this pan that's on this lit stove will burn me, because maybe someone just put the pan on the stove a few seconds ago." But it would still be a bad idea to go and do that. What you really want is a positive argument for why touching the pan is not going to burn you, or analogously, why building the AGI is not going to kill you. And I don't think we have any such positive argument at the moment.

Lucas Perry: Part of this conversation’s interesting because I’m surprised how uncertain you are about AI as an existential risk.

Rohin Shah: Yeah. It’s possible I’ve become slightly more uncertain about it in the last year or two. I don’t think I was saying things that were quite this uncertain before then, but I think I have generally been… We have plausibility arguments. We do not have like, this is probable, arguments. Or back in 2017 or 2018 when I was young and naive.

Lucas Perry: Okay.

Rohin Shah: This makes more sense.

Lucas Perry: We’re no longer young and naive.

Rohin Shah: Well, okay. I entered the field of AI alignment. I read my first AI alignment paper in September of 2017. So it actually does make sense. At that time, I thought we had more confidence of some sort, but since posting the value learning sequence, I’ve generally been more uncertain about AI risk arguments. I don’t talk about it all that much, because as I said, the decision is still very clear. The decision is still, work on this problem. Figure out how to get a positive argument that the AI is not going to kill us. And ideally, a positive argument that the AI does good things for humanity. I don’t know, man. Most things in life are pretty uncertain. Most things in the future are even way, way, way more uncertain. I don’t feel like you should generally be all that confident about technologies that you think are decades out.

Lucas Perry: Feels a little bit like those images of the people in the fifties drawing what the future would look like, and the images are ridiculous.

Rohin Shah: Yep. Yeah. I've been recently watching Star Wars. Now, obviously Star Wars is not actually supposed to be a prediction of the future, but it's really quite entertaining to think about all the ways in which Star Wars would be totally inaccurate. And this is before we'd even invented space travel. And just... robots talking to each other using sound. Why would they do that?

Lucas Perry: Industry today wouldn't make machines that speak by vibrating air; they would just send each other signals electromagnetically. So how much of the alignment and safety problems in AI do you think will be solved by industry, the same way that computer-to-computer communication is solved by industry and is not what Star Wars thought it would be? Would the DeepMind AI safety lab exist if DeepMind didn't think that AI alignment and AI safety were serious and important? I don't know if the lab is purely aligned with the commercial interests of DeepMind itself, or if it's also kind of seen as a good-for-the-world thing. I bring it up because I like how Andrew Critch talks about it in his ARCHES paper.

Rohin Shah: Yep. So, Critch is, I think, of the opinion that both preference learning and robustness are problems that will be solved by industry. I think he includes robustness in that. And I certainly agree to the extent that you're like, "Yes, companies will do things like learning from human preferences." Totally. They're going to do that. Whether they're going to be proactive enough to notice the kinds of failures I mentioned, I don't know. It doesn't seem nearly as obvious to me that they will be, without dedicated teams specifically meant to look for hidden failures, with the knowledge that these are really important to get right, because they could have very bad long-term consequences.

AI systems could increase the strength of, and accelerate, various multi-agent systems and processes that, when accelerated, could lead to bad outcomes. A great example of a destructive multi-agent effect is war. Wars have been getting more destructive over time, or at least the weapons in them have been getting more destructive. Probably the death tolls have also been getting higher, but I'm not as sure about that. And you could imagine that if AI systems increase the destructiveness of weapons even more, wars might then become an existential risk. That's a way in which you can get a structural risk from a multi-agent system. And the example in which the economy just becomes much, much, much bigger, but becomes decoupled from things that humans want, is another example of how a multi-agent process can go haywire, especially with the addition of powerful AI systems. I think that's also a canonical scenario that Critch would think about. Yeah.

Really, I would say that ARCHES, in my head, is categorized as a technical paper about structural risks.

Lucas Perry: Do you think about what beneficial futures look like? You spoke a little bit about wisdom earlier, and I’m curious what good futures with AI, looks like to you.

Rohin Shah: Yeah, I admit I don’t actually think about this very much. Because my research is focused on more abstract problems, I tend to focus on abstract considerations, and the main abstract consideration from the perspective of the good future, is, well, once we get to singularity levels of powerful AI systems, anything I say now, there’s going to be something way better that AI systems are going to enable. So then, as a result, I don’t think very much about it. But that’s mostly a thing about me not being in a communications role.

Lucas Perry: You work a lot on this risk. So you must think that humanity existing in the future, matters?

Rohin Shah: I do like humans. Humans are pretty great. I count many of them amongst my friends. I've never been all that good at the transhumanist, look-to-the-future-and-see-the-grand-potential-of-humanity sorts of visions. But when other people give them, I feel a lot of kinship with them. The ones that are all about humanity's potential to discover new forms of art and music, reach new levels of science, understand the world better than it's ever been understood before, fall in love a hundred times, learn all of the things that there are to know. Actually, you won't be able to do that one, probably, but anyway. Learn way more of the things that there are to know than you have right now. A lot of that resonates with me. And that's probably a very intellectual-centric view of the future. I feel like I'd be interested in hearing the view of the future that's like, "Ah yes, we have the best video games and the best TV shows, and we're the best couch potatoes that ever were." Or also, there are just insane new sports that you have to spend lots of time in grueling training for, but it's all worth it when you get a perfect score on the best dunk that's ever been done in basketball, or whatever. I recently watched a competition; apparently there are competitions in basketball of just aesthetic dunks. It's cool. I enjoyed it. Anyway. Yeah. It feels like there are just so many other communities that could also have their own visions of the future, and I feel like I'd feel a lot of kinship with many of those, too. And I'm like, man, let's just have all the humans continue to do the things that they want. It seems great.

Lucas Perry: One thing that you mentioned was that you deal with abstract problems. And so what a good future looks like to you, it seems like it’s an abstract problem that later, the good things that AI can give us, are better than the good things that we can think of, right now. Is that a fair summary?

Rohin Shah: That seems, right. Yeah.

Lucas Perry: Right. So there’s this view, and this comes from maybe Steven Pinker or someone else. I’m not sure. Or maybe Ray Kurzweil, I don’t know… Where if you give a caveman a genie, or an AI, they’ll ask for maybe a bigger cave, and, “I would like there to be more hunks of meat. And I would like my pelt for my bed to be a little bit bigger.” Go ahead.

Rohin Shah: Okay. I think I see the issue. So I actually don’t agree with your summary of the thing that I said.

Lucas Perry: Oh, okay.

Rohin Shah: Your rephrasing was that we ask the AI what good things there are to do, or something like that. And that might have been what I said, but what I actually meant was that with powerful AI systems, the world will just be very different. And one of the ways in which it will be different is that we can get advice from AIs on what to do. And certainly, that’s an important one, but also, there will just be incredible new technologies that we don’t know about. New realms of science to explore new concepts that we don’t even have names for, right now. And one that seems particularly interesting to me, is just entirely new senses. Human vision is just incredibly complicated, but I can just look around the room and identify all the objects with basically no conscious thought. What would it be like to understand DNA at that level? AlphaFold probably understands DNA at maybe not quite that level, but something like it.

I don't know, man. There are just these things where... I thought of the DNA one because of AlphaFold. Before AlphaFold, would I have thought of it? Probably not. I don't know. Maybe. Kurzweil has written a little bit about things like this. But it feels like there will just be far more opportunities. And then also, we can get advice from AIs, and that's important, but I think it matters less than the fact that there are far more opportunities that I am definitely not going to be able to think of today.

Lucas Perry: Do you think that it’s dissimilar, from the caveman wishing for more caveman things?

Rohin Shah: Yeah. I feel like in the caveman story… It’s possible that the caveman does this, I feel like the thing the caveman should be doing, is something like, give me better ways to… give me better food or something, and then you get fire to cook things, or something.

Lucas Perry: Yeah.

Rohin Shah: The things that he asks for, should involve technology as a solution. He should get technology as a solution, to learn more, and be able to do more things as a result of having that technology. In this hypothetical, the caveman should reasonably quickly, become similar to modern humans. I don’t know what reasonably quickly means here, but it should be much more… You get access to more and more technologies, rather than you get a bigger cave and then you’re like, “I have no more wishes anymore.” If I got a bigger house, would I stop having wishes? That seems super unlikely. That’s a strawman argument, sorry. But still, I do feel like there’s this… A meaningful sense in which, getting new technology leads to just genuinely new circumstances, which leads to more opportunities, which leads to probably more technology, and so on, and at some point, this has to stop. There are limits to what is possible. One assumes there are limits to what is possible in the universe. But I think, once we get to talking about, we’re at those limits, then at that point, it just seems irresponsible to speculate. It’s just so wildly out of the range of things that we know, the concept of a person is probably wrong, at that point.

Lucas Perry: The what of a person is probably wrong at that point?

Rohin Shah: The concept of a person.

Lucas Perry: Oh.

Rohin Shah: I’d be like, “Is there an entity, that is Rohin at that time?” Not likely. Less than 50%.

Lucas Perry: We’ll edit in just fractals flying through your video, at this part of the interview. So in my example, I think it’s just because I think of cavemen as not knowing how to ask for new technology, but we want to be able to ask for new technology. Part of what this brings up for me, is this very classic part of AI alignment, and I’m curious how you feel like it fits into the problem.

But, we would also like AI systems to help us imagine beneficial futures potentially, or to know what is good or what it is that we want. So, in asking for new technology, it knows that fire is part of the good, that we don’t know how to necessarily ask for directly. How do you view AI alignment, in terms of itself aiding in the creation of beneficial futures, and knowing of a good that is beyond the good, that humanity can grasp?

Rohin Shah: I think I more reject the premise of the question, where I'd be like, there is no good beyond that which humanity can grasp. This is somewhat of an anti-realist position.

Lucas Perry: You mean, moral anti-realist, just for the-

Rohin Shah: Yes. Sorry, I should have said that more clearly. Yeah. Somewhat of a moral anti-realist position. There is no good other than that which humans can grasp. And within that 'could grasp', you can have humans thinking for a very long time, you can make them more intelligent (part of the technologies you get from AI systems will presumably let you do that), and, setting aside questions of philosophical identity, you could upload the humans so that they can run on a computer, run much faster, have software upgrades, to the extent that that's philosophically acceptable. There's a lot you can do to help humans grasp more. Ultimately, the closure of all these improvements, where you get to with all of that, is the thing that we want. Yes, you could have a theory that there is something even better and even more out there that humans can never access by themselves, but that just seems like a weird hypothesis to have, and I don't know why you would have it. But in the world where that hypothesis is true, if I condition on it being true, I don't see why we should expect that AI systems could access that further truth any better than we can, if it's outside the closure of what we can achieve even with additional intelligence and such. There's no other advantage that AI systems have over us.

Lucas Perry: So, is what you're arguing that, with human augmentation and help to human beings, so with uploads or with expanding the intelligence and capabilities of humans, humans have access to the entire space of what counts as good?

Rohin Shah: I think you're presuming the existence of an object that is the entire space of what is good. And I'm like, there is no such object; there are only humans, and what humans want to do. If you want to define the space of what is good, you can define this closure property on what humans will think is good, with all of the possible intelligence augmentations and time and so on. That's a reasonable object, and I could see calling that the space of what is good. But then, almost tautologically, we can reach it with technology. That's the thing I'm talking about. The version where you posit the existence of the entire space of what is good is: A, I can't really conceive of that, it doesn't feel very coherent to me, but B, when I try to reason about it anyway, I'm like, okay, if humans can't access it, why should AIs be able to access it? You've posited this new object, a space of things that humans can never access, but how does that space affect or interact with reality in any way? There needs to be some sort of interaction in order for the AI to be able to access it. I think I would need to know more about how it interacts with reality before I could meaningfully answer this question in a way where I could say how AIs could do something that humans couldn't, even in principle, do.

Lucas Perry: What do you think of the importance, or non-importance, of these kinds of questions, and how they fit into the ongoing problem of AI alignment?

Rohin Shah: I think they're important for determining what the goal of alignment should be. So for example, you now know a little bit of what my view on these questions is, which is namely something like: that which humans can access, under sufficient augmentations, intelligence, time and so on, is all that there is. So I'm very into building AI systems that are replicating human reasoning, that are approximating what a human would do if they thought for a long time, or were smarter in some ways, and so on. I tend to think of it as: let's build AI systems that just do tasks that humans can conceptually understand. Not necessarily tasks they can do themselves, but they know what the task is. Then the job of the entire human-AI society is to make forward moral progress, or other progress, in the same way we have in the past: we get exposed to new situations and new arguments, we think about them for a while, and then somehow we make decisions about what's good and what's not, in a way that's somewhat inscrutable. We just continue reiterating that process, and eventually we reach... well yeah, we just continue reiterating that process. So because of this view, I think it's pretty reasonable to aim for AI systems that are just doing human-like reasoning, but better. Or approximating what a human could do in a year, in a few minutes or something like that. That seems great to me. Whereas if you, on the other hand, were like, no, there are actually deep philosophical truths out there that humans might never be able to access, then you're probably less enthusiastic about that sort of plan, and you'll want to build an AI system some other way.

Lucas Perry: Or maybe they're accessible with the augmentation and time. How do other minds fit into this for you? So, right, there's the human mind and then the space of all that is good that it has access to with augmentation, which is what you call the space of that which is good. It's contingent on, and rooted in, the space of what the human mind, augmented, has access to. So how does that fit in with animals, and also with other species which may have their own alignment problems on planets within our cosmic endowment that we might run into? Is it just that they also have spaces that are defined as good, as what they can access through their own augmentation? And then there's no way of reconciling these two different AI alignment projects?

Rohin Shah: Yeah, I think basically, yes. If I met an actual ruthlessly maximizing paperclip maximizer, it's not like I could argue it into adopting my values, or anything even resembling them. I don't think it would be able to argue me into accepting being turned into paperclips, which is what it desires, and that just seems like the description of reality. Again, a moral realist might say something else, but I've never really understood the flavor of moral realism that would say something else in that situation.

Lucas Perry: With regards to the planet and industry, and how industry will be creating increasingly capable AI systems: could you explain what a unipolar scenario is, and what a multipolar scenario is?

Rohin Shah: Yeah, so I’m not sure if I recall exactly where these terms were defined, but a unipolar scenario, at least as I understand it, would be a situation in which, one entity basically determines the long run future of the earth. More colloquially, it has taken over the world. You can also have a time bounded version of it, where it’s unipolar for 20 years, and this entity has all the power for those 20 years, but then, maybe the entity is a human, and we haven’t solved aging yet, and then the human dies. So then, it was a unipolar world for that period of time. And a multipolar world is just, not that. There is no one entity, that is said to be in control of the world. There’s just a lot of different entities that have different goals, and they’re coexisting, hopefully cooperating, maybe not cooperating, depends on the situation.

Lucas Perry: Which do you think is more likely to lead to beneficial outcomes with AI?

Rohin Shah: So, I don't really think about it in these terms. I think about it like: there are these kinds of worlds that we could be in, some of them are unipolar and some of them are multipolar, but very different unipolar worlds, and very different multipolar worlds. And so the closest analogous question is something like: if you condition on a unipolar world, what's the probability that it's beneficial or good? If you condition on a multipolar world, what's the probability that it's good? And it's just a super complicated question that I wouldn't be able to explain my reasoning for, because it would involve thinking about 20 different worlds, maybe not that many, but a bunch of different worlds in my head, estimating their probabilities by doing a Bayes rule calculation, and then reporting the result.

So, I think maybe the question I will answer instead is: what are the most likely worlds in each of the unipolar and multipolar settings, and how good do those seem to me? I think by default I expect the world to be multipolar, in that it doesn't seem like any one entity has taken over the world today, not even counting the US as a single entity. It's not like the US has taken over the world. The main way you could imagine getting a unipolar world is if the first actor to build a powerful enough AI system ends up with an AI system that just becomes really, really powerful and takes over the world before anyone can deploy an AI system even close to it.

Sorry, that's not the most likely one. That's the one that most people most often talk about, and probably the one that other people think is the most likely. Anyway, I see the multipolar world as more likely, where we just have a bunch of actors that are all pretty well-resourced, that are all developing their own AI systems. They then sell their AI systems, or the ability to use their AI systems, to other people, and then it's similar to the human economy, where you can just have AI systems provide labor at a fixed cost. It looks similar to the economy today, where people who control a lot of resources can instantiate a bunch of AI systems that help them maintain whatever it is they want, and we remain in the multipolar world we have today.

And that seems decent, I think, for all that our institutions are not looking great at the current moment. There is still something to be said for the fact that nuclear war didn't actually happen, which can either update you towards "our institutions are somewhat better than we thought", or towards "if we had had nuclear war, we would have all died, and not been here to ask the question." I don't think that second one is all that plausible. My understanding is that nuclear war is not that likely to wipe out everyone, or even 90% of people. So I lean towards the first explanation. Overall, my guess is this is the thing that has worked for the last... 'worked', the thing that has generally led to an increase in prosperity. The world has clearly improved on most metrics over time. And the system we've been using for most of that time is some sort of multipolar one: people interact with each other, keep each other in check, and cooperate with each other because they have to, and so on. In the modern world, and not just the modern world, we use things like regulations and laws to enforce this. The system's got some history behind it, so I'm more inclined to trust it. But overall, I feel okay about this world, assuming we solve the alignment problem; we'll ignore the alignment problem for now.

For a unipolar world, I think I find it more likely that there will just be a lot of returns to scale. You'll get a lot of efficiency from centralizing more and more, in the same way that it's just really nice to have a single standard rather than 15 different standards. It sure would have been nice if, when I moved to the UK, I could have just used all of my old chargers without having to buy adapters. But no, all the outlets are different, right? There are benefits to standardization and centralization of power, and it seems to me there has been more and more of that over time. Maybe it's not obvious, I don't know very much history. But it seems like you could get even more centralization in the future, in order to capture the efficiency benefits, and then you might have a global government that could reasonably be said to be the entity that controls the world, and that would then be a unipolar outcome. It's not a unipolar outcome in which the thing in charge of the world is an AI system, but it is a unipolar outcome. I feel wary of this. I don't like having a single point of failure. However, I really do like it when people are allowed to advocate for their own interests, which isn't necessarily not happening here, right?

This could be a global democracy, but still, it seems like the libertarian intuition that markets are good generally tends to suggest against centralization, and I do buy that intuition. But this could also just be status quo bias, where I know I can very easily see the problems in the worlds that we're not actually in at the moment, and I don't want it to change. So I don't know, I don't have super strong opinions there. It's very plausible to me that that world is better, because then you can control dangerous technologies much, much better. If there are technologies that are sufficiently dangerous and destructive that they would lead to extinction, then maybe I'm more inclined to favor a unipolar outcome.

Lucas Perry: I would like to ask you about DeepMind, and maybe another question before we wrap up. What is it, that the safety team at DeepMind is up to?

Rohin Shah: No one thing. The safety team at DeepMind is reasonably large, and there are just a bunch of projects going on. I've been doing a bunch of inner alignment stuff. Most recently, I've been trying to come up with more examples that are in actual systems, rather than hypotheticals. I've also been doing a bunch of conceptual work, just trying to make our arguments clearer and more conceptually precise. A large smattering of stuff, not all that related to each other, except in as much as it's all about AI alignment.

Lucas Perry: As a final question here, Rohin, I'm interested in what's at the core of all of this for you. What's the most important thing to you right now, insofar as AI alignment may be the one thing that most largely impacts the future of life?

Rohin Shah: Ah.

Lucas Perry: If you just look at the universe right now, and you’re like, these are the most important things.

Rohin Shah: I think, for things that I impact, at a level more granular than just "make AI go well", for me it's probably making better and more convincing arguments, currently. This will probably change in the future, partially because I hope to succeed at this, and then it won't be as important. But I feel like right now, especially with the advent of these large neural nets and more people seeing a path to AGI, it is much more possible to make arguments that would be convincing to ML researchers, as well as to the philosophically oriented people who make up the AI safety community, and that just feels like the most useful thing I can do at the moment. In terms of the world in general, I feel like it is something like the attitudes of consequential people towards... well, long-termism in general, but maybe risks in particular. And importantly, I care primarily about the people who are actually making decisions that impact the future. Maybe they are taking the future into account. Maybe they're like, it would be nice to care about the future, but the realities of politics mean that I can't do that, or else I will lose my job. But my guess is that they're mostly just not thinking about the future. If you're talking about the future of life, that seems pretty important to change.

Lucas Perry: How do you see doing that, when many of these people don't have the 'science fiction geek gene', as Sam Harris called it when he was on this podcast? The long-termists are the ones saying, we're going to build AGI and then create these radically different futures. Many of these people may just mostly care about their children and their grandchildren; that may be the human tendency.

Rohin Shah: Do we actually advocate for any actions that would not impact their grandchildren?

Lucas Perry: It depends on your timelines, right?

Rohin Shah: Fair enough. But most of the time, the arguments that I see people giving for any preferred policy proposal of theirs, or really almost any action whatsoever, are about things that would have a noticeable effect on people's lives in the next 100 years. So, in that sense, grandchildren should be enough.

Lucas Perry: Okay. So then long-termism doesn’t matter.

Rohin Shah: Well… I don’t-

Lucas Perry: For getting the action done.

Rohin Shah: Oh, possibly. I still think they're not thinking about the future. I don't know; if I had to take my best guess at it, while noting the fact that I am just a random person who is not at all an expert in these things, because why would I be? And yes, listeners, noting that Lucas has just asked me this question because it sounds interesting, and not because I am at all qualified to answer it.

It seems to me the more likely explanation is that there are just always a gazillion things to do. There are always $20 bills to be picked up off the sidewalk, but their value is only $20; they're not $2 billion. Everyone is constantly being told to pick up all the $20 bills, and as a result, they are in a perpetual state of having to say no to stuff, and doing only the stuff that seems most urgent, and maybe also important. So most of our institutions tend to be in a very reactive mindset as a result. Not because they don't care, but just because that's the thing they're incentivized to do: respond to the urgent stuff.

Lucas Perry: So, getting policymakers to care about the future, whether that even just includes children and grandchildren, not the next 10 billion years, would be sufficient in your view?

Rohin Shah: It might be, it seems plausible. I don’t know that that’s the approach I would take. I think I’m more just saying, I’m not sure that you even need to convince them to care about the future, I think-

Lucas Perry: I see.

Rohin Shah: It's possible that what's needed is people who have the space to bother thinking about it. I get paid to think about the future; if I didn't get paid to think about the future, I would not be here on this podcast, because I would not have enough knowledge to be worth you talking to. I think there are just not very many people who can be paid to think about the future, and I don't know about the vast majority, but a lot of them are in our community. Very few of them are in politics. Politics generally seems to anti-select for people who can think about the future. I don't have a solution here, but that is the problem as I see it, and if I were designing a solution, I would be trying to attack that problem.

Lucas Perry: That would be one of the most important things.

Rohin Shah: Yeah. I think on my view, yes.

Lucas Perry: All right. So, as we wrap up here, is there anything else you’d like to add, or any parting thoughts for the audience?

Rohin Shah: Yeah. I have been giving all these disclaimers during the podcast too, but I’m sure I missed them in some places, but I just want to note, Lucas has asked me a lot of questions that are not things I usually think about, and I just gave off-the-cuff answers. If you asked me them again, two weeks from now, I think for many of them, I might actually just say something different. So don’t take them too seriously, and treat… The AI alignment ones, I think you can take those reasonably seriously, but the things that were less about that, take them as some guy’s opinion, man.

Lucas Perry: ‘Some guy’s opinion, man.’

Rohin Shah: Yeah. Exactly.

Lucas Perry: Okay. Well, thank you so much for coming on the podcast Rohin, it’s always a real pleasure to speak with you. You’re a bastion of knowledge and wisdom in AI alignment and yeah, thanks for all the work you do.

Rohin Shah: Yeah. Thanks so much for having me again. This was fun to record.

Filippa Lentzos on Global Catastrophic Biological Risks

  • The most pressing issue in biosecurity
  • Stories from when biosafety labs failed to contain dangerous pathogens
  • The lethality of pathogens being worked on at biolaboratories
  • Lessons from COVID-19

 

Watch the video version of this episode here

0:00 Intro

2:35 What are the least understood aspects of biological risk?

8:32 Which groups are interested in biotechnologies that could be used for harm?

16:30 Why countries may pursue the development of dangerous pathogens

18:45 Dr. Lentzos’ strands of research

25:41 Stories from when biosafety labs failed to contain dangerous pathogens

28:34 The most pressing issue in biosecurity

31:06 What is gain of function research? What are the risks?

34:57 Examples of gain of function research

36:14 What are the benefits of gain of function research?

37:54 The lethality of pathogens being worked on at biolaboratories

40:25 Benefits and risks of big data in biology and the life sciences

45:03 Creating a bioweather map or using big data for biodefense

48:35 Lessons from COVID-19

53:46 How does governance fit in to biological risk?

55:59 Key takeaways from Dr. Lentzos

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is with Dr. Filippa Lentzos and explores increasing global security concerns from the use of the life sciences. As biotechnology continues to advance, the capacity for use of both the harmful and beneficial aspects of this technology is also increasing. In a world stressed by climate change as well as an increasingly unstable political landscape that is likely to include powerful new biotechnologies capable of killing millions, the challenges of biotech to global security are clearly significant. Dr. Lentzos joins us to explain the state of biotech and life sciences risk in the present day, as well as what’s needed for mitigating the risk.

Dr. Filippa Lentzos is a mixed methods social scientist with expertise in biosafety, biosecurity, biorisk assessment and biological arms control. She works at King’s College London as a Senior Lecturer in Science and International Security. Dr. Lentzos also serves as the Co-Director of the Centre for Science and Security Studies, is an Associate Senior Researcher at the Stockholm International Peace Research Institute, and is a columnist for the Bulletin of the Atomic Scientists. Her work focuses on transparency, confidence-building and compliance assessment of biodefence programmes and high-risk bioscience. She also focuses on information warfare and deliberate disinformation related to global health security.

And with that, I’m happy to present this interview with Dr. Filippa Lentzos.

To start things off here, we’ve had COVID pretty much blindside humanity, at least the general public. People who have been interested in pandemics and bio risk have known about this risk coming for a long time now and have tried to raise the alarm bells about it. And it seems like this other very, very significant risk is the continued risk of synthetic bio agents, engineered pandemics, and also the continued risk of natural pandemics. It feels to me extremely significant and also difficult to convey the importance and urgency of this issue, especially when we pretty much didn’t do anything about COVID and knew that a natural pandemic was coming.

So, I’m curious if you could explain what you think are the least understood aspects of synthetic and natural biological risk by the general public and by governments around the world and what you would most like them to understand.

Filippa Lentzos: I guess one of the key things to understand is that security concerns around life science research are something that we must take seriously. There’s this whole history of using the life sciences to cause harm, of deliberately inflicting disease, of developing biological weapons. But very few people know this history because it’s a story that’s suffused with secrecy. In the 20th century, biological weapons were researched and developed in several national programs, all of which were top secret, including the US one.

These programs were concealed in labs at military sites that were not listed on ordinary maps. Special code names and exceptionally high classification categories were assigned to biological agents and the projects that were devised to weaponize them. Bioweaponeers were sworn to secrecy and kept under constant surveillance. So, a lot of that just hasn’t become publicly available. Much of the documentation and other evidence of past programs has been destroyed. Even where there were concerted efforts to bring war crimes and human rights abuses to public light, information about biological weapons programs tended to be suppressed.

One example of this is the Truth and Reconciliation Commission hearings in South Africa that followed apartheid. When the commission hearings began to uncover details about South Africa’s biological weapons program, which was called Project Coast, they were faced with delays and legal challenges, and the hearings were eventually shut down before the investigators could complete their work. Now, it became obvious to the investigators at the time who the head of that program was, but he was never brought to justice. Unbelievably, he remained a practicing medical doctor for many, many years afterwards, possibly even to this day.

What hasn’t been concealed or destroyed or silenced from past biological weapons programs often remains highly classified. So, the secrecy surrounding past programs mean that they’re not well known. But there’s also a new, contemporary context that shapes security concerns about life science research that we need to be conscious of and that I think relates back to what I think is important to know about synthetic and natural bio risks today. And that is that advances in science and technology may enable biological weapons to emerge that are actually more capable and more accessible with attacks that can be more precisely targeted and are harder to attribute.

So, synthetic biology, for example, one of the current cutting-edge areas of life science research, is accelerating our abilities to manipulate genes and biological systems. That will have all kinds of wonderful and beneficial applications, but if the intent were there, it could also have significant downsides. It could, for instance, allow harmful genes and DNA sequences to be identified much more quickly than we’ve been able to so far. As a result, we could see greater potential to make pathogens or disease-causing biological agents even more dangerous.

Or we could see greater potential to convert low-risk pathogens into high-risk pathogens. We could potentially even recreate extinct pathogens like the variola virus that causes smallpox, or, way further out, we could engineer entirely new pathogens. Now, pathogens in and of themselves are not biological weapons. You need to add some kind of delivery mechanism to have a weapon. The possibilities to manipulate genes and biological systems are coming at a time when new delivery mechanisms for transporting pathogens into our bodies, into human bodies or animal bodies, are also being developed.

So, in addition to the bombs and the missiles, the cluster bombs, the sprayers, and all kinds of injection devices of past biological warfare programs, it could now also be possible to use other delivery mechanisms. Things like drones or nanorobots, these incredibly tiny robots that can be inserted into our bloodstreams for instance, or even insects, could be used as vehicles to disperse dangerous pathogens.

So, I guess to get to the bottom of your question, what I’m keen for people to understand, scientists, government officials, the general public, is that current developments in science and technology, or in the life sciences more specifically, are lowering barriers to inadvertent harms as well as to the deliberate use and development of biological weapons, and that there is this whole history of deliberate attempts to use the life sciences to cause harm.

Lucas Perry: It seems like there are three main groups of people that are interested in such technology. There are lone wolves, or isolated individuals who are interested in doing a lot of harm to humanity in the same way that mass shooters are. There are also small groups of people who may be interested in the same sort of thing. Then there’s this history of governments pursuing biological weapons. Could you offer some perspective on the risks from these three groups, and how you would compare the current technology used for the creation of synthetic pathogens to how strong it was historically?

Filippa Lentzos: Sure. Are we heading towards a future where anyone with a PhD in bioengineering could create a pandemic and kill millions? Is that what you mean? Well, a pathogen, even a bioengineered one, does not on its own constitute a biological weapon. You would still face issues like agent stability, dealing with large-scale production, and, importantly, efficient delivery, which is much easier said than done. In fact, what the history of bioterrorism has taught us is that the skills required to undertake even the most basic of bioterrorism attacks are often much greater than assumed.

There are various technical barriers to using biological agents to cause harm even beyond the barriers that are being reduced from advances in science and technology. The data that is available to us from past incidents of biological terrorism indicates that a bioterrorism attack is more likely to be crude, more likely to be amateurish and small scale where you’d have casualty levels in single or double digits and not in their hundreds or thousands and certainly not in their millions. Now, my own concern is actually less about lone actors.

Where I see real potential for sophisticated biological weapons and strategic surprise in the biological field is in one of those other categories that you mentioned, so at the state or state-sponsored level. Let me explain. I already told you a little bit about how we’ve recently seen significant advances in genetic manipulation and delivery mechanisms. These developments are lowering barriers to biological weapons development, but that’s really only part of the picture, because in making threat assessments, it’s also important to look at the social context in which these technical developments are taking place.

One of the things we’re seeing in that social context is a build-up in dual-use capacities. High-containment labs that are working with the most dangerous pathogens are rapidly being constructed all over the globe. So there are now more people and more research projects than ever before working with and manipulating very dangerous pathogens, and there are more countries than ever before that have biodefense programs. There are around 30 biodefense programs that are openly declared. The trend we’re seeing is that these numbers are increasing.

It’s entirely legitimate to have biodefense programs and they do a lot of good, but a side effect of increasing bio-preparedness and biodefense capacities is that capacities for causing harm, should the intent be there, and that’s the crucial part, also increase. So, one person may be setting up all this stuff for good, but if somebody else comes in with different intent, with intent to do harm, that same infrastructure, that same material, that same equipment, that same knowledge, can be turned towards causing harm or creating biological weapons.

Now, another thing we’re seeing that won’t have escaped your notice is the increasingly unstable and uncertain geopolitical landscape. The world that many of us grew up in and know is one in which America was a clear, dominant power. We’re now moving away from that, away from this hegemonic or unipolar power structure towards an international system that is increasingly multipolar. The most clearly rising power today is of course China, but there are others too. Russia is still there. There’s India, there’s Brazil to name a few. Those are things in the social context that we need to pay attention to.

We’re also seeing that the nature of conflict and warfare themselves is rapidly evolving, and that’s changing the character of the military challenges confronting states. Hybrid warfare, for instance, which blends conventional warfare with irregular warfare and cyber warfare, is increasingly likely to complement classical military confrontation. So states that are increasingly outmatched by conventional weapons may, for instance, start to view novel biological weapons as offering some kind of asymmetric advantage, and a possible way to outweigh strategic imbalances.

So states, in this kind of new form of conflict, new form of warfare, may see biological weapons as somehow providing an edge or a military advantage. We are also seeing the defense programs of some states heavily investing in the biological sciences. Again, this could well be for entirely legitimate purposes, but it does also raise concerns that adversaries may be looking at those kinds of investments, thinking about hedging their bets, and similarly investing in more biological programs. These investments, I think, are also an indication that there are some real concerns that adversaries are harnessing or trying to harness biotechnology for nefarious purposes.

And we’ve seen some political language to that effect too, but a lot of this is going under the radar. So, all of these things, and there are more: the continuous flagrant breaches of the Chemical Weapons Convention, for example, the use of chemical weapons in Syria, or the use of very sophisticated chemicals like Novichok in the UK on Skripal, the Russian, as well as other cases, is one other sort of context that plays in. Or even our recent experiences of natural disease outbreaks. COVID is obviously a key example, but it’s not so long ago that we had all kinds of other outbreaks.

Ebola just a few years ago. There’s Zika, there’s MERS, there’s all kinds of other emerging diseases. All of these could serve to focus attention on deliberate outbreaks. And all of these various elements of the social context as well as these technical developments could produce an environment in which a potential military or political utility for biological weapons emerges that alters the balance of incentives and disincentives to comply with the international treaty that prohibits biological weapons.

Lucas Perry: Could you explain the incentives of why a country would be interested in creating a synthetic pathogen when inevitably it would seem like it would come back and harm itself?

Filippa Lentzos: Well, it doesn’t have to be an infectious pathogen. What we’re seeing today with COVID, for instance, is an infectious pathogen that spreads uncontrollably throughout the world. But states don’t have to use that kind of pathogen. Not all dangerous pathogens are infectious in that way. Anthrax, for instance, doesn’t spread from person to person through the air. So there are different kinds of pathogens, and states and non-state actors will have different motivations for using biological weapons or biological agents.

One of those, which I mentioned earlier, is if you feel you are outmatched conventionally, by conventional weapons; then you may want to start to develop asymmetric weapons. That would be an example where a state might want to explore developing biological weapons. But of course, we should probably mention that there is this thing called the Biological Weapons Convention, an international treaty which completely prohibits this class of weaponry. Historically, there have really only been two major powers that have developed sophisticated biological weapons programs: the United States and the Soviet Union.

Today, there are no publicly available documents or any policy statements suggesting that anyone has an offensive biological weapons program. There are many countries who have defensive programs and that’s entirely legitimate. There is no indication that there are states that have offensive programs to date. I think the real concern is about capacities that are building up through biodefense programs, but also through regular bio-preparedness programs, and that’s something that’s just going to increase in future.

Lucas Perry: I’m curious here if you could also explain and expand upon the particular strands of your research efforts in this space.

Filippa Lentzos: Sure. I mean, it’s very much related to the sorts of things we’ve been talking about. One strand that I focus on relates to transparency, confidence building, and compliance assessment of biodefense programs, where I look at how we can build trust between different countries with biodefense programs that they are complying with the Biological Weapons Convention. I’m also looking at transparency around particular high-risk bioscience, so projects or research involving genome editing, for example, or potential pandemic pathogens like influenza or coronaviruses.

Another strand that I’m interested in, or that I’m looking at, focuses on emerging technologies, on governance around these emerging technologies, and on responsible innovation. And there I look particularly at synthetic biology, also a little bit at artificial intelligence, deep learning, and robotics: how these other emerging areas are coming into the life sciences and affecting their development and the direction they’re taking, the capacities that are emerging from this kind of convergence between emerging technologies, and how we can govern that better, how we can provide better oversight.

Now, one of the projects that I’ve been involved in that has got a lot of press recently is a study that I carried out with Greg Koblentz at George Mason University where we mapped high biocontainment laboratories globally. I mentioned earlier that countries around the world are investing in these kinds of labs to study lethal viruses and to prepare against unknown pathogens. Well, that construction boom has to date resulted in dozens of these commonly called BSL-4 labs around the world. Now, significantly more countries are expected to build these kinds of labs in the wake of COVID 19 as part of a renewed emphasis on pandemic preparedness and response.

In addition, gain-of-function research with coronaviruses and other zoonotic pathogens with pandemic potential is also likely to increase as scientists seek to better understand these viruses and to assess the risks that they pose of jumping from animals to humans or becoming transmissible between humans. Now, of course, clinical work and scientific studies on pathogens are really important for public health and for disease prevention, but some of these activities pose really significant risks. Surges in the number of labs, and expansion of the high-risk research that’s carried out within them, exacerbate safety and security risks.

But there is no authoritative international resource tracking the number of these kinds of labs out there as they’re being built. So, there is no international body that has an authoritative figure on the number of BSL-4 labs that exist in the world or that have been established. Equally, there is no real international oversight of the sort of research that’s going on in these labs or the sorts of biosafety and biosecurity measures that they have implemented. So, what our study did was to provide a detailed interactive map of BSL-4 labs worldwide that contains basic information on when they were established, the size of the labs, and some indicators of biorisk management oversight.

That map is publicly available online at globalbiolabs.org. You can go and see for yourself. It’s basically a very large Google map where the labs are indicated and you can scroll over the labs and then up pops information about when it was established, how big it is, what sorts of biorisk management indicators there are: are they members of national biosafety associations? Do they have regulations related to biosafety? Do they have codes of conduct? Et cetera, those kinds of things. That all comes up there, so you can go and see for yourself. That’s a resource that we’ve made publicly available on the basis of our project.

Looking at the data we then collated, this was really the first time this kind of concerted effort was made to identify these various labs and bring all that information together. And some of our key findings from looking at that data were that… Well, the first thing is BSL-4 labs are booming. We can see a really quite steep increase in the number of labs that have been built over the last few years. We found that there are many more public health labs than there are biodefense labs. So, about 60% of the labs are public health labs, not focused on defense, but resourced out of health budgets.

We also found that there is a big range in lab size, with many more smaller labs than large ones. In newspapers and on TV, we keep seeing photos of the Wuhan Institute of Virology’s BSL-4 lab, but most BSL-4 labs are much smaller than that.

In terms of oversight, some of our other findings were that sound biosafety and biosecurity practices do exist, but they’re not widely adopted. There’s a lot of difference in between the kinds of biosafety and biosecurity measures that labs adopt and implement. We also found that assessments to identify life science research that could harm health safety or security are lacking in the vast majority of countries that have these BSL-4 labs. So, as I said, that’s one of the studies that’s got a lot of press recently and part of that is because of its relationship to the current pandemic and the lack of some solid information, some solid data on the sort of labs that are out there and on the sorts of research that’s being done.

Lucas Perry: Do you have a favorite story of a particular time that a BSL lab failed to contain some important pathogen?

Filippa Lentzos: Well, there are all kinds of examples of accidental releases. In the UK, for instance, where I’m based, a very long time ago there was work with variola virus, the virus that causes smallpox, in a sort of high-rise building that had multiple floors, and the variola virus escaped into the floor above and infected somebody there. That was, I think, at the end of the ’70s. That was the very last time that someone was infected by smallpox in the UK. More recently in the UK, there’s also been the escape of the foot and mouth virus from a lab.

Now, this was not the very large foot and mouth outbreak that we had in the early 2000s, which you know killed millions of animals. I still remember the piles of animal corpses dotted around the country and you could still smell the burning carcasses on the motorway as you drove past, et cetera. That was not caused by a lab leak, but just two, three, four years later, there was a foot and mouth disease virus that escaped from a lab through a leaking pipe that did go on to cause some infections. But by that stage, everyone was very primed to look out for these kinds of infections and to respond to them quickly.

So, that outbreak was contained fairly rapidly. I mean, there are also many examples elsewhere, also in the United States. I mean, there’s the one example where vials of variola virus were found in a disused closet at the NIH after many years and were still viable. I think that’s one of the ones that ranks pretty highly in the biosafety community’s memory and maybe even in your own. It was not that long ago, half a dozen years ago or so.

Lucas Perry: What do you think all these examples illustrate of how humans should deal with natural and synthetic pathogens?

Filippa Lentzos: Well, I think it illustrates that we need better oversight, we need better governance, to ensure that life science research is done safely, securely, and responsibly.

Lucas Perry: Overviewing all these BSL safety labs and all these different research threads that you’re exploring, what do you think is the most pressing issue in biosecurity right now, something that you’d really like the government or the public to be aware of and take action on?

Filippa Lentzos: Well, I think there’s a really pressing need to shore up international norms and treaties that prohibit biological weapons. I mentioned the Biological Weapons Convention. That is the key international instrument for prohibiting biological weapons, but there are also others. The arms control community is not in great shape at the moment. It needs more high-profile political attention, it needs more resources. And with more and more breaches of international treaties that we’re seeing, not on the biological side, but on other sides, I think we need to make sure there is this renewed effort and commitment to these treaties.

So, I think that’s one thing, one issue, that’s really pressing in biosecurity right now. Another is really raising awareness and increasing sensitivities in scientific communities to potential accidental, inadvertent, or deliberate risks of the life sciences. And we see very clearly in the data that’s coming out of the BSL-4 study that I talked to you about that this is something that’s needed, not just what we actually looked at there: do they have any laws on the books, do they have any guidance on paper, do they have any written-down codes of conduct or codes of practice? That’s really important.

It’s really important to have these kinds of instruments in place, but it’s equally important to make sure that these are implemented and adopted and that there is this culture of safe, secure, and responsible science. That’s something that we didn’t cover in that specific project, but it’s something that some of my other work has drawn attention to and the work of many others as well. So, we do need to have this regulatory oversight governance framework in place, but we also need to make sure that that is reflected or echoed in the culture of the scientists and the labs that are carrying out life science research.

Lucas Perry: One other significant thing going on in the life sciences in terms of biological risk is gain-of-function research. So, I’m curious if you could explain what gain-of-function research is and how you see the debate around the benefits and risks of it.

Filippa Lentzos: Well, gain-of-function research is a very good example of life science research that could be accidentally, inadvertently or deliberately misused. Gain-of-function means different things to different people. To virologists, it generally just means genetic manipulation that results in some sort of gained function. Most of the time, these manipulations result in loss of function, but sometimes different kinds of functions of pathogens can be gained. Gain-of-function has got a lot of media coverage in relation to the discussion around the origins of the pandemic or of COVID.

And here, gain-of-function is generally taken to mean deliberately making a very dangerous pathogen like influenza or coronavirus even more dangerous. So, what you’re trying to do is you’re trying to make it spread more easily, for example, or you’re trying to change its lethality. I don’t think gain-of-function research in and of itself should be banned, but I do think we need better national and international oversight of gain-of-function experiments. And I do think that a wider group of stakeholders, beyond just the scientists doing the research themselves and their funders, should be involved in assessing what is safe, what is secure, and what is responsible gain-of-function research.

Lucas Perry: It seems very significant, especially with all these examples that you’ve illustrated of the fallibility of BSL labs. The gain-of-function research seems incredibly risky relative to the potential payoffs.

Filippa Lentzos: Yeah, I think that’s right. I mean, I think it is considered one of the examples of what has been called dual use research of concern or experiments that have a higher potential to be misused. By that, I mean deliberately, but also in terms of inadvertently or even accidentally because the repercussions, the consequences have the potential to be so large. That’s also why we saw when some of the early gain-of-function experiments gained media attention back in 2011, 2012, that the scientific community itself reacted and said, “Well, we need to have a moratorium.

We need to have a pause on this kind of research to think about how we govern that, how we provide sufficient oversight over the sorts of work that’s being done so that the risk-benefit assessments are better, essentially.” I think there will be many who argue, myself among them, that the discussions that were had around gain-of-function at that time were not extensive enough, they were not inclusive enough, there were not enough voices being heard or part of the decision-making process in terms of the policies that came out of this in the United States. To some extent, I think that’s why we’re, again, back at the table now with the discussions around the pandemic origins.

Lucas Perry: Do you have any particular examples of gain-of-function research you’d be interested in sharing? It seemed like a really significant example was what was happening in Wisconsin.

Filippa Lentzos: Sure. That was the work in Wisconsin and at Erasmus University in the Netherlands. What they were trying to do there was work with influenza, avian flu, and see if they were able to give that virus a new function, to enable it to spread not just among birds but also from birds to mammals, including humans, including ourselves. So, they were actively trying to make it not just affect birds, but also to affect humans.

And they did so successfully, which made that virus more dangerous. That was what all that media attention was about, and the discussions at the time were that many felt that the benefits of that research did not outweigh the very significant risks that the research involved.

Lucas Perry: What are the benefits of that sort of gain-of-function research?

Filippa Lentzos: Well, those who carried out that sort of research at the time, and also those doing the sorts of gain-of-function research that’s been going on at the Wuhan Institute of Virology, some of which has been funded by American money and some of which has been done in collaboration with American institutes, argue that in order to prepare for pandemics, we need to know what kind of viruses are going to hit us. New and emerging viruses generally spill over from the animal kingdom into humans, so they actively go and look for viruses in the animal kingdom.

In this case, in the coronavirus case, the Wuhan Institute of Virology, they were actively looking in bat populations to see what sort of viruses exist there and what their potentials are for spilling over into humans. That’s their justification for doing that. My own view is that that’s incredibly risky research and I’m not sure and I don’t feel that that sort of justification really outweighs the very significant risks that it involves. How can you possibly hit upon the right virus in the thousands and thousands of viruses that are out there and know how that will then mutate and get modified as it hits the human population?

Lucas Perry: These are really significant and quite serious viruses. You gave an example earlier about this UK case where the last person to die from smallpox was actually infected through a lab leak. There’s also this research in Wisconsin on avian flu. So, could you provide a little bit of perspective on, for example, the infection rate and case fatality rate of these kinds of viruses that they have at BSL labs and that they might be pursuing gain-of-function research on?

Filippa Lentzos: Yeah. I mean, certainly in terms of the coronavirus, what we’ve seen there is that clearly many people have died and many people have got infected, but it’s not considered a particularly infectious or particularly lethal pathogen when it comes to pandemics. There are much more dangerous pathogens that could create pandemics and that are being worked with in laboratories.

Lucas Perry: Yeah. Because some of these diseases, it seems, the case fatality rate gets up to between 10 and 30%, right? So, if you’re doing gain-of-function research on something that’s already that lethal and that has killed hundreds of millions of people in the history of life on earth, with the history of lab leaks and with something so infectious and spreadable, it seems like one of the most risky things humanity is doing on the planet currently.

Filippa Lentzos: Yes. I mean, one of the things gain-of-function is doing is looking at lethality and how to increase lethality of pathogens. There are also other things that gain-of-function is doing, but that is taking out a large part of the equation, which is the social context of how viruses spread and mutate. There are, for instance, things we can do to make viruses spread less and be less lethal. There are active measures we can take equally, there are responses that could increase the effect of viruses and how they spread.

So, lethality is one aspect of a potential pandemic, but it is only one aspect, right? There are many other aspects too. So, we need to think of ourselves much more as active players, that we also have a role to play in how these viruses spread and mutate.

Lucas Perry: One thing that the digital revolution has brought in is the increase and the birth of big data. Big data can be used to detect the beginning of outbreaks, to detect novel diseases, and to come up with cures and treatments for novel and existing diseases. So, I’m curious what your perspective is on the benefits and risks of the increase of big data in biology, both to health and societies as well as privacy and the like.

Filippa Lentzos: Well, you pointed to many of the benefits that big data has. There certainly are benefits, but as with most things, there are also a number of downsides. I do believe that big data combined with the advances that we’re seeing in genomic technologies as well as with other areas of emerging technology, so machine learning or AI, this poses a significant threat. It will allow an evermore refined record of our biometrics; so our fingerprints, our iris scans, our face recognition, our CCTV cameras that can pick up individuals based on how they walk, all these kinds of biometrics.

It will also allow a more refined record of our emotions and behaviors to be captured and to be analyzed. I mean, you will have heard of companies that are now using facial recognition on their employees to see what kind of mood they’re in and how they engage with clients, et cetera. So, governments are gaining incredible powers here, but increasingly, it’s private companies that are gaining this sort of power. What I mean by that is that governments, but as I said, increasingly private companies, will be able to sort, to categorize, to trade, and to use biological data far more precisely than they have ever been able to do before.

That will create unprecedented possibilities for social and biological control, particularly through individual surveillance, if you like. So, these game-changing developments will deeply impact how we view health, how we treat disease, how long we live, and how more generally we consider our place on the biological continuum. I think they’ll also radically transform the dual-use nature of biological research, of medicine, of healthcare. In terms of my own field of biosecurity, they will create the possibility of novel biological weapons that target particular groups of people and even individuals.

Now, I don’t mean they will target Americans or they will target Brits or they will target Protestants or they will target Jews or they will target Muslims. That’s not how biology works. Genes don’t understand these social categories that we put onto people. That’s how we socially divide people up, but that’s not how genetics divides people up. But there are also genetic groupings that cut across cultures, nations, beliefs, et cetera. So, as we come to have more and more precise biological data on these different groups, the possibility of targeting these groups for harm will also be realized.

So, in the coming decade, managing the fast and broad technological advances that are now underway will require new kinds of governance structures to be put in place, and these new structures need to draw on individuals and groups with cross-sectoral experience; so, from business, from academia, from politics, from defense, from intelligence, and so on, to identify emerging security risks and to make recommendations for dealing with them. We need new kinds of governance structures, new kinds of advisory bodies, that have different kinds of stakeholders on them from the ones that we have traditionally had.

Lucas Perry: In terms of big data and the international community, with the continued risks of natural pandemics as well as synthetic pandemics or other kinds of biological agents and warfare, it’s been proposed, for example, to create something like a bio weather map, where we have a widespread, globally distributed early warning detection system for biological agents that is based off of big data or is itself big data. So, I’m curious if you have any perspective and thoughts on the importance of big data in particular for defenses against the modern risks of engineered and natural pandemics.

Filippa Lentzos: Yes, I do think there is a role to play here for big data analysis tools. We are, I think, already using some tools in this area where you have, for instance, analysis of social media use, words that pop up in social media posts, or you have analysis of the sorts of products that people are buying in pharmacies. So, if there is some kind of disease spreading, people are getting sick and they’re talking about different kinds of symptoms, you are able to start tracking that, you’re able to start mapping that.

If all of a sudden all kinds of people in, say, Nebraska are going to the pharmacy to buy cough medicine or something to reduce a temperature, if there’s a big spike for instance, you might want to look into that more. That’s an indicator, that’s a signal that you might want to look at more. Or if you’re picking up keywords on internet searches or on social media where people are asking about stomach cramps or more specific kinds of symptoms, that again is another kind of signal you might want to look more into.

So, I think some of these tools are definitely already being developed, and some are already in use. I think they will have advantages and benefits in terms of preparing for natural, but also inadvertent, accidental, or deliberate outbreaks of disease.
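To make the kind of signal described above a bit more concrete, here is a minimal sketch of how a sudden spike in, say, regional pharmacy purchases might be flagged automatically. The data, threshold, and function name are illustrative assumptions, not part of any real surveillance system mentioned in the conversation:

```python
# Minimal sketch of syndromic-surveillance-style spike detection on a
# daily time series of pharmacy purchases (hypothetical data).
import statistics

def flag_spikes(daily_counts, window=28, z_threshold=3.0):
    """Return (day_index, count, z_score) for days whose count is unusually
    high relative to the preceding `window` days (simple z-score rule)."""
    alerts = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # guard against zero spread
        z = (daily_counts[i] - mean) / stdev
        if z > z_threshold:
            alerts.append((i, daily_counts[i], round(z, 1)))
    return alerts

# Hypothetical example: steady cough-medicine sales, then a sudden jump.
sales = [40, 42, 38, 41, 39, 43, 40] * 5 + [95]
print(flag_spikes(sales))  # flags only the final, anomalous day
```

Real systems layer far more on top of this, such as seasonality corrections and cross-checking multiple data streams, but the basic idea is the same: compare today against a recent baseline and escalate only the unusual signals.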

Lucas Perry: We’re hopefully in the final stages of the COVID-19 pandemic. When we reflect back upon it, it seems like it can be understood as almost like a minimally viable global catastrophe or a minimally viable pandemic, because there’s been far worse pandemics, for example in the past, and it’s tragically taken the lives of many, many people. But at the same time, the fatality rate is just a bit more than the flu and a lot less than many of the other pandemics that humanity has seen in the past few hundred thousand years.

So, I’m curious what your perspective is on what we can learn in the areas of scientific, social, political, and global life, from our experience with the COVID-19 pandemic to be better prepared for something that’s more serious in the future, something that’s more infectious, and has a higher case fatality rate.

Filippa Lentzos: Well, I think, as you said, in the past, disease has been much more present in our societies. It’s really with the rise of antibiotics and the rise of modern healthcare that we’ve been able to suppress disease to the extent that it’s no longer such a pressing feature in our daily lives. I think what the pandemic has done for a whole generation is really been a shot across the bow; it has crystallized the incredibly damaging effects that disease can have on society.

It’s been this wake-up call or this reality check. I think we’ve seen that reflected also politically. International developments like the UN’s Biorisk Working Group that’s been established by the secretary-general, or efforts by states to develop a new international treaty on pandemics, are concrete evidence of increasing awareness of the challenges that diseases pose to humankind. But clearly, that’s not enough. What we’ve had in place hasn’t been enough. Clearly, we need to be better prepared. And I guess for me, that’s one of the bigger takeaways from the pandemic.

Equally, what the pandemic origin debate has done is to show that whether or not the pandemic resulted from a lab leak, it could have resulted from a lab leak, and it could ironically or tragically have been the result of scientific research actually aimed at preventing future pandemics. So, clearly for me, a huge takeaway is that we need better oversight, we need better governance structures, to ensure safe, secure, and responsible life science research. Potentially, we also need to rethink some of our preparedness strategies.

Maybe actively hunting for viruses in the wild, mutating them in the lab to see if that single virus might be the one that hits us next, the one that spills over, isn’t the best strategy for preparing for pandemics in the future. But COVID has also highlighted a more general problem, one I think that’s faced by all governments, and that is, how can we successfully predict and prepare for the wide range of threats that there are to citizens and to national security? Some threats like COVID-19 are largely anticipated actually, but they’re not adequately planned for as we’ve seen.

Other threats are not anticipated at all and for the most part are not planned for. On the other side, some threats are planned for, but they fail to materialize as predicted because of errors and biases in the analytic process. So, we know that governments have long tried to forecast, or to employ a set of futures approaches, to ensure they are ready for the next crisis. In practice, these are often general, they’re ad hoc, they’re unreliable, they’re methodologically and intellectually weak, and they lack academic insight. The result is that governments are wary of building on the recommendations of much of this futures work.

They avoid it in policy planning, in real terms funding, and ultimately in practice and institutionalization. What I and many of my colleagues believe is that we need a new vision of strategic awareness that goes beyond the simple idea of just providing a long-term appreciation of the range of possibilities that the future might hold to one that includes communication with governments about their receptivity to intelligence, how they understand intelligence, how they absorb other kinds of intelligence from private corporations, from academia, et cetera, as well as the manner in which the government acts as a result.

So, strategic awareness to my mind and to that of many others should therefore be conceptualized in three ways. You should first look more seriously and closely at threats. Second, you should invest in prevention and foresighted action. Third, you should prepare for mitigation, crisis management, and bounce-back in case a threat can’t be fully prevented or deterred. This kind of thinking about strategic awareness will require a paradigm shift in how government practices strategic awareness today. And my view is that the academic community must play an integral part in that.

Lucas Perry: Do you have any particular governance solutions that you’re really excited about right now?

Filippa Lentzos: I don’t think there’s a magic bullet. I don’t think there’s one magic solution to ensuring that life science research is safe, that it’s secure, and that it’s carried out responsibly. I think in terms of governance, we need to work both from the top-down and from the bottom-up. We need to have in place both national laws and regulations, statutory laws and regulations. We need to have in place institutional guidance, we need to have in place best practices. But we also need a lot of the commitment, we also need a lot of awareness coming from the bottom-up.

So, we need individual scientists, groups of scientists to think about how their work can best be carried out safely so they can make codes of ethics or codes of practice themselves, they can educate others, they can think through who needs to be involved beyond their own expert community in risk assessing the kinds of research that they’re interested in carrying out. So, we need both this top-down government-enforced, institutionally-enforced governance as well as grassroots governance. Only by having both of these aspects, both of these kinds of governance measures, can we really start to address the potential downsides of life science research.

Lucas Perry: All right. Just to wrap things up, I’m curious if you have any final words or thoughts for the audience or anyone that might be listening, anything that you feel is a crucial takeaway on this issue? I generally feel that it’s really difficult to convey the significance and gravitas and importance of this. So, I’m curious if you have any final words about this issue or a really central key takeaway you’d like listeners to have.

Filippa Lentzos: I think when we’re looking at our current century, this will be the century not of chemistry or physics or engineering, that was the last century, this will be the century of biology and it will be the century of digital information and of AI.

I think this combination, which we talked about earlier, when you combine biological data with machine learning, with AI, with genomic technologies, you get the incredible potential of precise information about individuals. I think that is something we are going to struggle with in the years to come, and we need to make sure that we are aware of what is happening, that we are aware that when we go buy a phone and we use the face recognition software, which is brilliant, it can also have downsides. All these little individual actions, all these technologies that we just readily accept because they do have upsides in our life, they can also have potential downsides.

I do think we need to make sure we also develop this critical sense, this ability to be critical, to think critically about what these technologies are doing to us as individuals and to us as societies. I guess that is the thing I would like people to take away from our discussion.

Lucas Perry: All right. Well, thank you so much for coming on the podcast. I really can’t think of too many other issues that are as important as this. It’s certainly top three for me. Thank you very much for all of your work on this, Dr. Lentzos, and for all of your time here on the podcast.

Filippa Lentzos: Thanks for having me, Lucas.

Susan Solomon and Stephen Andersen on Saving the Ozone Layer

  • The industrial and commercial uses of chlorofluorocarbons (CFCs)
  • How we discovered the atmospheric effects of CFCs
  • The Montreal Protocol and its significance
  • Dr. Solomon’s, Dr. Farman’s, and Dr. Andersen’s crucial roles in helping to solve the ozone hole crisis
  • Lessons we can take away for climate change and other global catastrophic risks

 

Watch the video version of this episode here

Check out the story of the ozone hole crisis here

0:00 Intro

3:13 What are CFCs and what was their role in society?

7:09 James Lovelock discovering an abundance of CFCs in the lower atmosphere

12:43 F. Sherwood Rowland’s and Mario Molina’s research on the atmospheric science of CFCs

19:52 How a single chlorine atom from a CFC molecule can destroy a large amount of ozone

23:12 Moving from models of ozone depletion to empirical evidence of the ozone depleting mechanism

24:41 Joseph Farman and discovering the ozone hole

30:36 Susan Solomon’s discovery of the surfaces of high altitude Antarctic clouds being crucial for ozone depletion

47:22 The Montreal Protocol

1:00:00 Who were the key stakeholders in the Montreal Protocol?

1:03:46 Stephen Andersen’s efforts to phase out CFCs as the co-chair of the Montreal Protocol Technology and Economic Assessment Panel

1:13:28 The Montreal Protocol helping to prevent 11 billion metric tons of CO2 emissions per year

1:18:30 Susan and Stephen’s key takeaways from their experience with the ozone hole crisis

1:24:24 What world did we avoid through our efforts to save the ozone layer?

1:28:37 The lessons Stephen and Susan take away from their experience working to phase out CFCs from industry

1:34:30 Is action on climate change practical?

1:40:34 Does the Paris Agreement have something like the Montreal Protocol Technology and Economic Assessment Panel?

1:43:23 Final words from Susan and Stephen

 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. This is a special episode with the winners of the 2021 Future of Life Award. This year’s winners are Susan Solomon, Stephen Andersen, and Joseph Farman, who all played an essential part in the efforts related to identifying and mending the ozone hole. In this podcast, we tell the story of the ozone hole from the perspective of Drs. Solomon and Andersen as participants in the mystery of the ozone hole and the subsequent governance efforts related to mending it. Unfortunately, Joseph Farman passed away in 2013, so his role in this story will be told through our guests on his behalf.

For those not familiar with the Future of Life Award, this is a $50,000/person annual prize that we give out to honor unsung heroes who have taken exceptional measures to safeguard the future of humanity. In 2017 and 2018 the award honored Vasili Arkhipov and Stanislav Petrov for their roles in helping to avert nuclear war. In 2019 the award honored Dr. Matthew Meselson for his contributions to getting biological weapons banned. In 2020, the award honored Bill Foege and Viktor Zhdanov for their critical contribution to eradicating smallpox and thus saving roughly 200 million lives so far.

For some background on this year’s winners, Dr. Susan Solomon led the Chemistry and Climate Processes Group of the National Oceanic and Atmospheric Administration until 2011. She now serves as the Ellen Swallow Richards Professor of Atmospheric Chemistry and Climate Science at MIT. Dr. Solomon led an Antarctic ozone research expedition which both confirmed that CFCs caused the ozone hole and showed that reactions on sunlit cloud surfaces catalyzed the ozone destruction process, making it much faster.

Dr. Stephen Andersen is the American Director of Research at the Institute for Governance and Sustainable Development, and is former co-chair of the Montreal Protocol Technology and Economic Assessment Panel. Andersen’s tireless efforts brought together leaders from industry, government and academia to implement the  needed changes in CFC use to mend the ozone hole. His efforts played a critical role in making the Montreal Protocol successful.

Joseph Farman was a British geophysicist who worked for the British Antarctic Survey. In 1985, his team made the most important geophysical discovery of the 20th century: the ozone hole above the Antarctic. This provided a stunning confirmation of the Rowland-Molina hypothesis that human-made chlorofluorocarbons were destroying the ozone layer, and much faster than predicted. This galvanized efforts to take action to mend the ozone hole.

And with that, I’m happy to present the story of the ozone hole and the fight to fix it with Susan Solomon and Stephen Andersen 

Lucas Perry: So I’d like to start off with the beginning where it’s the early 20th century, and we have CFCs, chlorofluorocarbons, which is a difficult word to say. And they’re in quite a lot of our manufacturing and commercial goods. So I’m curious if you could start there by just explaining what CFCs are and what their role was in the early 20th century.

Susan Solomon: They weren’t actually discovered until I think around the twenties, if I recall, maybe Steven can correct me on that, but they were initially used in air conditioning. So that was a great advance over using things like ammonia, which is toxic and explosive and all kinds of horrible things, and the great thing about the CFCs is that they’re non-toxic at least, when you breathe them, although they’re very toxic to the ozone layer, they’re not toxic to you directly. And they became really widespread in use though, when they began to be used as spray cans in aerosol propellants and that didn’t really happen until somewhat later. I think that the spray can business really started exploding in the Post-World War Two era, probably in the 50’s and 60’s. And then it was discovered that these things have very long lifetimes in the atmosphere, they live, depending on which one you talk about, for some 50 to a 100 years.

So that is just staggering because what it means is that every year, whatever we put in, almost all of it will still be there the next year. And it’ll just pile up and pile up and pile up. So initially, a few people who worked on it thought “Hey, this is great! It’s a terrific tracer for atmospheric motion, how cool is that?” But it turned out that although that’s somewhat true in the lower atmosphere, in the upper atmosphere, they actually break down and make chlorine atoms, and those chlorine atoms can go on to destroy ozone. And we first became aware of this through some wonderful work by Molina and Rowland in the mid-seventies, which later won the Nobel Prize for chemistry, all along with Paul Crutzen.
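To put rough numbers on that accumulation point, here is a back-of-the-envelope sketch using the 50-to-100-year lifetimes mentioned above, and treating removal as a simple exponential decay with lifetime \(\tau\), constant annual emission \(E\), and airborne burden \(B(t)\):

$$
e^{-1/\tau} \approx 0.98\text{ to }0.99 \quad (\tau = 50\text{ to }100\ \mathrm{yr}), \qquad
B(t) = E\,\tau\left(1 - e^{-t/\tau}\right) \;\to\; E\,\tau \quad \text{for } t \gg \tau .
$$

In other words, with a 100-year lifetime roughly 99% of a given year’s emissions are still in the atmosphere a year later, and sustained emissions eventually pile up to about a century’s worth of annual emissions before removal balances input.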

Stephen Andersen: I could circle back on a little bit of the details of the technology if you’d like?

Susan Solomon: Sure, go for it, Steve.

Stephen Andersen: The year was 1929, in the twenties, and it was invented by Thomas Midgley, who was working for the Frigidaire Division of General Motors. Just as Susan said, this was viewed as a wonder chemical, because the refrigerants at that time were things like methylene chloride and ammonia, and even gasoline was used as a refrigerant, so it was very dangerous and there were lots of injuries. By that time America’s waters were polluted, so it was difficult to use ice from ponds.

And so it was immediately commercialized to replace these flammable and toxic refrigerants, and then, just as Susan said, the companies that produced it slowly looked at other markets: aerosols, starting with pesticides for soldiers in World War Two and then finding a commercial market; then solvents, because it’s a very effective solvent for electronics and aerospace; then other uses that came along, such as making rigid and flexible foam; and then finally, towards the end, there were elegant uses such as hospital sterilization and as the aerosol for metered-dose inhalers that are used for asthma patients. So it had extraordinary uses, and the danger was not appreciated until 1974, when Mario Molina and Sherry Rowland discovered that these chemicals could destroy the ozone layer.

Lucas Perry: So before we get to Molina and Rowland, I was curious if we could actually start with James Lovelock because we’re releasing and creating all of these CFCs, and it seems like he’s the first person who actually takes a look at these things in the atmosphere. So could you explain James Lovelock’s role in this story?

Susan Solomon: Sure. He invented the electron capture detector used in gas chromatography and made a lot of money off of it and became independently wealthy because it’s a very useful instrument for measuring all kinds of things. And he took it on a… He was British, and he took it on a British research vessel and sailed from the Northern Hemisphere down to almost the Antarctic, and showed that he could understand exactly what the distribution of the CFC that he was measuring was, based on the amount that had been emitted. And he said “Oh, isn’t that cool? It’s a great tracer for atmospheric motion”, as I’ve mentioned before. In fact, I’m pretty sure in his paper, there was some kind of sentence like “this couldn’t ever conceivably pose any hazard because it’s so non-toxic”, but of course, having less of an ozone layer is not good for life on the planet because ozone absorbs ultraviolet light that is very, very important for protecting us from sunburn and cataracts and protecting animals and plants and all kinds of things.

Susan Solomon: So depletion of the ozone layer is actually a threat to both humanity and the planet. I do want to say it wasn’t until somewhat later, 1985, when scientists from the British Antarctic Survey actually discovered the ozone hole, and what they were doing was measuring total ozone in the Antarctic, they’d been doing it since the 1950s, and they were able to show that sometime around the late seventies, it started dropping like a rock, way more ozone depletion than Molina and Rowland had ever imagined.

So it turns out that these chlorofluoro chemicals are actually even more damaging to the ozone layer in the Antarctic, and actually also to some extent at mid-latitudes as we now know, than we originally thought. So it’s been sort of a cautionary tale of don’t be too sanguine about any chemical made in the lab. When you make something in the lab that nature doesn’t make, I think you should always do a double-take, especially if it has a very long atmospheric lifetime and builds up in the way that I described. So that means you can’t quickly get rid of it if you stop using it. You can eventually, but it’s going to take a long time for the atmosphere to cleanse itself.

Lucas Perry: Could you explain why James Lovelock was initially interested in CFCs and what his investigation led to scientifically?

Susan Solomon: As far as I know, he really just wanted to see what their distribution was like to get some sort of a handle on what they might be… What his instrument might be useful for. I don’t actually know of a use beyond that, do you, Steven?

Stephen Andersen: No, I think that’s right. He was looking for gases that were indicators, as you suggest, and of course he had a device that would measure other chemicals, but I think he was immediately struck by the fact that he was seeing the same chemical at all locations that he sampled, and then he made the natural connection of saying “Where did this chemical originate? How long did it take to mix in the lower atmosphere”? So I think it was a good, solid, scientific inquiry of a very intelligent person with a new instrument.

Susan Solomon: Maybe we should clarify, I said he went all the way down almost to the Antarctic, but I neglected to underscore that of course there is no chlorofluorocarbon emission in the Antarctic, right? At that time there was nobody even… There were no stations, there was nobody there. So any chlorofluorocarbon that could get there had to have gotten there by atmospheric transport, and it would also tell you that it has to have a fairly long lifetime, because if you emit, let’s just say, sulfur dioxide from a power plant in the Ohio Valley, yeah it’s a serious issue, it can cause acid rain, it can cause little particles that are bad for your lungs, it does a lot of bad things, but it’s not going to be found in the Antarctic. It just doesn’t have that long of a lifetime, it rains out. So this proved that they were a great tracer; in his mind, I think, that’s what he was attracted by.

Lucas Perry: We’re in this world where CFCs are basically being used for a ton of different applications. Our current understanding at that time was that they were nonreactive and non-toxic, so basically a wonder chemical that could be used for all sorts of things and was much safer than the kinds of chemicals that we were using at the same time. And here comes James Lovelock, who, from my reading, it seemed like he first got interested in it because his view from his house was hazy, and he didn’t know why it was hazy and he wanted to figure out if it were manmade chemicals or what this pollution that was obscuring his vision was.

And so he starts measuring all these CFCs everywhere, and now we’re in a world where it seems very clear that the CFCs are abundant in the lower atmosphere. So let’s start to pivot into Mario Molina and Frank Rowland’s role in this story and how we begin to move from noticing that there are so many CFCs in the atmosphere to finding out that there might be a problem.

Susan Solomon: I will say that Dr. Rowland has passed away, unfortunately, so has Molina more recently, but he never went by Frank, he went by Sherry. His name was indeed F Sherwood Rowland, but he was known to everyone as Sherry Rowland.

Lucas Perry: Okay.

Susan Solomon: Go ahead, Steve, do you want to take this one?

Stephen Andersen: Yeah, sure. The story of it is actually another great science story. Mario Molina had finished his doctoral degree at the University of California, Berkeley, and had taken a postdoctoral position with Sherry Rowland at the University of California, Irvine, and they looked at four or five interesting topics, and I think the history is that Molina saw this one as being particularly intriguing, even though it was slightly outside either of their areas of expertise.

So it was a stretch for them, but it gave them a chance to look at something that could be potentially very important. And then the story is that as they began to investigate, it started to seem more and more obvious to them. And it became a rush for a conclusion because they were worried about the effect of their work. There’s one story that Sherry Rowland tells us, that he came home from work one day and his wife, Joan, asked “How did your day go? How is your work”?

And he replied something like “Well, the work is fantastic, but I think the earth is ending”. So you can imagine the tension, the creative tension, and then they published their article, I think in April of 1974, and there was no uptake by the press, there was no scientific confirmation. It was a quiet time until that fall, when the scientists at the Natural Resources Defense Council, NRDC, saw this and recognized it was a big public policy issue. So at the American Chemical Society fall meeting, they had a presentation by Molina and Rowland, and then, by a stroke of good luck arising from ruthless corporate behavior, the industry attacked them and made this scientific study newsworthy.

So this was a tremendous good fortune, oddly enough, because then all of the press was asking “What are you talking about? Why is it important and what could happen?” And that cultivated in Molina and Rowland, the ambition and… Sherry Rowland called for a ban on aerosol cosmetic products; hairspray and deodorants. And so this was stepping out of their role of a normal scientist and becoming an activist, and of course there was a boycott that was quite spectacularly successful in the United States, and then some product bans.

Susan Solomon: Yeah and actually I just want to say, I think we should be proud in the US that there was a consumer boycott. People turned away from spray cans and that actually, interestingly enough, did not happen in Europe, they kept using them. So we can look back on that time as one in which people were environmentally very aware in this country, not just on the issue of the ozone layer, but also for things like smog and clean water, all those issues had attracted a lot of attention right around this time. I will also say that it’s interesting that at the time of the Molina & Rowland work, they were talking about the fact that from the best of our understanding of the day, in a hundred years we might see a few percent decrease in the total amount of ozone.

So kind of a small effect, far in the future, kind of like the way some people used to talk about climate change until maybe this year or the last few. And the Antarctic ozone hole was a huge wake-up call because what they found was that ozone had dropped by more than 30% over Antarctica already by 1985, something that no one had anticipated. So it was a huge shock to the science community. And at first a fair number of people didn’t really take it seriously. I can remember being in scientific meetings with people who said “Oh, that British group, they must just be wrong”. I won’t say who they were, but it of course turned out that they weren’t wrong. They were confirmed quickly by other stations in the Antarctic and also by satellite data. And we now understand, much, much better than we did before, the chemistry that actually made the chlorofluorocarbons even more damaging than we thought they would be.

Lucas Perry: Could you explain a little bit of the scientific mystery and the scientific inquiry that Molina and Rowland were engaged in, the kinds of hypotheses that they had, and the steps in going from “okay, there are lots of CFCs in the lower atmosphere” to eventually understanding the chemical pathways and their role in ozone depletion?

Susan Solomon: The big issue that they had to deal with was how do these compounds get destroyed, and what is their atmospheric lifetime? And they actually went into the laboratory themselves to try to make measurements relating to that. So they were able to show, I think through the measurements that they made, that the CFCs didn’t react with… They didn’t rain out, that they weren’t water-soluble, so that was not an issue. That they didn’t react with sand, there was some idea that somebody had suggested that they would be destroyed on the sands of the Sahara and that turned out, of course, not to be true. And then they looked at the way in which they would break down, what would happen to them? If they have no way to break down in the lower atmosphere, the only place for them to go is up, up, up, up, and as you go up, you reach much more energetic sunlight the higher you go.

Obviously if you’re at the limits of space, you’re getting the direct light from the full spectrum of what the sun can put out. But if you’re down on the ground, you’ve got a lot of atmospheric attenuation. So they began to realize that once these molecules got into the stratosphere, they would eventually break down and make chlorine atoms, and it was known already that those chlorine atoms could go on to react with ozone. And then there’s another process, which I’m not going to go into, that actually leads to a catalytic cycle that destroys ozone pretty effectively, and that process was already known from other work.

Lucas Perry: Could you actually say a word or two about that? Because I actually think it’s interesting how a single chlorine atom can destroy so much ozone.

Susan Solomon: Sure. The chlorine atom reacts with ozone, that makes chlorine monoxide plus O2. Now, if that was all there was, it would be a one-way process, you could never deplete more ozone than the chlorine that you put in. But what happens is that the chlorine monoxide can react with atomic oxygen, for example, so there’s… If you go up into the well… In the lower atmosphere, most of the oxygen as we know is in the form of O2, right? So it’s the oxygen that we breathe is O2. That’s actually true as far as total oxygen, pretty much all the way up, but as you get up into the stratosphere, oxygen actually also encounters that high energy ultraviolet light, which breaks it down and makes atomic oxygen, and ozone can also be broken down by high energy light, and that makes it atomic oxygen, ozone is O3.

So basically what first happens is the O2 breaks down with ultraviolet light making atomic oxygen, the atomic oxygen reacts with another O2 to make ozone, but then the ozone, let’s say photolyzes to make O, so now if the O comes along and reacts with the ClO, making chlorine atoms again, plus O2, you’ve liberated the chlorine atom, it can go right back around and do it over, and over, and over, and over.

And the reason that’s actually happening is that in the stratosphere, in the sunlit atmosphere, ozone and atomic oxygen are exchanging with each other really quickly, so there’s always some of both present anytime the sun is up. At night the O goes away, but during the day, there’s enough sunlight breaking up ozone to make some O. So you can just drive this catalytic cycle over and over again, and you can destroy hundreds of thousands of ozone molecules with one chlorine atom from a CFC molecule, in the timescale that this stuff is in the stratosphere.
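Written out, the cycle Dr. Solomon describes looks like this (a summary of the reactions she mentions, where \(h\nu\) denotes a photon of ultraviolet light):

$$
\begin{aligned}
\mathrm{O_2} + h\nu &\rightarrow \mathrm{O} + \mathrm{O} \\
\mathrm{O} + \mathrm{O_2} &\rightarrow \mathrm{O_3} \\
\mathrm{O_3} + h\nu &\rightarrow \mathrm{O_2} + \mathrm{O} \\
\mathrm{Cl} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2} \\
\mathrm{ClO} + \mathrm{O} &\rightarrow \mathrm{Cl} + \mathrm{O_2} \\
\text{net of the last two steps:}\quad \mathrm{O_3} + \mathrm{O} &\rightarrow 2\,\mathrm{O_2}
\end{aligned}
$$

Because the chlorine atom comes out of the final step unchanged, it can run through the two chlorine reactions again and again, which is why a single chlorine atom can destroy a very large number of ozone molecules.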

Lucas Perry: Right, and so I think the little bit of information about that is that the chloro in the chlorofluorocarbon meaning… That means chlorine, right? So there’s these chlorine atoms that are getting chopped off of them, and then once they’re free in the atmosphere they can be used to basically slice many ozone molecules, and the ozone molecules are heavier and more dense and so reflect more UV light?

Susan Solomon: No, no, no, no, no. Density and heaviness has nothing to do with it. The ozone molecule is just capable of absorbing certain wavelengths of ultraviolet light that no other molecule can in our atmosphere to any appreciable extent. That’s why it’s so important to life on the planet surface. It’s just a really good light absorber at certain wavelengths.

Lucas Perry: Okay. And so at this point for Mario Molina and sorry, it’s Sherry Rowland?

Susan Solomon: Yes.

Lucas Perry: And so for both of them, this is still theoretical or model based work, right? There hasn’t been any empirical verification of this.

Susan Solomon: That’s right.

Lucas Perry: So could you explain then how we move from these lab based models to actual empirical evidence of these reactions happening, and I guess starting with where Robert Watson fits in?

Susan Solomon: Bob Watson was a chemical kineticist originally, and actually had measured some of the reactions that we’ve just been talking about in the laboratory. So what people do is they go in the lab, I should have said this earlier, they go in the lab and they evaluate how fast individual chemical processes can happen, it’s really very elegant work, and flow tubes, and lasers, and all that kind of stuff. And it’s something that Watson was known for but he got an opportunity to become a leader of a research program at NASA, which he took up, and he became very much a leader in the community, as far as both organizing missions, field missions to go out there and look at things in real time, and more importantly perhaps, a huge leader in the assessment process which brought the scientists and the policy makers together. I think that you can really look at Bob as a tremendous founder of the whole way that we do scientific assessment, together with a gentleman named Bert Bolin, who has passed away unfortunately.

Lucas Perry: After Watson, could you then bring in how Joe Farman fits in?

Susan Solomon: Yeah, Joe Farman led the British group that discovered the ozone hole, as I mentioned earlier. So they noticed that the ozone over their station at Halley, Antarctica just seemed to be decreasing at an alarming rate. And they checked it with another station that they have, which is at a slightly higher latitude, not quite as far to the pole… I should have said lower latitude. Anyway, 65 South is where the other station is; I think Halley is about 73 South, it might be 78. And they found that there was ozone being lost at the other station too, just not as much. And that’s when they decided that they just had to publish, so they did, and it attracted my attention along with the attention of a lot of people. I started working on what chemistry could possibly cause this, and what I came up with was “Hey, maybe…” And we knew that there was no ozone hole over the Arctic.

We knew it was only over the Antarctic, because we had measurements from places like Sweden and Norway and Canada, if anything was happening, it was nothing like the Antarctic. So measurements in other places showed ups and downs, variability from year to year, but they weren’t showing any kind of trend at that point. They later did, and we can talk about that, but we’re talking about 1985 here, so really early. I was a young scientist, I was 29 at the time, and I decided that I was going to try to take my photochemical model and beat on it and pound on it and make it stand on its head until it produced an ozone hole.

And so I did that, and I figured out that the reason that it was happening was because Antarctica really is the coldest place on earth, and it’s so cold that clouds form in the Antarctic stratosphere. The stratosphere is very dry, so normally there just aren’t any clouds, but down in the Antarctic because it’s so cold, the vapors, mainly water vapor but also actually nitric acid and other things can condense and form these incredible polar stratospheric clouds. And the clouds completely changed the chemistry, we can talk about that, but I think I’ve maybe gone on too long for my enthusiasm for which I apologize.

Lucas Perry: Hey, this is post-podcast Lucas. I’d like to add some more details here around the story of Joseph Farman’s discovery of the ozone hole, to paint a bit of a better picture. I’m taking this from the UC Berkeley website, and you can find a link to the article in the description: Dr. Farman started collecting atmospheric data at Halley Bay, Antarctica in 1957, sending a team to measure ozone levels and concentrations of gases like CFCs. In 1982 his ozone readings showed a dramatic dip of around 40%. He was initially skeptical that this was an accurate reading and thought it must have been an instrument malfunction due to the severe Antarctic cold. He also reasoned that NASA had been collecting atmospheric data from all over the world and hadn’t reported any anomalies. His instrument was ground-based and only had a single data point, which was the atmosphere directly above it. Surely, he reasoned, NASA’s thousands of data points would have revealed such a drop in ozone if there had been one. Given this reasoning, he ordered a new instrument for the next year’s measurements.

The following year, Dr. Farman still found a drastic decline, and, going through his old data, discovered the decline had actually started back in 1977. He now suspected that something odd was happening strictly over Halley Bay, leaving other areas unaffected. So the next year, his team took measurements from a different location 1,000 miles northwest of Halley Bay and discovered a large decline in ozone there as well. With the same finding at two different locations, the mounting evidence for the ozone hole was clear, and he decided to publish his data. This data both shocked and intrigued many scientists, including Susan Solomon, catalyzing further research and inquiry into the ozone hole, the mechanism that was creating it, and the governance and industrial solutions needed to work towards mending it. Alright, back to the episode.

Stephen Andersen: Let me just say one thing before we go back to the ozone hole. One of the interesting things that happened was, of course, that Farman and his research group declared that there was this serious depletion happening in Antarctica. So all the scientists that had been building the case with Sherry Rowland and with Mario Molina instantly jumped on it in the press, and in fact it was Sherry Rowland who coined the phrase “ozone hole.” He was the first person to utter that phrase.

And that was also something the public could grasp: they could look at the NASA graphics, they could talk to scientists. There had really been a great expectation that someday there would be a smoking gun like this Antarctic ozone hole or other evidence, and people were ready and prepared to go to the press and go to the public. In fact, the politicians by this time had been briefed a lot, and the United Nations had been working on this since 1970, when it organized a working group on stratospheric ozone depletion. So this great science and these great scientists were welcomed into the community, and they took full advantage of it, and then other great scientists like Susan jumped on it to say, “Well, how can we go beyond simply finding the ozone depletion and track it back to its origin, the CFCs and the other ozone-depleting substances?” So it was science and politics at its best.

Susan Solomon: Yeah, I guess I also want to say that I didn’t assume that the ozone hole was necessarily due to chlorofluorocarbons. I tried to produce it all kinds of ways: with reactive nitrogen from the aurora, with dynamical changes. I just couldn’t get it to happen any other way. And what we knew already, and again I think it’s a real achievement, was that people had been interested in the idea of reactions on surfaces for a while, but mainly because they thought those reactions were perhaps interfering with the measurements they were trying to make in those flow tubes. The flow tube is basically just a glass tube, and people assumed that there was no surface chemistry that could happen in the stratosphere. We know there is chemistry on surfaces in the lower atmosphere, in the troposphere, and it can be really important. Acid rain is a great example.

Surfaces can make chemistry do things that just don’t happen in the gas phase. That’s why you have a catalytic converter in your car: it’s a surface that converts the pollutants into something else before they get out the tailpipe. Surfaces lead to chemistry happening very differently from the gas phase. And we assumed the stratosphere was just gas phase, that there couldn’t possibly be any surfaces. But interestingly, we sort of knew that there were these polar stratospheric clouds; they’d been observed by explorers going back, I think, 200 years in the Arctic and 120 or so in the Antarctic. We knew they were there, we just didn’t really carefully evaluate their chemistry. But when people started doing these experiments in the laboratory, they thought certain processes were actually going on in the gas phase. They saw, for example, certain kinds of chlorine molecules going away in their flow tubes, and they thought they’d discovered some new gas phase chemistry; it turned out to be something happening on the surface.

And they said, “Oh, okay, doesn’t matter. It’s just on the surface.” Well, it turned out to be not just on the surface of the flow tube, but also on the surface of those polar stratospheric clouds. And that’s actually the connection that I made. I thought, “Hey, if this is happening in the lab, there’s no reason, necessarily, that it couldn’t also happen on polar stratospheric clouds.” Now that was a leap that perhaps I shouldn’t have taken, but anyway, I did.

Lucas Perry: It’s good that you did. Yeah, could you explain more about what that moment was like? I mean, that was basically a key, super important scientific discovery.

Susan Solomon: Yeah. I had a very hard time believing it when I… This was back in the days when I was running a computer model, the days when you would wait a long time for your output because things were very, very slow. I don’t remember. I don’t think it was still in the computer punch card days. I think I actually did have a file that I submitted, but the wait for getting it back, I think, felt interminable. And when I did get the results back, I was just shocked to see how ozone behaved. And one of the key things about it is that it doesn’t happen in the winter. In the dead of winter, when the polar regions are dark, this process won’t be very important. You have to have not only cold temperatures so the polar stratospheric clouds are there, you also need sunlight to drive certain parts of the chemistry.

And I could go into the details of that, but I’m not sure you need me to do that. It’s a process, then, that occurs in the Antarctic spring: as it comes out of its long period of dark, cold winter, it’s still cold, but the sun starts coming back. And that’s the combination that then drives the ozone depletion. And that began to start happening in my model. So I was pretty shocked. It wasn’t quite for the right reason, I have to admit. The process that I had driving that final step of… What I did identify correctly was that the key reaction is hydrochloric acid and chlorine nitrate, from the chlorofluorocarbons, reacting together on the surface of the polar stratospheric clouds; they do not react in the gas phase.

We thought maybe they did at one time, but then we figured out it was just on the surface of the flow tube, so everybody forgot about it, until I remembered. And then the hydrochloric acid and chlorine nitrate react on the surface of those clouds. That makes molecular chlorine, Cl2, which photolyzes, breaks apart very readily with sunlight. That makes chlorine atoms, and now you’re off and running to produce ozone depletion. So, that part I had all correct. What I thought was that the chlorine monoxide might react with HO2 to close the catalytic cycle, because you don’t have much atomic oxygen in the lower stratosphere where the ozone hole was happening. You need to close that catalytic cycle, which we were talking about earlier, with something else. It turned out that really the key thing is ClO reacting with itself to make something called a ClO dimer, which then photolyzes. But we didn’t actually know that chemistry yet. We learned about it not too long after; that was discovered in ’87.
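
For readers who want the chemistry Susan is describing spelled out, here is a minimal sketch of the now-standard reaction scheme, written in conventional notation; the rate constants and branching details are beyond the scope of this episode.

```latex
% Chlorine activation on polar stratospheric cloud (PSC) surfaces,
% followed by photolysis when sunlight returns in Antarctic spring:
\begin{align*}
\mathrm{HCl + ClONO_2} &\rightarrow \mathrm{Cl_2 + HNO_3} \quad \text{(on PSC surfaces)} \\
\mathrm{Cl_2} + h\nu &\rightarrow 2\,\mathrm{Cl} \\[6pt]
% The ClO-dimer catalytic cycle, worked out in 1987:
\mathrm{ClO + ClO + M} &\rightarrow \mathrm{Cl_2O_2 + M} \\
\mathrm{Cl_2O_2} + h\nu &\rightarrow \mathrm{Cl + ClOO} \\
\mathrm{ClOO + M} &\rightarrow \mathrm{Cl + O_2 + M} \\
2\,(\mathrm{Cl + O_3} &\rightarrow \mathrm{ClO + O_2}) \\
\text{Net:}\quad 2\,\mathrm{O_3} &\rightarrow 3\,\mathrm{O_2}
\end{align*}
```

The net result is the point of the story: two ozone molecules are converted to ordinary oxygen while the chlorine is regenerated, so a small amount of activated chlorine can destroy a great deal of ozone.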

Lucas Perry: I see. So, essentially there were these glass tubes in labs where the scientists at the time were trying to basically recreate atmospheric conditions, in order to experimentally test the models that they were generating for what happens to ozone up in the atmosphere. And because it’s a glass tube, there’s a lot of surface on it, and so they were discounting what they were observing in that glass tube, because they were saying the upper atmosphere doesn’t have any surfaces, so any surface-related reactions don’t really make any sense.

Susan Solomon: Right. That’s basically it.

Lucas Perry: So, you were looking at that; what made you think that maybe there were surfaces in the sky?

Susan Solomon: Well, I mean, we knew that polar stratospheric clouds could happen in the Antarctic and also in the Arctic. Like I said, people have seen them visually… You can see them. I’ve seen them myself. They’re clouds. They’re actually very beautiful. They have almost a rainbowy kind of appearance, because the particles are almost all one size, and that creates a particular kind of beauty when the sun hits them. But yeah, you can see them; they’d been seen, literally. There were also satellite data that had been published a couple of years earlier that helped inspire me to think about it. But I actually knew about the explorers, I was just intrigued by that kind of stuff. So then I was very excited to work with Bob Watson when we formulated a mission to go to the Antarctic and actually go down there and make measurements that might help to determine whether reactions like that were indeed happening. And that happened in 1986.

Lucas Perry: Right. So you’re creating these models that include the surface reactions. And you’ve got this 1980s computer that you’re submitting this file to, and… What do you get back from that model? And how does that motivate your expedition to go there and get measurements?

Susan Solomon: If I remember, I had it programmed to make plots of the percent change in ozone, and there was this, I didn’t call it a hole at the time, but there was this area over the Antarctic where, once I put those reactions in, I got a lot less ozone. I recall something like 30% less. It’s in the paper I published on this in 1986. So I wrote it up and submitted it to Nature, and it was published in ’86, and that was the same year that a lot of us began thinking about how to get down there and test the different ideas that had been put out, because the idea of chemistry involving chlorofluorocarbons was not the only idea out there; other people had meteorological theories. And as I mentioned, there was this possibility that it might be solar activity, I guess somebody thought about, so the…

Scientists are always stimulated to come up with ideas, and we needed to get down there and make the measurements that could discriminate between the different ideas. So, I was very fortunate to be young and able to get on an airplane and go to the Antarctic. So I did, in 1986. It was great. Most incredible scientific experience in my life, actually.

Lucas Perry: What made it so incredible, and what is it that you saw? What were your favorite parts about your expedition to the Antarctic?

Susan Solomon: Well, just going to the Antarctic is an unbelievable experience. I mean, even if you just go on a cruise ship, it’s like another planet. It is crystalline, beautiful, unpolluted, full of optical effects that are just amazing. And of course, brutally, brutally cold. We went down in August of 1986. When I got off the plane the temperature was about -40°C, which is also -40°F. So I like to joke that if you’re ever on Jeopardy and the Final Jeopardy question is, “At what temperature are Celsius and Fahrenheit the same?” The answer is -40. I’m originally from Chicago, I’ve been in cold weather, but I’ve never been in anything colder than I think about -15 before. And it was, it’s a shocker.
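
As a quick check of that Jeopardy fact, here is the one-line arithmetic behind it:

```latex
% Fahrenheit-Celsius conversion: F = (9/5) C + 32.
% Setting F = C to find the temperature at which the two scales agree:
\[
C = \tfrac{9}{5}C + 32
\;\Rightarrow\;
-\tfrac{4}{5}C = 32
\;\Rightarrow\;
C = -40 .
\]
```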

But after a while, after a couple of weeks, -15 actually feels very warm. You really do. It’s amazing how you acclimatize. Everybody laughs, Stephen’s laughing as I’m saying this, but it’s true, it’s true really. I thought I would just kind of curl up in my room the whole time, but I didn’t; I found that it was easy to acclimatize. Yeah, really. Well, actually people do. People actually even go jogging with shorts on at -15. Yeah, it’s incredible. Depends on if it’s sunny or not. The atmosphere is very dry also down in the Antarctic. Basically, the cold has wrung all the water vapor out of the air. So-

Lucas Perry: Did you go jogging in your shorts and T-shirt at -15?

Susan Solomon: No, no, I didn’t do that. But I certainly remember feeling warm at -15 and opening up my jacket and stuff like that. And I definitely kept my window open if it was -15. So yeah, I did. But, I made some measurements with my colleagues using visible spectroscopy. So we use the sun, or the moon, or the sky as a light source, and we measured chlorine dioxide, which is a molecule closely related to chlorine monoxide, and we were able to show, particularly with the moonlight measurements, that the values we found were a hundred times more than they should have been. We couldn’t measure them anywhere else because they were below our detection limit, but they were actually quite measurable in the Antarctic. So, that was the key measurement that we made, and it was an incredible night, the night that we actually did that. And then, I think it was the next day that I did the data analysis, and there it was. It was an amazing, amazing moment.

Lucas Perry: Could you explain more about how that particular data that you measured fit into the analysis and what it proved, and the feelings and experience of what it was like to finally have an answer?

Susan Solomon: First of all, there’s the getting of the data, which involves putting mirrors up on the roof of a little building in the Antarctic and directing the moonlight right down into the instrument. And doing that when it’s cold and windy can be a bit of a challenge. So setting it up for measurement is physically challenging. And then there’s taking the data and analyzing the data. I was, I think, careful enough to realize that that wasn’t going to be the only thing it would take to convince everyone that chlorine was the cause of the ozone hole. The chlorine dioxide that we measured had to have come from the chlorofluorocarbons; there was no other even conceivable source for it. And there was a hundred times more of it than there should have been, and that’s because the reactive species had been liberated: chlorine dioxide is a reactive form of chlorine that had been freed from un-reactive forms of chlorine, like hydrochloric acid and chlorine nitrate, which reacted on the surfaces of those clouds. And they don’t do that anywhere else.

So that’s a little more detailed than I thought you might’ve wanted, but that’s why you take these what we call reservoir species for chlorine, hydrochloric acid and chlorine nitrate, and you convert them to active chlorine. And now you’re really off and running for ozone depletion. And that’s what happens on those clouds.

Lucas Perry: You’re getting this data about these particular chemical molecules, and then you’re… Tell me a little bit more about the actual analysis, and what it’s like being in the Antarctic feeling like you’ve discovered why there is a potentially world-ending hole.

Susan Solomon: Well, I’ll tell you this, I was really careful, I think, maybe Stephen can correct me if he thinks I’m wrong, but I was pretty careful about not broadcasting the news before we were really, really sure. So the moon measurements alone were not enough to convince me. And one of the things that actually excited me a lot was when we realized we could also see this in the scattered sunlight that we got. If the sun was low enough on the horizon, there was even enough chlorine dioxide to measure it then. So what it is is a visible spectrograph; it’s got a diode array in it, actually very similar to the diode array that reads the prices when you go to the supermarket today. Back in the ’80s those were incredibly expensive because they had just been invented. And we had one that was cooled to very cold temperatures to keep it from having too much noise in it.

And we had a spectrograph, which you can think of as being sort of like a prism that you shine sunlight through: you separate out the wavelengths of light, and the colors come out as a little rainbow, like the one you see when you put a crystal in front of a source of light. And so that’s essentially what we’re doing. We’re putting a grating, in this case a diffraction grating, in the beam of the moonlight, we’re collecting the separated wavelengths of light on our detector, and we’re looking for the absorption of atmospheric chemicals. We can measure ozone that way, we can measure nitrogen dioxide at a different wavelength, but chlorine dioxide has a particular band structure in the visible, and that is what we measured. And we can also see it. The fact that we could also see it in the skylight, and that the difference between the skylight and the moonlight was consistent with what we expected from the chemistry and consistent with what you would need to deplete the ozone layer, got me pretty excited.
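
The measurement principle Susan is describing is absorption spectroscopy. As a minimal sketch (the group’s actual retrieval involved a more careful differential fit than shown here), the light surviving the path through the atmosphere follows the Beer-Lambert law:

```latex
% Beer-Lambert attenuation along the light path through the atmosphere:
\[
I(\lambda) \;=\; I_0(\lambda)\,
\exp\!\Big(-\textstyle\sum_i \sigma_i(\lambda)\, N_i\Big)
\]
% I_0(\lambda): light entering the atmosphere (moon, sun, or scattered skylight)
% I(\lambda):   light reaching the spectrograph at wavelength \lambda
% \sigma_i(\lambda): absorption cross-section of species i (e.g. OClO, O3, NO2)
% N_i: slant column abundance of species i along the light path
```

Because each species absorbs with its own characteristic wavelength pattern, the “band structure in the visible” Susan mentions, fitting the measured spectrum lets you separate the contributions and estimate the individual column abundances.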

There was another group on our expedition that measured chlorine monoxide using a different method, a microwave emission technique, which is the same one that’s used nowadays from satellites, but in those days it was only used from the ground, and they also measured high levels of chlorine monoxide. And last but not least, I’ll say that the following year, in 1987, Watson organized another mission, which actually flew on airplanes from Chile down to Antarctica and measured chlorine monoxide yet another way, by laser resonance fluorescence onboard an airplane that literally flew right into the ozone hole. So I would say it’s fair to say that from the science community’s point of view, when all those measurements were in, people got pretty convinced, but they also had to be written up. I mean, it had to be peer-reviewed before something that important could really be talked about as a known piece of science. So I was very cautious about spreading the word too early.

Stephen Andersen: So if you look at the history of the Montreal Protocol, what you see is that in 1985 there was something called the Vienna Convention, passed by the United Nations, which about two dozen members signed. And this is what’s called a framework convention, which makes it possible to have something like the Montreal Protocol. So that was in the spring of 1985. And shortly thereafter, Farman published his article, which was a tremendous reinforcement to the policymakers who had anticipated that soon there would be evidence and that they needed to be prepared. And then as we went into 1986, and Susan is doing this brilliant work in Antarctica with her colleagues, the preparations are underway for the Montreal Protocol. And the scientists were telling the Montreal Protocol that there could be another explanation for the ozone hole, so you should hold your fire until you’re sure of the results.

And you can find that in lots of the accounts at the time. By the time of the meeting, in September of 1987, there was still a lot of uncertainty. But the policymakers were able to talk to their national scientists and others and felt confident enough to go ahead and confirm the Montreal Protocol. So I would view it, from my point of view and perspective, as a continuous improvement in the science, and the threshold of belief occurred for different people at different times. You could say, of course, that there were still skeptics in 1987, but it was my experience that they were mostly gone by 1990. In the work I was doing, mostly with corporations, there was rarely a meeting where there would be science skeptics after 1990; they were gone, and the skepticism has moved on to climate, in fact.

So it was a tremendous contribution of science. And the other thing that’s important to realize is that the industry that used these chemicals was not devoted to them. They’d been reassured by DuPont and others that they were completely safe and that there was no reason to worry. And then, as soon as the Antarctic ozone hole came along, they panicked and rushed to the market to find alternatives. And that’s one of the reasons the Montreal Protocol happened so fast: the industry in some ways was faster to grasp the science than even the policymakers.

Lucas Perry: Stephen, I’d love to bring you in here to describe, in particular, the transition from the discovery Susan was involved in to the Montreal Protocol. So, what are the steps between the discovery and the Montreal Protocol, and how do we eventually get to the Technology and Economic Assessment Panel that you co-founded?

Stephen Andersen: I’m glad to describe that. It’s exactly what Susan said: there’s a laborious process to prove the science, and then there’s another process to communicate it. And that’s partly what Susan did, and Bob Watson and another scientist, Dan Albritton. They were masters of communication. And there were lots of meetings held between the scientists and the diplomats, but also between the scientists and the companies. The National Academy of Engineering did its own review of the science on behalf of industry and came up with what I would call a confirmation report: no new science, a narrow view of the science, but nonetheless it was the message coming from the people they most respected, and Sherry Rowland was involved in that, and many, many others. So the communication was very quick. And in fact, I would say that by January of 1988, you could see big changes.

So the protocol was signed in September ’87. In January of ’88, there was a large conference, and at that conference there were several important announcements. The most spectacular was that AT&T announced that they had found a nature-based solvent, made from the terpenes from oranges and lemons and pine trees, that could clean half of the electronics equally well or better than the CFC-113 it replaced. And they said that transition was technically possible within one year. So they went from skepticism and standing back to becoming the driving force. And it was also important because this terpene was not another synthetic chemical. It was naturally derived, harvested from the disposal of the orange rinds and the lemon rinds, and then put to positive purpose. So this was an eye-opener to a lot of people that thought you had to have an elaborate chemical to have an elegant solution.

And then at that same meeting, the auto industry stepped forward and realized that most of the emissions from car air conditioning were from servicing and from leakage. And so for the first time they got together a partnership that developed commercial recycling for air conditioning. They did that within one year, and the next year, after they confirmed and approved the technology, they sold a billion dollars’ worth of recycling equipment all across the world. So there was this shift from the panic that there would be high costs and disruption to the enthusiasm of profits and saving money. It was the science that drove this, but it was the technical innovation that delivered it. And then very shortly thereafter, and in an overlapping way, there were similar breakthroughs in foam: a commitment by the American food packaging institute to halt use of CFCs in foam within one year, and to switch away from all fluorinated gases as quickly as possible.

So we’ve seen this building momentum and enthusiasm. You have international companies that are pledging to get out, and all the while we haven’t even reached the Montreal Protocol entering into force, because that occurs later, after the signing. And once it was signed, the first big assessment, which Susan was involved in as well, was done in 1989. And so this was the assessment that you alluded to: it included the Scientific Assessment Panel, it included the Environmental Effects Assessment Panel, and then it included the work on technology and economics. And this was the idea of how you make the best available information readily absorbed by policymakers and the business community.

Susan Solomon: But Steve, I would just add one thing to what you already said, and that is, it all starts with people understanding the problem. You talked about the fact that people everywhere, the public all around the world, could look at these satellite images of the ozone hole and say, “Hey, that’s actually pretty scary stuff.” And that created the will, the political will, that generated the demand for all the products that you’ve just described. I think without people understanding the whole thing, nothing happens, personally.

Stephen Andersen: You’re absolutely right. And in fact, in the case of that food packaging, it was a school teacher in Massachusetts and the children in her class who wrote to McDonald’s Corporation and said, “Why are you destroying the ozone layer?” So the people at McDonald’s commissioned a survey of their customers, including children, and the customers responded that they did not want to destroy the ozone layer, and that it made a big difference to where they chose to eat. And in the case of McDonald’s, children drive parents to the restaurant: the parents say, “Where do you want to go today?” and they say, “McDonald’s.” So it was a huge impact. It was an eyes-open business decision. And they had announced, prior to the packaging institute changing, that they were going to stop the purchases.

Susan Solomon: Yeah. They were putting hamburgers in foam clam shells, and they switched over to cardboard, which is fine because McDonald’s is so delicious. You eat it so fast anyway, you don’t need the foam.

Stephen Andersen: That’s right. Hot side hot, cold side cold, was the slogan. But this is exactly right. What Susan’s saying is, you have this circular effect, where you have the customers pushing the companies, you have the companies pushing their suppliers, and you have the policy makers setting deadlines. And pretty soon, you’ve got this wheel turning very fast. And as quickly as you catch up with the available technology, then you look to the next strengthening of the Montreal Protocol. And that’s what we saw over the decades of the Montreal Protocol, more and more chemicals control, faster and faster phase out.

Lucas Perry: Stephen, could you explain more specifically what the Montreal Protocol was, who the key stakeholders were, how it came together and then was signed and ratified, and what that meant?

Stephen Andersen: So, as for my role, I was very fortunate, because I was hired by the EPA in 1986 in preparation for the negotiations of the Montreal Protocol. I’m an economist by training. And so, I had the highest interest in showing that this was going to be cost-effective and feasible, and that the technology would come together. The mastermind behind the Montreal Protocol was the head of the United Nations Environment Program, Dr. Mostafa Tolba, who was a botanist himself and an accomplished author, and I think was very quick at grasping the science.

So you have the force of the United Nations organizing the meetings. And then you have the science that’s providing the justification for the treaty. And then you have leadership countries that were advocates of a treaty. That included a group that was called the Toronto Group, because it was partly stationed in Toronto, but it was the United States, Canada, Norway, Denmark, Sweden, and many other countries that got together as a group and helped craft the language that they could sell to other countries.

And so, it was a masterfully designed document in retrospect. And included in that document was the idea of start and strengthen. So if you look at it, it covered only two kinds of chemicals: CFCs, and then a fire-extinguishing agent called halon. And in the first negotiation in 1987, it was just to freeze the production of halon, stop it from growing, and cut back CFCs 50%. But that was not hard to do, because 30% of use was still aerosol and convenience cosmetic products. So it was a very conservative start. But the science was so persuasive in the years ahead that the scientists said to the policymakers at the Montreal Protocol: that’s not enough. You will not protect the ozone layer with those two chemicals, and you certainly won’t with those modest reductions. So then they added more CFCs. They added carbon tetrachloride, methyl chloroform, methyl bromide. A litany of chemicals were added. And then each time that they would meet, every two or three years, they would have an acceleration of the phase out.

So it was a very practical approach that was done on an international basis. And one of the beauties of this treaty, is it includes incredibly strong trade restrictions so that if a country did not join the Montreal Protocol, they would lose access to these ozone-depleting substances even before the phase out. So it had lots of clever features and lots of brilliant leadership. And what Susan said about people mattering, they mattered a lot over and over again. And there were 200 or 300 people that had a chance to become ozone champions and make a real difference to the world.

Lucas Perry: Could you explain who were the main stakeholders involved in the Montreal Protocol? Was this mostly developed nations leading it?

Stephen Andersen: That is a great question. That’s a fantastic question, and it explains a lot of why it was such a challenge. So if you look at the full set of chemicals that are controlled by the Montreal Protocol, they were divided into 240 separate sectors: distinguishable industry groups that had their own interests in keeping these chemicals or phasing them out. So if you look at those, and some of the ones that Susan mentioned, there’s air conditioning and refrigeration, and that includes industrial refrigeration, because many chemical processes require that, and commercial refrigeration, and also what’s called the cold chain, the processing and the freezing and refrigerating of food in order for it to reach market.

So that alone would have been daunting. But in addition to that, there were these chemicals used as solvents in aerospace, in electronics, in the manufacture of medical devices. They were used as a sterilant and, as I mentioned, as an aerosol for metered dose inhalers. They were used in fire-fighting, including in enclosed areas like airplanes and submarines and ferries and ships, places that you can’t evacuate if there’s a fire, where you have to stay on board the burning vehicle. It included all of the NASA satellites and the space labs and the rocket equipment; manufacture of the solid rocket motors for the space shuttle required methyl chloroform.

So you have these, and then there were the laboratory uses. It’s used as a tracer gas, as Susan mentioned. It was also used as a dense gas for wind tunnels, and for pressure-check testing of scientific instruments, to make sure there were no leakages of gases in or out. And as it got going, it was also discovered that every weapons system in every military organization depended on ozone-depleting substances. All the command centers were protected with halon. All the ships, tanks, submarines, protected by halon. All the electronics and aerospace manufactured and serviced with CFCs and methyl chloroform, all the gyroscope manufacturing for the weapons guidance. The whole list, all the way down. All the radars, all the AWACS. Everything that they could look at had some use.

And so it required these stakeholders to look fundamentally at the basis of what they were doing, and to decide how to shift from specifying this chemical to a performance standard that would allow industry to compete on how they could produce an alternative that would be a pure replacement. And one measure that I think your listeners will find interesting is that if you ask the public today, or even the affected industries, no one has stories of train wrecks or disappointment or failed systems, because it was so successful. Most consumers would have had their entire house changed, and they would not have noticed it. The glues they used to assemble furniture were ozone-depleting substances, but people have not stopped buying furniture. And if you go back, it’s the smallest list of uses that found no substitutes. So it’s quite remarkable.

Lucas Perry: Stephen, you were on this panel, I believe as the chair of it, for many years. So I’m curious if you could explain more about what that experience was like, what it is that you actually did on the Technology and Economic Assessment Panel, and what the impact of that was for implementing the solution that was needed after Susan helped to discover the mechanism of the problem?

Stephen Andersen: Yeah. Thank you for asking me that, because that’s what I’m most proud of, of course. When I was appointed with Vic Buxton, from Canada, to set up the first technical panel, we were like-minded and we had a great idea: instead of casting about for experts from various sources and seeking wide participation and balance of interests, we didn’t do that at all. We recruited the experts from the organizations that were already committed to protect the ozone layer, because these would be people motivated to find a solution, rather than people intellectually interested in describing solutions or, even worse, stakeholders against a new alternative who would become internal critics.

So the notion was that, on a technical committee, you could not have a better set of people than the people whose success in their enterprise depends on finding alternatives, and we realized that a team could find the alternatives faster than others. The other secret of our success is that we had something called self-effecting technical solutions. So, for example, one of the chairs of the Halon Committee, studying halon, was the chair of the National Fire Protection Association that set the standards for where halon is used. So as quickly as a use could be eliminated with an alternative, he would go back to his committee and decertify halon for that use.

We had members of the Coast Guard on the committee, and as quickly as there were alternatives on ships, they removed the use of halon from the requirements of the United Nations maritime organization. So it went from compelling the use for safety to prohibiting the use for the environment. So it was this remarkable internal group. And if you go back, you’ll also notice that some of the most important technologies were invented by people who met for the first time on the committee. You had groups of military suppliers that got together with the telecommunications suppliers and invented something called no-clean soldering, which eliminated the use of solvents and saved the ozone layer, but also increased the reliability of the products. And they were so enthusiastic about commercialization that they patented the technology and then donated it to the public domain, so it could be used anywhere in the world at no expense to the user.

So you have this enthusiastic group of genius engineers working on a short deadline and constantly resupplied with motivation from scientists like Susan, because as fast as they would take satisfaction in what they’d accomplish, they were being told, it’s not enough. It’s not enough to just do these chemicals. We have to do more. It’s not enough to do these chemicals on the old schedule, we have to go faster. So some of these sectors halted their uses years ahead of the deadlines of the Montreal Protocol or the Clean Air Act. It was really quite inspirational. And most of those people would tell you, it was the best part of their life because they never would have been allowed to work with the engineers from the competing corporations if it hadn’t been for the TEAP drawing those together for public purpose.

Lucas Perry: So, is a good way to characterize this, then, that there’s this huge set of compounds that, when they get up into the upper atmosphere, release chlorine? And the chlorine is really the big issue. And these chlorofluorocarbons and related compounds are being used in so many applications; you’ve described a lot of them. And so the job of this committee is to, one, slowly, through regulation, phase out this long list of ozone-depleting chemicals, while also generating alternatives to their use that are not ozone-depleting.

Stephen Andersen: Yeah. That’s right. Generating or identifying. And there’s a subtle problem we faced, which is now being faced again for climate. As Susan mentioned, most of these chemicals have long atmospheric lifetimes. So when you stop producing the CFCs, there can be only a slow decline in the chlorine that’s been contributed to the stratosphere over many, many years. Other chemicals, like methyl chloroform and most of the HCFCs, have short lives. So any reductions you make in the chemicals that do all their damage within a short number of years have a bigger immediate effect than the same amount of effort on one of the long-lived chemicals. And what the scientists were telling the technical committees and the Montreal Protocol was that we had to worry about the long run and the short run.
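
To make the lifetime point concrete, here is a minimal sketch of the standard single-reservoir decay approximation; the lifetimes in the comments are rough textbook values used for illustration, not numbers quoted in this episode.

```latex
% Single-reservoir approximation for a gas after emissions stop:
\[
C(t) \;=\; C_0\, e^{-t/\tau}
\]
% C_0:  concentration at the moment production and emissions cease
% \tau: atmospheric lifetime of the gas
% With \tau of roughly 5 years (e.g. methyl chloroform), most of the gas is gone
% within about a decade, so a cut pays off almost immediately.
% With \tau of many decades (e.g. CFC-11, CFC-12), chlorine keeps reaching the
% stratosphere long after production ends.
```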

And so HCFCs as refrigerants and foam-blowing agents were viewed as transitional substances. So if you stopped using CFC-11 and started using HCFC-22, that was an improvement in both the GWP and the forcing of ozone depletion, and the same thing for methyl chloroform. So the ambition of the Montreal Protocol was to work incredibly quickly to get rid of the short-lived chemicals and of the uses that had an alternative, solvents using methyl chloroform, for example, but at the same time allow some HCFCs so that you didn’t have to endure the continued use of the CFCs. And that was the technical challenge: to keep your eye on the long run and, at the same time, keep your eye on the short run. And some of the scientists were further motivating that kind of ambition, because there was a concern that we might go too far in sending chlorine and bromine to the stratosphere and do irreparable damage, or damage that would take much longer to solve.

So, true or not, it was highly motivational. And it caused a tremendous effort on our committees, first of all, to get rid of methyl chloroform. If you look at the curve of methyl chloroform and the overall ozone protection, it was a critical first step. And it was accomplished probably in two and a half years worldwide.

Susan Solomon: So let me toot Stephen’s horn a little bit, and then also clarify one point. I think the invention of the Technology and Economic Assessment Panel, TEAP, as we call it, of the Montreal Protocol was a real master stroke, because it brought the engineers and scientists from industry into the process to help figure out what could be done. And the way the assessment process worked is on a systematic basis: the science group that I was part of would assess the science, and Steve’s group would assess, okay, the science says we’ve got to phase these things out. What can we phase out? What is technically feasible?

And we would provide these reports, along with the one from the impacts panel, that would say if you keep doing this, you’re going to have so many skin cancer cases a year by 2050 and stuff like that. All three of those reports would be explained to a group of policymakers in a UN meeting. So the decisions weren’t actually made by Steve. But Steve’s group was highly, highly influential in educating the policymakers and guiding them, really, on what would make the most sense, what could be done the most cost effectively, the most quickly, et cetera, et cetera. And then they made the decisions.

But the great thing about it is that it’s not a political group at all. In the old days, we would have called them a bunch of guys with slide rules. And that included the people who came from industry. They weren’t the political leaders of those companies; they were the people in the trenches trying to actually figure out what to do instead. And that’s what made it work so well. I really have often wished that we had a similar way of doing the Intergovernmental Panel on Climate Change assessment process. We have a science panel that’s pretty similar, but we don’t really have quite the same technology panel. And many people have commented that the technology panel that Steve put together was just huge in making the Montreal Protocol work as well as it has. And the ozone layer is actually finally beginning to heal. So it’s a real testimony to their success.

Lucas Perry: Yeah. Stephen, please, I invite you to toot your own horn a little bit more here, because your contribution was extremely significant towards the elimination of the ozone hole. I have a fun fact here that is from a paper of yours. So in 2007, Stephen, you and the Velders team published a paper on the importance of the Montreal Protocol in protecting climate. The team quantified the benefits of the Montreal Protocol and found that it helped prevent 11 billion metric tons of CO2-equivalent emissions per year, and delayed the impacts of climate change by seven to 12 years. So please, what are some of your favorite achievements? This is something you were involved in for decades.

Stephen Andersen: Yeah. That’s a great story, and I’m of course very glad to tell it. One of the things people who have worked with me for many years know about me is that I work by slogans a lot. I try to reduce my ambition to something that’s like a chant, or a short instruction that I can give myself to move ahead. And after years of working with Susan and many other scientists, and Mario Molina and Sherry Rowland, and struggling with these issues year after year after year, and waiting for the science to come on board, and there’d be a missing link, and there’d be something that was misinterpreted, and we’d have to go back to square one, I came up with the slogan: science is too important to leave to chance. And what that meant was that it was my job to ask, what kind of information are the policymakers missing? Those are the people I hang out with all the time.

Because at the same time, I was the deputy director for Stratospheric Ozone Protection at the EPA, and I was the liaison to the Department of Defense on climate and ozone layer protection. So I was in those meetings where people were trying to decide, is it worth investing another millions of dollars in this new technology, or should we do something else? And in working with Susan in 1995 on a joint report between the IPCC and the TEAP, I realized that the Montreal Protocol had done a lot for climate that wasn’t well appreciated over at the Montreal Protocol, and that these facts were available. So I put together what we called a dream team, which included Guus Velders, who was the lead author, a brilliant scientist from the Netherlands. It included David Fahey, one of Susan’s colleagues at NOAA, and John Daniel. And then it included Mack McFarland, a scientist who was once at NOAA but worked the better part of his career at DuPont. And then myself, who had been on the TEAP for oh so many years, and at EPA.

So the idea of the team was to say: just how big was the contribution of the Montreal Protocol to protecting the climate, and how do we communicate that to the Montreal Protocol so they would consider it part of their obligation and part of their legacy? Because my concern was that we were coming up to a very long interval in the HCFC phase-out years, where the Montreal Protocol had plateaued its ambition. They had accomplished so much, they were resting on their laurels, and they had lost this impulse to get more stringent.

So this committee was put together, and it quickly put together all the facts, incredibly complicated at the time, although people have done work like this since, confirming it. And this dream team came up with the conclusion that the Montreal Protocol had already accomplished more than the Kyoto Protocol could have accomplished if every party, every state government in the world, had joined Kyoto, and if all of them had met its obligations. So this was huge. It was shocking to us. It was shocking to the world. We brought it back the same year, 1997, excuse me, 2007, to the Montreal Protocol. And that year, they accelerated the HCFC phase-down.

So it was a tremendous victory. And it was exactly what I had hoped would happen: if we assembled the science in a new way that was headline news, the policymakers would get the message and do something important. And then two years later, the same team decided, well, why don’t we show the Montreal Protocol how important it could be if HFCs, the chemicals that replaced 15% of the ozone-depleting chemicals, were phased down under the Montreal Protocol, ozone-safe chemicals controlled by the Montreal Protocol. And that was accomplished in 2016. It took a decade. But we’re very proud of ourselves. And I think it’s a perfect example of the advantage of a group of people with a wide set of skills working together, including somebody like me, who’s not an atmospheric scientist, working with atmospheric scientists and making more clear what the policymakers need to know.

Lucas Perry: I’d love to pivot now into extracting some of the lessons that you’re sharing, Stephen, for how we might do better with modern climate issues from greenhouse gases, and also with other global catastrophic risks. But I think you both have done an excellent job of telling the story of the science and the discovery, and then also the strategic part, and the solution-making, of the story of the ozone hole. So as we get to the end of that, I’m curious if you have anything else you’d like to wrap up on about that story. What is a key insight? When you look back at everything that you’ve contributed and been through, what is it that stands out to you?

Stephen Andersen: My theory of change, I think, is the same as Susan’s: that people matter the most. The ability to bring the right people together, at the right place, with the right instructions, is bound to have an important conclusion. And one of the things that I always found was, if you have the best engineers, no matter what their attitude, if they have a goal that coincides with the environmental protection, they’ll find a solution. So I think this accumulating of the science and the engineers and coordinating of the activity, these assessment panels, are just everything; if you do that, you can’t help but be successful.

But the other thing that I’m realizing now is that, when you’re in a hurry, like we are with the climate, you have to take advantage of the existing institutions. So just as we added HFCs to the Montreal Protocol, I would like to add other chemicals: N2O, nitrous oxide, which is an ozone-depleting greenhouse gas that was neglected by the Montreal Protocol. And then there are other gases like methane that have nothing to do with the sectors that are involved in the Montreal Protocol, but the framework of the Montreal Protocol might be perfect for a methane treaty. So you might have the Montreal Protocol people help design a methane treaty. Or, if the Montreal Protocol can find its way to create new capacity with new skill sets, you could have methane drawn into the Montreal Protocol, because its genius is partly that you turn off the chemical at the source and force all the downstream changes to occur. And that’s different from something like EPA, where you often fix one part of the problem, catalytic converters and gasoline, and you implement that, but you’re not focusing on the big picture, which is electrification of the cars.

And so, there’s this inherent advantage of the Montreal Protocol, top down, turn off the chemical, bring on the technical information, bring people together for solutions, reinforce and reward. So I could go on for the longest time. I’m very enthusiastic about the success of the Montreal Protocol. I absolutely believe the lessons could be taken up better than they have been. And I think if they were taken up, we’d be well on our way in many other environmental problems.

Susan Solomon: Yeah. I like to tell students in my classes, and I feel like I’ve learned this over the years with Montreal, and I’ve seen it in so many other problems as well: there are three Ps that determine how well we do on any environmental problem. The first one is personal. Is the issue personal to me, to us? And in the case of the ozone issue, it was deeply personal because of skin cancer. I mean, cancer, it doesn’t get any more personal than cancer, right? But also all the other attendant things that it can do. The second P is perceptible. Is the issue perceptible to me? The best case is if I can see it with my own eyes, like smog, but seeing the satellite images that we talked about earlier, that was good enough to make it perceptible to a lot of people. And the third big P is, are there practical solutions? And that’s where Stephen’s type of work has been so important. So when you think about climate change and you think about the three Ps, people haven’t really considered it personal until pretty recently, because it seemed like a future problem, not a today problem. And we can talk all we want to about caring about our grandchildren, but really what we care about is us, right?

And we do care about our grandchildren, of course we do, but not as much as we care about us. That’s just a fact, I think. It’s natural and normal, and we don’t need to be embarrassed about it. Particularly when you’re talking about a future problem, you can always hope that there’ll be other solutions in place by then. So is it personal? For a long time we thought it wasn’t. Is it perceptible? For a long time we didn’t feel like it was. Nowadays, I would say more and more people are recognizing it as personal and perceptible. The kinds of things that have happened in the world this year have just been amazing, and have been wake-up calls, because so many places have flooded, so many places have had massive fires. These are all the sorts of issues that we knew were coming. So much erosion is going on because of rising sea levels.

And actually, when people would say to me, “Well, it’s not really perceptible yet,” my answer to that would be, “Yeah, I know, and it’s a problem, but it’s going to fix itself with time.” And I think we’re just about there; it has fixed itself. And then there’s that big third P: are the solutions practical? There’s been a lot of propaganda out there saying the solutions are not practical, but I think we’re reaching the point now where we recognize that they are. So I think that we’re really at a turning point on climate change.

Lucas Perry: I’d love to pivot here more into exploring lessons to extract for what we can do about climate change and other global catastrophic risks, for the theory of change about what we might do about those. And I’m also mindful of the time here, so if we could hit on these a little bit faster, that would be great. One thing I would like to hit on more clearly is: what is the bad thing that would have happened if the work of you, Stephen, and you, Susan, and all of the others involved in the discovery and solution of the ozone hole hadn’t happened, and we had just continued to use CFCs?

Susan Solomon: There is a lot of work on that now; people call it the “world avoided” scenarios. So what world did we avoid? Well, by mid-century, it would have been about a degree hotter than it’s actually going to get, and that’s a degree Celsius, by the way. So instead of just the warming from mainly CO2 and methane that we’re trying to avoid, we would have had an extra degree on top of that from CFCs that we would have had to avoid. That’s a big deal. Something like 20 million skin cancer cases in the United States by mid-century sticks in my mind, but I would have to check that number to be absolutely sure.

Stephen Andersen: Yeah, Susan’s absolutely right, and there’s the skin cancer and the cataracts. But one of the things you can look at is, you can say two interesting things. You could say, what if Molina and Rowland had not had this hypothesis? And certainly you could say, well, someone would have eventually. But if it had been five years later, or 10 years later, it would have been catastrophic, because it did take time, as Susan said. It does take time to make a hypothesis, to confirm it, to do the ground measurements, to do the aerial flights and so forth. So it was just in time, or a little bit late; it was really a tight schedule when you include diplomacy and corporate changes and all of those factors. But you can also look and say, what if the Montreal Protocol and Molina’s work had been delayed some period of time?

And what you can see is exactly what Susan said: the CFCs would have grown in climate forcing, let alone ozone depletion, to a level that would have been untenable for Earth. They could have been at almost the same level as the CO2 climate forcing. So we were incredibly fortunate to have this early announcement. It was incredibly fortunate that the Antarctic ozone hole was noticed, finally, and announced, and that it was such a spectacular persuasion. And then it was fortunate that the Montreal Protocol was able to take this on and, following from that, that the corporations were able to make their reductions. I also think it’s important to remember that this really was a training ground for a lot of people; scientists had not worked together successfully on assessments this large and this continuously brilliant over so many years. If you look back at each of the assessment panels, I find almost no valid criticism of any of the findings at any point in time. It was a well-done process, actually stunning.

So on the world avoided: if you read the Nobel award citation for 1995 for Crutzen and Molina and Rowland, it says that life on earth would not have been possible as we know it. If you read Paul Newman’s report on the world avoided, you find out that it would have been untenable to go outside at most latitudes for very long without sun protection, far beyond what people wear when they go out today. So it would have been a lot of joy taken away, a lot of misery brought on by these medical effects. And it would be a less successful world, because the technologies that replaced the ozone-depleting substances are simply superior: better energy efficiency, less toxic, more durable, easier to repair, and more reliable. It’s quite a success story.

Lucas Perry: So let’s explore these Ps, as Susan has put it. We have the issue of greenhouse gases warming the climate today. And a lot of what you were involved in, Stephen, was the economics of making transitions: both the innovation required to replace HFCs, and also the questions around that being economical. So this is also the importance of industry being involved in the process. I’m curious if you could explain your experience with industry, how difficult or easy it was to get industry to make this transition, and how that compares to the transition that industry and governments need to make in the modern day around climate change. And how much of that difference is a bit circumstantial, around the technologies and innovations that need to be made?

Stephen Andersen: Yeah, that’s a great question. Susan, I think, agrees, and I agree, that some of the stakeholders on the climate side have been more persistently ruthless than what we experienced under the Montreal Protocol. In the early days, when Molina and Rowland came out with their announcement, the opposition could not have been more ruthless: character assassination, there were lists of people that were not to be hired, the regents of the University of California prohibited Sherwood Rowland from applying to certain organizations for funding, and on and on. But if you look at what’s been done by the coal companies and petroleum companies, I think that was orders of magnitude worse over a longer period of time, including interjecting a lot of wrong science over and over again, which was a terrible distraction and also took a lot of energy away. So there’s no doubt about the differences between the two. But what I have found working on the ozone layer is that even sectors that have a bad reputation on other topics can be leaders on a topic once motivated, as Susan Solomon says, and come to regret what they were doing.

I’ll give an example. The automotive industry was among the most rigorous in getting rid of their solvents. Their foam uses, the foams in cars, include what are called safety foams. Under the dash of the car originally, underneath the surface of the fabric, it was all ozone-depleting substances: for the foams, for the refrigerant, for the solvents used to make the components for the electronics, and so forth. But they looked at the science, they got motivated, and they stopped using these chemicals as quickly as the other sectors. They were one of the fastest to go. So my lesson is that you shouldn't judge a book by its cover. You should not hold it against an institution that it misbehaved in the past, and you should give it a chance to make a new start on leadership right now.

The other thing I would add is that the public right now is much more engaged in climate than they ever were in ozone layer protection. If you go back, very few of the industry projects had active involvement of non-government organizations or environmentalists, because it was being done so well by the companies themselves that it would have been a futile use of those talents. But if you look today, there are thousands of organizations that are demanding changes in industry. They're in the streets, they are protesting. This did not happen for the ozone layer; there was the smallest amount of that activity. So there's the difficulty of the fossil fuel industry, but it's offset by the ambition of the non-government organizations and some of the governments. So lots of things are happening there. What would you add, Susan?

Susan Solomon: Well, I think you summarized it pretty well. The other thing I would add is the engagement of young people today; they have been exercised and have become pretty upset about the future that they see themselves inheriting. Greta Thunberg has done a fantastic job, I think, of mobilizing worldwide in a sense. The tools that we now have with the internet have allowed that to become an international movement much more effectively than we ever could have in the 80s for climate. No matter how concerned we were, the telephone only worked so fast. So I think you see a public engagement on climate change that is driving a lot of what's going on and having a huge influence.

Lucas Perry: Is there anything that you both would suggest as actions, or things that are really important, for the generations that face climate change? What is it that they need to understand and do? I mean, we've been talking about making things practical and personal, and then is the last P perspective?

Susan Solomon: Personal, perceptible and practical.

Lucas Perry: Yeah, and so beyond these issues there's still a large problem with global catastrophic and existential risks beyond making them personal, and there is certainly difficulty around making action on them practical. With the last P, perceptible, a lot of people don't even agree about the science of climate change. So I'm wondering if there are any similarities between that and what you experienced with CFCs, and what you suggest there is to do about it. I mean, the climate issue has certainly become politicized.

Susan Solomon: Yeah, I would argue that it is becoming perceptible. I think most people around the world have noticed that the summers are hotter than they used to be, that heat waves are more extreme than they used to be. So perceptible is not so much the problem anymore, I don't believe. What is the problem is the practical: people believe that it would cost too much and that it's not practical to do it. That is becoming less and less tenable when it comes to power plants, for example. Nowadays, if you're going to build a new one, it is cheaper to do it with solar or onshore wind than with either coal or gas, and nuclear, of course, is more expensive still. So there has been a pivot in the power industry to renewables that's been very rapid. I think that we need to put a lot more investment into that, because it does take an upfront investment.

It’s true that with some of the things we could do for the ozone layer, we were getting rid of things that we didn't have a deep investment in. I mean, how invested are you in your can of spray deodorant in your medicine cabinet? Probably not very; you can throw it away, or maybe use it until it runs out and then go out and buy a roll-on, right? But probably a lot of people went out and just bought the roll-on because they figured it was a very good thing to do for the planet. So this is indeed a lot tougher, because of our investment in existing infrastructure, some of which is tremendously expensive. But I don't think there's any real barrier to making the transition.

And as soon as those things become… As soon as the alternatives become cheap enough, they essentially pay for themselves, because energy drives everything. So if we can make energy more cheaply with solar and wind, then the cost of doing absolutely anything that requires energy automatically becomes cheaper too. And that makes a sort of snowball effect of greater and greater demand. We need to make our grid more robust to things like intermittency and able to transmit electricity over broader spatial ranges. That is doable; other countries have already done it, actually, so it's not something we couldn't do. There are a lot of things that are beginning to happen quite quickly, and I'm very optimistic, but we do need some real changes in our existing infrastructure. There's no doubt about it.

Stephen Andersen: Yeah, so there are two things I would add. The first is that the United States is now behind the rest of the world on barrier removal. In Europe, you can use technology that's been in the market for five years now, proven safe, effective, and energy efficient, that is prohibited by the US EPA for use in the United States. So they are a wall against new technology where they used to be a door. And so somebody has to get in there and motivate and get approval.

Susan Solomon: Can you give an example?

Stephen Andersen: For example, there's a refrigerant called HFC-32 that has a third of the global warming potential and 20% higher energy efficiency. It's mildly flammable, but it hasn't been approved in the United States. Similarly, there are natural refrigerants that have not been approved. Another example, which is easy to understand: a decade or more ago, they approved a chemical with a GWP of less than one to replace HFC-134a, which has a GWP of 1,300. This was allowed for light trucks and cars. But the industry at that time did not apply for what are called highway trucks, the big trucks that move cargo, or off-road vehicles like farm tractors and construction, mining, and forestry equipment. The industry applied months and months ago to have this approved for that further equipment.

There’s no difference in using it on off-road equipment or on-road equipment, or a big truck or a small truck, because the cab of a big truck is about the same size as the interior of a car. But for some reason, the EPA has not finished that process, which is now way beyond the statutory time limit. And they say it's because they just haven't had time to do it. Well, this is not acceptable. You have to have a government moving at the pace of industry. And then the last thing, probably important: the United States military and military organizations all over the world were a part of protecting the ozone layer. So far that's not the case for climate. If you look inside the documentation of military organizations, they say climate change is a force multiplier, it makes everything worse for national security. They say it's an amplifier of conflict. There's tremendous concern about the displacement of populations and immigrants across borders, and distractions from security because you have to do humanitarian relief. That's all in the documentation.

But so far, mostly all they do is work on the resiliency of their own facilities. So they're doing what they need to do to protect against the effects of climate change, but they haven't engaged yet in stopping climate change, which is much more cost effective. The last thing you want to do is let climate change happen and then try to run away and hide and build against it, because that's brutally, brutally expensive. So I think those two things, if we were more aggressive on approving new technology, and if we had the military organizations involved as part of the skill set and part of the solutions, would go a long way.

Lucas Perry: As a final question here: Stephen, your panel, the Technology and Economic Assessment Panel, was super successful on this strategic and coordination front, on the technology and the replacement of the technology, which is something we're also just exploring. Does the Paris Climate Accord have anything similar? And do you see a panel like this as being crucial for the climate change crisis, and also for the governance around other global catastrophic and existential risks?

Stephen Andersen: Yeah, I think you're right. And Susan and I have both tried over many years to get the climate convention to do something like the TEAP. Recently, I realized that if you don't want to wait for the IPCC to do this, you could do it as a shadow TEAP. And it's very easy, I think, for an individual sector to organize itself under the same principles of being objective and including members that have a coincidence of interest in changing the market and changing the technology. You could put that together within an industry and then bring forward the solutions that you'd like to see implemented. And this is almost happening in Europe right now, because they're phasing down HFCs much faster than the United States. They're doing it on a sector by sector basis and they're involving the stakeholders. And the stakeholders have figured out that if they come to the EU with a single plan that covers their share of the goal, the European Union will approve that. If they come in with separate views and lots of disagreement, the EU will choose a plan for them.

So they have two choices: do it their way or do it the government's way. And so far, they have always chosen to do it their way, with the practical, cost-effective technology that they understand best. And that's exactly what the TEAP did. So I'm very enthusiastic about that model. In fact, if I were an industry, that's where I would put my money right now. I would try to say, how do we become the leaders on this, so that the activists that can cause mayhem in our company would say there's no use messing with that company, it shows initiative and runs ahead; let's go bother someone else, let them solve their problems, and we'll go on to a recalcitrant truck sector and give them bloody hell. I think that's a very persuasive argument.

Lucas Perry: All right, Stephen and Susan, thank you so much for coming on. If you have any final words for the audience about climate change, the ozone hole, and existential risk, here's a space for you to share them.

Susan Solomon: I hope we've given you some hope in this time of talking. I mean, it's easy to become kind of despondent about climate change, because there are terrible events making people suffer day after day. On the other hand, I really do believe there's light at the end of the tunnel. I think the ozone issue demonstrates that. And I think we are on the road to getting to a solution.

Stephen Andersen: Yeah, I would just add to that that organizations like Future of Life Institute are a tremendous part of the solution. I do believe that recognition and explanation and all of those things make a big difference to getting people motivated, to take on this very hard work. It’s work you love when it’s over, but it’s always hard while you’re doing it. So we have to have the highest motivation possible. And you folks are part of that solution.

Lucas Perry: Well, thank you very much, Stephen and Susan, for coming on the podcast, and also for your scientific and strategic coordination in response to a global risk in our lifetimes.

Susan Solomon: Thank you, Lucas.

Stephen Andersen: Thank you very much.

James Manyika on Global Economic and Technological Trends

  • The modern social contract
  • Reskilling, wage stagnation, and inequality
  • Technology induced unemployment
  • The structure of the global economy
  • The geographic concentration of economic growth

 

Watch the video version of this episode here

29:28 How does AI automation affect the virtuous and vicious versions of productivity growth?

38:06 Automation and reflecting on jobs lost, jobs gained, and jobs changed

43:15 AGI and automation

48:00 How do we address the issue of technology induced unemployment?

58:05 Developing countries and economies

1:01:29  The central forces in the global economy

1:07:36 The global economic center of gravity

1:09:42 Understanding the core impacts of AI

1:12:32 How do global catastrophic and existential risks fit into the modern global economy?

1:17:52 The economics of climate change and AI risk

1:20:50 Will we use AI technology like we’ve used fossil fuel technology?

1:24:34 The risks of AI contributing to inequality and bias

1:31:45 How do we integrate developing countries’ voices in the development and deployment of AI systems?

1:33:42 James’ core takeaway

1:37:19 Where to follow and learn more about James’ work

 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is with James Manyika, and is focused on global economic and technological trends. Just as the Agricultural and Industrial Revolutions both led to significant shifts in human labor and welfare, so too is the ongoing Digital Revolution, driven by innovations such as big data, AI, the digital economy, and robotics, radically affecting productivity, labor markets, and the future of work. And being in the midst of such radical change ourselves, it can be quite difficult to keep track of where exactly we are and where we’re heading. While this particular episode is not centrally focused on existential risk, we feel that it’s important to understand the current and projected impacts of technologies like AI, and the ongoing benefits and risks of their use to society at large, in order to increase our wisdom and understanding of what beneficial futures really consist of.

It’s in the spirit of this that we explore global economic and technological trends with James Manyika in this episode. James received a PhD from Oxford in AI and robotics, mathematics, and computer science. He is a senior partner at McKinsey & Company, as well as chairman and director of McKinsey Global Institute. James advised the chief executives and founders of many of the world’s leading tech companies on strategy and growth, product, and business innovation, and was also appointed by President Barack Obama to serve as vice chair of the Global Development Council at the White House. James is most recently the author of the book, No Ordinary Disruption: The Four Global Forces Breaking All the Trends. And it’s with that, I’m happy to present this interview with James Manyika.

To start things off here, I’m curious if you could start by explaining what you think are some of the most important problems in the world today.

James Manyika: Well, first of all, thank you for having me. Gosh, among the most important problems in the world, I think we have the challenge of climate change, I think we have the challenge of inequality, and I think we have the challenge that economic growth and development is happening unevenly. I should say that the inequality question is, I think, mostly about inequality within countries, but to some extent also between countries. And this idea of uneven development is that some countries are surging ahead and some parts of the world are potentially being left behind. I think we have other social-political questions, but I'm not qualified to talk about those; I don't really spend my time there, I'm not a sociologist or political scientist. But I think we do have some social-political challenges too.

Lucas Perry: We have climate change, we have social inequality, and we have the ways in which different societies are progressing at different rates. So given these issues in the world, what do you think it is that humanity really needs to understand or get right in this century?

James Manyika: Yeah, by the way, before I dive into that, I should also say that even though we have these problems and challenges, we also have incredible opportunities, quite frankly, for breakthrough, progress, and prosperity to solve some of these issues, and quite frankly, to do things that are going to transform humanity for the better. So these are challenges at a time of, I think, unprecedented opportunity and possibility. I just want to make sure we acknowledge both sides of that. In terms of what we need to do about these challenges, I think part of it is quite frankly facing them head on. The question of climate change is an existential challenge that we just need to face head on, and quite frankly get on with doing everything we can, both to mitigate the effects of climate change and also to start to adapt how our society and our economy work to address what is essentially an existential challenge.

So I think what we do in the next 10 years is going to matter more than what we do in the 10 years after that. So there's some urgency to the climate change and climate risk question. With regards to the issue of inequality, I think this is one that is also within our capacity to address. It's important to keep in mind that the capitalism and the market economies that we've had, and do have, have been unbelievably successful in creating growth and economic prosperity for the world and in most places where they've been applied. But particularly in recent years, I think, we've also started to see that in fact there's been growth in inequality, partly because the structure of our economy is changing, and we can get into that conversation.

In fact, some people are doing phenomenally well and others are not, and some places are doing phenomenally well and other places are not. I mean, it's not lost on me, for example, Lucas, that even if you look at an economy like the United States, something like two thirds of our economic output comes out of 6% of the counties in the country. That's an inequality of place, in addition to the inequalities that we have of people. So I think we have to tackle the issue of inequality quite head-on. Unless we do something, it has the potential to get worse before it gets better. The way our economy now works, and this is quite different, by the way, from what it looked like even as recently as 25 years ago, is that most of the economic activity, as a function of the sectors and the types of enterprises and companies that are driving economic growth, has tended to be much more regionally concentrated in particular places than it was 25 years ago.

So some of those places include Silicon Valley and other places. Whereas if you had looked 25 years ago, the kinds of sectors and companies that were doing well were much more geographically distributed, so you had more economic activity coming out of more places across the country than you do now. Again, it's not that anybody designed it to be that way; it's just a function of the sectors and companies and the way our economy works. A related factor, by the way, is that even the inequality question is also a function of how the economy works. It used to be the case that whenever we had economic growth and productivity growth, it also resulted in job growth and wage growth. That was true for a very long time, but in recent years, roughly the last 25 years or so depending on how you count it, when we have productivity growth, it doesn't lift up wages as much as it used to. Again, it's a function of the structure of the economy.

In fact, some of the work we've been doing, and other economists have been doing, has actually been to look at this so-called declining labor share. I think a way to understand that declining labor share is to think about the fact that if you had set up a factory, say, a hundred years ago, most of the inputs into how that factory worked were labor inputs, so the labor share of economic activity was much higher. But over time, if you're setting up a factory today, sure, you have labor input, but you also have a lot of capital input in terms of the equipment, the machinery, the robots, and so forth. So the actual labor portion as a share of the inputs has been going down steadily. And that's just part of how the structure of our economy is changing. So all of these effects are some of what is leading to some of the inequality that we see.
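A toy numerical sketch of the declining labor share idea described here, with purely invented numbers rather than anything from the underlying research: if a factory's value added is produced with a growing mix of capital inputs, labor's slice of that value added shrinks even while total output grows.

```python
def labor_share(labor_compensation: float, total_value_added: float) -> float:
    """Labor share = compensation paid to labor / total value added (illustrative)."""
    return labor_compensation / total_value_added

# A hypothetical factory a hundred years ago: mostly labor inputs.
old_factory = labor_share(labor_compensation=80.0, total_value_added=100.0)  # 0.80

# A hypothetical factory today: more total output, but more of it attributable
# to machinery, equipment, and robots, so labor's share is smaller.
new_factory = labor_share(labor_compensation=90.0, total_value_added=180.0)  # 0.50

print(old_factory, new_factory)
```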

Lucas Perry: So before we drill more deeply into climate change and inequality, are there any other issues that you would add as some of the most crucial problems or questions for the 21st century?

James Manyika: The structure of our global economy is changing, and I think it's now getting caught up in geopolitics as well. I'm not a geopolitical expert, but from a global economy standpoint it's not lost on anyone that we now have, and will have, two very large economies, the United States and China. China is not just a source of exports, of the things that we buy from them; it's also entangled with the US and other countries economically in terms of money, debt, lending, and so forth. And it's also a large economy in itself, which is going to have its own consumption. So we now have, for the first time, two very large global economies, and how that works in a geopolitical sense is one of the complications of the 21st century. So I think that's an important issue. Others who are more qualified to talk about geopolitics can delve into that one, but it's clearly in the mix as part of the challenges of the 21st century.

We also, of course, are going to have to think about the role of technology in our economies and our society, partly because technology can be a force for massive productivity growth, innovation, and good, but at the same time we know that many of these technologies raise new questions about privacy, and about how we think about information and disinformation. So if you had to write the list of questions we're going to need to navigate in the coming decades of the 21st century, it's a meaty list. Climate change is at the top of that list in my view, inequality is on that list, these questions of geopolitics are on that list, the role that technology is going to play is on that list, and then also some of the social questions that we now need to wrestle with, issues of social justice, not just economic justice. So we have a pretty rich challenge list, even at the same time that we have these extraordinary opportunities.

Lucas Perry: So in the realms of climate change, and inequality, and these new geopolitical situations and tensions that are arising, how do you see the role of incentives pushing these systems and problems in certain directions, and how it is that we come up with solutions to them given the power and motivational force of incentives?

James Manyika: Well, I think incentives play an important role. Take the issue of climate change, for example. I think one of the failures of our economics and economic systems is that we've never quite priced carbon, and we've never quite built that into our incentive systems and economic systems so that there is a price for it. So when we put carbon dioxide into the atmosphere, there's no economic price for that, no set of incentives not to do it. We haven't done enough in that regard. So that's an area where incentives would actually make a big difference. In the case of inequality, I think this one is way more complicated than just incentives, but I'll point to something that is in the realm of incentives as it regards inequality.

So, for example, take what we were talking about earlier, the importance of labor and capital inputs. I don't mean capital as in the money necessarily, but the actual capital, the equipment and machines and so forth, in our system. We've built a set of incentives, for example, that encourage companies to invest in capital infrastructure and capital equipment; they can write it off, for example. We encourage investments in R&D, for example, with tax incentives to do that, which is wonderful because we need that for the productivity and growth and innovation of our economy. But we haven't done anything nearly enough or equivalent to those kinds of incentives with regards to investments in human capital.

So you could imagine a much more concerted effort to create incentives for companies and others to invest in human capital and be able to write off investments in skilling, for example, to be able to do that at the equivalent scale to the incentives we have for other forms of investment, like in capital or in R&D. So that’s an example of where we haven’t done enough on the incentive front, and we should. But I think there’s more to be done than just incentives for the inequality question, but those kinds of incentives would help.

I should tell you, Lucas, that one of the things we spent the last year or so looking at is trying to understand how the social contract has evolved in the 21st century so far. We actually looked at roughly 23 of the OECD countries in detail; there are about 37 or 38 of them in total. And because we're not sociologists, we looked at the social contract in really three ways. First, how people participate in the economy as workers, because that's part of the exchange: when people work, they get jobs, and they get income and wages and training. So people participating as workers is an important part of the social contract. Second, people participating as consumers and households who consume products and services. And third, people as savers, who are saving money for the future, for their kids or for their own future, et cetera.

And when you look at those three aspects of the social contract in the 21st century so far, it's really quite stunning. Take the worker piece of that, for example. What has happened is that across most countries, we've actually grown jobs, despite the recession in 2001 and the bigger one in 2008. So there are actually more jobs now than there were at the start of the 21st century. However, many of those jobs don't pay as well, so the wage picture has actually shifted quite a bit.

The other thing about what’s happened with work is that it's become a little bit more brittle, in that job certainty has gone down and there's much more income and wage variability. So we've created more fragile jobs relative to what we had at the start of the 21st century. So you could say, for workers, it's a mixed story, right? Job growth, yes; wage growth, not so much; and job fragility has gone up. When you look at people as consumers and households, it also paints an interesting story. The picture you see there is that if households and consumers are consuming things like cars or white goods or electronics, basically things that are globally competed and traded, the cost of those has gone down dramatically in the 21st century. So in that sense at least, globalization has been great, because it's delivered these very cheap products and services.

But if you look at other products and services that households and consumers consume, such as education, housing, healthcare, and in some places, depending on which country or place you're in, transportation, those have actually gone up dramatically, far higher and faster than inflation, far higher and faster than wage growth. In fact, if you are in the bottom half of the income scale, those things have come to dominate what you spend your income on. So for those people, it hasn't worked out so well in terms of the social contract. And then on the savers side, very few people now can afford to save for the future. One of the things you see is that indebtedness in the 21st century so far has gone up for most people, especially middle wage and low wage households, and their ability to save for the future has gone down.

What’s interesting is that it's not just that the levels of indebtedness have gone up, but that the people who are indebted look a little bit different. They're younger, and in many cases college educated, which is different from what you might have seen 25 years ago in terms of who was indebted and what they looked like.

And then finally, just to finish, the 21st century in the social contract sense also hasn't worked out very well for women, who still earn less than men, for example, and don't quite have the same opportunities as others, as well as for people of color, who still earn a lot less, whose employment rates are still much lower, and whose participation in the economy in any of these roles is also much less. So you get a picture that says, while the economy has grown and capitalism has been great, so far in the social contract sense, at least by these measures we've looked at, it hasn't worked out as well for everybody in the advanced economies. This is the picture that emerges from the 23 OECD countries that we looked at. And the United States is on the more extreme end of most of the trends I just described.

Lucas Perry: Emerging from this is a pretty complex picture, I think, of the way in which the world is changing. So you said the United States represents sort of the extreme end of this, where you can see the largest effect size in these areas, I would assume, yet it also seems like there’s this picture of the Global East and South generally doing better off, like people being lifted out of poverty.

James Manyika: Yeah, it is true. So one of the wonderful things about the 21st century is in fact, close to a billion people have been lifted out of poverty in those roughly 20, 25 years, which is extraordinary, but we should be clear about where that has happened. Those billion people are mostly in China, and to some extent, India. So while we say we’ve lifted people out of poverty, we should be very specific about mostly where that has happened. There are parts of the world where that hasn’t been the case, parts of Africa, other parts of Asia, and even parts of Latin America. So this lifting people out of poverty has been relatively concentrated in China primarily, and to some extent in India.

One of the things about economics, and this is something that people like Bob Solow and others got Nobel Prizes for, is what drives our economic growth, if economic growth is the way we create economic surpluses that we can then all enjoy and that lead to prosperity, right? The growth disaggregation models come down to two things: either you're expanding labor supply, or you are driving productivity. And when the two of those work well, they combine to give you GDP growth.

So if you look, for example, at the last 50 years, both across the advanced economies and for the United States, the picture you see is that much of our economic growth has come, so far at least, roughly in equal measure from two things. Over the last 50 or so years, half of it has come from expansions in labor supply; you can think about the Baby Boomer generation, more people entering the workforce, et cetera. The other half has roughly come from productivity growth. And the two of them have combined to give us roughly the GDP growth that we've had.
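A minimal arithmetic sketch of the growth decomposition described here, using purely illustrative numbers rather than actual historical figures:

```python
# Illustrative growth-accounting sketch (hypothetical numbers, not actual data):
# GDP growth is roughly the sum of labor-supply growth and productivity growth.
labor_supply_growth = 0.015   # assume ~1.5% per year from a growing workforce
productivity_growth = 0.015   # assume ~1.5% per year from output per hour worked

gdp_growth = labor_supply_growth + productivity_growth
print(f"GDP growth ~ {gdp_growth:.1%}")  # ~3.0%

# Looking forward: aging shrinks the labor-supply contribution, so holding GDP
# growth constant requires more of it to come from productivity.
aging_labor_supply_growth = 0.003
required_productivity_growth = gdp_growth - aging_labor_supply_growth
print(f"Productivity growth needed ~ {required_productivity_growth:.1%}")  # ~2.7%
```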

Now, when you look forward from where we are, we're not likely to get much lift from the labor supply part of it, partly because most advanced economies are aging. So the contribution that's going to come from expansions in labor supply will be much less. You can think of it as a plane flying on two engines, right? If one engine has been expansions in labor supply and the other is productivity, well, the labor supply engine is dying out, or slowing down in its output.

We’re going to rely a lot more on productivity growth. And so, where does productivity growth come from? Well, productivity growth comes from things like technological innovation, innovating how we do things and how we create products and services, and all of that. Technology is a big part of it. But guess what happens? One of the things that happens with productivity growth is that the role of technology goes up. So I come back to my example of the factory. If you want a highly productive factory, it's likely that your mix of labor inputs and capital inputs, read that as machinery and equipment, is going to change. And that's why a factory a hundred years ago looks very different from a factory today. But we need that kind of technological innovation and productivity to drive the output. And the output leads to the output of the sector, and ultimately the output of the economy.

So all of that is to say, I don’t think we should stop the technological innovation that leads to productivity, we need productivity growth. In fact, going forward, we’re going to need productivity growth even more. The question is, how do we make sure that even as we’re pursuing that productivity growth that contributes to economic growth, we’re also paying attention to how we mitigate or address the impacts on labor and work, which is where most people derive their livelihoods.

So I don’t think you want to set up a system of incentives that slows down technological innovation and productivity growth, because otherwise we're all going to be fighting over a diminishing economic pie. I think you want to invest in that and continue to drive it, but at the same time find ways to address some of the implications for work, or the impacts on work and workers. And that's been one of the challenges we've had. I mean, we've all seen the hollowing out, if you like, of the middle class in advanced economies like America, where a big part of that is that many of those middle class or middle income workers have been working in sectors and occupations where technology and productivity have had a huge impact on those jobs and those incomes.

And so, even though we have work in the economy, the occupations and jobs in sectors that are growing have tended to be in the service sectors and less in places like manufacturing. It's the reason I love it when politicians talk about good manufacturing jobs. They have a point in the sense that historically those have been good, well-paying jobs, but manufacturing today is only what, 8% of the labor force in America, right? It's diminished; at its peak it was probably at best close to the mid-forties, 40%, as a share of the labor market back around the '50s, and it's been going down ever since. And yet the service sector economy has been growing dramatically. And many, not all, but many of the jobs in the service sector economy don't pay as much.

My point is, we have to think about not just incentives but the structure of our economy. So if you look forward, for example, over the next few decades, which jobs are going to grow, as a function both of demand for that work in the economy and of what's less likely to be automated by technology and AI and so forth? You end up with a list that includes care work, for example, and work that we say is valuable, which it is, like teaching and other work that is harder to automate. But the labor market doesn't reward and pay those occupations as much as some of the occupations that are declining. So that's some of what I mean when I talk about changes in the structure of our economy, in a way that goes a little bit beyond just local incentives: how do we address that? How do we make sure, as those parts of our economy grow, which they will naturally, that people are earning enough to live as they work in those occupations? And by the way, many of those occupations are where the essential work and workers have been in our current or recent COVID period, the people we've come to rely on, mostly in those service sectors that we haven't historically paid well. Those are real challenges.

Lucas Perry: There was that really compelling figure that you gave at the beginning of our conversation, where you said 6% of counties account for two thirds of our economic output. And so, there’s this change in dynamic between productivity and labor force. And the productivity you’re suggesting is what is increasing, and that is related to and contingent upon AI automation technology. Is that right?

James Manyika: Well, first of all, we need productivity to increase. It's been kind of sluggish in the last several years. In fact, it's one of the key questions that economists worry about: how can we increase the growth of our economic productivity? It hasn't been doing as well as we'd like it to. So we'd like to actually increase it, partly because, as I said, we need it even more than we did in the last 50 years, because the labor supply piece is declining. So we actually would like productivity to go up even more.

Mike Spence and I just wrote a paper recently on the hopeful possibility that in fact we could see a revival in productivity growth coming out of COVID. We hope that happens, but it’s not assured. So we need more productivity growth. And the way you get productivity growth, technology and innovation is a big part of it. The other part of it is just managerial innovation that happens inside companies in sectors where those companies and sectors figure out ways to organize and do what they do in innovative, but highly productive ways. So it’s the combination of technology and those kinds of managerial and other innovations, usually in a competitive context, that’s what drives productivity.

Lucas Perry: Does that lead to us requiring less human labor?

James Manyika: It shouldn’t. In some ways, labor productivity is a very simple equation, right? It has value-added output in the numerator, divided by hours worked, or labor input if you like. So you can have what I think of as a virtuous version of productivity growth versus a vicious one. Let me describe the virtuous one. The virtuous one, which actually leads to job growth, is when you expand the numerator. In other words, there are innovations and uses of technology, in the ways I talked about before, that lead to companies and sectors creating more output and more valuable output. So you expand the numerator. And if you expand the numerator much higher and faster than you reduce the denominator, which is the labor hours worked, you end up with a virtuous cycle, in the sense that the economy grows, productivity grows, everything expands, and the demand for work actually goes up. That's a virtuous cycle.

The last time we saw a great version of that was actually in the late '90s. If you recall, before that Bob Solow had framed what ended up being called the Solow Paradox, the idea that before the mid and late '90s you saw computers everywhere except in the productivity figures, right? And that's because we hadn't yet seen the kinds of deployment of technology and the managerial innovations that do what I call numerator-driven productivity growth, which, when it did happen in the mid to late '90s, created this virtuous cycle.

Now let me describe the vicious cycle, which is, if you like, the not so great version of productivity growth. It's when you don't expand the numerator, but simply reduce the denominator. In other words, you reduce the hours worked; you become very efficient at delivering the same output, or maybe even less of it. So you reduce the denominator, and that leads to productivity growth, but of the vicious kind, right? Because you're not expanding the output, you're simply reducing the inputs, the labor inputs. Therefore, you end up with less employment, fewer jobs, and that's not great. That's when you get what you asked about, where you need less labor. And that's the vicious version of productivity; you don't want that either.
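A small sketch, with made-up numbers, of the two versions contrasted here: labor productivity is value-added output divided by hours worked, and it can rise either because the numerator expands (virtuous) or simply because the denominator shrinks (vicious).

```python
def labor_productivity(value_added_output: float, hours_worked: float) -> float:
    """Labor productivity = value-added output / hours worked."""
    return value_added_output / hours_worked

baseline = labor_productivity(value_added_output=100.0, hours_worked=50.0)  # 2.0

# Virtuous version: output grows faster than labor input (here, hours even rise),
# so productivity, total output, and the demand for work all go up.
virtuous = labor_productivity(value_added_output=130.0, hours_worked=55.0)  # ~2.36

# Vicious version: output is flat while hours are cut, so productivity rises
# only by shedding labor input.
vicious = labor_productivity(value_added_output=100.0, hours_worked=40.0)   # 2.5

print(baseline, virtuous, vicious)
```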

Lucas Perry: I see. How does the reality of AI and automation replacing human labor and human work, increasingly completely over time, factor into and affect the virtuous and vicious versions of productivity?

James Manyika: Well, first of all, we shouldn't assume that AI is simply going to replace work. I think we should think about this in the context of what you might call complements and substitutes. If our AI technology is developed and deployed in a way that is entirely substitutive of work, then you could have work decline. But there are also other ways to deploy AI technology, where it complements work. And in that case, you don't have to think about it as much as losing jobs.

Let me give you some specifics on this. We've done research, and others have too, but let me describe what we've done; I think a general consensus is emerging that's close to what we found in our research. The Bureau of Labor Statistics in the US tracks roughly 800-plus occupations. We looked at all those occupations in the economy. We also looked at the particular tasks and activities that people actually do, because our jobs and occupations are not monolithic, right? They're made up of several different tasks.

I spend part of my day typing, or talking to people, or analyzing things; we're all an amalgam of different tasks. And we looked at over 2,000 tasks that go into these different occupations. But let me get to where we ended up. We looked at what current and expected AI and automation technologies can do, and we came to the conclusion that, at least over the next couple of decades, at the task level, and I emphasize the task level, not the job level, these technologies look like they could automate as much as 50% of the tasks and activities that people do. And it's important, again, to emphasize that those are tasks, not jobs.

Now, when you take those highly automatable tasks back and map them to the occupations in the economy, what we concluded was that something like at most 10% of the occupations look like they have all of their constituent tasks automatable. And that’s a very important thing to note, right? 10% of all the occupations look like they have close to a hundred percent of their tasks that are automatable.

Lucas Perry: In what timeline?

James Manyika: This is over the next couple of decades.

Lucas Perry: Okay. Is that like two decades or?

James Manyika: We looked at this over two decades, right? We have scenarios around that, because it's very hard to be precise; you can imagine the rate of technology development speeding up, and I'll come back to that. But the point is, in our analysis anyway, only about 10% of the occupations look like they have all of their constituent tasks automatable in that rough timeframe. At the same time, what we also found is that something like 60% of the occupations have something like a third of their constituent tasks automatable in that same period. Well, what does that mean? It means that many more jobs and occupations are going to change than get fully automated away. Because what happens is, sure, some activity that I used to do myself can now be done in an automated fashion, but I still do other things too, right? So this effect of jobs that will change is actually a bigger effect than the jobs that will disappear completely.
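A hypothetical sketch of the task-to-occupation mapping described here: score each occupation by the share of its constituent tasks that look automatable, then separate occupations that look fully automatable from those that merely change. The occupations, tasks, and automatability flags below are invented for illustration, not taken from the actual study.

```python
# Hypothetical data: each occupation maps to a list of (task, automatable?) pairs.
occupations = {
    "bank teller": [("count cash", True), ("advise customers", False), ("resolve exceptions", False)],
    "data clerk": [("enter records", True), ("validate records", True), ("file records", True)],
    "care worker": [("assist mobility", False), ("log medications", True), ("provide companionship", False)],
}

def automatable_share(tasks):
    """Fraction of an occupation's tasks flagged as automatable."""
    return sum(flag for _, flag in tasks) / len(tasks)

fully_automatable = [o for o, t in occupations.items() if automatable_share(t) >= 0.99]
substantially_changed = [o for o, t in occupations.items() if 0.3 <= automatable_share(t) < 0.99]

print("fully automatable:", fully_automatable)                # ['data clerk']
print("changed but not eliminated:", substantially_changed)   # ['bank teller', 'care worker']
```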

Now that’s not to say there won't be any occupations that decline. In fact, we ended up titling the research report Jobs Lost and Jobs Gained. We probably should have titled it Jobs Lost, Jobs Gained, and Jobs Changed, because all three phenomena will happen, right? Yes, there will be occupations that decline, but there will also be occupations that grow, and then there will be lots more occupations that change. So I think we need to take the full picture into account. A good example of the jobs-changed portion is the bank teller. If you look at what a bank teller did in 1968 versus what a bank teller does now, it's very, very different, right? The bank teller back then spent all their time counting money, either to take it from you or to give it back to you when you went up to the teller. Then the advent of the ATM automated much of that.

So we still have bank tellers today, but the majority of their time isn't spent doing that, right? They may do it on an exception basis, but their jobs have changed dramatically, and there's still an occupation called a bank teller. And in fact, the number of bank tellers in the US economy actually grew from the early '70s until about, I think the precise date is something like, 2006. And that's because the demand for bank tellers went up, not on a per-bank basis, but on an economy-wide basis, because we ended up opening so many more branch banks by 2006 than we had in 1968. So the collective demand for banking actually drove the growth in the number of bank tellers, even though the number of bank tellers per branch might have gone down.

So that’s an example of where a growing economy can create its own demand for work, back to the virtuous cycle I was talking about, as opposed to the vicious cycle. This phenomenon of jobs changing is an important one that often gets lost in the conversation about technology, automation, and jobs. And so, to come back to your original question about substitutes, we shouldn't just think of technology substituting for jobs as the only thing that happens; technology can also complement work and jobs. In fact, one of the things to think about, particularly for AI researchers or people who develop these automation technologies: on the one hand, it's certainly useful to think of human benchmarks, when we ask how we build machines and systems that match human vision or human dexterity and so forth. That's a useful way to set goals and targets for technology and AI development. But in an economic sense, it's actually less useful, because it's more likely to lead to technologies that are substitutes, since we've built them to match what humans can do.

Imagine if we said, let's build machines that can see around corners or do the kinds of things that humans can't do; then we're more likely to build complementing technologies rather than substituting technologies. I think that's one of the things that we should be thinking about and doing a heck of a lot more to achieve.

Lucas Perry: This is very interesting. So you can think of every job as basically a list of tasks, and AI technology can automate some number of tasks per job, but then the job changes, in the sense that either you can spend more time on the tasks that remain and increase productivity by focusing on those, or the integration of AI technology into the job process will create a few new tasks. The tension I see, though, is that we're headed towards generality with AI, where we're moving towards all tasks being automated. Perhaps over shorter timescales it seems like we'll be able to spend more time on fewer tasks, or our jobs will change to meet the new tasks that AI technology demands of us, but generality is a movement towards the end of human-level problem solving on work and objective-related tasks. So it seems like the space left for human work would be increasingly shrinking. Is that a view that you share? Does that make sense?

James Manyika: Your observation makes sense. I don't know if I fully share it, but just to back up a step: if you ask me about the next few decades, I mean, our research has looked at the next couple of decades, and others have looked at this too and come up with obviously slightly different numbers and views, but I think they're generally in the same direction I just described. So if you say, over the next couple of decades, what do I worry about? I certainly don't worry about the disappearance of work. But that doesn't mean that all is well, right? There are still things that I worry about. We're going to have work, because what we found, for example, is that the net of jobs lost, jobs gained, and jobs changed, in the economies that we've looked at, is still a net positive, in the sense that there's more work gained, net, than lost.

That doesn’t mean we should all rest on our laurels and be happy that, hey, we're not facing a jobless future. I think we still have a few other challenges to deal with. And I want to come back to your future AGI question in a second. So even in this stage, where I say don't worry about the disappearance of work, there are still a few more things to worry about.

I think you want to worry about the transitions, right? The skill transitions. If some jobs are declining, some jobs are growing, and some jobs are changing, all of that is going to create a big requirement for skilling and reskilling, either to help people get into the new jobs that are growing, or, if their jobs are changing, to gain the new skills that work well alongside the tasks that the machines can do. So all of that says reskilling is a really big deal, which is why everybody's talking about reskilling now, though I don't think we're doing it at the scale and pace that we should be. But that's one thing to worry about.

The other thing to worry about are the effects on wages. So even when you have enough work, if you look at the pattern of the jobs gained, most of them, not all of them, but many of them, many of them are actually jobs that pay less, at least in our current labor market structure, right? So care work is hard to fully automate because it turns out that, hey, it’s actually harder to automate somebody doing physical mechanical tasks than say somebody doing analytical work. But it turns out the person doing analytical work, where you can probably automate what they do a lot easier, also happens to be the person who’s earning a little bit more than the person doing the physical mechanical tasks. But by the way, that person is one that we don’t pay much in the first place. So you end up with physical mechanical activities that are hard to automate also growing and being demanded, but then we don’t pay much for them.

So the wage effects are something to worry about. Even the example I gave of complementing work, which is great from the point of view of people and machines working alongside each other, has interesting wage effects, right? Because at one end, which I'll call the happy end, and I'll come back to the challenged end, the happy end is when we automate some of what you do, Lucas, and the combination of what the machine now does for you and what you still do yourself as a human are both highly valuable, so the combo is even more productive. This is the example that's often given with the classic story of radiologists, right? Machines can maybe read some of those images way better than the radiologist, but that's not all the radiologist does; there's a whole set of other value-added activities and tasks that the radiologist does that the machine reading the MRI doesn't do. But now you've got a radiologist partnered with a machine, and the combination is great. That's a happy example: probably the productivity goes up, the wages of that radiologist go up. That's a happy story.

Let me describe the less happy end of that complementing story. The less happy end is when the machine automates a portion of your work, but the portion it automates is actually the value-added portion, and what's left over is even more commoditized, commoditized in the sense that many, many more people can do it, and therefore the skill requirements for it actually go down rather than up, because the hard part of what you used to do is now being done by a machine. The danger is that this then potentially depresses the wages for that work, given the way you're being complemented. So even the complementing story I described earlier isn't always in one direction in its wage effects and impact.

So all of that, to step back, is to say: if the first thing is reskilling, the second thing to worry about is these wage effects. And then the final thing to worry about is how we think about redesigning work itself and the workflows themselves. So even in a world where we have enough work, and that's the next few decades, we're still going to have to work through these issues. Now, you're posing a question about the long, long future, because I do think it's in the long future that we're going to have AGI. I'm not one who thinks it's as imminent as perhaps others think.

Lucas Perry: Do you have a timeline you’d be willing to share?

James Manyika: No, I don't have a timeline. I just think that there are many, many hard problems that we still seem a long way from… Now, the reason I don't have a timeline is that, hey, we could have a breakthrough happen in the next decade that changes it. We haven't figured out how to do causal reasoning, we haven't figured out how to do what Kahneman called System 2 activities; we've solved System 1 tasks, where we've assisted… And so there's a whole bunch of things we haven't solved: the issues of how we do higher-level cognition or meta-level cognition, how we do meta-learning, transfer learning. So there's a whole bunch of things that we haven't quite solved. Now, we're making progress on some of those things. I mean, some of the things that have happened with these large universal language models are really breathtaking, right?

But in my view, at least, of the collection of things that we have to solve before we get to AGI, there are too many that still feel unsolved to me. Now, somebody could have a breakthrough any day; that's why I'm not ready to give a prediction in terms of timeline. But these seem like really hard problems to me, and many of my friends who are working on some of these issues also seem to think they are hard problems. Although there are some who think that we're almost there, that deep learning will get us to most of the places we need to get to and reinforcement learning will get us most of what we need. So those are my friends who think that it's more imminent…

Lucas Perry: In a decade or two away, sometimes they say.

James Manyika: Yeah, some of them say a decade or two. There’s a lot of real debate about this. In fact, you may have seen one of the things that I participated in a couple of years ago, when Martin Ford put together a book that was a collection of interviews with a bunch of people, his book Architects of Intelligence. There’s a wonderful range of people in that book, I was fortunate enough to be included, but there are many more people, way more interesting than me. People like Demis Hassabis and Yoshua Bengio and a whole bunch of people, it’s a really terrific collection. And one of the things that he asked that group who are in that book was to give a view as to when they think AGI would be achieved. And what came out of it is a very wide range, from 2029, and I think that was Ray Kurzweil who stuck to his date, all the way to something like 500 years from now. And that’s a group of people who are deep in the field, right? And you get that very wide range.

So I think, for me, I’m much more interested in the real things that we are going to need to break through, and I don’t know when we’ll make those breakthroughs, it could be imminent, it could be a long time from now, but they just seem to be some really hard problems to solve. But if you take the view, to follow your thought, if you take the thought experiment and say, okay, let’s just assume we’ll truly achieve AGI in its full sense, both in the agent-like sense and, as some people will say, in the oracular sense.

I mean, it depends what form the AGI takes. If the AGI takes the form of the cognitive part coupled with the embodiment of that in physical machines that can physically participate, and you truly have AGI in a fully-embodied sense as well as in the cognitive sense, what happens to humans and work in that case? I don’t know. I think that’s where presumably those machines allow us to create enormous surpluses and bounties in an economic sense. So presumably we can afford to pay everybody, you know, to give everybody money and resources. And so, then the question is, in a world of true abundance, because presumably these machines, these AGIs, will help us solve those things, what do people do?

I guess it’s kind of akin, as somebody said, to Star Trek economy. What do people do in a Star Trek economy when they can replicate and do everything, right? I don’t know. I guess we explore the universe, we do creative things, I don’t know. I’m sure we’ll create some economic system that takes advantage of the things that people can still uniquely do even though they’ll probably have a very different economic value and purpose. I think humans will always find a way to create either literally or quasi economic systems of exchange of something or other.

Lucas Perry: So if we focus here on the next few decades where automation is increasingly taking over particular tasks and jobs, what is it that we can do to ensure positive outcomes for those that are beginning to be left behind by the economy that requires skill training and those whose jobs are soon to have many of the tasks automated?

James Manyika: Starting now, actually, in the next decade or two, I think there are several things, there’s actually a pretty robust list of things we need to do to tackle this issue. I think one is just reskilling. We know that there’s already a shortage of skills; we’ve had skill mismatches for quite a while, before any of this fully kicks in. So this is a challenge we’ve had for a while. This question of reskilling is a massive undertaking, and here the question is really one of pace and scale, because while there are quite a lot of reskilling examples one can come across, and many of them have been very successful, I think the key thing to note about many of them, not all of them, but many of them, is that they tend to be small.

One of the questions one should always ask about all the great reskilling examples we hear of is, how big is it, right? How many people actually went through that program? And I think you’ll find that many of them, not all of them, but many of them are relatively small. At least small relative to the scale of the reskilling that we need to do. Now, there’ve been a few big ones. I happen to like, for example, what Walmart has done with these Walmart academies, it’s been written about publicly quite a bit, but what’s interesting about that is that it’s one of the few really large-scale reskilling, retraining programs. I can’t remember exactly where I read this, but they’ve put something like 800,000 people through those academies. I like that example simply because the numbers involved sound big and meaningful.

Now I don’t know. I haven’t evaluated the programs, are they good? But I think the scale is about right. So, the reskilling at scale is going to be really important, number one.

The other thing we’re going to need to think about is, how do we address the wage question? Now, the wage question is important, for lots of reasons here.

One is, if you remember earlier in our conversation, we talked about the fact that over the last two decades, for many people, wages haven’t gone up. There’s been relative wage stagnation compared to rates of inflation, or the cost of living, and how those have gone up. Wages haven’t kept up.

The wage stagnation is one we already have, before we think about technology. But then, as we’ve just discussed, technology may even exacerbate that, even when there are jobs, and the continuing changing structure of our economy will also exacerbate that. So what do we do about the wage question?

One could consider raising the minimum wage, right? Or one could consider ideas like UBI. I mean, we can come back and talk about UBI. I have mixed views about UBI. What I like about it is the fact that it’s at least a recognition that we have a wage problem, that people don’t earn enough to live. So I like it in that sense.

Now, the complication with it, in my view, is that work does more than one thing for you. Of course, for the vast majority of people, work is how they derive their livelihood, their income. So it’s important, but work also does other things, right? It’s a way to socialize, it’s a way to give purpose and meaning, et cetera.

So I think UBI, it may solve the income part of that, which is an important part of that. It may not address the other pieces of the other things that work does. So, we have to solve the wage problem.

I think we also have to solve this geographic concentration problem. We did some work where we looked at all the counties in America at the time that we did this, because the definition of what’s a county in America changes a little bit from year to year. But at the time that we did this work, which was back in 2019, I think we looked at something like 3,149 counties across America.

What we were looking at there was a range of factors about economic investment, economic vibrancy, jobs, and wage growth. We looked at 40 different variables in each county, but I’m just going to focus on one, which is job growth.

When we looked at job growth across those counties, at the national level we were all celebrating the job growth that had happened coming out of the 2008 recession, between 2008 and 2018, which was the data set we looked at. First of all, at the national level, it was great. But when you looked at it at the county level, what you suddenly found is that a lot of that job growth was concentrated in places where roughly a third of the nation’s workers live.

The other two-thirds of the places where people live either saw flat or no job growth, or even continued job decline. All of that is to say, we also have to solve this question of, how do we get more even job growth and wage growth across the country, in the United States?

We’ve also done similar work where we’ve looked at these micro regions in Europe, and you see similar patterns, although maybe not quite as extreme as in the US, but you see similar patterns where some places get a lot of the job and wage growth, and some places get less of it. It’s just a function of the structure of our economy. So we’d have to solve that, too.

Then the other thing we need to solve is the classic case of the hollowing out of the middle class. Because if you look at the pattern, driven mostly by technology to some extent, a lot of the job declines or the jobs lost as a result of technology have primarily been in the middle-wage, middle-class jobs. And a lot of the job growth has been in the low-wage jobs.

So this question of the hollowing out of the middle class is actually a really particular problem, which has all kinds of sociopolitical implications, by the way. But that’s the other thing to figure out. So let me stop there.

But I think these are some of the things we’re going to need to tackle in the near term. I’ve made that list mostly in the context of say, an economy like the United States. I think if you go outside of the United States, and outside of the advanced economies, there’s a different set of challenges.

I’m talking about places outside of the OECD countries and China. So you go to places like India, and lots of parts of Africa and Latin America, where you’ve got a very different problem, which is demographically young populations. China isn’t, but India and most of Africa is, and parts of Latin America are.

So there the challenge is, a huge number of people are entering the workforce. The challenge there is, how do you create work for them? That’s a huge challenge, right? When you’re looking at those places, the challenge is just, how do you create enough jobs in very demographically young countries?

The picture’s now gotten a little bit more complicated in recent years than perhaps in the past, because in the past, the story was, if you are a developing country, a poor developing country, your path to prosperity was to join the global economy, be part of either the labor supply or the cheap labor supply often, and go from being an agrarian country to an industrialized country. Then ultimately, maybe some day, you’ll become a service economy. Most advanced economies are.

That path of industrialization is less assured today than it used to be, for a bunch of reasons. Some of those reasons have to do with the fact that advanced economies now no longer seek cheap labor abroad as much as they used to. They still do for some sectors, but less so for many other sectors, I mean, we’re less likely to do that.

Part of that is technology, the fact that in some ways, manufacturing has changed. We can now, going forward, do things more like 3-D printing, and so forth. So the industrialization path is less available to poor countries than it used to be.

In fact, economists like Dani Rodrik have written about this, and called it this premature deindustrialization challenge which is facing many low-income countries. So we have to think about what the path is for those countries.

And by the way, these are countries where, if you think about it from the point of view of technology and AI in particular, the global AI technological competition rapidly seems to be coming down to a race between the US and China, led by the US but increasingly by China, with others largely being left behind. That includes, in some cases, parts of Europe, but for sure parts of the poor developing economies.

So the question is, in a future in which the capacity for technology is developing at dramatically different paces for different countries, and the nature of globalization itself is changing, what is the path for these poor developing countries? I think that’s a very tough question that we don’t have very many good answers for, by the way.

And that’s a question not just for people who think about developing economies, but for developing economies themselves. That’s one of the tough challenges, I think, for the next several decades of the 21st century.

Lucas Perry: Yeah, I think that this is a really good job of explaining some of these really significant problems. I’m curious what the most significant findings of your own personal work, or the work more broadly being done at McKinsey are, with regards to these problems and issues. I really appreciate some of the figures that you’re able to share. So if you have any more of those, they’re really helpful, I think, for painting a picture of where things are at, and where they’re moving.

James Manyika: Well, the only other thing I’d say about these kinds of left-behind countries and economies is that, as I said, these are topics that we’re trying to research and understand. I don’t think we have any kind of pat, simple solutions to them.

We do know a few things, though. A lot of our work is very empirical; I mean, typically, I’m looking at what is actually happening on the ground. One of the things that you do see for developing economies is that the ones that are part of a regional ecosystem, through value chains and supply chains, tend to do better.

Take the case of a country like Vietnam. It’s kind of in the value chain ecosystem around China, for example. So it benefits from being a participant or an input into the Chinese value chain.

You could argue that’s what’s happened with countries like Mexico and a few others. So there’s something about being a participant in the value chains or supply chains that are emerging somewhat regionally. That seems to be at least one path.

The other path that we’ve seen is that developing countries that tend to have large and competitive private sectors, and I emphasize “competitive,” seem to do better; that actually seems to make a difference. So we did some empirical work where we looked at something like 75 developing countries over the last 25 years, to see which ones have done well in terms of their growth and development, and what some of the patterns are.

Some of the factors we found in that research: one, as I said, the countries that happened to have proximity to, or participation in, the global value chains of other large ecosystems or economies did well.

Second, those that have these large and vibrant and very competitive private-sector economies also seem to do better. Also, those that had resource endowments, oil and natural resources and those kinds of things, also seemed to do well.

Then we also find that those that seem to have more mixed economies, so they didn’t just rely on one part of their economy, but they had two or three different kinds of activities going on in their economy, they had maybe a little bit of a manufacturing sector, and a little bit of an agricultural sector, a little bit of a service sector, so the ones that had more mixed economies seem to do well. The other big thing was, the ones that seem to be reforming their economies seem to do well.

So those are some patterns. I don’t think those are guaranteed, in any of them, to be the recipe for the next few decades, partly because much of that picture on global supply chains is changing, and much of the role of technology and how it affects how people participate in the global economy is changing.

I think those are useful, but I don’t know if there’s any short recipe going forward. Those certainly have been the patterns for the last 25 years, but maybe that’s a place to start, if you look forward.

Lucas Perry: To pivot a bit here, I’m curious if you could explain what you see as the central forces that are currently acting on the global economy?

James Manyika: Well, I’ll tell you some of the things that we find interesting. One is the fact that, more and more, the role of technology in the global economy is getting bigger and bigger, in the sense that technology seems to have become way more general purpose, in the sense that it’s foundational to every company, every sector, and every country.

So the role of that is interesting. It also has these other outsize effects, because we know that technology often leads to the phenomenon of superstar firms and superstar returns, and so forth. You see that quite a bit, so the role of technology is an important one.

The other one that’s going on is what’s happening with globalization itself. And by globalization, I just mean that the movement of value and activity related to the global economy.

We did some work a few years ago, that we’ve tried to update regularly, where we looked at all the things of economic value. So we looked at, for example, the flow of products and goods across the world, the flow of money and finance, the flow of services, the movement of people, and even the movement of data and data-related activities.

What was interesting is that one of the things that has changed is that globalization in the form of the flow of goods and services has actually slowed down. That’s one of the reasons people were questioning: is globalization dead, has it slowed down?

Well, it certainly looks that way if you’re looking at it through the lens of the flow of products and goods, but that’s not necessarily the case if you’re looking at the flow of money, for example, not necessarily the case if you’re looking at the flow of people, and for sure not the case if you’re looking at the flow of data around the world.

One of the things that’s, I think, underappreciated is just how digitized the global economy has become, and just the massive amounts of digital data flows that now happen across borders between countries, and how much that is tied into how globalization works. So if you’re looking at globalization through the lens of digitization, of digital data flows, nothing has slowed down. In fact, if anything, it’s accelerated.

That’s why you will often hear people who are looking at it through that lens say, “Oh no, it’s even more globalized than ever before.” But people who are looking at it through the flow of products and goods, for example, might say, “It looks like it has slowed down.” That’s one of the things that’s changing.

Also, the globalization of digital data flows is actually interesting, because one of the things it does is change the participation map quite significantly. So we did some work where, if you look at it through that lens, you suddenly find that you have many more countries participating, and many more kinds of companies participating, as opposed to just a few countries and a few companies participating in the global economy. You have much more diversity of participation.

So you have very tiny companies, a two- or three-person company in some country, plugged into the global economy, using digital technology and digital platforms, in ways that wouldn’t have happened before, if you had a two- or three-person company 30 years ago. So this digitalization of the global economy is really quite fascinating.

The other thing that’s going on in the global economy is the rebalancing, where the emergence of China as a big economy in its own right is changing the gravitational structure, if you like, of the global economy in some very profound ways, in ways that we haven’t quite had before. Because, sure, in the past you’ve had other large economies like Germany and Japan, but none of them were ever as big as the United States.

Also, all of them, whether it’s Japan or Germany or any of the European countries, largely operated in a globalization framework, a global framework, that was largely Western-centric in a way. But now you have this very large economy that’s very different, that’s very, very large, that will be the second largest economy in the world, and yet is tied into the global system. So that gravitational structural shift is very, very important.

Then, of course, the other thing that’s happening is what’s happening with supply chains and global value chains. And that’s interesting, partly because we’re so intertwined with how supply chains and value chains work, but at the same time, it changes how we think about the resilience of economies. We’ve just seen that during COVID this past year, where, all of a sudden, everybody got concerned about the resilience of our supply chains with respect to essential products and services like medical supplies and so forth.

I think people are now starting to rethink how we think about the structure of the global economy in terms of these value chains. We should also at some point mention other kinds of technologies that are developing, because it’s not all AI and digital technologies, as much as I love that and spend a lot of time on that.

I think other technological developments that are interesting include what’s happening in the biosciences or the life sciences. We’ve just seen spectacular demonstrations of that with the mRNA vaccines that were rapidly developed.

But a lot more has been happening, with just amazing progress that we’re still at the very early stages of, in biotechnology and the life sciences. I think we’re going to see even more profound societal impact from those developments in the coming decades.

So these are some of the things that I see happening in the global economy. Now, of course, climate change looms large over all of this as a thing that could really impact things in quite dramatic and quite existentially concerning ways.

Lucas Perry: In terms of this global economy, can you explain what the economic center of gravity is, where it’s been, and where it’s going?

James Manyika: Well, undoubtedly, the economic center of gravity has been the United States. If you look at the last 50 years, it’s been the largest economy on the planet, largest in every sense, right? As a market to sell into, as its own market. Everybody around the world for the last 50 years has been thinking about, “How do we access and sell to consumers and buyers in the United States?”

It’s been the largest market. It’s also been the largest developer and deployer of technologies and innovation. So, in all of those ways, it’s been the United States as the big gravitational pull.

But I think, going forward, that’s going to shift, because at the current course and speed, the Chinese economy will be as large. And you now start to have other economies becoming large, too, like India.

So, economic historians have created a wonderful map, where they showed the movement of the gravitational center of the global economy. I think they went back 1,000 years.

While it’s been in the Western Hemisphere, primarily the United States, somewhere around the mid-Atlantic, it’s been shifting east, primarily because of the Chinese economy, but also because of India and others that have come to grow. That’s clearly one of the big changes going on in the global economy, to its structure and its center of gravity.

Lucas Perry: With this increase of globalization, how do you see AI as fitting into and affecting globalization?

James Manyika: The impact on globalization? I don’t think that’s the way I would think about the impact of AI. Of course it’ll affect globalization, because globalization has to do with products, goods, and services, and AI is going to affect all of those things.

To the extent that those things are playing out on the global economy landscape, AI will affect those things. I think the impact of AI, at least in my mind, is first and primarily about any economy, whether it’s the global economy or a national economy or a company, right? So I think it’s profoundly going to change many things about how any economic entity works.

Because we know it’ll affect the capital and labor inputs, we know it’ll affect productivity, and we know it’ll change the rates of innovation. In this conversation, at least, we’ve talked mostly about AI’s impact on labor markets, but we should not forget AI’s impact on innovation, on productivity, on the kinds of products, goods, and services we can imagine creating, and how hopefully it’s going to accelerate those developments.

I mean, DeepMind’s AlphaFold is cracking a 50-year problem; that’s going to lead to all kinds of, hopefully, biomedical innovations and other things. I think one of the big impacts is going to be how AI affects innovation, and ultimately productivity, and the kinds of things we’re going to see, whether it’s products, goods, and services, in the economy.

Of course, any economy that takes advantage of that and embraces those innovations will obviously see the benefit to the growth of their economy. Of course, if on a global scale, on a global stage in the global economy, we have some countries do that more than others, then of course it’ll affect who gets ahead, who’s more competitive, and who potentially gets left behind.

One of the other things we’ve looked at is the rate of AI participation, whether in terms of development or contributing to development, or simply deploying the technologies, or having the capacity to deploy them, or having the talent and people who can contribute to the development or deployment and also embrace it in companies and sectors. And you see a picture that’s very different around the world.

Again, you see the US and China, way ahead of everybody, and you’ll see some countries in Europe, and even Europe is not uniform, right, and some countries in Europe doing more of that than others, and then a whole bunch of others who are being left behind. Again, AI will impact the global economy in the sense of how it impacts each of the economies that participate, and each of the companies that participate in the global economy, in their products and services, and their innovations and outputs.

There are other things that’ll play out in the global stage related to AI. But from an economy standpoint, I think I see it through the lens of the participating companies and countries and economies, and how that then plays out on the global stage.

Lucas Perry: There’s also this facet of how this technological innovation and development, for example with AI, and also technologies which may contribute to or mitigate climate change, all affect global catastrophic and existential risks. So I’m curious how you see global catastrophic and existential risks as potentially fitting into this evolution of the global economy, labor, and society as we move forward?

James Manyika: Well, I think it depends whether you’re asking whether AI itself represents a catastrophic or existential risk. Many people have written about this, but I think that question is tied up with one’s view on how close we are to AGI, and how close we are to superhuman AI capabilities.

As we discussed earlier, I don’t think we’re close yet. But there are other things to think about, even as we progress in that direction. These include some of the safety considerations, the control considerations, and how we make sure we’re deploying and using AI safely.

We know that there are particular problems, even with narrow AI, as it’s sometimes called. How do we think about reward and goal corruption, for example? How do we avoid the kind of catastrophic interference between, say, tasks and goals? How do we think about that?

There are all these kinds of safety related things, even on our way to AGI, that we still need to think about. In that sense, these are things to worry about.

I also think we should be thinking about questions of value and goal alignment. And these also get very complicated for a whole bunch of both philosophical reasons, but also quite practical reasons.

That’s why I love the work, for example, that Stuart Russell has been doing on human-compatible AI, and on how we build in the kind of value alignment and goal alignment that we should be thinking about. So even on our way to AGI, there are both the safety and control questions and these value alignment questions, which are somewhat normative: what does it even mean to think about normative things in the case of value alignment with AI? These are important things.

Now, that’s if you’re thinking about catastrophic, or at least existential, risk with regards to AI, even way before you get to AGI. Then, once you get there, you have the kinds of things that Nick Bostrom and others have worried about.

I think, because those are non-zero probability concerns, we should invest effort into working on those existential, potentially catastrophic problems. I’m not super worried about them any time soon, but that doesn’t mean we shouldn’t invest in and work on the kinds of concerns that Nick and others write about.

But there are also questions about AI governance, in the sense that we’re going to have many participating entities here. We’re going to have the companies that are leading the development of these technologies.

We’re going to have governments that are going to want to participate and use these technologies. We’re going to have issues around when to deploy these technologies, use and misuse. Many of these questions become particularly important when you think about the deployment of AI, especially in particular arenas.

Imagine once we have AI or AGI that’s capable of manipulation, for example, or persuasion, or those kinds of things, or capabilities that allow us to detect lies, or to interfere or play with signals intelligence, or even with cryptography and number theory. I mean, our cryptographic systems rely on a lot of things in prime number theory, for example. Or think about arenas like autonomous weapons.

So questions of governance become evermore important. I mean, they’re already important now, when we think about how AI may or may not be used for things like deep fakes and disinformation.

The closer we get to the kinds of areas that I was describing here, the more important it becomes to think about governance, and what’s permissible to deploy where and how, and how we do that in a transparent way. And how do we deal with the challenges of attribution with AI?

One of the nice things about other potentially risky technologies or developments, like nuclear science or chemical weapons and so forth, is that at least those things are easy to detect when they happen. And it’s relatively easy to do attribution, and verify that it happened and was used.

It’s much harder with the AI systems, so these questions of governance and so forth become monumentally important. So those are some of the things we should think about.

Lucas Perry: How do you see the way in which the climate change crisis arose, given human systems and human civilization? What is it about human civilizations and human systems that has led to the climate change crisis? And how do we not allow our systems to function and fail in the same way, with regards to AI and powerful technologies in the 21st century?

James Manyika: I’m not an expert on climate science, by the way. So I shouldn’t speculate as to how we got to where we are. But I think the way we’ve used certain technologies and fossil fuels is a big part of that, and the way our economies have relied on them as our only mode of energy is part of that.

The fact that we’ve done that in a relatively costless way, in terms of pricing the effects on our environment and our climate, I think, is a big part of it. The fact that we haven’t had many alternatives that are as effective and as efficient, historically, I think, is a big part of that.

So I think all of that is part of how we got here in some ways, but others more expert than me can talk about that. If I think about AI, one of the things that is potentially challenging about AI, if in fact we think there’s a chance that we’ll get to these superhuman capabilities and AGI, is that we may not have the opportunity to iterate our way there. Right?

I think, quite often, with a lot of these deployments of technologies, a practical thing that has served us well in the past has been this idea that, well, let’s try a few experiments, and we’ll fix it if it fails. Or if it doesn’t work, we’ll iterate and do better, and kind of iterate our way to the right answer. Well, if we believe that there is a real possibility of achieving AGI, we may not have the opportunity to iterate in that same way.

That’s one of the things that’s potentially different, perhaps, because we can’t undo it, as it were, if we truly get to AGI. Thinking about these existential things, there’s maybe something of a similarity, or at least an analog, with climate change, which is that we can’t just undo what we’ve done in a very simple fashion, right?

Look at how we’re now thinking about, how do we do carbon sequestration? How do we take carbon out of this, out of the air? How do we undo these things? And it’s very hard. It’s easy to go in one direction, it’s very hard to go in the other direction, in that sense, at least.

It’s always dangerous with analogies. But at least in that sense, AI, on its way to AGI, may be similar, in that we can’t always quite undo it in a simple fashion.

Lucas Perry: Are you concerned or worried that we will use AI technology in the way that we’ve used fossil fuel technologies, such that we don’t factor in the negative effects or negative externalities of the use of that technology? With AI, there’s this deployment of single-objective-maximizing algorithms that don’t take into account all of our other values, and that actually run over and increase human suffering.

For example, the ways in which YouTube or Facebook algorithms work to manipulate and capture attention. Do you have a concern that our society has a natural proclivity toward ignoring negative externalities, and only learning from its mistakes once things reach a sort of critical threshold?

James Manyika: I do worry about that. And maybe, just to come back to one of what I think are your central concerns, back to the idea of incentives, I do worry about that in the sense that there are going to be such overwhelming and compelling incentives to deploy AI systems, for both good reasons and for the economic reasons that go with that. There are lots of good reasons to deploy AI technology, right?

It’s actually great technology. Look at what it’s probably going to do in the case of health science, and the breakthroughs we could make in climate science itself, in scientific discovery, and in materials science. So there’s lots of great reasons to get excited about AI.

And I am, because it’ll help us solve many, many problems and could create enormous bounty and benefits for our society. So people are going to be racing ahead to do that, for those very good and very compelling reasons.

There are also going to be a lot of very compelling economic reasons: the kinds of innovations that companies can make, the contributions to the economic performance of companies, the economic benefits, and the possibility that AI will contribute to productivity growth, as we talked about before.

There are lots of reasons to want to go full steam ahead, and a lot of incentives will be aligned to encourage that: the breakthrough innovations that are good for society, as I said, the benefits that companies will get from deploying and using AI, and the economy-wide productivity benefits. So, all good reasons.

And I think, in the rush to do that, we may in fact find that we’re not paying enough attention, not because anybody is acting out of malice or anything like that, but we just may not be paying enough attention to these other considerations that we should have alongside, considerations about, what does this mean for bias and fairness?

What does it mean, potentially, for inequality? We know these things have scale and superstar effects. What does that mean for others who get left behind? What does this mean for the labor markets and jobs and so forth? So I think we’re going to need to find mechanisms to make sure that there’s continued and substantial effort on those other sides of AI, the side effects and some of the unintended consequences.

That’s why, at least, I think many of us are trying to think about this question, “What are the things we have to get right,” even as we race towards all the wonderful things we want to get out of it, what are the other things we need to make sure we’re getting right along the way?

How do we make sure these things… People are working on them, they’re funded, there’s support for people working on these other problems. I think that’s going to be quite important, and we should not lose sight of that. And that’s something I’m concerned about.

Lucas Perry: So let’s pivot here, then, into inequality and bias. Could you explain the risk and degree to which AI may contribute to new inequalities, or exacerbate existing inequalities?

James Manyika: Well, I think on the inequality point, it’s part of what we talked about before, right? Which is the fact that, even though we may not lose jobs in the near term, we may end up creating jobs or complementing jobs in ways that have these wage effects, which could worsen the inequality question.

That’s one way in which AI could contribute to inequality. The other way, of course, is the fact that because of the scale effects of these technologies, you could end up with a few companies or a few entities or a few countries having the ability to develop and deploy, and get the benefits of, AI, while the other companies or countries and places don’t. So you’ve got that kind of inequality concern.

Now, some of that could be helped, by the way, because it has been the case so far that the kind of compute capacity needed to develop and deploy AI has been very, very large, the data endowments needed to train algorithms have been very, very large, and we know the talent of people who are working on these things has been, up until now, relatively concentrated.

But we know that that picture’s changing, I think. The advent of cloud computing, which makes it easy for those who don’t have the compute capacity, is helping that. So is the fact that we now have pre-trained algorithms and other universal models, so that not everybody has to retrain everything every single time.

These scarcities and these kinds of scale constraints, I think, will get better as we go forward. But you do worry about those inequalities, both in a people sense and in an entity sense, where entities could be companies, countries, or whole economies.

I think the questions of bias are a little bit different. The set of questions about bias simply has to do with the fact that, up until now at least, most of the data sets that have been used to train these algorithms often come with societally derived biases. And I emphasize this: societally derived biases. It’s just because of the way we collect data and the data that’s available and who’s contributing to it.

Often, you start out with data sets, training data sets that reflect society’s existing biases. Not that the technology itself has introduced the bias, but in fact, these come out of society. So what the technologies then do is kind of bake these biases in, into the algorithms and probably deploy them at scale.

That’s why I think this question of bias is so important, but I think often it gets conflated, because proponents of using these technologies will say, well, humans already have bias in them anyway. We already make biased decisions, et cetera.

Of course, that’s a two-sided conversation. But the difference that I see between the biases we already have as human beings and the biases that could get baked into these systems is that these systems can get deployed at scale. If I have my biases, and I’m in a room and I’m trying to hire somebody and making my biased decisions, at least, hopefully, that only affects that one hiring decision.

But if I’m using an algorithm that has all these things baked in, and hundreds of millions of people are using the algorithm, then we’re doing that at scale. So I think we need to keep that in mind as we have the debate about how people already have biases, societal biases; that’s true. So we need to do work on that.

But one of the things I like about the bias question, by the way, that these technologies are forcing us to confront is that it’s actually forcing us to really think about, what do we even mean when we say things are fair, quite aside from technology?

I think they’re forcing us, just like the UBI debate is forcing us to confront the question that people don’t earn enough to live, the bias question is also forcing us to confront the question of, what is fair, right? What counts as fairness? And I think all too often, in our society, we’ve tended to rely on proxies for fairness, right?

When we define it, we’ll say, “Well, let’s constitute the right group of people, a diverse enough group of people, and we will trust the decision that they make, because it’s a diverse group of people,” right? So yeah, if that group is diverse in the way we expect, in gender or racial or any other socioeconomic terms, and they make a decision, we’ll trust it, because the deciding group is diverse.

That’s just fairness by proxy, in my view. Who knows what those people actually think, and how they make decisions? That’s a whole separate matter, but we trust it, because it’s a diverse group.

The other thing that we’ve tended to rely on is, we trust the process, right? If we trust the process that, “Hey, if it’s gone through a process like this, we will live with the results, because we think that the process like that is fair and unbiased.”

Who knows whether the process is actually fair, and that’s how we’ve typically done it with our legal system, for the most part. That if you follow through, if you’ve been given due process and you’ve gone through a jury trial, then it must be fair. We will live with the results.

But I think, in all of those cases, while they’re useful constructs for us in society, they still somewhat avoid defining what is actually fair. And when we start to deploy technologies, in the case of AI, the process is somewhat opaque, because we have this kind of explainability challenge with these technologies. So the process is kind of black-boxy, in that sense.

And if we automate the decisions with no humans involved, then we can’t rely on that constituted group, on saying, “Hey, a group of people decided this, so it must be fair.” This is forcing us to come back to the age-old, or even millennia-old, question of what is fair. How do we define fairness?

I think there’s some work that was done before, where somebody tried to come up with all kinds of definitions of fairness, and they came up with something like 21. So I think we now are having an interesting conversation about what constitutes fairness. Do we gather data differently? Do we code differently? Do we do reviews differently? Do we have different people develop the technologies? Do we have different participants?

So we’re still grappling with this question of what counts as fair. I think that’s one of the key questions. As we rely more and more on these technologies to assist with, and in some cases eventually take over, some of our decision-making, of course only when it’s appropriate, these questions about how we think about fairness and bias will continue to persist, and will only grow.
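To make the “21 definitions of fairness” point concrete, here is a minimal, hypothetical sketch, not something from the conversation itself, of how two of those formal criteria, demographic parity and equal opportunity, can be written down and measured for a toy hiring classifier. The data, the group labels “A” and “B”, and the numbers below are invented purely for illustration.

```python
# Hypothetical illustration of two formal fairness criteria on toy hiring data.
# decisions: 1 = hired, 0 = not hired; qualified: 1 = actually qualified.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in hiring rates between the two groups."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

def equal_opportunity_gap(decisions, groups, qualified):
    """Absolute difference in hiring rates among qualified people only."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp, q in zip(decisions, groups, qualified) if grp == g and q]
        rates[g] = sum(picks) / len(picks)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

# Invented toy data for two groups of four applicants each.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
qualified = [1, 1, 1, 0, 1, 1, 0, 0]

print(demographic_parity_gap(decisions, groups))            # 0.5
print(equal_opportunity_gap(decisions, groups, qualified))  # ~0.67
```

The point such definitions make vivid is that different criteria can disagree on the same data, so deploying a model forces an explicit choice about which notion of fairness, if any, is the one being enforced, which is exactly the “what counts as fair” question raised above.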

Lucas Perry: In terms of fairness, bias, equality, and beneficial outcomes with technology and AI in the 21st century, how do you view the need for and path to integrating developing countries’ voices in the use and deployment of AI systems?

James Manyika: Well, I don’t know if there’s any magical answers, Lucas. At some level, at a base level, we should have them participate, right? I think any participation, both in the development and deployment, I think, is going to be important. And I think that’s true for developing countries. I think it’s true for parts of even US society that’s often not participating in these things.

I mean, it’s still striking to me, the lack of diversity, diversity in every sense of the term, in who is developing AI and who’s deploying AI. Whether you look within the United States or around the world, there are entities and places and communities and whole countries that are not really part of this. So I think we’re going to need to find ways to change that.

I think part of doing that, at least for me, starts out with the recognition that capabilities and intelligence are equally distributed everywhere. I don’t think there’s any one place or country or community that has a natural advantage in capability and intelligence.

On that premise, we just need to get people from different places participating in the development and deployment, and even the decision-making, related to AI, and not just go with the places where the money and the resources happen to be, and whoever is racing ahead, whether within countries, e.g., in the United States itself, or across other countries that are being left behind. I think participation, in these different ways, is going to be quite, quite important.

Lucas Perry: If there’s anything you’d like to leave the audience with, in terms of perspective on the 21st century on economic development and technology, what is it that you would share as a takeaway?

James Manyika: Well, I think, when I look ahead to the 21st century, I’m in two minds. On the one hand, I’m actually incredibly excited about the possibilities. I think we’re just at the beginning of what these technologies can do, both in AI and so forth, but also in the life sciences and biotech. I think the possibilities in the 21st century are going to be enormous, possibilities for improving human life, improving economic prosperity, growing economies.

The opportunities are just enormous, whether you’re a company, whether you’re a country, whether you’re a society, the possibilities are just enormous. I think there’s more that lies ahead than behind.

At the same time, though, I think, alongside the pursuit of those opportunities are the really complicated challenges we’re going to need to navigate through, right? Even as we pursue the opportunities that AI and these technologies are going to bring us, we’re going to need to pay attention to some of these challenges that we just talked about, these questions of potential inequality and bias that come out of the deployment of these technologies, or some of the superstar effects that could come out of that. Even as we pursue economic opportunities around the world, we’re going to need to think about what happens to poor developing countries who may not keep up with that, or be part of that.

In every case, for all the things that I’m excited about in the 21st century, which is plenty, there are also these challenges along the way that we’re going to need to deal with. There’s also the fact that society, I think, demands more from all of us.

I think the demands for a more equal and just society are only going to grow. The demands or desires to have a more inclusive and participative economy are only going to grow, as they should. So we’re going to need to be working both sets of problems, pursuing the opportunities, because without them, these other problems only get harder, by the way.

I mean, try to solve the inequality when there’s no economic surpluses, right? Good luck with that. So we have to solve both. We can’t pick one side or the other, we have to solve both. At the same time, I think we also need to deal with some of the potentially existential challenges that we have, and may grow. I mean, we are living through one right now.

I mean, we’re going to have more pandemics in the future than we have had, perhaps, in the past. So we’re just going to need to be ready for that. We’ve got to deal with climate change. And these kinds of public health, climate change issues, I think, are global. They’re for all of us.

These are not challenges for any one country or any one community. We have to kind of work on all of these together. So that set of challenges, I think, is for everybody, for all of us. It’s on planet Earth, so we’re going to need to work on those things too. So that’s kind of how I think about what lies ahead.

We have to pursue the opportunities, there’s tons of them, I’m very excited about that. We have to solve the challenges that come along with pursuing those opportunities, and we have to deal with these collective challenges that we have. I think those are all things to look forward to.

Lucas Perry: Wonderful, James, thank you so much. It’s been really interesting and perspective-shifting. If any of the audience is interested in following you or checking out your work anywhere, what are the best places to do that?

James Manyika: If you search my name and the McKinsey Global Institute, you will see some of the research and papers that I referenced. For those who love data, which I do, these are very data-rich, fact-based perspectives. So just look at the McKinsey Global Institute website.

Lucas Perry: All right. Thank you very much, James.

James Manyika: Oh, you’re welcome. Thank you.

Lucas Perry: Thanks for joining us. If you found this podcast interesting or useful, consider sharing it on social media with friends, and subscribing on your preferred podcasting platform. We’ll be back again soon, with another episode in the FLI Podcast.

Michael Klare on the Pentagon’s view of Climate Change and the Risks of State Collapse

  • How the US military views and takes action on climate change
  • Examples of existing climate related difficulties and what they tell us about the future
  • Threat multiplication from climate change
  • The risks of climate change catalyzed nuclear war and major conflict
  • The melting of the Arctic and the geopolitical situation which arises from that
  • Messaging on climate change

Watch the video version of this episode here

See here for information on the Podcast Producer position

Check out Michael’s website here

 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is with Michael Klare and explores how the US Department of Defense views and takes action on climate change. This conversation is primarily centered around Michael’s book, All Hell Breaking Loose. In both this podcast and his book, Michael does an excellent job of making clear how climate change will affect global stability and functioning in our lifetimes, through tons of examples of recent climate-induced instabilities. I was also surprised to learn that, despite changes in administrations, the DoD continued to pursue climate change mitigation efforts even as the Trump administration acted to remove mention of and action on climate change from the federal government. So, if you’ve ever had any doubts, or if the impact and significance of climate change has ever felt fuzzy or vague to you, this podcast might remedy that.

I’d also like to make a final call for applications for the Podcast Producer role. If you missed it, we’re currently hiring for a Podcast Producer to work on the editing, production, publishing, and analytics tracking of the audio and visual content of this podcast. As the Producer, you would be working directly with me and the FLI outreach team to help grow and evolve this podcast. If you’re interested in applying, head over to the Careers tab on the Futureoflife.org homepage or follow the link in the description. The application deadline is July 31st, with rolling applications accepted thereafter until the role is filled. If you have any questions, feel free to reach out to socialmedia@futureoflife.org.

Michael Klare is the Five College Professor of Peace and World Security Studies. He serves on the board of directors of the Arms Control Association, and is a regular contributor to many publications including The Nation, TomDispatch, and Mother Jones, and is a frequent columnist for Foreign Policy In Focus. Klare has written fourteen books and hundreds of essays on issues of war and peace, resource competition, and international affairs. You can check out his work at Michaelklare.com. And with that, I’m happy to present this interview with Michael Klare.

So to start things off here, I’m curious if you could explain at the highest level, how is it that the Pentagon views climate change and why is the Pentagon interested in climate change?

Michael Klare: So, if you speak to people in the military, they will tell you over and over again that their top concern is China. China, China, China, followed by Russia and then maybe North Korea and Iran, and they spend their days preparing for war with China and those other countries. Climate change enters into this conversation because ultimately they believe that climate change is going to degrade their capacity to prepare for and to fight China and other adversaries down the road, that climate change is a complicating factor, a distraction that will undermine their ability to perform their military duties. And moreover, they see that the threat posed by climate change is increasing exponentially over time. So, the more they look into the future, the more they see that climate change will degrade their ability to carry out what they see as their primary function, which is to prepare for war with China. And so, it’s in that sense that climate change is critical. Now, you can then go down into the specific ways in which climate change is a problem, but it’s ultimately because it will distract them from doing what they see as their primary responsibility.

Lucas Perry: I see, so there’s a belief in the validity of it and the way in which it will basically exacerbate existing difficulties and make achieving more important objectives more difficult.

Michael Klare: Something like that. Climate change they see as an intrusion into their work space. They’re trained as soldiers to carry out their military duties, which is combat related, and they believe that climate change is very real and getting more intense as time goes on and it’s going to hold them back, intrude on their ability to carry out their combat functions. It’s going to be a distraction on multiple levels. It’s going to create new kinds of conflicts that they would rather not deal with. It’s going to create emergencies around the world, humanitarian disasters at home and abroad, all of these are going to suck away resources, time, effort, energy, money that they believe should be devoted to their primary function of preparing for war with major enemies.

Lucas Perry: What would you say the primary interests of the Pentagon are right now other than climate change?

Michael Klare: Other than climate change, well the US Department of Defense at this time has a number of crises going on simultaneously. In addition to climate change, there’s COVID of course. Like every other institution in US society, the military was hampered by COVID, many service people came down with COVID and some died and it forced military operations to be restricted. Ships had to be brought back to port because COVID broke out on ships, so that was a problem. The military is also addressing issues of racism and extremism in the ranks. That’s become a major problem right now that they are dealing with, but they view climate change as the leading threat to national security of a non-military nature.

Lucas Perry: So, China was one of the first things that you mentioned. How would you also rank and relate the space of their considerations like Russia and a nuclear North Korea and Iran?

Michael Klare: Sure, the Department of Defense just released their budget for fiscal year 2022, and they rank the military threats and they say China is overwhelmingly the number one threat to US national security followed by Russia, followed by North Korea and Iran, and then down the list would be terrorist threats like Al-Qaeda and ISIS. But as you know, the administration has made a decision to leave Afghanistan and to downgrade US forces in that part of the world, so fighting terrorism and insurgency has been demoted as a major threat to US security, and even Russia has been demoted to second place. Over the past few years, Russia and China have been equated, but now China has been pushed ahead as the number one threat. The term they use is the pacing threat, which is to say that because China’s the number one threat, we have to meet that threat and if we can overcome China, the US could overcome any other threat that might come along, but China is number one.

Lucas Perry: So, there’s this sense of top concerns that the Department of Defense has, and then this is all happening in a context of climate change, which makes achieving its objectives on each of these threats more and more difficult. So, in the context of this interplay, can you describe the objectives of career officers at the Pentagon and how it’s related to and important for how they consider and work with climate change?

Michael Klare: Sure, so if you’re an aspiring general or admiral right now, as I say, you’re going to be talking about how you’re preparing your units, your division, your ship, your air squadron to be better prepared to fight China, but you also have to worry about what they call the operating environment, the OE, the operating environment in which your forces are going to be operating, and if you’re going to be operating in the Pacific, which means dealing with China, then you have a whole set of worries that emerges. We have allies there that we count on: Japan, South Korea, the Philippines.

These countries are highly vulnerable to the effects of climate change and are becoming more so very rapidly. Moreover, we have bases in those places. Most of those bases, air bases and naval bases, are at sea level or very close to sea level and have over and over again been assaulted by powerful typhoons and been disrupted, have had to be shut down for days or weeks at a time, and some of those bases, like Diego Garcia in the Indian Ocean for example, or the Marianas Islands, are not going to be viable much longer because they’re so close to sea level and sea level rise is just going to come and swamp them. So from an operating environment point of view, you have to be very aware of the impacts of climate change on the space in which you’re going to operate.

Lucas Perry: So, it seems like the concerns and objectives of career officers at the Pentagon can be distinguished in significant ways from the perspective and position of politicians, so there’s at least some tension between career officers, or the objectives of the Pentagon, and those constituencies of the American political parties that are skeptical of climate change?

Michael Klare: Yes, this was certainly the case during the Trump administration, because President Trump, as commander in chief, forbade the discussion of climate change. He was a denier. He called it a hoax and he forbade any conversation of it. The US military did have a position on climate change during the Obama administration: as early as 2010 the Department of Defense stated that climate change posed a serious threat to US security and was taking steps to address that threat. So when Trump came along, all of that had to go underground. It didn’t stop, but the Pentagon had to develop a whole lot of euphemisms, like changing climate or extreme weather events, all kinds of euphemisms used to describe what they saw as climate change, but that didn’t stop them from facing the consequences of climate change. During the Trump administration, US military bases in the US suffered billions and billions of dollars of damage from Hurricane Michael and from other storms that hit the East Coast and the Gulf of Mexico and did tremendous damage to a number of key US bases.

And, the military is still having to find the money to pay for that damage, and the Navy in particular is deeply concerned about its major operating bases in the United States. A Navy base by definition is going to be at the ocean, and many of these bases are in very low-lying areas and already are being repeatedly flooded at very high tides or when there are storms. The Navy is very aware that its ability to carry out its missions to reinforce American forces either in the Atlantic or Pacific is at risk because of rising seas, and they had to maneuver around Trump all during that period, trying to protect their bases but calling the danger they faced by different names.

Lucas Perry: Right, so there’s this sense of Trump essentially canceling mention of climate change throughout the federal government and its branches, and the Pentagon quietly continuing to respond to what they see as a real threat. Is there anything else you’d like to add here about the Obama to Trump transition that helps to really paint the picture of how the Pentagon views climate change and what it did despite attempts to suppress thought and action around climate change?

Michael Klare: During the Obama administration, as I say, the Department of Defense acknowledged the reality of climate change, number one. Number two, it said climate change posed a threat to US national security, and as a result said that the Department of Defense had an obligation to reduce its contribution to climate change, to reduce its emissions, and it made all kinds of pledges that it was going to reduce its consumption of fossil fuels, increase its reliance on renewable energy, and begin constructing solar arrays. A lot of very ambitious goals were announced in the Obama period, and all of this was supposed to stop when Trump came into office, because he said we’re not going to do any of this anymore. In fact, the Pentagon continued to proceed with a lot of these endeavors, which were meant to mitigate climate change, but again, using different terminology, saying that this was about base reliance, self-reliance, resiliency, and so on, not mentioning climate change, but nonetheless continuing with efforts to actually mitigate their impact on the climate.

Lucas Perry: All right, so is there any sense in which the Pentagon’s view of climate change is unique? And could you also explain why that view is important and relevant for climate change and the outcomes related to it?

Michael Klare: Yes, I think the Pentagon’s view of climate change is very distinctive and not well understood by the American public, and that’s why I think it’s so important, and that is that the Department of Defense sees climate change as, the term they use is, a threat multiplier. They say, look, we look out at the world and part of our job is to forecast ahead of time where threats are going to emerge to US security around the world. That’s our job, and to prepare for those threats, and we see that climate change is going to multiply threats in areas of the world that are already unstable, that are already suffering from scarcities of resources, where populations are divided and where resources are scarce and contested, and that this is going to create a multitude of new challenges for the United States and its allies around the world.

So, this notion of a threat multiplier is very much a part of the Pentagon’s understanding of climate change. What they mean by that is that societies are vulnerable in many ways and especially societies that are divided along ethnic and religious and racial lines as so many societies are, and if resources are scarce, housing, water, food, jobs, whatever, climate change is going to exacerbate these divisions within societies, including American society for that matter, but it’s going to exacerbate divisions around the world and it’s going to create a social breakdown and state collapse. And, the consequence of state collapse could include increased pandemics for example, and contribute to the spread of disease. It’s going to lead to mass migrations and mass migrations are going to become a growing problem for the US.

Take the influx of migrants on America’s Southern border: many of these people today are coming from Central America, from an area that’s suffering from extreme drought and where crop failure has become widespread, and people can’t earn an income and they’re fleeing to the United States in desperation. Well, this is something the military has been studying and talking about for a long time as a consequence of climate change, as an example of the ways in which climate change is going to multiply schisms in society and threats of all kinds that ultimately will endanger the United States, and it’s going to fall on the military’s shoulders to cope with the resulting humanitarian disasters and migratory problems.

And as I say, this is not what they view as their primary responsibility. They want to prepare for high-tech warfare with China and Russia, and they see all of this as a tremendous distraction, which will undermine their ability to defend the United States against its primary adversaries. So, it’s multiplying the threats and dangers to the United States on multiple levels including, and we have to talk about this, threats to the homeland itself.

Lucas Perry: I think one thing you do really well in your book is you give a lot of examples of natural disasters that have occurred recently, which will only increase with the existence of climate change as well as areas which are already experiencing climate change, and you give lots of examples about how that increases stress in the region. Before we move on to those examples, I just want to more clearly lay out all the ways in which climate change just makes everything worse. So, there’s the sense in which it stresses everything that is already stressed. Everything basically becomes more difficult and challenging, and so you mentioned things like mass migration, the increase of disease and pandemics, the increase of terrorism in destabilized regions, states may begin to collapse. There is, again, this idea of threat multiplication, so everything that’s already bad gets worse.

There’s food, water, and shelter instability. There’s an increase in natural disasters from more and more extreme weather. This all leads to more resource competition and also energy crises as rivers dry up, hydroelectric dams stop working, and the energy grid gets taxed more and more by the extreme weather. So, is there anything else that you’d like to add here in terms of the specific ways in which things get worse and worse from the factor of threat multiplication?

Michael Klare: Then, you start getting kind of specific about particular places that could be affected, and the Pentagon would say, well, this is first going to happen in the most vulnerable societies, poor countries, Central America, North Africa, places like that where society is already divided, poor, and the capacity to cope with disaster is very low. So, climate change will come along and conditions will deteriorate, and the state is unable to cope and you have breakdown and you have these migrations. But they also worry that as time goes on and climate change intensifies, bigger, richer, and more important states will begin to disintegrate, and some of these states are very important to US security and some of them have nuclear weapons, and then you have really serious dangers. For example, they worry a great deal about Pakistan.

Pakistan is a nuclear armed country. It’s also deeply divided along ethnic and religious lines, and it has multiple vulnerabilities to climate change. It goes between extremes of water scarcity, which will increase as the Himalayan glaciers disappear, but also we know that monsoons are likely to become more erratic and more destructive with more flooding.

All of these pose great threats to the ability of Pakistan’s government and society to cope with all of its internal divisions, which are already severe to begin with. And what happens when Pakistan experiences a state collapse and nuclear weapons begin to disappear into the hands of the Taliban, or of forces close to the Taliban? Then you have a level of worry and concern much greater than anything we’ve been talking about before, and this is something that the Pentagon has started to worry about and to develop contingency plans for. And, there are other examples of this level of potential threat arising from bigger and more powerful states disintegrating. Saudi Arabia is at risk, Nigeria is at risk, the Philippines, a major ally in the Pacific, is at extreme risk from rising waters and extreme storms, and I could continue, but from a strategic point of view, this starts getting very worrisome for the Department of Defense.

Lucas Perry: Could you also paint a little bit of a picture of how climate change will exacerbate the conditions between Pakistan, India, and China, especially given that they’re all nuclear weapon states?

Michael Klare: Absolutely, and this all goes back to water, and many of us view water scarcity as the greatest danger arising from climate change in many parts of the world. India, China, and Pakistan, not to mention a whole host of other countries, depend very heavily on rivers that originate in the Himalayan mountains and draw a fair percentage of their water from the melting of the Himalayan glaciers, and these glaciers are disappearing at a very rapid rate and are expected to lose a very large percentage of their mass by the end of this century due to warming temperatures.

And, this means that these critical rivers that are shared by these countries, the Indus River shared by India and Pakistan, the Brahmaputra River shared by India and China, the Mekong is another, these rivers, which provide the water for irrigation that hundreds of millions of people, if not billions, depend on, are going to shrink. As the water supply begins to diminish, this is going to exacerbate border disputes. All of these countries, India and China, India and Pakistan, have border and territorial disputes. They have very restive agricultural populations to start with, and water scarcity is going to be the tipping point that will produce massive local violence and lead to conflict between these countries, all of them nuclear armed.

Lucas Perry: So, to paint a little bit more of a picture of these historical examples of states essentially failing to respond to climate events, how destructive that was to society and to humanitarian conditions, and the increasing need for humanitarian operations, can you describe what happened in Tacloban, for example, as well as what is going on in Nigeria?

Michael Klare: So, Tacloban is a major city on the island of Leyte in the Philippines, and it suffered a direct hit from Typhoon Haiyan in 2013. This was the most powerful typhoon to make landfall up until that point, an extremely powerful storm that left millions homeless in the Philippines. Many people perished, but Tacloban was at the forefront of this: a city of several hundred thousand, many poor people living in low-lying areas at the forefront of the storm. The storm surge was 10 or 20 feet high. That just overwhelmed these low-lying shanty towns and flooded them. Thousands of people died right away. The entire infrastructure of the city collapsed, was destroyed, hospitals, everything. Food ran out, water ran out, and there was an element of despair and chaos. The Philippine government proved incapable of doing anything.

And, President Obama ordered the US Pacific Command to provide emergency assistance, and it sent almost the entire US Pacific fleet to Tacloban to provide emergency assistance on the scale of a major war: an aircraft carrier, dozens of warships, hundreds of planes, thousands of troops. Now, it was a wonderful sign of US aid, but there are a number of elements of this crisis that are worthy of mention. One was the fact that there was anti-government rioting because of the failure of the local authorities to provide assistance, or to provide it only to wealthy people in the town, and this is so often a characteristic of these disasters, that assistance is not provided equitably. The same thing was seen with Hurricane Katrina in New Orleans, and this then becomes a new source of conflict.

When a disaster occurs and you do not have an equitable emergency response, and some people are denied help while others are provided assistance, you’re setting the stage for future conflicts and anti-government violence, which is what happened in Tacloban. And the US military had to intercede to calm things down, and this is something that has altered US thinking about humanitarian assistance, because now they understand that it’s not just going to be handing out food and water, it’s also going to mean playing the role of a local government: providing police assistance, mediating disputes, and providing law and order, not just in foreign countries, but in the United States itself. This proved to be the case in Houston with Hurricane Harvey in 2017 and in Puerto Rico with Hurricane Maria, when local authorities simply disappeared or broke down and the military had to step in and play the role of government, which comes back to what I’ve been saying all along. From the military’s point of view, this is not what they were trained to do.

This is not what they want to do, and they view this as a distraction from their primary military function. So, here’s the Pacific fleet engaging in this very complex emergency in the Philippines, and what if a crisis with China were to break out? The whole force would have been immobilized at that time, and this is the kind of worry that they have: that climate change is going to create these complex emergencies, as they call them, or complex disasters, that are going to require not just a quick in-and-out kind of operation but a permanent or semi-permanent involvement in a disaster area, and to provide services for which the military is not adequately prepared. But they see that climate change increasingly will force them to play this kind of role and thereby distract them from what they see as their more important mission.

Lucas Perry: Right, so there’s this sense of the military increasingly being deployed in areas to provide humanitarian assistance. It’s obvious why that would be important and needed domestically in the United States and its territories. Can you explain why the military is incentivized or interested in providing global humanitarian assistance?

Michael Klare: This has always been part of American foreign policy, American diplomacy: winning over friends and allies. So, it’s partly to make the United States look good, particularly when other countries are not capable of doing that. We’re the one country that has that kind of global naval capacity to go anywhere and do that sort of thing. So, it’s a little bit a matter of showing off our capacity, but it’s also, in the case of the Philippines, that the Philippines plays a strategic role in US planning for conflict in the Pacific.

It is seen as a valuable ally in any future conflict with China and therefore its stability matters to the United States and the cooperation of the Philippine government is considered important and access to bases in the Philippines, for example, is considered important to the US. So, the fact that key allies of the US in the Pacific, in the Middle East and Europe are at risk of collapsing due to climate change poses a threat to the whole strategic planning of the US, which is to fight wars over there, in the forward area of operations off the coast of China, or off of Russian territory. So, we are very reliant on the stability and the capacity of key allies in these areas. So, providing humanitarian assistance and disaster relief is a part of a larger strategy of reliance on key allies in strategic parts of the world.

Lucas Perry: Can you also explain the conditions in Nigeria and how climate change has exacerbated those conditions and how this fits into the Pentagon’s perspective and interest in the issue?

Michael Klare: So, Nigeria is another country that has strategic significance for the US, not perhaps on the same scale as, say, Pakistan or Japan, but still important. Nigeria is a leading oil producer, not as important as it once was perhaps, but nonetheless important. Nigeria is also a key player in peacekeeping operations throughout Africa, and because the US doesn’t want to play that role itself, it relies on Nigeria for peacekeeping troops in many parts of Africa. And, Nigeria occupies a key piece of territory in Central Africa; it’s surrounded by countries which are much more fragile and are threatened by terrorist organizations. So, Nigeria’s stability is very important in this larger picture, and in fact Nigeria itself is at risk from terrorist movements, especially Boko Haram and splinter groups, which continue to wreak havoc in Northern Nigeria. Despite years of effort by the Nigerian government to crush Boko Haram, it’s still a powerful force.

And, partly this is due to climate change. Boko Haram operates in areas around Lake Chad, which is now a small sliver of what it once was. It has greatly diminished in size because of global warming and water mismanagement. And so, the farmers and fisherfolk whose livelihoods depended on Lake Chad have been decimated. Many of them have become impoverished. The Nigerian government has proved inept and incapable of providing for their needs, and many of these people, young men without jobs, have therefore fallen prey to the appeals of recruitment by Boko Haram. So, climate change is facilitating, is fueling, the persistence of groups like Boko Haram and other terrorist groups in Nigeria, but that’s only part of the picture. There’s also growing conflict between pastoralists, these are herders, cattle herders, whose lands are being devastated by desertification.

In this Sahel region, the southern fringe of the Sahara is expanding with climate change and driving these pastoralists, who are primarily Muslim, into lands occupied by Christians, mainly Christian farmers, and there’s been terrible violence in the past few years, many hundreds of thousands of people displaced. Again, an inept Nigerian response, and I could go on. There’s violence in the Niger Delta region in the south, and there are breakaway provinces. So, Nigeria is at permanent risk of breaking apart, and the US provides a lot of military aid and training to Nigeria. So, the US is involved in this country and faces the possibility of greater disequilibrium and greater US involvement.

Lucas Perry: Right, so I think this does a really good job of painting the picture of this factor of threat multiplication from climate change. So, climate change makes getting food, water, and shelter more difficult. There’s more extreme weather, which makes those things more difficult, which increases instability, and for places that are already not that stable, they get a lot more unstable and then states begin to collapse and you get terrorism, and then you get mass migration, and then there’s more disease spreading, so you get conditions for increased pandemics. Whether it’s in Nigeria or Pakistan and India or the Philippines or the United States and China and Russia, everything just keeps getting worse and worse and more difficult and challenging with climate change. So, could you describe the ladder of escalation of climate change related issues for the military and how that fits into all this?

Michael Klare: Well, now this is an expression that I made up to try to put this in some kind of context, drawing on the ladder of escalation from the nuclear era, when the military talked about the escalation of conflict from a skirmish to a small war, to a big war, to the first use of nuclear weapons, to all-out nuclear war. That was the ladder of escalation of the nuclear age, and what I see happening is something of a similar nature, where at present we’re still dealing mainly with these threat-multiplying conditions occurring in the smaller and weaker states of Africa, Chad, Niger, Sudan, and the Central American countries, Nicaragua and El Salvador, where you see all of these conditions developing but not posing a threat to the central core of the major powers. But as climate change advances, the military and US intelligence agencies expect, as I indicated, that larger, stronger, richer states will experience the same kinds of consequences and dangers and begin to experience this kind of state disintegration.

So, we go from what we’re seeing in places like Chad and Niger, which involves skirmishing between insurgents, terrorists, and other factions in which the US is playing a role, but a remote one, to situations where a Pakistan collapses, a Nigeria collapses, a Saudi Arabia collapses, which would require a much greater involvement by American forces on a much larger scale. That would be the next step up the ladder of escalation arising from climate change. And then you have the possibility, as I indicated, where nuclear armed states would be drawn into conflict because of climate related factors like the melting of the Himalayan glaciers, with India and Pakistan going to war or India and China going to war. Or, and we haven’t discussed this, another consequence of climate change is the melting of the Arctic, and this is leading to competition between the US and Russia in particular for control of that area.

So, you go from disintegration of small states to disintegration of medium-sized states, to conflict between nuclear armed states, and eventually to conceivable US involvement in climate related conflicts. That would be the ladder of escalation as I see it, and on top of that, you would have multiple disasters happening simultaneously in the United States of America, which would require a massive US military response. So, you can envision, and the military certainly worries about this, a time when US forces are fully immobilized and incapable of carrying out what they see as their primary defense tasks because they’re divided. Half their forces are engaging in disaster relief in the United States and another half are dealing with these multiple disasters in the rest of the world.

Lucas Perry: So, I have a few bullet points here that you could expand upon or correct about this ladder of escalation as you describe it. So at first, there’s the humanitarian interventions where the military is running around to solve particular humanitarian disasters like in Tacloban. Then, there’s limited military operations to support allies. There’s disruptions to supply chains and the increase of failed states. There’s the conflict over resources. There’s internal climate catastrophes and complex catastrophes, which you just mentioned, and then there’s what you call climate shock waves, and finally all hell breaking loose, where you have multiple failed states, tons of mass migration, a situation that no state, no matter how powerful, is able to handle.

Michael Klare: Climate shock wave would be a situation where you have multiple extreme disasters occurring simultaneously in different parts of the world leading to a breakdown in the supply chains that keep the world’s economy afloat and keep food and energy supplies flowing around the world, and this is certainly a very real possibility. Scientists speak of clusters of extreme events, and we’ve begun to see that. We saw that in 2017 when Hurricane Harvey was followed immediately by Hurricane Irma in Florida, and then Hurricane Maria in the Caribbean and Puerto Rico and the US military responded to each of those events, but had some difficulty moving emergency supplies first from Houston to Florida, then to Puerto Rico. At the same time, the west of the US was burning up. There were multiple forest fires out of control and the military was also supplying emergency assistance to California, Washington State, and Oregon.

That’s an example of clusters of extreme events. Now looking into the future, scientists are predicting that this could occur in several continents simultaneously. And as a result, food supply chains would break down, and many parts of the world rely on imported grain supplies, or other food stuffs and imported energy. And in a situation like this, you could imagine a climate shockwave in which trade just collapses and entire states suffer from a major catastrophe, food catastrophes leading to state collapse and all that we’ve been talking about.

Lucas Perry: Can you describe what all hell breaking loose is?

Michael Klare: Well, this is my expression for the all-of-the-above scenario. You have these multiple disasters occurring, and one thing that we have not discussed at length is the threat to American bases and how that would impact the military. So, you have these multiple disasters occurring that create a demand on the military to provide assistance domestically, like I say, many areas needing emergency assistance, and not just of the obvious sort of handing out water bottles, but, as I say, complex emergencies where the military is being called in to provide law and order, to restore infrastructure, to play the role of government. So, you need large-capacity organizations to step in. At the same time, it’s being asked to do that in other parts of the world, or to intervene in conflicts with nuclear armed states happening simultaneously. But at the same time, its own bases have been immobilized by rising seas and flooding and fires. All of this is a very realistic scenario because parts of it have already occurred.

Lucas Perry: All right, so let’s make a little bit of a pivot here into something that you mentioned earlier, which is the melting of the Arctic. So, I’m curious if you could explain the geopolitical situation that arises from the melting of the Arctic Ocean… Sorry, the Arctic region that creates a new ocean that leads to Arctic shipping lanes, a new front to defend, and resource competition for fish, minerals, natural gas and oil.

Michael Klare: Yes, indeed. In a way, the Arctic is how the military first got interested in climate change, especially the Navy, because the Navy never had much of an Arctic responsibility. The Arctic was covered with ice, so its ships couldn’t go there except for submarines on long-range patrols under the sea ice, so the Navy never had to worry about it. And then around 2009, the Department of the Navy created a climate change task force to address the consequences of melting Arctic sea ice and came to the view that, as you say, this is a new ocean that they would have to defend, one they’d never thought about before and for which they were not prepared.

Their ships were not equipped to operate, for the most part, in the Arctic. So ever since then, the Arctic has become a major geopolitical concern of the United States on multiple fronts. But two or three points in particular that need to be noted, first of all, the melting of the ice cap makes it possible to extract resources from the area, oil and natural gas, and it turns out there’s a lot of oil and natural gas buried under the ice cap, under the seabed of the Arctic and oil and gas companies are very eager to exploit those untapped reserves. So the area, what was once considered worthless, is now a valuable geo-economic prize and countries have exerted claims to the area, and some of these claims overlap. So, you have border disputes in the Arctic between Russia and the United States, Russia and Norway, Canada and Greenland, and so on. There are now border disputes because of the resources that are in these areas. And because of drilling occurring there, you now need to worry about spills and disasters occurring, so that creates a whole new level of Naval and Coast Guard operations in the Arctic. This has also led to shipping lanes opening up into the region, and who controls those shipping lanes becomes a matter of interest. Russia is trying to develop what it calls the Northern Sea Route from the Pacific to the Atlantic going across its Northern territory across Siberia, and potentially, this could save two weeks of travel for container ships, moving from Rotterdam say to Shanghai and could be commercially very important.

Russia wants to control that route, but the U.S. and other countries say, “It’s not yours to control.” So, you have disputes over the sea routes. But then, more important than any of the above is that Russia has militarized its portion of the Arctic, which is the largest portion, and this has become a new frontier for U.S.-Russian military competition, and there has been a huge increase in military exercises and base construction. Now, from the U.S. point of view, the Arctic is a new front in the future war with Russia, and they’re training for this all the time.

Lucas Perry: Could you explain how the Bering Strait fits in?

Michael Klare: The Bering Strait between the U.S. and Russia is a pretty narrow space, and that’s the only way to get from the North Pacific into the Arctic region, whether you’re going to Northern Alaska and Northern Canada, or going across from China and Japan along the Northern Sea Route to Europe. So, this becomes a strategic passageway, the way Gibraltar has been in the past. And both the U.S. and Russia are fortifying that passageway, and there’s constant tussling going on there. It doesn’t get reported much, but every other week or so, Russia will send its war planes right up to the edge of U.S. airspace in that region, or the U.S. will send its planes to the edge of Russian airspace to test their reflexes, and their naval maneuvers are happening all the time. So, this has come to be seen as an important strategic place on the global chessboard.

Lucas Perry: How does climate change affect the Bering Strait?

Michael Klare: Well, it affects it in the sense that it’s become the entry point to the Arctic, and climate change has made the Arctic a place you want to go, which it wasn’t before.

Lucas Perry: All right. So, one point that you made in your book that I’d like to highlight is that the Arctic is seen as a main place for conflict between the great powers in the coming years. Is that correct?

Michael Klare: Yes. For the U.S. and Russia, it’s important. Here we would focus more on the Barents Sea, the area above Norway, and it helps of course to have a map in your mind, but Russia shares a border with Norway in its extreme north. That part of Russia is the Kola Peninsula, where the City of Murmansk is located, and that’s the headquarters of Russia’s Northern Fleet and where its nuclear missile submarines are based. That’s one of Russia’s few ways of access into the Atlantic Ocean from its own territory, from its major naval port at Murmansk. The waters adjacent to Northern Norway and, on the other side, Russia have become a very important strategic military location. The U.S. has started building military bases with Norway in that area close to the Russian border. We’ve now stationed B-1 bombers in that area, so it is seen as a likely first area of conflict in the event of a war between the U.S. and Russia.

Climate change figures into this because Russia views its Arctic region as critical economically as well as strategically and is building up its military forces there. And therefore, from the U.S.-NATO point of view, it’s a more strategically important region. But you ask about China, and China has become very interested in the Arctic as a source of raw materials, but also as a strategic passageway from its east coast to Europe, for the reason I indicated: once the ice cap melts, they’ll be able to ship goods to Europe in a much shorter space of time, and bring goods back, if they can go through the Arctic. But China is also very interested in drilling for energy in the Arctic and in minerals; there are a lot of valuable minerals believed to be in Greenland.

You can’t get to those now because Greenland is covered with ice. But as that ice melts, which it’s doing at a rapid rate, the ground is becoming exposed and mining activities have begun there for things like uranium, and rare earths, and other valuable minerals. China is very deeply interested in mining there, and this has led to diplomatic maneuverings, didn’t Donald Trump once talk about buying Greenland, and to geopolitical competition between the U.S. and China over Greenland and this area.

Lucas Perry: Are there any ongoing proposals for how to resolve territorial disputes in the Arctic?

Michael Klare: Well, the short answer is no. There is something called the Arctic Council, an organization of the states that occupy territory in the Arctic region, and it has some very positive environmental agendas and has had some success in addressing non-geopolitical issues. But it has not been given the authority to address territorial disputes; the members have resisted that. So, it’s not a forum that would provide for that. There is a mechanism under the United Nations Convention on the Law of the Sea that allows for adjudication of offshore territorial disputes, and it’s possible that that could be a forum for discussion, but mostly, these disputes have remained unresolved.

Lucas Perry: I don’t know much about this. Does that have something to do with having so many miles from your continental shelf, or something to do with the tectonic plates or the ocean floor?

Michael Klare: Yes, so under the UN Convention on the Law of the Sea, you’re allowed a 200 nautical mile exclusive economic zone off your coastline. Any coastal country can claim 200 nautical miles. But you’re also allowed an extra 150 nautical miles if you can prove scientifically that your outer continental shelf extends beyond 200 nautical miles; then you can extend your EEZ out to 350 nautical miles. And the Northern Arctic has islands and territories that have allowed contending states to claim overlapping EEZs-

Lucas Perry: Oh, okay.

Michael Klare: … on this basis.

Lucas Perry: I see.

Michael Klare: And Russia has claimed vast areas of the Arctic as part of its outer continental shelf. But the great imperial power of Denmark, which territorially, is one of the largest imperial powers on earth because it owns Greenland, and Greenland also has an extended outer continental shelf that overlaps with Russia’s, as does Canada’s. You have to picture the looking down, not on the kind of wall maps we have of the world in our classrooms that make the Arctic look huge, but from a global map, everything comes closer together up there. And so, these extended EEZs overlap and so Greenland, and Canada, and Russia are all claiming the North Pole.

Lucas Perry: Okay. So, I think that paints the picture really well of the already existing conflict there and how it will likely only get worse in terms of the amount of conflict. It’d be great if we could focus a bit on nuclear weapons risk under climate change in particular. I’m curious if you could explain the DOD’s concerns about an improving China, a nuclear North Korea, India, Pakistan, and other nuclear states in this evolving situation of increasing territorial disputes due to climate change.

Michael Klare: From a nuclear war perspective, the two greatest dangers, I think, and I’ve mentioned these, are, one, the collapse or disappearance of the Himalayan glaciers sparking a war between India and China that would go nuclear, or one between India and Pakistan that would go nuclear. That’s one climate-related risk of nuclear escalation. The other is in the Arctic, and here I think the danger is the fact that Russia has turned the Arctic into a major stronghold for its nuclear weapons capabilities. It stations a large share of its nuclear retaliatory capability, warheads on submarines and other forces, in the Arctic. And so, in the event of a conflict between the U.S. and Russia, this could very well take place in the Arctic region and trigger the use of nuclear weapons as a consequence.

Lucas Perry: I think we’ve done a really good job of showing all of the bad things that happen as climate change gets worse. The Pentagon has a perspective on everything that we’ve covered here, is that correct?

Michael Klare: Yes.

Lucas Perry: So, how does the Pentagon intend to address the issue of climate change and how it affects its operations?

Michael Klare: The Pentagon has multiple responses to this, and this began as early as the Quadrennial Defense Review of 2010. This is an every-four-year strategic blueprint released by the Joint Chiefs of Staff and the Secretary of Defense. And that year’s was the first one that, number one, identified climate change as a national security threat and spelled out the responses that the military should make, and there were three parts to that. One part is, I guess you would call it, hardening U.S. bases to the impacts of climate change, increasing resiliency and building seawalls to protect low-lying bases, and otherwise enhancing the survivability of U.S. bases in the face of climate change. That’s one response. A second response is mitigating the department’s own contributions to climate change by reducing its reliance on fossil fuels, and I could talk about what specifically they’re doing in that area.

The third is, and I think this is very interesting, they said that because climate change is a global problem, and they were specific about this, it affects our allies and friends, and therefore we should work with our allies and with the military forces of our allies and friends to do the same things in their countries that we’re doing at home, that is, to build resilience, to prepare for climate change, to reduce impacts, so that this would be a global cooperative effort, military to military. This has gotten very little attention, I think, from the media and from Congress and elsewhere, but it’s a very important part of American foreign policy with respect to climate change.

Lucas Perry: So, there’s hardening our own bases and systems, I believe in your book you mentioned, for example, turning bases into operational islands such that their energy and material needs are disconnected from supply lines. The second was reducing the greenhouse emissions of the military, and the third is helping allies with such efforts. I’m curious if you could describe a bit more the first and the second of these, the hardening of our own systems and bases and becoming more green. Because it is interesting and at least a little bit surprising that the military is trying to become green in order to improve combat readiness through independence from foreign and domestic fuel sources. So, could you explain a little bit more, for example, the drive to create a green fleet in the Navy?

Michael Klare: Sure. Now, this began during the Obama administration and then went semi-underground during the Trump administration, so the information we have is mainly pre-Trump. Under President Biden, climate change has been elevated to a national security threat, as per an executive order he issued shortly after taking office, and our new Secretary of Defense, Lloyd Austin, has issued a complementary statement that climate change is a department-wide Department of Defense concern, so activities that were prohibited by the Trump administration will now be revived. We will now hear a lot more about this in the months ahead, but there is a four-year blackout of information on what was being done. During the Obama administration, the Department of Defense was ordered, as I say, to work on both adaptation and mitigation as part of its responsibilities; the adaptation particularly affected bases in low-lying coastal areas.

And a lot of U.S. bases, for historic reasons, are located along the East Coast of the U.S.; that’s where they started out. The most important of them is the Norfolk Naval Station in Virginia, the most important naval base in the United States. It’s at sea level and it’s on basically reclaimed swampland, and it’s subsiding into the ocean at the same time sea level is rising. But there are many other bases along the East Coast and in Florida and on the Gulf Coast that are at equal risk. And so, part of what the military is doing is building seawalls to protect bases against sea surges, moving critical equipment from high flood-prone areas to areas at higher elevation, and adopting building codes: any new buildings built on these bases have to be hardened against hurricanes, and sensitive electronic equipment has to be put on the higher stories so that if the buildings are flooded it won’t be damaged.

There are a lot of very concrete measures having to do with base construction that have been undertaken to enhance the resilience of bases in response to extreme storms and flooding. That’s one aspect of this. The mitigation aspect is to reduce reliance on fossil fuels and to convert, wherever possible, air, ground, and sea vehicles to use alternative fuels. So, the Navy, the Army, and the Air Force are converting their non-tactical vehicle fleets, they all have huge numbers of ordinary sedans, and vans, and trucks. Increasingly, these will be hybrids or electric vehicles. And the Air Force is experimenting with alternative fuels produced by algae, and the Navy has experimented with alternative fuels derived from agricultural products, and so on. So, there’s a lot of experimentation going on, and some of the biggest solar arrays in the U.S. are on U.S. military bases or constructed at the behest of U.S. military bases by private energy companies. Those are some of the activities that are underway.

Lucas Perry: In addition to threatening U.S. military bases and the bases of our allies, climate change will also affect the safety and security of, for example, biosafety level 4 labs and nuclear power plants. So, I’m curious how you view the risks of climate change affecting crucial infrastructure that, should it fail, could create global catastrophe, for example, nuclear power plants melting down or pathogens being released from biosafety labs that fail under the stresses of climate change.

Michael Klare: I have not seen anything on the bio labs in the Pentagon literature. What they do worry about is the fragility of the U.S. energy infrastructure in particular, in part because they depend on the same energy infrastructure as we do for their energy needs, for electricity transmission, pipelines and the like, to supply their bases and their other facilities. And they’re very aware that the U.S. energy infrastructure is exceedingly vulnerable to climate change: a very large part of our infrastructure is on the East Coast and the West Coast, very close to sea level, very exposed to storm damage, and a lot of it is just fragile. A clear example of that is Hurricane Maria in Puerto Rico, when the electric system collapsed entirely and the Army Corps of Engineers had to come in and were there for almost an entire year rebuilding the energy infrastructure of Puerto Rico.

They’ve had to do this in other places as well. So, they are very worried that climate change disasters, multiple disasters, will knock out the power in the U.S., causing major cascading failures. When energy fails, then petrochemical facilities fail. And that’s what happened in Houston during Hurricane Harvey. The power went out, these petrochemical facilities, which Houston has many of, failed, toxic chemicals spilled out, and the sewer system also collapsed. So, you have cascading failures producing toxic threats. And the military had to issue protective clothing to its personnel doing rescue operations because the water in flooded areas of Houston was poisonous. So, it’s the cascading effects that they worry about. This happened in New York City with Hurricane Sandy in 2012, where power went out, then gas stations couldn’t operate and hospitals and nursing homes couldn’t function. Well, I’m going on here, but you get a sense of the interrelationship between these critical elements of infrastructure. Fires are another aspect of this, as we know from California. A lot of US bases in California are at risk from fires, as are the transmission lines that carry the energy. I was going to mention the Colonial Pipeline disaster, which was a cyber attack, not climate related, but that exposes the degree to which our energy infrastructure is fragile.

Lucas Perry: If it rains or snows just enough, we’ve all experienced losing power for six hours or more. The energy grid seems very fragile even to relatively normal weather.

Michael Klare: Yes, but with climate change you get these multiple, simultaneous disasters where whole systems break down.

Lucas Perry: Do you see lethal autonomous weapons as fitting into the risks of escalation in a world stressed by climate change?

Michael Klare: Well, I see lethal autonomous weapons as a major issue and problem, which I’ve written about and worry about a great deal. Now, what is their relationship to climate change? I couldn’t say. I think the military in general feels that humans are increasingly unable to cope with the demands of time compression in decision-making and the complexity of the environment in which decision-makers have to operate, and that’s partly technological, it’s partly just the complexity of the world that we’ve been discussing.

And so, there’s an ever-increasing sense among the military that commanders have to be provided with computer-assisted decision-making and autonomous operations because they can’t process the amount of data that’s coming at them simultaneously. This is behind not just autonomous weapons systems, but autonomous decision-making systems. The new plans for how the Army, Navy, and Air Force will operate call for fewer human decision-makers and more machine information processors and decision-makers, in which humans will be given a menu of possible choices, but the choices will be strike this set of targets or that set of targets, not stop and think about this, maybe we should de-escalate. They’re going to be militarized options.

Lucas Perry: So, there’s some sense in which lethal autonomous weapons potentially exacerbate or catalyze the speed at which the ladder of escalation is moved through.

Michael Klare: No question about it. Many factors are contributing to that: the speed of weaponry, the introduction of hypersonic missiles, which cuts down flight time from 30 minutes to five minutes, the fact that wars are being conducted in what they call multiple domains simultaneously, cyber, space, air, sea, and ground, so that no commander can know what’s happening in all of those domains and make decisions. So, you have to have what they want to create, a super brain called the Joint All-Domain Command and Control System, the JADC2 system, which will collect data from sensors all over the planet and compress it into simplified assessments of what’s happening, and then tell commanders, here are your choices, one, two, and three, and you have five seconds to choose, and if not, we’ll pick the best one, and we’ll be linked directly to the firers to launch weapons. This is what the future will look like, and they’re testing this now. It’s called Project Convergence.

Lucas Perry: So, how do you see all of this affecting the risks of human extinction and of existential risks?

Michael Klare: I’m deeply concerned about this inclination to rely more on machines to make decisions of life and death for the planet. I think everybody should be worried about this, and I don’t think enough attention is being paid to these dangers of automating life and death decision-making, but this is moving ahead very rapidly and I think it does pose enormous risks. The reason that I’m so worried is that I think the computer assisted decision-making will have a bias towards military actions.

Humans are imperfect and sometimes we make mistakes. Sometimes we get angry and we go in the direction of being more violent and brutal. There’s no question about that, but we also have a capacity to say, stop, wait a minute, there’s something wrong here and maybe we should think twice and hold back. And, that’s saved us on a number of occasions from nuclear extinction. I recommend the book Gambling with Armageddon by Martin Sherwin, a new day-by-day, hour-by-hour account of the Cuban Missile Crisis, in which it’s clear that the US and Russia came very close, extremely close, to starting a nuclear war in 1962, and somebody said, “Wait a minute, let’s just think about this. Let’s not rush into this. Let’s give it another 24 hours to see if we can come up with a solution.”

Adlai Stevenson apparently played a key role in this. I fear that the machines we design are not going to have that kind of thinking built into them, that kind of hesitancy, those second thoughts. I think the machines are going to be designed so that the algorithms that inhabit them reflect the most aggressive possible outcomes, and that’s why I fear that in a crisis we will move closer to human extinction than before, because the time for decision-making is going to be so compressed that humans are going to have very little chance to think about it.

Lucas Perry: So, how do you view the interplay of climate change and autonomous weapons as affecting existential risk?

Michael Klare: Climate change is just going to make everything on the planet more stressful in general. It’s going to create a lot of stress, a lot of catastrophes occurring simultaneously, a lot of risk events that people are going to have to deal with, and a lot of hard, difficult choices. Let’s say you’re the president, you’re the commander in chief, and you have multiple hurricanes and fires striking the United States, which is hardly an unlikely outcome, at the same time that there’s a crisis with China and Russia occurring where war would be a possible outcome. There’s a naval clash in the South China Sea or something happening on the Ukraine border, and meanwhile, Nigeria is breaking apart and India and Pakistan are on the verge of war.

These are very likely situations in another 10 to 20 years if climate change proceeds the way it is. So, just the complexity of the environment, the stress that people will be under, the decisions they’re going to have to make swiftly between do we save Miami or do we save Tokyo? Do we save Los Angeles or do we save New York, or do we save London? We only have so many resources. In these conditions, I think the inclination is going to be to rely more on machines to make decisions and to carry out actions, and that I think has inherent dangers in it.

Lucas Perry: Do you and/or the Pentagon have a timeline for… How much and how fast is the instability from climate change coming?

Michael Klare: This is a progression. We’re on that path, so there’s no point at which you could say we’ve reached that level. It’s just an ever increasing level of stress.

Lucas Perry: How do you see the world in five or 10 years given the path that we’re currently on?

Michael Klare: I’m pessimistic about this, and the reason I am pessimistic is because if you go back and read the very first reports of the Intergovernmental Panel on Climate Change, the IPCC, they gave a series of projections based on their estimates of the pace of greenhouse gas emissions. If emissions go this high, then you have these projections; if they go higher, then you have those projections, out to 2030, 2040, 2050. We’ve all seen these charts.

So, if you go back to the first ones, basically, in 2021 we’re living what they said were, by and large, the worst case projections for 2040 to 2050. So, we’re moving into the danger zone. What I’m saying is we’re moving into the danger zone much, much faster than the worst case scenarios that scientists were talking about 10 or 20 years ago, and if that’s the case, then we should be very, very worried about the pace at which this is occurring, because we’re off the charts now from those earlier predictions of how rapidly sea level rise, desertification, and heat waves would occur. We’re living in a 2050 world now. So, where are we going to be in 2030? We’re going to be in a 2075 world, and that world is a pretty damn scary world.

Lucas Perry: All right, so I’m mindful of the time here. So, just a few more questions about messaging would be nice. So, do you think that tying existential risks to national security issues would benefit the movement towards reducing existential risks, given that climate change is elevated in some sense by the DOD taking it seriously on account of national security?

Michael Klare: So, let me explain why I wrote this book, and this is very much a product of the Trump era, but I think it’s still true today that you have a country that’s divided between environmentalists and climate deniers, and this divide has prevented forward movement in Congress to pass legislation that will make a significant difference in this country. I believe this has to come from the national level; the kind of changes we need, the massive investments in renewables and charging stations for electric vehicles, all these things require national leadership, and right now that’s impossible because of the fundamental divide between the Democrats and Republicans, or denialists and environmentalists, however you want to put it. Some of my friends in the environmental community, dear friends, think if we could only get across the message that things are getting worse, those deniers will finally wake up and change their views.

I don’t think that’s going to happen. I think more scientific evidence about climate change is not going to win over more people. We’ve tried that. We’ve done everything we can to make the scientific evidence known. So, the way to win, I believe, is the military perspective: that this is a threat to the national security of the United States of America. Are you a patriotic American or not? Do you care about the security of this country or not?

This is not a matter of environmentalism or anti-environmentalism. This is about the national security of this country. Where do you stand on that? This is a third approach that could possibly win over some segment of the population that until now has resisted action on climate change, that’s not going to listen to an environmentalist or green argument. There is evidence that this approach is making a difference, that there are Republicans who won’t even talk about the causes of climate change, but who acknowledge that their communities or the country are at risk on a national security basis, and who are therefore willing to invest in some of the changes that are necessary for that reason. So, I do believe that making this argument could win over enough of that resistant population to make it possible to actually achieve forward momentum.

Lucas Perry: Do you think that relating climate change to migration issues is helpful for messaging?

Michael Klare: I’m not sure because I think people who are opposed to migration don’t care what the cause is, but I do think that it might feed into the argument that I was just making that our security would be better off by emphasizing climate change and therefore taking steps to reduce the pressures that lead climate migrants to migrate. The military certainly takes that view, so it could be helpful, but I think it’s a difficult topic.

Lucas Perry: All right, so given everything we’ve discussed here today, how would you characterize and summarize the Pentagon’s interest, view, and action on climate change and why that matters?

Michael Klare: So, now we have a new test because, as I’ve indicated, we had a blackout period of four years during the Trump administration when all of this was hidden and couldn’t be discussed. So, we don’t know how much was accomplished. Now, this is an explicit priority for the Department of Defense; the defense budget and other documents say that this is a priority for the department and the Armed Forces, and they are required to take steps to adapt to climate change and to mitigate their role in climate change.

So, we have to see how much actually is accomplished in this new period before you can really make any definitive assessment, but I think you can see that the language adopted by the Biden administration and Lloyd Austin at the Department of Defense is so much stronger and more vigorous than what the Pentagon was saying in the Obama administration. So, even though there was a four year blackout period, there was a learning curve going on, and what they’re saying today is much more advanced in the sense of recognizing the severity of the risks posed by climate change and the necessity of making this a priority.

Lucas Perry: All right, so as we wrap up, are there any final words or anything you’d like to share that you feel is left unsaid or any parting words for the audience?

Michael Klare: As I started out, we mustn’t forget that if you ask anybody in the military what their job is, they’re going to come back to China as number one. So, we shouldn’t forget that: defending against China. It’s only after you peel away the layers of how they’re going to operate in a climate-altered world that all of these other concerns start spilling out, but it’s not going to be the first thing that they’re likely to say. I think that has to be clear, based on my conversations, but there is a real awareness that in fact climate change is going to have an immense impact on the operations of the military in the years ahead, and that its impact is going to grow exponentially.

Lucas Perry: All right. Well, thank you very much for coming on Michael, and for sharing all of this with us. I really appreciated your book and I recommend others check it out as well; it’s All Hell Breaking Loose. I think it does a really good job of showing the ways in which the world is going to get worse through climate change. There are a lot of really great examples in there. Also, the audiobook has a really great narrator, which I very much liked. So, thank you very much for coming on. If people want to check you out or follow you on social media, where and how can they do that?

Michael Klare: Oh, I’m at michaelklare.com and let’s start there.

Lucas Perry: All right. Do you also have a place for you where you list publications?

Michael Klare: At that site.

Lucas Perry: At that site? Okay.

Michael Klare: And, it’s K-L-A-R-E, Michael Klare, K-L-A-R-E.

Lucas Perry: All right, thank you very much, Michael.

Avi Loeb on UFOs and if they’re Alien in Origin

  • Evidence counting for the natural, human, and extraterrestrial origins of UAPs
  • The culture of science and how it deals with UAP reports
  • How humanity should respond if we discover UAPs are alien in origin
  • A project for collecting high quality data on UAPs

Watch the video version of this episode here

See here for information on the Podcast Producer position

See the Office of the Director of National Intelligence report on unidentified aerial phenomena here

 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. This is a follow up interview to the main release that we’ve done with Avi Loeb. After our initial interview, the US Government released a report on UFOs, otherwise now known as UAPs, titled Preliminary Assessment: Unidentified Aerial Phenomena. This report is a major and significant event in the history and acknowledgement of UFOs as a legitimate phenomenon. As our first interview with Avi focused on Oumuamua and its potential alien origin, we also wanted to get his perspective on UFOs, this report, his views on whether they’re potentially alien in origin, and what this all means for humanity.

In case you missed it in the main episode, we’re currently hiring for a Podcast Producer to work on the editing, production, publishing, and analytics tracking of the audio and visual content of this podcast. As the Producer you would be working directly with me, and the FLI outreach team, to help grow, and evolve this podcast. If you’re interested in applying, head over to the Careers tab on the Futureoflife.org homepage or follow the link in the description. The application deadline is July 31st, with rolling applications accepted thereafter until the role is filled. If you have any questions, feel free to reach out to socialmedia@futureoflife.org.

And with that, I’m happy to present this bonus interview with Avi Loeb.

The Office of the Director of National Intelligence has released a preliminary assessment on unidentified aerial phenomena, which is a new word that they’re using for UFO, so now it’s UAP. So can you summarize the contents of this report and explain why the report is significant?

Avi Loeb: The most important statement made in the report is that some of the objects that were detected are probably real and that is based on the fact that they were detected in multiple instruments using radar systems, or infrared cameras, or optical visual cameras, or several military personnel seeing the same thing, doing the same thing at the same time. And so that is a very significant statement because the immediate suspicion is that unusual phenomena occur when you have a smudge on your camera, when there is a malfunction of some instruments, and the fact that there is corroborating evidence among different instruments implies that it must be something real happening. That’s the first significant statement.

And then there were 144 incidents documented but it was also mentioned there is a stigma on reporting because there is a taboo on discussing extraterrestrial technologies, and as a result only a small minority of all events were reported. But nevertheless, the Navy established in March 2019 a procedure for reporting, which was not available prior to that and the Air Force followed on that in December 2020. So it’s all very recent that there is this procedure or formal path through which reports can be obtained. And of course, that helps in the sense that it provides a psychological support system for those who want to report about unusual things they have witnessed, and prior to that they had to dare to speak given the stigma.

And so the second issue is, of course, what these objects are, if some of them are real. And by the way, we only saw a small fraction of the evidence; most of it is classified, and the reason is that the government owns the sensors that were used to obtain this evidence. These sensors are being used to monitor the sky and therefore have national security importance, and we don’t want to release information about the quality of the sensors to our adversaries, to other nations. And so the data itself is classified because the instruments are classified, but nevertheless one can think of several possible interpretations of these real objects.

And I should say that CIA directors, like Brennan and Woolsey, and President Barack Obama spoke about these events as serious matters, so to me it all implies that we need to consider them seriously. So there are several possible interpretations. One, of course, is that they are human made and some other nation produced them, but some of the objects behaved in ways that exceed our technologies, the limits of the technologies we have in the U.S., and we have some intelligence on what other nations are doing. And moreover, if there was another nation with far better technologies, those technologies would find their way into the consumer market, because there would be a huge financial benefit to using them, or we would see some evidence for them in the battlefield. And at the moment we have a pretty good idea, I would argue, as to what other nations are doing technologically speaking.

So if there are real objects behaving in ways that exceed human technologies, then the question is what could it be? And there are two possibilities, either these are natural phenomena that occur in the atmosphere that we did not expect or they are of extraterrestrial origin, some other technological civilization produced these objects and deployed them here. And of course, both of these possibilities are exciting because we learned something new. So the message I take from this report is that the evidence is sufficiently intriguing for the subject to move away from the talking points of politicians, national security advisors, military personnel that were really not trained as scientists. It should move into the realm of science where we use state-of-the-art equipment, such as cameras installed on wide field telescopes that scan the sky. These are telescopes you can buy off the shelf and you can position them in similar geographical locations, and monitor the sky with open data, and analyze the data using the best computers we have in a completely transparent way like a scientific experiment.

And so that’s what we should do next and instead what I see is that a lot of scientists just ridicule the significance of the report or say business as usual, that there is no need to attend to these statements. And I think it’s an opportunity for science to actually clarify this matter and clear up the fog and this is definitely a question that is of great interest to the public. What is the nature of these unidentified objects or phenomena? Are they natural in origin or maybe extraterrestrial? And I’m very much willing to address this with my research group given proper funding for it.

Lucas Perry: Let’s stick here with this first point, right? So I actually heard Neil deGrasse Tyson on MSNBC just before we started talking and he was mentioning that he thought there could have been hardware or software issues that could have been generating artifacts on these systems. And I think your first point very clearly refutes that, because you have multiple different systems plus eyewitness reports all corroborating the same thing. So it’s confusing to me why he would say that.

Avi Loeb: Well, because he is trying to maximize the number of likes he has on Twitter and doubting the reality of these reports appears to be popular among people in academia, among scientists, among some people in the public and so he’s driven by that. My point is, an intelligent culture is driven or actually is following the guiding principles of science and those are sharing evidence based knowledge. What Neil deGrasse Tyson is doing is not sharing evidence based knowledge, but rather dismissing evidence. And my point is, this evidence that is being reported in Washington, D.C. is intriguing enough to motivate us to collect more evidence rather than ignore it. So obviously, if you look at the history of science, we often make discoveries when we find anomalies, things that do not line up with what we expected.

And the best example is quantum mechanics, which was discovered a century ago; nobody expected it. It was forced upon us by experiments, and actually Albert Einstein at the time resisted one of its fundamental facets, entanglement, or what he called spooky action at a distance: that the quantum system knows about its different parts even if they are separated by a large distance, such that light signals cannot propagate between them over the time of the experiment. And he argued that this cannot be the case and wrote a paper about it with his postdocs; it’s called the Einstein-Podolsky-Rosen experiment. That experiment was later done and demonstrated that he was wrong, and even a century later we are still debating the meaning of quantum mechanics.

So it’s sort of like a bone stuck in the throat of physicists, but nevertheless, we know that quantum mechanics holds and applies to reality. And in fact the reason the two of us are conversing is because of our understanding of quantum mechanics, the fact that we can use it in all the instruments that we use. For example, the speaker that the two of us are using, and the internet, and the computers we are using, all of these use principles of quantum mechanics that were discovered over the century. And my point is, that very often when you do experiments there are situations where you get something you don’t expect, and that’s part of the learning experience. And we should respect deviations from our expectations because they carve a new path to new understanding, new learning about reality and about nature rather than ridiculing it, rather than always thinking that what we will find will match what we expected.

And so if the government among all comes along with reports about unusual phenomena, the government is very conservative very often, you would expect the scientific community to embrace that as an exciting topic to investigate because the government is saying something, let’s figure out what is it about. Let’s clarify the nature of these phenomena that appear anomalous rather than saying business as usual, I don’t believe it, it could be a malfunction of the instrument. So if it is a malfunction of the instrument, why did many instruments show the same thing? Why did many pilots show the same thing? And I should clarify that in the courtroom if you have two eyewitness testimonies that corroborate each other you can put people in jail as a result. So we believe people in the legal system and somehow when it comes to pilots, who are very serious people that serve our country, then Neil deGrasse Tyson dismisses their testimony.

So my point is not that we must believe it, but rather that it’s intriguing enough for us to collect more data and evidence and let’s not express any judgment until we collect that evidence. The only sin we can make is to basically ignore the reports and do business as usual, which is pretty much what he is preaching for. And my point is, no instead we should invest funds in new experiment or experiments that will shed more light on the nature of these objects.

Lucas Perry: So the second point that you made earlier was that the government was establishing a framework and system for receiving reports about UFOs. As part of this document, is it true to say then that there is also a confirmation that the government does not know what these are and that they are not a secret U.S. project?

Avi Loeb: Yeah, they stated it explicitly in the report. They said the data is not good enough, the evidence is not good enough to figure out the nature of these objects so we don’t know what they are. And by the way, you wouldn’t expect military personnel or politicians to figure out the nature of anomalous objects because they were not trained as scientists. So when you go to a shoemaker, you won’t expect the shoemaker to bake you a cake. These are not people that were trained to analyze data of this type or to collect new data such that the nature of these objects will be figured out.

That is what scientists are supposed to do and that’s why I’m advocating for moving this subject away from Washington, D.C. into an open discussion in the scientific community where we’ll collect open data, analyze it in a transparent way, not with government owned sensors or computers, and then it will be all clear. It will be a transparent process. We don’t need to rely on Washington, D.C. to tell us what appears in our sky. The sky is unclassified in most locations. We can look up anytime we want and so we should do it.

Lucas Perry: So with your third point, there’s this consideration that people are more likely to try and give a conventional explanation of UAPs as coming from other countries like Russia or China, and you’re explaining that there are heavy commercial incentives. If you had this kind of technology it could, for example, revolutionize your economy, and you wouldn’t just be using it to pester U.S. Navy pilots off the coast, right? It could be used for really significant economic reasons. And so it seems like that also counts as evidence against it being conventional in origin or a secret government project. What is your perspective on that?

Avi Loeb: Yes and it would not only find its place in the consumer market, but also in the battlefield. And we have a sense of what other nations are doing because the U.S. has its own intelligence and we pretty much know what the status of their science and technology is. So it’s not as if we are completely in the dark here and I would argue if the U.S. government reports these objects there is good evidence that they are not made by those other nations because if our intelligence would tell us that they were potentially made by other nations then we would try to develop the same technologies ourselves. And another way to put it is if a scientific inquiry into the nature of these objects allows us to get a high resolution photograph of one of them and then we see the label “Made in China” or “Made in Russia,” then we would realize that there was a major failure of national intelligence and that would be a very important conclusion of course, that would have implications for our national security.

But I doubt that this is the case because we have some knowledge of what other nations are doing and the data would not have been released this way in this kind of a report if there was suspicion that these objects are human made.

Lucas Perry: So the report makes clear then that it’s not U.S. technology, and there are also reasons that count against it being, for example, Russian or Chinese technology, because the incentives are aligned for them to just deploy it already and use it in the public sector. So before we get into more specifics about whether these are human or extraterrestrial in origin, I’m curious if you could explain a bit more about the flight and capability characteristics of these UAPs and UFOs and what you feel are the most significant reports of them and their activity.

Avi Loeb: Well, I didn’t have direct access to the data, especially not the classified one and I would very much want to see the full dataset before expressing an opinion, but at least some of the videos that were shown indicated motions that cannot be reproduced by the kind of crafts that we own. But what I would like to know is whether when the object moves faster than the speed of sound for example in air, whether it produces a sonic boom, a shockwave that we see for example, when jets do the same because that would be an indication that indeed there is a physical object that is compressing air as it moves around. Or if it moves through water I want to see the splash of water and from that I can infer some physical properties.

And of course, I would like to have a very high resolution image of the object so that I can see if it has screws, or if there is something written on it, either “Made in China” or “Made on Planet X.” Either message would be of great importance. So what I’m really after is access to the best data that we have, and obviously it will not be released by the government because the best data is probably classified. But I would like to collect it using scientific instrumentation, which by the way could be far better than the instruments that the pilots were using on airplanes or on Navy ships, because those were designed for combat situations and were not optimal for analyzing such objects. And we can do much better if we choose our scientific instruments carefully and design the experiment in a way that would reproduce the results with a much higher fidelity of the data.

Lucas Perry: There is so much about this that is similar to Oumuamua in terms of there being just barely… The imaging is not quite enough to really know what it is and then there being lots of interesting evidence that counts for extraterrestrial in origin. Is that a perspective you share?

Avi Loeb: Well, yes I wrote a Scientific American article where I said one thing we know about Oumuamua is that it probably had a flat shape, pancake like, and also if its push away from the sun was a result of reflecting sunlight it must have been quite thin and the size of a football field. And in that case, I thought maybe it serves as a lightsail, but potentially it could also be a receiver intended to detect information or signals from probes that were sprinkled on planets in the habitable zone around the sun. So if for example the UAP are probes transmitting signals, then the passage of such a receiver near Earth was meant to obtain that information. And Oumuamua for example, was tumbling every eight hours, was looking in all directions in principle for such signals, so that could be one interpretation that it was thin not because it was a lightsail, but because it served a different purpose.

And in September 2020, we saw another object that also exhibited an excess push away from the sun by reflecting sunlight and had no cometary tail. It was given the name 2020 SO and it was a rocket booster from a 1966 mission. It had thin walls for a completely different purpose, not having anything to do with it being a lightsail. So I would argue that perhaps Oumuamua had these weird properties because it served a different purpose, and that’s why we should both try to get a better image of an object like Oumuamua and of the unidentified objects we find closer to Earth. And in both cases, a high resolution photograph is better than a thousand words; in my case, better than 66,000 words, the number of words in my book.

Lucas Perry: In both cases, a little bit better instrumentation would have seemingly made a huge difference. So let’s pivot into again, this area of conventional explanations. And so we talked a little bit earlier about one conventional explanation being that this is some kind of advanced secret military technology of China or Russia that’s used for probing our aerial defenses. And the argument that counts against that again, was that there are military and economic incentives to deploy it more fully, especially because the flight characteristics that these objects are expressing are so much greater than anything that America has in terms of the speed and the agility. So one theory is that instead of the technology being actual, like they actually have the technology that goes that fast and is that agile, that this is actually some form of spoofing technology, so some kind of way of an adversary training electronic countermeasures to simulate, or emulate, or create the illusion of what we witnessed in terms of the U.S. instruments. So do you think that such an explanation is viable?

Avi Loeb: I mean, it’s possible and that’s why we need more data. But it’s not easy to produce an illusion in multiple instruments, both radar, infrared, and optical sensors because you can probably create an illusion for one of these sensors, but then for all of them it would require a great deal of ingenuity and then you would need a good reason to do that. Why would other nations engage in deceiving us in this way for 20 years? I mean, that would look a bit off and also, we would have probably found something, some clue about them trying to do that because they would have trained such probes or such objects first in their own facilities and we would see some evidence for that. So I find it hard to believe, I would think it’s either some natural phenomena that we haven’t yet experienced or suspected or it’s this unusual possibility of an extraterrestrial origin.

And either way we will learn something new by exploring it more. We should not have any prejudice. We should not dismiss it. That would be the worst we can do, just dismiss it, and ridicule it, and continue business as usual because actually it’s exciting to try and figure out a puzzle. That’s what detectives often do and I just don’t understand the spirit of dismissing it and not looking into it at all.

Lucas Perry: So you just mentioned that you thought it might have some kind of natural explanation. There are very strange things in nature. I’m not sure if this is real or not, but there’s a Wikipedia page, for example, for ball lightning, and there are also really weird phenomena that you can get in the sky if the lighting is just right and depending on where the sun is, where you get strange halos and things. And throughout history there are reports of dancing lights in the sky, or things that might have been collective hallucinations or actually real. In terms of it being something natural that we understand, or something human made that we’re not aware of, what to you is the most convincing natural or conventional explanation of these objects? An explanation that is not extraterrestrial in origin.

Avi Loeb: Well, if it’s dancing lights it wouldn’t produce a radar echo. So as I said, I don’t have access to the data in each and every incident, but there is some fundamental logic that one can use for each of these datasets to figure out if it could be an illusion. If not, if it must be a real object, somehow nature has to produce a real object that behaves this way, and until I get my own data and reproduce those results I won’t make any statement. But I’m optimistic that given the appropriate investment of funds, which I’m currently discussing with private sector funders, we can do it. And just to give you an example, if you wanted to get a high resolution image, like a megapixel image of a one meter size object at a distance of a kilometer, you just need a one meter telescope for that, observing in optical light. And you will be able to see millimeter size features on it, like the head of a pin.

People ask why didn’t we see it already in iPhone images of the sky? Well, the iPhone camera is a millimeter or a few millimeters in aperture size and it’s too small. You can’t get anything better than a fuzzy image of a very distant object. So you really need to have a dedicated experiment, and I think one can do it, and I’m happy to engage in that.
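
As an aside, the aperture argument here can be checked against the standard Rayleigh diffraction limit. Below is a minimal illustrative sketch; the wavelength, aperture sizes, and range are assumed round numbers for illustration, not figures given in the interview:

```python
def resolvable_feature(aperture_m, distance_m, wavelength_m=550e-9):
    """Smallest feature size (in meters) resolvable at a given distance,
    using the Rayleigh diffraction limit: theta ~ 1.22 * lambda / D."""
    theta = 1.22 * wavelength_m / aperture_m  # angular resolution in radians
    return theta * distance_m                 # linear feature size at that distance

# A 1 m telescope observing in visible light at a range of 1 km:
print(resolvable_feature(1.0, 1_000))    # ~0.0007 m, i.e. sub-millimeter detail

# A phone camera with a ~3 mm aperture at the same range:
print(resolvable_feature(0.003, 1_000))  # ~0.2 m, so a 1 m object is just a blur
```

At millimeter-scale resolution, a one-meter object spans roughly 1,000 by 1,000 resolution elements, which is the megapixel image described above; the phone-camera number illustrates why everyday photos cannot settle the question.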

Lucas Perry: You would also wonder about these things… So if they were extraterrestrial in origin, you would expect that they would be pretty intelligent and that they might understand what our sensor capabilities are. So I think perhaps that might count as evidence for why, given that there are billions of camera phones around the planet, there aren’t any good pictures. What is your perspective on that?

Avi Loeb: If I had to guess, I would think of these systems as equipped with artificial intelligence. We already have artificial intelligence systems that are capable of exceeding human abilities, and within a decade they will be more intelligent than people; they’ll be able to learn through machine learning, adapt to changing circumstances, and behave very intelligently. So in fact, I can imagine that if another civilization had a technological head start of more than a century on us, they could have produced systems that are autonomous. It doesn’t make any sense for such a system to communicate with its sender, because the nearest star is four light years away. It takes four years for a signal to reach even the nearest star, and it takes tens of thousands of years to reach the edge of the galaxy, the Milky Way.

And so there is no sense of a piece of equipment communicating with its senders in order to get guidelines as to what to do. Instead, it should be autonomous, it has its own intelligence, and it could outsmart us. We already almost have such systems. So in fact, we may need to use our own artificial intelligence systems in order to interpret the actions of those artificial intelligence systems. So it will resemble the experience of asking our children to interpret the content that we find on the internet because they have better computer skills. We need our computers to tell us what their computers are doing. And that’s the way I think about it and these systems could be so intelligent that they do things that are subtle. They don’t appear and start a conversation with us. They are sort of gathering information of interest to them and acting in a way that reflects the blueprint that guided whoever created them.

And the question is, what is their agenda? What is their intent? And that will take us a while to figure out. We have to see what kind of information they’re seeking, how they respond to our actions, and eventually we might want to engage with them. But the point is, many people think of contact as being a very abrupt interaction of extraordinary proportion that is impossible to deny, but in fact it could be very subtle, because they are very intelligent. If you look at intelligent humans, they’re not aggressive; they’re often thinking about everything they do and they select their actions appropriately. They don’t get into very violent confrontations often. We need to rely on evidence rather than prejudice, and the biggest mistake we can make is the mistake made by philosophers during the days of Galileo. They said, “We don’t want to look through a telescope because we know that the sun moves around the Earth.”

Lucas Perry: We spoke a little bit earlier about Bayesian reasoning and Oumuamua. Do you have the same feelings about not having priors about the UAPs being extraterrestrial or human in origin? Or is there a credence that you’re able to assign to it being extraterrestrial in origin?

Avi Loeb: The situation is even better in the context of UAP because they are here, not far from us and we can do the experiment with a rather modest budget. And therefore, I think we can resolve this issue with no need to have a prejudice. Often you need a prior if the effort requires extraordinary funds, so you have to say, okay is it really worth investing those funds? But my point is, that finding the answer to the nature of UAP may cost us much less than we already spent in the search for dark matter. We haven’t found anything. We don’t know where the dark matter is. We spent hundreds of millions of dollars and at a cost lower than that, maybe by an order of magnitude, we can try and figure out the nature of UAP. So given the price tag, let’s not make any assumptions. Let’s just do it and figure it out.

Lucas Perry: If these are extraterrestrial in origin, one might expect that they are here for probing or information gathering. So you said there are reports going back 20 years; if they are extraterrestrial in origin, who knows how long they’ve been here. They could have been sent out as nanoscale probes shot into the cosmos that land, then grow and replicate on some planet, and act as scouts. So if this were the case, that they were here as information gathering probes, one might wonder why they don’t use much more advanced technology. For example, why not use nanotechnology that we would have no hope of detecting? In one report, for example, the pilot describes it following him, and then it comes right in front of him and then it disappears, so that disappearing seems a bit more like magic, right? Any sufficiently powerful technology is indistinguishable from magic to a less advanced civilization. But the other characteristics seem maybe like 100 or 200 years away in terms of human technological advancement, so what’s up with that?

Avi Loeb: Well, yeah, so for us to figure out what’s going on we need more data, and it may well be that there are lots of things happening that we haven’t yet realized, because they are done in a way that is very subtle, that we cannot really detect easily, because our technologies are limited to the century and a half over which we developed them. So you are right, there may be a hidden reality that we are not aware of, but we are seeing traces of things that attract our attention. That’s why when we see something intriguing we should dig into it. It could be a clue to something that we have never imagined.

So for example, if you were to present a cellphone to a caveman, obviously the cellphone would be visible to the caveman and the caveman would think, oh it’s probably a rock, a shiny rock. So the caveman will recognize part of reality, that there is an object reflecting light, that is a bit more shiny than a typical rock because the caveman is used to playing with rocks. But the caveman, initially at least, will not figure out the features on the cellphone and the fact that he can speak to other people through this rock. That’s what will take us a while to be educated about. And the question is, among the things that are happening around us, which fraction are we aware of with the correct interpretation? And maybe we are not.

Lucas Perry: Moving on a bit here, so if these UAPs, or Oumuamua itself, or some new interstellar object that we were able to find were fairly conclusively shown to be extraterrestrial technology, what do you think our response should be? It seems like on one hand this would clearly be a potential existential threat, which then makes it relevant to the Future of Life Institute. On the other hand, it’s likely that we could do nothing to counter such a threat. We probably couldn’t even counter humanity 50 years from now, if we had to defend ourselves against a version of ourselves that is 50 years older, wiser, and more technologically advanced. And on cosmological timescales you would expect that even a 1,000 or 2,000 year lead would be pretty common, but also indefensible. So there’s a sense that an antagonistic attitude would probably make things worse, but also that we couldn’t do anything. So how do you think humanity should react?

Avi Loeb: The question of intent is indeed the next question after you identify an object that appears to be of extraterrestrial technological origin. We should all remember the story about the Trojan horse that looked very innocent to the citizens of Troy, but ended up serving a different purpose. That of course implies that we should collect as much evidence as possible about the objects that we find at first and see how they behave, what kind of information they are seeking, how do they respond to our actions, and ultimately we might want to engage with them. But I should mention, if you look at human history, nations that traded with each other benefited much more than nations that went into war with each other. And so a truly intelligent species might actually prefer to benefit from the interaction with us rather than kill us or destroy us, and perhaps take advantage of the resources, use whatever we are able to provide them with. So it’s possible that they are initially just spectators trying to figure out what are the things that they can benefit from.

But from our perspective, we should obviously be suspicious, and careful, and we should speak in one voice. Humanity as a whole, there should be an international organization perhaps related to the United Nations or some other entity that makes decisions about how to interact with whatever we find. And we don’t want one nation to respond in a way that would not represent all of humanity because that could endanger all of us. In that forum that makes decisions about how to respond, there should be of course physicists that figure out the physical properties of these objects and there should be policymakers that can think about how best to interact with these objects.

Lucas Perry: So you also mentioned earlier that you were talking with private funders about potentially coming up with an action plan or a project for getting more evidence and data on these objects. So I guess, there’s a two part question here. I’m curious if you could explain a little bit about what that project is about and more generally what can the scientific, non-governmental, and amateur hobbyist communities do to help investigate these phenomena? So are there productive ways for citizen scientists and interested listeners to contribute to the efforts to better understand UAPs?

Avi Loeb: Well, my hope is to get a high resolution photograph. It’s a very simple thing to desire. We’re not talking about some deep philosophical questions here. If we had a megapixel image, an image with a million resolution elements of an object, if it has a size of a meter that would mean each pixel is a millimeter in size, the size of the head of a pin, you can pretty much see all the details on the object and try and figure out, reverse engineer what it’s meant to do and whether it’s human made or not. So even a kid can understand my ambition. It’s not very complicated. Just get a megapixel image of such an object. That’s it.

Lucas Perry: They seem common enough that it wouldn’t be too difficult if the-

Avi Loeb: Well, the issue is not how common they are but what device you are using to image them, because if you use an iPhone, the aperture on the iPhone will give you only a fuzzy image. What you need is a meter-sized telescope collecting the information and resolving an object of a meter size at a distance of a kilometer down to a millimeter resolution.

Lucas Perry: Right, right. I mean, Navy pilots, for example, have reported seeing them every day for years, so if we had such a device then you wouldn’t have to wait too long to get a really good picture of one.

Avi Loeb: So that’s my point, if these are real objects we can resolve them, and that’s what I want to have, a high resolution image. That’s all. And it will not be classified because it’s being taken by off the shelf instruments. The data will be open and here comes the role that can be played by amateurs, once the data is available to the public anyone can analyze it. Nothing is classified about the sky. We can all look up and if I get that high resolution image, believe me that everyone will be able to look at it.

Lucas Perry: Do you have a favorite science fiction book? And what are some of your favorite science fiction ideas?

Avi Loeb: Well, my favorite film is Arrival and in fact, I admired this film long ago, but a few months ago the producer of that film had a Zoom session with me to tell me how much he liked my book Extraterrestrial. And I told him, “I admired your film long before you read my book.” The reason I like this film is because it deals with the deep philosophical question of how to communicate with an alien culture. In fact, even the medium through which the communication takes place in the film is unusual and the challenge is similar to code breaking, sort of like the project that Alan Turing led during the Second World War of the enigma, trying to figure out, to break the code of the Nazis. So if you have some signal and you want to figure out the meaning of it, it’s actually a very complex challenge depending on how the information is being encoded. And I think the film addresses it in a very genuine and original fashion and I liked it a lot.

Lucas Perry: So do you have any last minute thoughts or anything you’d just really like to communicate to the audience and the public about UAPs, these reports, and the need to collect more evidence and data for figuring out what they are?

Avi Loeb: My hope is that with a high resolution image we will not only learn more about the nature of UAP but change the culture of the discourse on this subject. And I think that such an image would convince even the skeptics, even people that are currently ridiculing it, to join the discussion, the serious discussion about what all of this means.

Lucas Perry: And if there are any private funders or philanthropists listening that are interested in contributing to the project to capture this data, how is it best that they get in contact with you?

Avi Loeb: Well, they can just send me an email to aloeb@cfa.harvard.edu and I would be delighted to add them to the group of funders that are currently showing interest in it.

Lucas Perry: All right, thank you very much Avi.

Avi Loeb: Thank you for having me.

Avi Loeb on ‘Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures

  • Whether ‘Oumuamua is alien or natural in origin
  • The culture of science and how it affects fruitful inquiry
  • Looking for signs of alien life throughout the solar system and beyond
  • Alien artefacts and galactic treaties
  • How humanity should handle a potential first contact with extraterrestrials
  • The relationship between what is true and what is good

3:28 What is ‘Oumuamua’s wager?

11:29 The properties of ‘Oumuamua and how they lend credence to the theory of it being artificial in origin

17:23 Theories of ‘Oumuamua being natural in origin

21:42 Why was the smooth acceleration of ‘Oumuamua significant?

23:35 What are comets and asteroids?

28:30 What we know about Oort clouds and how ‘Oumuamua relates to what we expect of Oort clouds

33:40 Could there be exotic objects in Oort clouds that would account for ‘Oumuamua

38:08 What is your credence that ‘Oumuamua is alien in origin?

44:50 Bayesian reasoning and ‘Oumuamua

46:34 How do UFO reports and sightings affect your perspective of ‘Oumuamua?

54:35 Might alien artefacts be more common than we expect?

58:48 The Drake equation

1:01:50 Where are the most likely great filters?

1:11:22 Difficulties in scientific culture and how they affect fruitful inquiry

1:27:03 The cosmic endowment, traveling to galactic clusters, and galactic treaties

1:31:34 Why don’t we find evidence of alien superstructures?

1:36:36 Looking for the bio and techno signatures of alien life

1:40:27 Do alien civilizations converge on beneficence?

1:43:05 Is there a necessary relationship between what is true and good?

1:47:02 Is morality evidence based knowledge?

1:48:18 Axiomatic based knowledge and testing moral systems

1:54:08 International governance and making contact with alien life

1:55:59 The need for an elite scientific body to advise on global catastrophic and existential risk

1:59:57 What are the most fundamental questions?

 

See here for information on the Podcast Producer position

See the Office of the Director of National Intelligence report on unidentified aerial phenomena here

 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is with Avi Loeb, and in it, we explore ‘Oumuamua, an interstellar object that passed through our solar system and which is argued by Avi to potentially be alien in origin. We explore how common extraterrestrial life might be, and how to search for it through the space archaeology of the bio- and techno-signatures it might create. We also get into Great Filters and how making first contact with alien life would change human civilization.

This conversation marks the beginning of the continuous uploading of video content for all of our podcast episodes. For every new interview that we release, you will also be able to watch the video version of each episode on our YouTube channel. You can search for Future of Life Institute on YouTube to find our channel or check the link in the description of this podcast to go directly to the video version of this episode. There is also bonus content to this episode which has been released separately on both our audio and visual feeds.

After our initial interview, the U.S. government released a report on UFOs, otherwise now known as UAPs, titled “Preliminary Assessment: Unidentified Aerial Phenomena”. Given the release of this report and the relevance of UFOs to ‘Oumuamua, both in terms of the culture of science surrounding UFOs and their potential relation to alien life, I sat down to interview Avi for a second time to explore his thoughts on the report as well as his assessment of unidentified aerial phenomena. You can find this bonus content wherever you might be listening.

We’re also pleased to announce a new opportunity to join this podcast and help make existential risk outreach content. We are currently looking to hire a podcast producer to work on the editing, production, publishing, and analytics tracking of the audio and visual content of this podcast. You would be working directly with me, and the FLI outreach team, to help produce, grow, and evolve this podcast. If you are interested in applying, head over to the “Careers” tab on the FutureofLife.org homepage or follow the link in the description. The application deadline is July 31st, with rolling applications accepted thereafter until the role is filled. If you have any questions, feel free to reach out to socialmedia@futureoflife.org. 

Professor Loeb received a PhD in plasma physics at the age of 24 from the Hebrew University of Jerusalem and was subsequently a long-term member at the Institute for Advanced Study in Princeton, where he started to work in theoretical astrophysics. In 1993, he moved to Harvard University, where he was tenured three years later. He is a former chair of the Harvard astronomy department, holds a visiting professorship at the Weizmann Institute of Science, and holds a Sackler Senior Professorship by special appointment in the School of Physics and Astronomy at Tel Aviv University. Loeb has authored nearly 700 research articles and four books, the most recent of which is “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth”. This conversation is centrally focused on the contents of this work. And with that, I’m happy to present this interview with Avi Loeb.

To start things off here, I’m curious if you could explain what ‘Oumuamua’s wager is and what does it mean for humanity in our future?

Avi Loeb: ‘Oumuamua was the first interstellar object that was spotted near earth. And by interstellar, I mean an object that came from outside the solar system. We knew that because it moved too fast to be bound to the sun. It’s just like finding an object in your backyard from the street. And this saves you the need to go to the street and find out what’s going on out there. In particular, from my perspective, it allows us to figure out if the street has neighbors, if we are the smartest kid on the block, because this object looked unusual. It didn’t look like any rock that we have seen before in the solar system. It exhibited a very extreme shape because it changed the amount of reflected sunlight by a factor of 10 as it was tumbling every eight hours.

It also didn’t have a cometary tail. There was no gas or dust around it, yet it showed an excess push away from the sun. And the only possible interpretation that came to my mind was a reflection of sunlight. And for that, the object had to be very thin, sort of like a sail, but being pushed by sunlight rather than by the wind, which you often find on a boat. And nature doesn’t make sails, so in a scientific paper we proposed that maybe it’s artificial in origin. And since then, in September 2020, there was another object found that was pushed away from the sun by reflecting sunlight and without a cometary tail. It was discovered by the same telescope in Hawaii, Pan-STARRS, and was given the name 2020 SO. And then the astronomers realized it’s actually a rocket booster that we launched in 1966 in a lunar landing mission. And we know that this object had very thin walls, and that’s why it had a lot of area for its mass and could be pushed by reflecting sunlight.

And we definitely know that it was artificial in origin, and that’s why it didn’t show a cometary tail, because we produced it. The question is, who produced ‘Oumuamua? And my point is that, just like Blaise Pascal, the philosopher, argued that we cannot ignore the question of whether God exists: Pascal was a mathematician and he said, okay, logically there are two possibilities, either God exists or not, and we can’t ignore the possibility that God exists because the implications are huge. And so my argument is very similar. The possibility that ‘Oumuamua is a technological relic carries such great consequences for humanity that we should not ignore it. Many of my colleagues in academia dismiss that possibility. They say we need extraordinary evidence before we even engage in such a discussion. And my point is that requiring extraordinary evidence is a way of brushing it aside.

It’s a sort of self-fulfilling prophecy: if you’re not funding research that looks for additional evidence, it’s like stepping on the grass and claiming the grass doesn’t grow. For example, to detect gravitational waves required an investment of $1.1 billion by the National Science Foundation. We would never have discovered gravitational waves unless we invested that amount. To search for dark matter, we have invested hundreds of millions of dollars so far. We didn’t find what the dark matter is. It’s a search in the dark. But without the investment of funds, we will never find it. So on the one hand, the scientific community puts almost no funding towards the search for technological relics, and at the same time argues the evidence is not sufficiently extraordinary for us to consider that possibility in the first place. And I think that’s a sign of arrogance. It’s a very presumptuous statement to say we are unique and special, that there is nothing like us in the universe.

I think a much more reasonable, down to earth kind of approach is a modest approach. Basically saying, look, the conditions on earth are reproduced on tens of billions of planets within the Milky Way galaxy alone. We know that from the Kepler satellite: about half of the sun-like stars have a planet the size of the earth, roughly at the same separation. And that means that not only are we not at the center of the universe, as Aristotle argued we were, but what we find in our backyard is also not privileged. There are lots of sun-earth systems out there. And if you arrange for similar circumstances, you might as well get similar outcomes. And actually most of the stars formed billions of years before the sun. And so that, to me, indicates that there could have been a lot of technological civilizations like ours that launched equipment into space just like we launched Voyager 1, Voyager 2, and New Horizons, and we just need to look for it.
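
As a rough sanity check on the “tens of billions” figure, here is a back-of-the-envelope sketch. The star count and sun-like fraction below are assumed illustrative round numbers, not values given in the interview; only the “about half” fraction comes from the Kepler-based claim above:

```python
# Order-of-magnitude estimate of Earth-size planets at Earth-like separations
# around sun-like stars in the Milky Way. All inputs are rough assumptions.
stars_in_milky_way = 2e11        # assumed ~200 billion stars (estimates span ~100-400 billion)
sunlike_fraction = 0.1           # assumed rough fraction of broadly sun-like stars
earthlike_per_sunlike = 0.5      # "about half" of sun-like stars, per the claim above

count = stars_in_milky_way * sunlike_fraction * earthlike_per_sunlike
print(f"~{count:.0e} such planets")  # ~1e10, i.e. on the order of ten billion
```

The exact inputs are uncertain, but any reasonable choice lands in the billions to tens of billions, which is the scale the argument relies on.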

Even if these civilizations are dead, we can do space archeology. And what I mean by that is, when I go to the kitchen and I find an ant, I get alarmed because there must be many more ants out there. So when we found ‘Oumuamua, to me it meant that there must be many more weird objects out there that do not look like a comet or an asteroid that we have seen before within the solar system. And we should search for them. For example, in a couple of years there will be the Vera Rubin Observatory, which will be much more sensitive than the Pan-STARRS telescope and could find one such ‘Oumuamua-like object every month. So when we find one that approaches us and we have an alert of a year or so, we can send a spacecraft equipped with a camera that will take a close-up photograph of that object and perhaps even land on it, just like OSIRIS-REx landed on the asteroid Bennu recently and collected a sample from it, because, as they say, a picture is worth a thousand words.

In my case, a picture is worth 66,000 words, the number of words in my book. If we had the photograph, I wouldn’t have needed to write the book. It would be obvious whether it’s a rock or an artificial object. And if it is artificial and we land on it, we can read off the label “Made on Planet X” and even import the technology that we find there to earth. And if it’s a technology representing our future, let’s say a million years into our future, it will save us a lot of time. It will give us a technological leap and it could be worth a lot of money.

Lucas Perry: So, that’s an excellent overview, I think, of a really good chunk of the conversation, right? So there’s this first part about an interstellar object called ‘Oumuamua entering the solar system in 2017. And then there are lots of parameters and properties of this object which are not easily or readily explainable as an asteroid or as a comet. Some of these things that we’ll discuss are, for example, its rotation, its brightness variation, its size, its shape, and how it was accelerating on its way out. And then the noticing of this object is happening in a scientific context which has some sense of arrogance, of not being fully open to exploring hypotheses that seem a bit too weird or too far out there. People are much more comfortable trying to explain it as some kind of loose aggregate like a cosmic dust bunny, or other things which don’t really fit or match the evidence.

And so then you argue that if we look into this with epistemic humility, then if we follow the evidence, it takes us to having a reasonable amount of credence that this is actually artificial in origin rather than something natural. And then that brings up questions of other kinds of life, and the Drake equation, and what it is that we might find in the universe, and how to conduct space archeology. So to start off, I’m curious if you could explain a bit more of these particular properties that ‘Oumuamua had and why it is that a natural origin isn’t convincing to you?

Avi Loeb: Right. I basically follow the evidence. I didn’t have any agenda. In fact, I worked on the early universe and black holes throughout most of my career, and then came along this object that was quite unusual. A decade earlier, I had predicted how many rocks from other stars we should expect to find, and that was the first paper predicting that. And we predicted that the Pan-STARRS telescope, which later discovered ‘Oumuamua, would not find anything. So the mere detection of ‘Oumuamua was a surprise, by orders of magnitude I should say. And it is still a surprise given what we know about the solar system and the number of rocks that the solar system produces. But nevertheless, that was the first unusual fact, and it still allowed for ‘Oumuamua to be a rock. And then, it didn’t show any cometary tail. And the Spitzer Space Telescope then put very tight limits on any carbon-based molecules in its vicinity or any dust particles.

And it was definitely clear that it’s not a comet, because if you wanted to explain the excess push that it exhibited away from the sun through cometary evaporation, you needed about 10% of the mass of this object to be evaporated. And that’s a lot of mass. We would have seen it. The object is about the size of a football field, 100 to 200 meters, and we would see such evaporation easily. So, that implied that it’s not a comet. And then, if it’s not the rocket effect that is pushing it through evaporation, the question arose as to what actually triggers that push. And the suggestion that we made in the paper is that it’s the reflection of sunlight. For that to be effective, you needed the object to be very thin. The other aspect of the object that was unusual is that, as it was tumbling, every eight hours the amount of sunlight reflected from it changed by a factor of 10.

And that implied that the object has an extreme shape, most likely pancake-shaped, flat, and not cigar-shaped. The depiction of the object as a cigar was based on the fact that, projected on the sky as it was tumbling, the area that it showed us changed by a factor of 10. So of course, if you look at a piece of paper tumbling in the wind and you look at it when it’s sideways, it does look like a cigar, but intrinsically it’s flat. When trying to model the amount of light reflected from it as it was tumbling, the conclusion, at the 90% confidence level, was that it should be pancake-shaped, flat, which again is unusual. You don’t get such objects very often in the context of rocks. The most extreme we have seen before was of the order of a factor of three in length versus width. And then came the fact that it originated from a special frame of reference called the local standard of rest, which is sort of like the local parking lot of the Milky Way galaxy.

If you think about it, the stars are moving relative to each other in the vicinity of the sun, just like cars moving relative to each other in the center of a town. And then there is a parking lot that you get to when you average over the motions of all of the stars in the vicinity of the sun, and that is called the local standard of rest. And ‘Oumuamua originated at rest in that frame. And that’s very unusual, because only one in 500 stars is as much at rest in that frame as ‘Oumuamua was. So first, it tells you it didn’t originate from any of the nearby stars. It is also not likely to have come from any of the faraway stars, because those are moving even faster relative to us, given the rotation around the center of the Milky Way galaxy.

So a natural origin was not ruled out, yet there is a very small likelihood of having an object that is so rare. It was sort of like a buoy sitting at rest on the surface of the ocean, and the sun bumped into it like a giant ship. And the question is, if it’s artificial in origin, why would it originate from that frame? One possibility is that it’s a member of a grid of objects used for navigation purposes. If you want to know your coordinates as you’re navigating interstellar space, you find your location relative to this grid. And obviously you want those objects to be stationary, to be at rest relative to the local frame of the galaxy. Another possibility is that it’s one of a set of relay stations for communication. To save on the power needed for transmission of signals, you may have relay stations, like we have on Earth, and it’s one of them.

We don’t know the purpose of this object because we don’t have enough data on it. That’s why we need to find more of the same. But my basic point is, there were six anomalies of this object that I detail in my book, Extraterrestrial, and that I also wrote about in Scientific American. And these six anomalies make it very unusual. If you assign a probability of 1% to the object having each of these anomalies, when you multiply them, you get a probability of one in a trillion that this object is something that we have seen before. So clearly, it’s very different from what we’ve seen before. And the response from the scientific community was to dismiss the artificial origin. There were some scientists who took the scientific process more seriously and tried to explain the origin of ‘Oumuamua from a natural source, and they suggested four possibilities after my paper came out.
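The “one in a trillion” figure here is just the product of the assumed per-anomaly probabilities, treating the six anomalies as independent. A quick check of that arithmetic (the 1% figure and the independence are assumptions of the argument itself, not measured values):

```python
# Six anomalies, each assigned an assumed, independent probability of 1%.
p_per_anomaly = 0.01
n_anomalies = 6

combined = p_per_anomaly ** n_anomalies
print(f"{combined:.0e}")  # 1e-12, i.e. one in a trillion
```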

One of those four possibilities was: maybe it’s a hydrogen iceberg, a chunk of frozen hydrogen, which by the way we’ve never seen before. The idea is that when hydrogen evaporates, you don’t see the cometary tail because it’s transparent. The problem with that idea is that hydrogen evaporates very easily. We showed in a follow-up paper that such a chunk of frozen hydrogen the size of a football field would not survive the journey through interstellar space from its birth site to the solar system. Then there was another suggestion: maybe it’s a nitrogen iceberg that was chipped off the surface of a planet like Pluto. And we showed in a follow-up paper that you would in fact need orders of magnitude more mass in heavy elements than you find in all the stars in the Milky Way galaxy just to have a large enough population of nitrogen icy objects in space to explain the discovery of ‘Oumuamua.

And the reason is that there is only a very thin layer of solid nitrogen on the surface of Pluto, and that makes up a small fraction of the mass budget of the solar system. So you just cannot imagine making enough chunks, even if you rip off all the nitrogen on the surface of exo-Plutos. This scenario just doesn’t work out. Then there was the suggestion that maybe it’s a dust bunny, as you mentioned, a cloud of dust particles very loosely bound. It would need to be a hundred times less dense than air so that, when reflecting sunlight, it would be pushed like a feather. And the problem with that idea is that such a cloud would get heated by hundreds of degrees when it gets close to the sun, and it would not maintain its integrity. So, that also has a problem.

And the final suggestion was, maybe it’s a fragment, a piece of shrapnel from a bigger object that passed close to a star. The problem with that is the chance of passing close to a star is very small; most objects do not. So why should the first interstellar object we see belong to that category? And second, when a big object is tidally disrupted by passing near a star, the fragments usually get elongated, not pancake-shaped; you often get a cigar-shaped object. So, all of these suggestions have major flaws. And my argument was simple. If it’s nothing like we have seen before, we had better leave on the table the possibility that it’s artificial, and then take a photograph of future objects that appear as weird as this one.

Lucas Perry: So you mentioned the local standard of rest, which is the average velocity of our local group of stars. Is that right?

Avi Loeb: Yes. Well, it’s the frame that you get to after you average over the motions of all the stars relative to the sun, yes.

Lucas Perry: Okay. And so ‘Oumuamua was at the local standard of rest until the sun’s gravitation pulled it in, is that right?

Avi Loeb: Well, no. The way to think of it is that it was sitting at rest in that frame, just like a buoy on the surface of the ocean. And then the sun happened to bump into it; the sun simply intercepted it along its path and, as a result, gave it a kick, just like a ship gives a kick to a buoy. The sun acted on it primarily through its gravitational force. And then, in addition, there was this excess push, which was a small fraction of the gravitational force, just a fraction of a percent.

Lucas Perry: Right. And that’s the sun pushing on it through its suspected large surface area and structure.

Avi Loeb: Yeah. So in addition to gravity, there was an extra force acting on it, which was a small correction to the force of gravity. But still, it was detected at very high significance, because we monitored the motion of ‘Oumuamua. And to explain this force, given that there was no cometary evaporation, you needed a thin object. And as I said, there was another thin object discovered in September 2020, called 2020 SO, that also exhibited an excess push from reflecting sunlight. So it doesn’t necessarily mean that ‘Oumuamua was a light sail. It just means that it had a large area for its mass.

Lucas Perry: Can you explain why the smooth acceleration of ‘Oumuamua is significant?

Avi Loeb: Yeah. So what we detected is an excess acceleration away from the sun that declines inversely with distance squared, in a smooth fashion. First of all, the inverse-square law is indicative of a force that acts on the surface of the object, and the reflection of sunlight gives you exactly that. And the fact that it’s smooth cannot be easily mimicked by cometary evaporation, because often there are jets, spots on the surface of a comet from where the evaporation takes off. Because of the localized nature of these jets that are pushing it, a jitter is introduced to the object’s motion as it tumbles. You can think of the jets as the jets in a plane that push the airplane forward by ejecting gas backwards. But in the case of a comet, the comet is also tumbling and spinning.

And so that introduces some jitter, because the jets are exposed to sunlight at different phases of the spin of the object. Moreover, beyond a certain distance, water does not sublimate, does not evaporate anymore. You have water ice on the surface, and beyond a certain distance it doesn’t get heated enough to evaporate. So the push that you get from cometary evaporation has a sharp cutoff beyond a certain distance, and that was not observed. In the case of ‘Oumuamua, there was a smooth push that didn’t really cut off, didn’t show an abrupt change at the distance where water ice would stop evaporating. And so that, again, is consistent with the reflection of sunlight being the origin of the excess push.
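As a rough illustration of why a sunlight-driven push implies a thin object, here is a back-of-the-envelope sketch. The “fraction of a percent of gravity” figure is taken from the conversation above; the solar quantities are standard constants, and the rock-like density is an illustrative assumption rather than anything measured for ‘Oumuamua:

```python
# Rough estimate of how thin a sunlight-pushed object would have to be.
GM_SUN = 1.327e20        # gravitational parameter of the sun, m^3/s^2
AU = 1.496e11            # astronomical unit, m
SOLAR_CONSTANT = 1361.0  # solar flux at 1 AU, W/m^2
C = 2.998e8              # speed of light, m/s

g_sun = GM_SUN / AU**2      # solar gravitational acceleration at 1 AU, ~5.9e-3 m/s^2
a_excess = 1e-3 * g_sun     # excess push of ~0.1% of gravity ("a fraction of a percent")
p_rad = SOLAR_CONSTANT / C  # radiation pressure on an absorbing surface at 1 AU, ~4.5e-6 N/m^2

# Balance p_rad * area = mass * a_excess  =>  required mass per unit area:
surface_density = p_rad / a_excess          # ~0.8 kg/m^2

rho_rock = 2000.0                           # kg/m^3, illustrative rock-like density
thickness_mm = surface_density / rho_rock * 1e3
print(f"surface density ~ {surface_density:.2f} kg/m^2, thickness ~ {thickness_mm:.1f} mm")
```

Because both the gravitational pull and the radiation pressure fall off as one over distance squared, their ratio, and hence the required thinness, is roughly the same at any distance, which is why the excess acceleration tracks the smooth inverse-square profile described above.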

Lucas Perry: Can you explain the difference between comets and asteroids?

Avi Loeb: Yeah. So, we’re talking about the bricks that were left over from the construction project of the solar system. The way that planets form is that first you make a star like the sun, and you make it from a cloud of gas that condenses and collapses under the influence of its own gravity; its own gravitational force contracts it and makes a star in the middle. But some of the gas has rotation around the center. So when you make a star like the sun, a small fraction of the gas, of the order of a few percent or so, remains in a leftover disk around the star that was just formed. And that debris of gas in the disk is the birthplace of the planets. The disk of gas that is left over from the formation process of the sun of course includes hydrogen and helium, the main elements from which the sun is made, but it also includes heavy elements.

And those condense in the mid-plane of the disk and make dust particles that stick to each other and get bigger and bigger over time. And they make the so-called planetesimals. These are the building blocks, the bricks that come together in making planets like the Earth, or the core of Jupiter, which also accreted hydrogen and helium around its central rocky region. So, the idea is that you have all these bricks that, just like Lego pieces, make up the planets. And some of them get scattered during the formation process of the planets, and they remain as rocks in the outer solar system. So the solar system actually extends a thousand times farther than the location of the most distant planet, in a region called the Oort cloud that extends to 100,000 times the earth-sun separation. And that is a huge volume. It goes halfway to the nearest star.

So in fact, if you imagine each star having an Oort cloud of these bricks, these building blocks that were scattered out of the construction process of the planets around the star, then these Oort clouds are touching each other, just like densely packed billiard balls. So just imagine a spherical region of planetesimals, these rocks. Comets are those rocks that are covered with water ice. Since they’re so far away from the sun, the water freezes on their surface. But some of them have orbits that bring them very close to the sun. When they get close to the sun, the water ice evaporates and creates a cloud of gas, water vapor, and some dust that was embedded in the rock, and that creates the appearance of a cometary tail. So what you see is the object moving, and then its surface layers get heated up by absorbing sunlight, and the gas and dust evaporate and create this halo around the object and a tail, which always points away from the sun because it’s pushed by the solar wind, the wind coming from the sun.

And so you end up with a cometary tail; that’s what a comet is. Now, some rocks remain closer to the sun and are not covered with ice whatsoever. They’re just bare rocks. And when they get close to the sun, there is no ice that evaporates from them. These are called asteroids, and they’re just rock without any ice on the surface. And so we see those as well. There is actually a region where asteroids reside, called the main asteroid belt, and we don’t know what its origin is. It could be a planet that disintegrated, or it could be a region that didn’t quite make a planet, and you ended up with fragments floating there. But at any rate, there are asteroids, bare rocks without ice on them, because they were close enough to the sun that the ice evaporated and we don’t have the water there.

And these objects are also seen in the vicinity of the Earth every now and then; these, too, are called asteroids. So we see basically two populations. Now, ‘Oumuamua was not a comet, because we haven’t seen a cometary tail around it. And it wasn’t an asteroid, because there was this excess push. If you have a piece of rock, it will not be pushed much by reflecting sunlight, because its area is not big enough relative to its mass. So it gets a push, but the push is too small to show up in its trajectory.

Lucas Perry: Right. So, can you also explain how much we know about the composition of Oort clouds and specifically the shape and size of the kinds of objects there? And how ‘Oumuamua relates to our expectation of what exists in the Oort cloud of different stars?

Avi Loeb: Yeah. So, the one thing that I should point out up front is that when scientists trying to attend to the anomalies of ‘Oumuamua suggested that it’s a hydrogen iceberg or a nitrogen iceberg, that notion, by the way, gathered popularity in the mainstream. People had a sigh of relief and said, oh, we can explain this object with something we know. But the truth is, it’s not something we know. We’ve never seen a nitrogen iceberg that was chipped off Pluto in our solar system. The Oort cloud does not have nitrogen icebergs that we have witnessed. So claiming that ‘Oumuamua, the first interstellar object, is a nitrogen iceberg or a hydrogen iceberg implies that there are nurseries out there, around other stars or in molecular clouds, that are completely different from the solar system, and yet they produce most of the interstellar objects, because ‘Oumuamua was the first one we discovered.

So they produce a large fraction of the interstellar objects, yet they are completely different from the solar system. It’s just like going to the hospital and seeing a baby that looks completely different from any child you have seen before, different from any child you have at home. It implies that the birthplace of that child was quite different, and yet that child appears to be the first one you see. So, that’s to me an important signal from nature that you have to rethink what the meaning of this discovery is. And the other message is that we will learn something new no matter what, so we need to get more data on the next object that belongs to this family. Because even if it’s a naturally produced object, it will teach us about environments that produce objects that are quite different from the ones we find in the solar system.

And that means that we are missing something about nature. Even if it’s natural in origin, we learn something really new in the process of gathering this data. So we should not dismiss this object and say, business as usual, we don’t have to worry about it; rather, we should attempt to collect as much data as possible on the next weird object that comes along. I should say there was a second interstellar object, discovered in 2019 by an amateur astronomer from Russia called Gennady Borisov, and it was given the name Borisov. That one looked just like a comet. And I was asked, does that convince you that ‘Oumuamua was also natural, because this one looks exactly like the comets we have seen? And I replied: when you walk along the beach, most of the time you see rocks, and suddenly you see a plastic bottle. If after that you see rocks again, the fact that you found rocks afterwards doesn’t make the plastic bottle a rock.

Each object has to be considered on its own merits. And therefore the fact that we see Borisov as a natural comet makes ‘Oumuamua even more unusual. Now, in terms of the objects that come from the Oort cloud, our own Oort cloud, there is a size distribution: there are objects that are much smaller than ‘Oumuamua and objects that are much bigger. And of course, the bigger objects are rarer. Roughly speaking, there is an equal amount of mass per logarithmic size bin, so there are many more small objects. And most of them we can’t see, because ‘Oumuamua was roughly at the limit of our sensitivity with Pan-STARRS. That means that objects much smaller than the size of a football field cannot be noticed within a distance comparable to the distance to the sun. The sun acts as a lamppost that illuminates the darkness around us.
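“An equal amount of mass per logarithmic size bin” pins down the shape of that size distribution: if each decade in size carries the same total mass, and an object’s mass grows as the cube of its size, then the number of objects per decade must fall as the cube of the size. A small bookkeeping sketch (the sizes and the anchor count are purely illustrative):

```python
# Equal mass per logarithmic size bin  =>  number per bin falls as size^-3.
sizes_m = [1, 10, 100, 1000]   # illustrative object sizes, metres
n_smallest = 1e12              # illustrative number of ~1 m objects per logarithmic bin

for size in sizes_m:
    n = n_smallest * (size / sizes_m[0]) ** -3   # number per logarithmic bin
    mass_per_bin = n * size**3                   # relative units; constant by construction
    print(f"size ~{size:>5} m: number {n:.1e}, relative mass per bin {mass_per_bin:.1e}")
```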

And so, with the sun as that lamppost, an object is detected when it reflects enough sunlight for us to detect with our telescopes. Small objects do not reflect enough sunlight, and we will not notice them. But I calculated that, in fact, if there are probes moving very fast through the solar system, let’s say at a fraction of the speed of light, that were sent by some alien civilization, we could detect the infrared emission from them with the James Webb Space Telescope. They would move very fast across our sky, so we just need to be ready to detect them.

Lucas Perry: Do you think given our limited knowledge of Oort clouds that there are perhaps exotic objects or rare objects, which we haven’t encountered yet, but that are natural in origin that may account for ‘Oumuamua?

Avi Loeb: Of course, there could be. As I mentioned, there are people who suggested the hydrogen iceberg, the nitrogen iceberg, the dust bunny. These were suggestions that were already made, and each of them has its own challenges. And it could be something else, of course. The way to find out is the way science operates. Science is guided by evidence, by collecting data. And the way science should be done is that you leave all possibilities on the table and then collect enough data to rule out all but the one interpretation that looks most plausible. So my argument is that we should leave the artificial-origin possibility on the table, because all the other possibilities that were contemplated also invoke something that we’ve never seen before. We cannot argue, based on speculations about things we’ve never seen before, that the case is proven that it’s not artificial. So it’s a very simple point that I’m making, and I’m arguing for collecting more data. I mean, I would be happy to be proven wrong, to learn that it’s not artificial in origin, and then move on. The point is that science is not done by having a prejudice, by knowing the answer in advance. It’s done by collecting data. The mistake that was made by the philosophers during Galileo’s time was refusing to look through his telescope and arguing that they already knew that the sun moves around the Earth. And that only maintained their ignorance.

Reality doesn’t care whether we ignore it. The Earth continued to move around the sun. If we have neighbors that exist out there, it doesn’t really matter whether we shut the curtains on our windows and claim, “No, we’re unique and special, and there is nobody out there on the street.” We can get a lot of likes on Twitter for saying that, and we can ridicule anyone who argues differently, but that would not change whether we have neighbors or not. That’s an empirical fact. And in order for us to improve our knowledge of reality, and I’m talking about reality, not about philosophical arguments, just figuring out whether we have neighbors, whether we are the smartest kid on the block, that’s within the realm of science, and finding out the answer to this question is not a matter of debate.

It’s a matter of collecting evidence. But of course, if you are not willing to find wonderful things, you will never discover them. So my point is, we should consider this possibility as real, as very plausible, as a mainstream activity, just like the search for dark matter or the search for gravitational waves. We exist. There are many planets out there just like the Earth. Therefore, we should search for things like us that existed or exist on them. That’s a very simple argument to make, and to me it sounds like this should be a mainstream activity. But then I realize that my colleagues do not agree, and I fail to understand this dismissal, because it’s a subject of great interest to the public, and the public funds science. If you go back a thousand years, there were people saying the human body has a soul, and therefore anatomy should be forbidden.

So imagine if scientists had said, “Oh, this is a controversial subject. The human body could have a soul. We don’t want to deal with that, because some people are claiming that we should not operate on the human body.” Where would modern medicine be? My argument is, if science has the tools to address a subject of great interest to the public, we have an obligation to address it and clear it up. Let’s do it bravely, with open eyes. And by the way, there is an added bonus: if the public cares about it, there will be funding for it. So, how is it possible that the scientific community ridicules this subject, brushes it aside, and claims, “We don’t want to entertain this unless we have extraordinary evidence,” yet fails to fund at any substantial level the search for that extraordinary evidence? How is that possible in the 21st century?

Lucas Perry: So, given the evidence and data that we do have, what is your credence that ‘Oumuamua is alien in origin?

Avi Loeb: Well, I have no certainty about that possibility, but I say it’s a possibility that should be left on the table, with at least as high a likelihood as a nitrogen iceberg or a hydrogen iceberg or a dust bunny. That’s what I consider the competing interpretations. I don’t consider statements like, “It’s always rocks. It’s never aliens,” to be valid scientific statements, because they remind me of the following: if you were to present a cell phone to a caveman, and the caveman has been used to playing with rocks all of his life, the caveman would argue that the cell phone is just a shiny rock. Basing your assertions on past experience is no different from what the philosophers were arguing: we don’t want to look through Galileo’s telescope because we know that the sun moves around the Earth. This mistake was made over and over again throughout human history. I would expect modern scientists to be more open-minded, to think outside the box, to entertain possibilities that are straightforward.

And what I find strange is not so much that there is conservatism regarding this subject. It’s that, at the same time, in theoretical particle physics you have whole communities of hundreds of people entertaining ideas that have no experimental verification, no experimental tests in the foreseeable future whatsoever, ideas like the string theory landscape or the multiverse. Or some people argue we live in a simulation, or other people talk about supersymmetry. And awards were given to people doing mathematical gymnastics, and these studies are part of the mainstream. And I ask myself, “How is it possible that this is considered part of the mainstream and the search for technological signatures is not?” My answer is that these ideas provide a sandbox for people to demonstrate that they’re smart, that they are clever, and a lot of academia is about that. It’s not about understanding nature. It’s more about showing that you’re smart and getting honors and awards. And that’s unfortunate, because physics and science is a dialogue with nature. It’s a learning experience. We’re supposed to listen to nature. And the best way to listen to nature is to look at anomalies, things that do not quite line up with what we expected. And by the way, whether ‘Oumuamua is artificial or not doesn’t require very fancy math. It’s a very simple question that any person can understand. I mean, nature is under no obligation to reveal its most exciting secrets through fancy math. It doesn’t need to be sophisticated.

Aristotle had this idea of the spheres surrounding us, that we are at the center of the universe and there are these beautiful spheres around us. That was a very sophisticated idea that many people liked, because it flattered their ego to be at the center of the universe, and it also had this very clever arrangement. But it was wrong. So, who cares how sophisticated an idea is? Who cares if the math is extremely complicated? Of course, it demonstrates that you are smart if you’re able to maneuver through these complicated mathematical gymnastics. But that doesn’t mean that it reflects reality. And my point is, we had better pay attention to the anomalies that nature gives us than to promoting our own image.

Lucas Perry: Right. So it seems like there’s this interesting contrast between how little the scientific community is willing to entertain ‘Oumuamua being artificial in origin and, at the same time, the ton of theories that, at least at the moment, are unfalsifiable. Yet here we have a theory that is simple, matches the data, and can be falsified.

Avi Loeb: Right. And the way to falsify it is not by chasing ‘Oumuamua, because by now it’s a million times fainter than it was close to the sun, but by finding more objects that look as weird as it was. This was the first object we identified. There must be many more. If we found this object by surveying the sky for a few years, we will definitely find more by surveying the sky for a few more years, because of the Copernican principle. Copernicus discovered that we are not positioned in a special, privileged location in the universe. We’re not at the center of the universe, and you can extend that not just to space but also to time. When you make an observation over a few years’ time, the chance of those few years being special and privileged is small.

I mean, most likely it’s a typical time, and you would find such an object if you were to look at the previous three years or the following three years. That’s the Copernican principle, and I very much subscribe to it, because, again, the one thing I learned from practicing astronomy over the decades was a sense of modesty. We are not special. We are not unique. We are not located at the center of the universe. We don’t have anything special in our backyard. The Earth-sun system is very common. So, that’s the message that nature gives us. And we are born into the world like actors put on a stage. The first thing we see is that the stage is huge. It’s 10 to the power of 26 times larger than our body. And the second thing we see is that the play has been going on for 13.8 billion years since the big bang, and we just arrived at the end of it.

So, the play is not about us. We are not the main actors. So let’s get a sense of modesty, and let’s look for other actors that may have been around for longer than we have, other technological civilizations. Maybe they have a better sense of what the play is about. So, I think it all starts from a sense of modesty. My daughters, when they were young, were at home and had the impression that they are the center of the world, that they are the smartest, because they hadn’t met anyone else outside the family. And then, when we took them to kindergarten, they got a better sense of reality by meeting others and realizing that they’re not necessarily the smartest kid on the block. And so I think our civilization has yet to mature, and the best way to do that is by meeting others.

Lucas Perry: So before we move on to meeting others, I’m curious if you’re willing to offer a specific credence. So, you said that there are these other natural theories, like the dust bunny and the iceberg theories. If we think of this in terms of Bayesian reasoning, what kind of probability would you assign to the alien hypothesis?

Avi Loeb: Well, the point is that the objects that were postulated as natural origins of ‘Oumuamua were never seen before. So there is no way of assigning a likelihood to something that we’ve never seen before, and whatever it is, it would need to be the most common object in interstellar space. So, what I would say is that we should approach it without a Bayesian prior. Basically, we should leave all of these possibilities on the table and then get as much data as possible on the next object that shows the same qualities as ‘Oumuamua. By these qualities, I mean not having a cometary tail, so not being a comet, and showing an excess push away from the sun.

And as I mentioned, there was such an object, 2020 SO, but it was produced by us. So, we should just look for more objects that come from interstellar space that exhibit these properties, and see what the data tells us. It’s not a matter of a philosophical debate. That’s my point. We just need a close up photograph, and we can easily tell the difference between a rock and an artificial object. And I would argue that anyone on Earth should be convinced when we have such a photograph. So, if we can get such a photograph in the next few years, I would be delighted, even if I’m proven wrong, because we will learn something new no matter what.

Lucas Perry: So, there’s also been a lot of energy in the news around UFO sightings and UFO reports recently. I’m curious how the current news and status of UFO interest in the United States and the world, how that affects your credence of ‘Oumuamua being alien in origin, and if you have any perspective or thoughts on UFOs.

Avi Loeb: Yeah, it’s a completely independent set of facts that underlies the discussion of UFOs. But of course, again, it’s the facts, the evidence, that we need to pay attention to. I always say, “Let’s keep our eyes on the ball, not on the audience.” Because if you look at the audience, the scientists are responding to these UFO reports in exactly the same way as they responded to ‘Oumuamua. They dismiss it. They ridicule it. And that’s unfortunate, because the scientists should ask, “Do we have access to the data? Could we analyze the data? Could we see the full data? Or could we collect new data on these objects, so that we can clear up the mystery?” I mean, science is about evidence. It’s not about prejudice. But instead, the scientists know the answer in advance. They say, “Oh, these reports are just related to human-made objects, and that’s it.”

Now, let’s follow the logic of Sherlock Holmes. As I mention in my book Extraterrestrial, Sherlock Holmes made the statement that you put all possibilities on the table, and then whatever remains after you sort out all the facts must be the truth. That’s the way he operated as a detective, and that’s the way we should operate as scientists. And what do we know about the latest UFO report from the Pentagon and the intelligence agencies? So far, a few weeks before it is released, we know from leaks that there is a statement that some of the objects that were found are real. Okay? They are not artifacts of the cameras. They are not illusions of the people who saw them, because they were detected by multiple instruments, including infrared cameras, radar systems, and optical cameras, and by a lot of people from different angles.

And when you consider that statement coming from the Pentagon, you have to take it seriously, because it’s just the tip of the iceberg. The data that will be released to the public is presumably partial, because they will never release the highest-quality data, since it would inform other nations of the capabilities, the kinds of sensors, that the US has for monitoring the sky. Okay? So, I have no doubt that a lot of data is being hidden for national security reasons, because otherwise it would expose the capabilities of the sensors that are routinely used to monitor the sky. But if the people who had access to the full data, and that includes officials such as former president Barack Obama, former CIA director James Woolsey and others, saw the data and make the case that these objects are real, then these objects may very well be real.

Okay? And I take that at face value. Of course, as a scientist, I would like to see the full data, or to collect new data. There is no difference, because science is about reproducibility of results. So, if the data is classified, I would much rather place state-of-the-art cameras that you can buy in the commercial sector, or scientific instrumentation that we can purchase, in the same locations and record the sky. The sky is not classified. In principle, anyone can collect data about the sky. So I would argue that, if all the existing data is classified, we should collect new data that would be open to the public. And it’s not a huge investment of funds to do such an experiment. But the point of the matter is that if we can infer that the objects are real using the scientific method, then let’s assume that they are real, as the people who saw the full data claim.

So, if they’re real, then there are three possibilities. Either they were produced, manufactured, by other nations, because we, the US, certainly know what we ourselves are doing. If they were produced by other nations, like China or Russia, then humans have the ability to produce such objects, and they cannot exceed the limits of our technology. And if the maneuvering of these objects looks as if it substantially exceeds the limits of the technologies we possess, then we would argue it’s not made by humans, because there is no way that the secret of such an advanced technology would be kept by humans on Earth. It has huge commercial benefits, so it would appear in the market, in the commercial sector, because you can sell it for a lot of money, or it would appear on the battlefield if it were being used by other nations.

And we pretty much know what humans are capable of producing. We are also probably getting intelligence on other nations. So, we know what the limits of human technology are. I don’t think we can leave that possibility vague. If there is an object behaving in a way that far exceeds what we are able to produce, then that looks quite intriguing. The remaining possibilities are that somehow it’s a phenomenon that occurs in the Earth’s atmosphere, something that happens that we didn’t identify before, or that these are objects that came from an extraterrestrial origin. Okay? And once again, I make the case that the way to make progress on this is not to appear on Twitter and claim we know the answer in advance and ridicule the other side of the argument. This is not the way by which we make progress; rather, we collect better evidence, better clues, and figure it out, clear up the fog.

It’s not a mystery that should be unraveled by philosophical arguments. It’s something that you can measure and get data on and reproduce with future experiments. And once we get that, we will have a clear view of what it means. That’s how mysteries get resolved in science. So, I would argue for a scientific experiment that will clear up the fog. And the way we would fail to do that is if the scientific community ridicules these reports and the public speculates about the possible interpretations. That’s the worst situation you can be in, because you’re basically leaving a subject of great interest to the public unresolved.

And that’s not the right way, in the 21st century, to treat a subject of interest to the public, one that obviously reaches Congress. It’s not an eyewitness on the street who says, “I saw something unusual.” It’s military personnel. We have to take it seriously, and we have to get to the bottom of it. So that’s the way I look at it. It may well be that it’s not extraterrestrial in origin, but I think the key is finding evidence.

Lucas Perry: So, given the age of the universe and the age of our galaxy and the age of our solar system, would you be surprised if there were alien artifacts almost everywhere or in many places, but we were just really bad at finding them? Or those artifacts were really good at hiding?

Avi Loeb: No, I wouldn’t be surprised, because, as I said, most of the stars formed billions of years before the sun. And if there were technological civilizations around them, many of those stars have died by now and those civilizations may have perished, but if they sent out equipment, that equipment may still operate, especially if it’s operated by artificial intelligence or by things that we haven’t invented yet. It may well survive billions of years and get to our environment. Now, one thing you have to realize is that when you go into the wilderness, you had better be quiet. You had better not make a sound, and listen, because there may be predators out there. We have not been careful in that sense, because we have been broadcasting radio waves for more than a century. So these radio signals have reached a hundred light years by now.

And if there is another advanced civilization out there with radio telescopes of the type that we possess, they may already know about us. And then, if they used chemical rockets to get back to us, it would take them a million years to traverse a hundred light years. But if they use much faster propulsion, they may already be here. And the question is, are we noticing them? There was this Fermi paradox, formulated 70 years ago by Enrico Fermi, a famous physicist, who asked, “Where is everybody?” And of course, that’s a presumptuous statement, because it assumes that we are sufficiently interesting for them to come and visit us. When I met my wife, she had a lot of friends who were waiting for Prince Charming on a white horse to make them a marriage proposal, and that never happened, and then they compromised.

We, as a civilization, would be presumptuous in assuming that we are sufficiently interesting for others to have a party in our backyard. But nevertheless, it could be that it already happened, as you said, and that we didn’t notice. One thing to keep in mind is the Earth’s geological activity. Most of the surface of the Earth gets mixed with the interior of the Earth over hundred-million-year timescales. So, it could be that some of the evidence was buried by the geological activity on Earth, and that’s why we don’t see it.

But the moon, for example, is like a museum, because it doesn’t have geological activity, and also, it doesn’t have an atmosphere that would burn up an object that is smaller than the size of a person, like the Earth’s atmosphere does, say, for meteors. So in principle, once we establish a sustainable base on the moon, we can regard it as an archeological site, and survey the surface of the moon to look for artifacts that may have landed, may have crashed on it. Maybe we will find a piece of equipment that we never sent, that came from somewhere else that crashed on the surface of the moon.

Lucas Perry: So, it’d be wonderful if we could pivot into Great Filters and space archeology here, but before we do that, you’re talking about the Fermi paradox and whether or not we’re sufficiently interesting to merit the attention of other alien civilizations. I wonder if interesting is really the right criteria, because if advanced civilizations converge on some form of ethics or beneficence, then whether or not we’re interesting is not perhaps the right criteria for whether or not they would reach out. We have people on earth who are interested in animal ethics, like how the ants and bees and other animals are doing. So, it could be the same case with aliens, right?

Avi Loeb: Right. I completely agree. One thing I should say… well, actually, two things. First, you mentioned before the Drake equation. It doesn’t apply to relics; it doesn’t apply to objects. The Drake equation talks about the likelihood of detecting radio signals, and that has been the method we used over the past 70 years in searching for other civilizations. And I think it’s misguided, because in order to get a signal, it’s just like trying to have a phone conversation: you need the counterpart to be alive. And it’s quite possible that most of the civilizations are dead by now. So, that’s the Great Filter idea, that there is a narrow window of opportunity for us to communicate with them. But, on the other hand, they may have sent equipment into space, and we can search for it through space archeology and find relics from civilizations that are not around anymore, just like we find relics from cultures that existed on the surface of Earth through archeological digs.

So I think a much more promising approach to finding evidence for dead civilizations is looking for objects floating in space. And the calculation of the likelihood of finding them is completely different from the Drake equation. It resembles more the calculation of the chance that you would stumble across a plastic bottle on the beach or on the surface of the ocean. You just need to know how many plastic bottles there are per unit area on the surface of the ocean, and then you will know the likelihood of crossing one of them. And the same is true for relics in space. You just need to know the number of such objects per unit volume, and then you can figure out your chance of bumping into one of them.
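The “plastic bottle” estimate is a density-times-cross-section-times-speed calculation. Here is a minimal sketch of it with placeholder numbers (the number density, detection radius, and speed below are illustrative assumptions, not measured values):

```python
import math

# Expected detections per year ~ number density * detection cross-section * relative speed.
n_per_au3 = 0.1          # assumed number of detectable relics per cubic AU
detect_radius_au = 1.0   # assumed distance out to which a survey can spot one
speed_au_per_yr = 6.3    # ~30 km/s expressed in AU per year

cross_section_au2 = math.pi * detect_radius_au**2
rate_per_year = n_per_au3 * cross_section_au2 * speed_au_per_yr
print(f"expected detections ~ {rate_per_year:.1f} per year")
```

A more sensitive survey effectively enlarges the detection radius, which is why an instrument like the Vera Rubin Observatory is expected to raise the discovery rate.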

That’s a completely different calculation from the Drake equation, which talks about receiving radio signals. This is one point that should be borne in mind. The other point that I would like to mention is that, during our childhood, we always have a sense of adults looking over our shoulders, making sure that everything goes well, and they often protect us. And then, as we become independent and grow up, we encounter reality on our own. There is this longing for a higher power that looks over our shoulder, and that is provided by the idea of God in religion. But interestingly enough, it’s also related to the idea of unidentified flying objects that are looking over our shoulders, because if a UFO were identified to be of extraterrestrial origin, it might imply that there is an adult in the room, wiser than we are, looking over our shoulder. The question of whether that adult is trying to protect us remains open, but we can be optimistic.

Lucas Perry: All right. So, let’s talk a little bit about whether or not there might be adults in the room. So, you described what a Great Filter is. When I think of Great Filters, I think of there being potentially many of them, rather than a single Great Filter. There’s the birth of the universe, and then you need generations of stars to fuse heavier elements. And then there’s the number of planets in Goldilocks zones. And then there’s abiogenesis, the arising of life on Earth. And then there’s moving from single-celled to multicellular life. And then there’s intelligent life and civilization, et cetera. Right? So, it seems like there are a lot of different places where there could be Great Filters. Could you explain your perspective on where you think the most likely Great Filters might be?

Avi Loeb: Well, I think it’s self-destruction, because I was asked by Harvard alumni how much longer I expect our civilization to survive. And I said, “When you look at your life and you select a random day from it, what’s the chance that it’s the first day after you were born? That probability is tens of thousands of times smaller than the probability that the day you select falls during your adulthood, because there are tens of thousands of days in the life of an adult.” Now, we have existed for about a century as an advanced technological civilization. And you ask yourself, okay, which is the most probable state for us to be in? If, as I mentioned before, we are just sampling a random time, then most likely we are in our adulthood, and that means we have only a few more centuries left, because the likelihood that we will survive for millions of years is tens of thousands of times smaller.

That would imply that we are in the first day of our life, and that is unlikely. Now, the one caveat I have for this statement is that the human spirit can defy all odds. So I believe that, in principle, if we get our act together, we can be an outlier in the statistical likelihood function. And that’s my hope. I’m an optimist, and I hope that we will get our act together. But if we continue to behave the way we are, not to care so much about the climate... You can even see it in world politics nowadays: even when you have administrations that care about climate, they cannot really convince the commercial sector to cooperate. And if our civilization is on a path to self-destruction, then we don’t have more than a few centuries left. So, that is a Great Filter. And of course, there could be many other Great Filters, but that seems to me the most serious one.
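The statistical reasoning behind the “few more centuries” estimate can be made explicit: if the present moment is treated as a random sample from the civilization’s total technological lifespan, the chance of landing in the first century shrinks in proportion to that lifespan. A minimal sketch under that sampling assumption (the lifespans are chosen only for illustration, and, as noted above, the assumption itself can be defied):

```python
# Probability that a randomly sampled moment falls within the first 100 years
# of a technological civilization's total lifespan.
past_years = 100.0
for total_years in (1e3, 1e4, 1e6):
    p_first_century = past_years / total_years
    print(f"total lifespan {total_years:>9.0f} yr: "
          f"P(sampled moment is in the first century) = {p_first_century:.0e}")
```

For a million-year lifespan the probability is one in ten thousand, which is the sense in which a long future is “tens of thousands of times” less likely under this argument.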

And then you ask yourself, okay, so which civilization is more likely to survive? It’s probably the dumber civilization, the one that doesn’t create the technologies that destroy it. If you have a bunch of crocodiles swimming on the surface of a planet, they will not create an atomic weapon. They will not change the climate. So, they may survive for billions of years. Who knows? So maybe the most common civilizations are the dumb ones. But one thing to keep in mind is that when you create technological capabilities, you can create equipment that will reproduce itself, like von Neumann machines, or you can send it to space.

You can escape from the location that you were born on. And so that opens up a whole range of opportunities in space. And that’s why I say that once a civilization ventures into space, then everything is possible. Then you can fill up space with equipment that reproduces itself, and there could be a lot of plastic bottles out there. We don’t know. We shouldn’t assume anything. We should just search for them. And ‘Oumuamua, as far as I’m concerned, was the wake-up call. The other thing I would like to say is, if I imagine a very advanced civilization that understands how to unify quantum mechanics with gravity, something we don’t possess at the moment, since there is no such unification scheme that we know works, perhaps they know how to irritate the vacuum and create a baby universe that would lead to more civilizations.

So it’s just like having a baby that can make babies that can make babies, and you get many generations as a result. This could be the origin of the Big Bang. Maybe the umbilical cord of the Big Bang started in a laboratory. And by the way, that would mean that intelligence, technological advancement, is an approximation to God, because in the religious stories God created the universe, and here we can imagine a technology that would create a baby universe. And the same is true for life. We don’t know whether life, the origin of life, was seeded in a laboratory somewhere. So that remains a possibility. And that’s what’s so fascinating about the search for intelligent life out there, because it may provide answers to the most fundamental questions we have, like the meaning of life.

Lucas Perry: Would you consider your argument there about human extinction, given what we are currently observing, to be like the doomsday argument?

Avi Loeb: Yeah. Well, you can call it the doomsday argument. I would call it risk assessment. And I don’t think we are statistical systems in the sense that there is no escape from a particular future, because I think that once we recognize the risk in a particular future, we can respond and avoid it. The only question is whether, as a civilization, we will be intelligent enough. And frankly, I’m worried that we are not intelligent enough. It may be just like a Darwinian principle, where if you are not intelligent enough, you will not survive, and we will never be admitted to the club of intelligent civilizations in the Milky Way galaxy unless we change our behavior. And it’s yet to be seen whether we will change our behavior accordingly. One way to convince people to change their behavior is to find evidence for other civilizations that didn’t, and perished as a result. That would be a warning for us, a history lesson.

Now, one caveat I should mention is that we always imagine things like us. When we go to meet someone, it’s fair to assume that that person has eyes and a nose and ears the way we have, and the reason it’s a reasonable assumption is that we share the same genetic heritage as the person we are meeting. But if you think about life on a planet that had no causal contact with Earth, it could be very different.

And so calculating the likelihood of self-destruction, the likelihood of life of one form versus another, the likelihood of intelligence, all of these very often assume something similar to us, which may not be the case. I think it might be shocking to us to find creatures from another planet or technologies from another planet. And so my solution to this ambiguity is to be an observer. Even though I’m a theorist, I would argue: let’s be modest. Let’s not try to predict things in this context. Let’s just explore the universe. And the biggest mistake we make over and over again is to argue about the answer before seeing the evidence. That’s the biggest mistake, because it convinces you to be lazy, not to collect more evidence, to say, “I know the answer in advance. I don’t need to look through the telescope. I don’t need to invest funds in searching for this. Even though it’s an important question, I know the answer in advance.” And that’s the biggest mistake we can make as a species.

I’m willing to go through all the hardships of arguing something outside the box of confronting these personal attacks against me just because it’s a question of such great importance to humanity. If that was a minor question about the nature of dark matter, I would not risk anything for that. Who cares? If the dark matter is axions or weakly interacting massive particles, that has very little impact on our daily lives. It’s not worth confronting the mainstream on that. And by the way, the response would not be so emotional in that case either. But on a subject as important as this one to the future of humanity, which is the title of your organization, there is no doubt in my mind that it’s worth the extra effort.

It’s worth the hardship, bringing people to recognize that such a search for technological relics in space is extremely important for the way we view ourselves in the big scheme of things, our aspirations for space, our notions about religion, and what we might do in response to the knowledge that we acquire will completely change the future of humanity. And on such a question, I’m willing to put my body on the barbed wire.

Lucas Perry: Well, thank you very much for putting your body on the barbed wire. I think you mentioned that there was something in… Was it Israeli training where soldiers are taught to put their body on the barbed wire so people can climb over them?

Avi Loeb: Yeah. That was a statement that, in the battlefield, very often a soldier is asked to put his body on the barbed wire so that others can pass through. The way I see it historically is: you look at Socrates, the ancient Greek philosopher. He advocated for doubting the wisdom of influential politicians and other important figures at the time, and he was blamed for corrupting the youth by dismissing the gods that were valued by the citizens of the city-state of Athens, and he was prosecuted and then forced to drink poison. Now, if Socrates had lived today, he would have been canceled on the Athenian social media. That would be the equivalent of the poison. And then you see another philosopher, Epicurus, who made many true statements but, again, was disliked by some religious authorities at the time. And you see, of course, Galileo Galilei, who was put under house arrest.

Later on, you see Giordano Bruno. I mean, he was an obnoxious person who was not liked by a lot of people, but he simply argued that other stars are just like the sun and therefore might have a planet just like the Earth that could have life on it. And the church at the time found it offensive, because if there is intelligent life out there, then that life may have sinned, and then Christ would have had to save that life, and you would need billions of copies of Christ to be distributed throughout the galaxy to visit all these planets. And that made little sense to the church. So they burned Giordano Bruno at the stake. And even though nowadays we know that indeed a lot of stars are like the sun, and a lot of planets are just like the Earth at roughly the same separation from their host stars where life may exist [inaudible 01:12:03]. So in that sense, he was correct.

And obviously you find many such examples also in modern science over the past century, of people advocating for the correct ideas and being dismissed and ridiculed. Just to give you an example, a former chair of the astronomy department at Harvard that preceded me… I chaired the astronomy department for nine years. I was the longest-serving chair in the history of the astronomy department at Harvard. Before me was Cecilia Payne-Gaposchkin. And in her PhD thesis, which was the first thesis in astronomy at Harvard, she argued based on analyzing the spectrum of the sun that most of the surface of the sun is made of hydrogen. And while defending her PhD thesis, Henry Norris Russell, who was the director of the Princeton University Observatory, an authority on stars at that time, dismissed her idea and said, “That is ridiculous because we know that the sun is made of the same elements as the Earth. So there is not much hydrogen on Earth. It cannot be the case that the sun is made mostly of hydrogen.”

So she took that conclusion out of her PhD thesis. And then, in the subsequent few years, he redid the analysis, got more data, and wrote an extended paper in the Astrophysical Journal arguing the same thing, that she was correct. And interestingly enough, at a visiting committee to the Princeton University Department of Astrophysics, the chair of that department was bragging that Henry Norris Russell discovered that the sun is made mostly of hydrogen. So you can see that history depends very much on who tells it. But the point of the matter is that sometimes, when you propose an idea, even if it turns out to be correct because it’s based on evidence, it gets dismissed by the authorities, and science is not dictated by authority.

In the 1930s, there was a book co-authored by tens of scientists arguing that Einstein’s theory of relativity must be wrong. And when Einstein was asked about it, he said, “Why do you need tens of scientists to prove that my theory is wrong? It’s enough to have one author who explains why the theory is wrong.” Science is not based on authority. It’s based on reasoning and on evidence. And there is a lot of bullying going on nowadays, and I have witnessed it. Throughout my career, I’ve seen a number of ideas that I proposed get dismissed and ridiculed at first and then become the interest of the mainstream; now there are hundreds of people working on them. That was true for my work on the first stars. I remember that it was dismissed early on. There were even people claiming that there are no stars beyond the redshift [inaudible 01:14:59]. And then I worked on imaging black holes. I suggested that there could be a correlation between black hole mass and the characteristic velocity dispersion of stars in the vicinity of those supermassive black holes at the centers of galaxies.

I worked on gravitational wave astrophysics long before it was fashionable. And in all of these cases, the interest that I had early on was ridiculed. I gave a lecture at a winter school in Jerusalem, in January 2013, on gravitational wave astrophysics. And one of the other lecturers, who is 20 years younger than I am, stood up and said, “Why are you wasting the time of these young students on a subject that will not be of importance in their careers?” And he said it publicly. He stood up in front of everyone; it’s on video. And two and a half years later, the LIGO experiment detected the first gravitational wave signal.

Many of these students were still doing their PhDs, and this became the hottest frontier in astrophysics in subsequent years, and the Nobel Prize was awarded for it. So here you have a situation where someone says, “Why are you giving a lecture on this subject to students? It will never be of importance during their careers.” And two and a half years later, it becomes the hottest topic, the hottest frontier in astrophysics. And it involves a new messenger other than light that was never used before in astrophysics: gravitational waves, wrinkles in space and time. It opens up a whole new window into the universe. So how is it possible that someone who is 20 years younger than I am stands up, and feels that it’s completely appropriate for him to stand up in front of all the students and say that?

And to me, it illustrates narrow-mindedness. It’s not a matter of conservatism. It’s a matter of thinking within the box and not allowing thinking outside the box. And you might say, okay, that’s acceptable because there are lots of people suggesting crazy ideas. But at the same time, you have whole communities of theoretical physicists working on very strange ideas that were not verified experimentally, and that is part of the mainstream. And the common thread between these two communities of people is that they both don’t pay attention to evidence. They both do not recognize the fact that evidence leads the way. In the case of gravitational waves, the evidence is the fact that we detect the signal. So just wait for LIGO to find the signal, and then everything will change.

In the case of ʻOumuamua, we saw some anomalies. Let’s pay attention to them. Let’s talk about them. And in the case of String Theory, let’s say it should stay at the fringes of the mainstream because we haven’t found evidence that supports the idea of extra dimensions as of yet. So it doesn’t deserve to be center stage. But you have these two communities living side by side because both of them feel comfortable not paying attention to evidence.

Lucas Perry: We like to think of science as this really clean epistemic process of generating hypotheses and creating theories, and then verification and falsification through evidence and data-gathering. But the reality is that it’s still made up of lots of humans who have their own need for recognition and meaning and acceptance and validation. And so in order to improve the process of science, it’s probably helpful to bring light to the reality of the humanity that we all still have when we’re engaged in the scientific pursuit. And that helps open our minds to the truth, so that our pursuit of the truth isn’t obscured by things we’re not being honest with ourselves about.

Avi Loeb: Right. And I was the founding director of the Black Hole Initiative at Harvard University, which brings together physicists, mathematicians, astronomers, and philosophers. And my motivation in creating this center was to bring in people from different perspectives so that they will open the minds of other disciplines to possible breakthroughs in the context of black holes. And I think this is key. I think we should be open-minded and we should also fund risky propositions, risky ideas. There should be a certain fraction of the funding that goes in those directions. And even though I founded this Black Hole Initiative, at the first annual conference that we had, a philosopher gave a lecture, and at the end of the lecture, the philosopher argued that… After speaking to a lot of string theorists, he made this statement that if a bunch of physicists agree on something as being true for a decade, then it must be true, because physics is what physicists decide to do.

And I raised my hand. I said, “How can you make… No, I would expect philosophers to give us a litmus test of honesty.” It’s just like the canary in the coal mine. They should tell us when truth is not being spoken. And I just couldn’t understand how a philosopher could make such a statement. I said, “There are many examples in history where physicists agreed on something and it was completely wrong. And the only way for us to find out is by experimental evidence.” Nature is teaching us. It’s a learning experience. We can all agree that we are the wealthiest people in the world, but if we go to an ATM machine, that’s the equivalent of doing an experiment and testing that idea. We can feel happy until we try to cash the money out of the ATM machine, and then we realize that our ideas were wrong.

If someone mentions an idea, how do we tell whether it’s a Ponzi scheme or not? Bernie Madoff told a lot of people that if they gave him their money, he would give them more in return, irrespective of what the stock market did. Now, that was a beautiful idea. It appealed to a lot of people. They gave him their money. What else can you expect from people who believe a beautiful idea? They made money and gave it to Bernie Madoff because the idea was so beautiful. And he felt great about it. They felt great about it. But when they wanted to cash out, which was the experiment, he couldn’t provide them the money. So this idea turned out to be wrong.

And it’s not in the nature of science to say, “Oh, okay, there are no recent experimental tests, but we can give up on those as long as we’re happy and we feel very smart, and we completely agree that we should pursue these questions and just do mathematical gymnastics and give each other awards and feel great about life, and in general just make the statement that experiments would be great, but we can’t do them right now, and therefore, let’s not even discuss them.”

Having a culture of this type is unhealthy for science because how can you tell the difference between the idea of Bernie Madoff and reality? You can feel very happy until you try to cash it out. And if you don’t have an experimental test during your life, then you might spend your life as a physicist on an idea that doesn’t really describe reality. And that’s a risk that as a physicist, I’m not willing to take. I want to spend my life on ideas that I can test. And if they are wrong, I learn something new.

And by the way, Einstein was wrong three times in the last decade of his career. He argued that black holes don’t exist, gravitational waves don’t exist, and quantum mechanics doesn’t have spooky action at a distance. But that was part of his work at the frontiers of physics. You can be wrong. There’s nothing bad about it. When you explore new territories, you don’t always know if you’re heading in the right direction. As long as you’re doing it with dignity and honesty and integrity and you’re just following what is known at the time, it’s part of the scientific pursuit. And that’s why people should not ridicule others that think outside the box. As long as they’re doing it honestly, and as long as the evidence allows for what they’re talking about, that should be considered seriously.

And I think it’s really important for the health of the scientific endeavor, because we’re missing out on opportunities to discover new things. Just to give you an example, in 1952, there was an astronomer named Otto Struve who argued that we might find Jupiter-mass planets close in to a star like the sun. Because if they’re close in, if they’re hot Jupiters, heated by the star they orbit so closely, then they would tug the sun-like star back and forth in a way that we can measure, or they would occlude a significant portion of the area of the star, so we could see them when they transit the star. So he argued, let’s search for those. And for four decades, no time on major facilities was allocated for such a search, because astronomers argued, “Oh, we pretty much understand why Jupiter formed so far away from the sun, and we shouldn’t expect hot Jupiters.” And then in 1995, a hot Jupiter was discovered. And the Nobel Prize was given for that a couple of years ago.

So you might say, “Okay, that baby was born.” Even though four decades were wasted, eventually, we found a hot Jupiter. And that opened up the field of exoplanets. But my argument is that this is one baby that was born. For each baby like that, there must be many babies that were never born, because it’s still being argued that it’s not worth the effort to pursue those frontiers.

And that’s unfortunate, because we are missing opportunities to discover new things. If you’re not open to discovering new things, you will never discover them.

Lucas Perry: I think that’s some great wisdom for many different parts of life. One thing that you mentioned earlier that really caught my attention was when you were talking about us becoming technologically advanced, and that that would unlock replicators, and that replicators could explore the universe and fundamentally change it and life in our local galactic cluster. That was also tied into the search for the meaning of life. And a place where I see these two ideas intersecting is in the idea of the cosmic endowment. The cosmic endowment is the total amount of matter and energy that an intelligent species has access to, for example after it begins creating replicators. Since the expansion of the universe is accelerating, there’s some number of galaxies which exist outside of the volume that we have access to. So there’s a limited amount of energy and matter that we can use for whatever the meaning of life is, or whatever good is. So what do you think the cosmic endowment should be used for?

Avi Loeb: Right. So I actually had an exchange with Freeman Dyson on this question. When the accelerating universe was discovered, I wrote a paper saying, “When the universe ages by a factor of 10, we will be surrounded by vacuum beyond our galaxy, and we will not have contact with other civilizations and their resources.” And he wrote back to me and said, “We should engage in a cosmic engineering project where we propel our star and come together with other civilizations. And by that, we will not be left alone.” And I told him, “Look, this cosmic engineering project is very ambitious. It’s not practical. In fact, there are locations where you have much more resources, a thousand times more than in our Milky Way Galaxy. These are called clusters of galaxies, and we can migrate to the center of the nearest cluster of galaxies. And in fact, there might be a lot of journeys taken by advanced civilizations towards clusters of galaxies to escape the cosmic expansion.”

So that’s my answer for how to prepare for the cold winter that awaits us, when we will be surrounded by vacuum. It’s best to go to the nearest cluster of galaxies, where the amount of resources is a thousand times larger. In addition to that, you can imagine that in the future, we will build accelerators that bring particles to energies that far exceed those of the Large Hadron Collider. And the maximum particle energy that we can imagine is the so-called Planck energy scale. If you imagine developing our accelerator techniques, you can, in principle, imagine building an accelerator within the solar system that will reach Planck energies. And if you collide particles at these energies, we don’t really know the physics of quantum gravity, but you can imagine a situation where you would excite the vacuum to a level where the vacuum will start burning up. Because we know the vacuum has some mass density, some energy density, that is causing the accelerated expansion, the so-called cosmological constant.

And if you bring the vacuum to a zero energy density state, then you have an excess energy that is just like a burning front. It’s the energy you get from a propellant that burns. And you get a domain wall that can expand and consume all the vacuum energy along its path. And of course, it moves at the speed of light. So if you were on the path of such a domain wall, you would not get any advance warning, and it would burn up everything along its path at the speed of light.

So I think if we ever meet advanced civilizations that have the capability of building accelerators that reach the Planck scale, we should sign a treaty, a galactic treaty, whereby we will never collide particles approaching that energy, in order not to put everyone else at risk from domain walls that would burn them up. That’s just a matter of cosmic responsibility.

Lucas Perry: I think Max Tegmark calls these “death bubbles”?

Avi Loeb: Yeah. I mean, these are domain walls that, of course, we have no evidence for, but they could be triggered by collisions at the Planck scale. And a matter of cosmic responsibility is not to generate these domain walls artificially.
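As a brief aside on scale: the Planck energy mentioned above is set by fundamental constants. The expression and numbers below are standard textbook values, not figures quoted in the interview:

$$E_{\mathrm{Planck}} = \sqrt{\frac{\hbar c^{5}}{G}} \approx 1.22 \times 10^{19}\ \mathrm{GeV} \approx 2 \times 10^{9}\ \mathrm{J}$$

For comparison, the Large Hadron Collider reaches roughly $1.3 \times 10^{4}$ GeV per proton-proton collision, about fifteen orders of magnitude below the Planck scale, which is why such an accelerator would have to be built on solar-system dimensions.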

Lucas Perry: So let’s pivot into looking for life and space archeology, which is a cool term that you’ve created, and searching for life through bio-signatures and techno-signatures. One place that I’m curious to start, since we were just talking about replicators, is: why is it that we don’t find evidence of replicators or large-scale superstructures in other galaxies or in our own galaxy? For example, a galaxy where half of it has been turned into Dyson spheres, so it’s like half illuminated.

Avi Loeb: Right. I mean, presumably such things do not exist. It’s actually very difficult to imagine an engineering project that would construct a Dyson sphere. And I think it’s much more prudent for an advanced civilization to build small pieces of equipment that go through the vast space in between stars. And that is actually very difficult for us to detect with existing instrumentation. Even a spacecraft as big as a football field would be noticed only when it passes within the Earth’s orbit around the sun. That’s the only region where Pan-STARRS detected objects the size of ʻOumuamua, from the reflected sunlight. So we will not notice such objects farther than the Earth is from the sun. And the distance to the nearest star is hundreds of thousands of times bigger than that. So most of space could be filled with things passing through it that are not visible to us.

A spacecraft the size of a football field is huge. We cannot imagine building something much bigger than that. And so I would argue that there could be a lot of things floating through space. Also, as of now, our telescopes were not monitoring for objects that move very fast, at a fraction of the speed of light. If astronomers saw something moving across the sky that fast, they would dismiss it. They would say, “It makes no sense. We are looking for asteroids or comets that are moving at a percent of a percent of the speed of light, 10 to the minus four of the speed of light.” So part of it is our inability to consider possibilities that may exist out there. But mostly, the fact that we haven’t yet detected a lot of these objects comes down to a lack of sensitivity. We can’t really see these things when they’re far away unless they are major megastructures, as you pointed out. But I think such engineering projects are unlikely.
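To make the sensitivity point concrete, here is a minimal sketch of why reflected sunlight makes small objects so hard to see far from Earth. The 1/(r² · Δ²) scaling is the standard one for sunlight-reflecting bodies; the particular reference distance and sample distances are illustrative choices, not figures from the interview.

```python
# Toy illustration: brightness of a small sunlight-reflecting object versus distance.
# Sunlight weakens as 1/r^2 on the way out to the object (r = object-sun distance),
# and the reflected light weakens as 1/delta^2 on the way back to Earth
# (delta = object-Earth distance, taken at opposition; phase effects are ignored).

def relative_brightness(r_au: float) -> float:
    """Brightness relative to the same object at r = 2 AU (delta = 1 AU)."""
    delta_au = r_au - 1.0  # object-Earth distance at opposition, in AU
    reference = 1.0 / (2.0 ** 2 * 1.0 ** 2)
    return (1.0 / (r_au ** 2 * delta_au ** 2)) / reference

if __name__ == "__main__":
    for r in [2, 3, 5, 10, 50]:
        print(f"r = {r:>3} AU  ->  {relative_brightness(r):.1e} x the reference brightness")
```

Once an object is far from both the sun and the Earth, its apparent brightness falls roughly as the fourth power of distance, which is why a football-field-sized reflector is only catchable when it swings through the inner solar system.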

Lucas Perry: I’m curious why you feel that engineering projects like that are unlikely. It seems like one of the most interesting things you can do is computation. Computation seems like it has something to do with creating consciousness, and consciousness seems like it is the bedrock of value, given that all value arises in conscious experience. I would imagine using the energy of suns to enable vast amounts of computation is one of the most interesting things that a civilization can do. And the objects that they might send out to other solar systems could be nanoscale, right? You send out nanoscale replicators, even smaller than football fields or smaller than ʻOumuamua, and those would then begin Dyson sphere engineering projects. With artificial superintelligence and billions and billions of years to go in the universe, in some sense it feels like we’re in the early universe. It seems curious to me why superstructures would be unlikely. I’m not sure I totally understand that.

Avi Loeb: If you think about what a star is, a star is just a nuclear reactor that is bound by gravity. That doesn’t seem to be the optimal system for us to use. It’s better to build an artificial nuclear reactor that is not bound by gravity, like a nuclear engine. We’re trying to do that. It’s not easy to build a fusion reactor on Earth, but we do have fission reactors. If I were to think about using nuclear energy, I would say it’s much better to use artificially made nuclear engines than to use the energy produced by a giant nuclear reactor that nature produced in particular locations. Because then you can carry your engine with you. You are always close to it. You can harness all of its energy and you don’t need to put a huge structure around the star, which brings in a lot of engineering difficulties or challenges.

I would be leaning in the direction of having a lot of small systems sent out rather than a giant system that covers the star. But once again, I would argue that we should look at the evidence, and there are constraints on Dyson spheres that imply that they are not very common. I should say, a couple of weeks ago, I wrote a paper with an undergraduate student at Stanford, Elisa Tabor, that considers the possibility of detecting artificial lights on the night side of Proxima b, the habitable planet around the nearest star, Proxima Centauri, using the James Webb Space Telescope. We show that one can put very interesting limits on the level of artificial illumination on the dark side of that planet, if there are any city lights out there.

The other technological signatures that one can look for are, for example, industrial pollution in the atmosphere of a planet. I wrote a paper about that six years ago. You can look for the reflectance that indicates photovoltaic cells on the day side of a planet, which has a spectral edge quite different from the reflectance of rock. You can look for light beams that sweep across the sky; you see them as a flash of light. For example, light being used for propulsion with light sails. If you imagine another planetary system where cargo is being delivered from an Earth-like planet to a Mars-like planet using light sails, the beam of light could cross our line of sight and we could see it as a flash of light, and we could even correlate it with the two planets passing along our line of sight. That would give us confidence that it is indeed a light sail traveling between those two planets that we are witnessing. I wrote a paper about that in 2016.

There are all kinds of technological signatures we can search for, but we need to actually search for them, and we need to put funds towards this.

Lucas Perry: We have both bio-signatures and techno-signatures. In terms of bio-signatures, you’ve proposed looking in the clouds of brown dwarfs and green dwarfs. There’s also looking around our own solar system at, for example, the atmosphere of Venus, where there was phosphine, which we thought could not exist except through biological pathways. So it’s hypothesized that maybe there’s some kind of life in the atmosphere of Venus. People are searching other planets for molecules that can’t exist without life. Then, in terms of techno-signatures, people are searching for radio waves, which you’ve talked about. That is a primary way of looking for life, but it potentially needs a refresh, for example by looking for artificial light or the remnants of industry. You’ve also proposed developing imaging that is increasingly sensitive, because ʻOumuamua was basically at the limit of our telescopes’ capacity, wasn’t it?

Avi Loeb: It was, roughly. I mean, it was at the level of sensitivity that allows us a definite detection, but we can’t see objects that are much smaller than that, or that reflect much less light than ʻOumuamua did. I should say that all of these, both biological signatures and technological signatures, are reviewed in a textbook that I wrote together with my former postdoc Manasvi Lingam that is coming out on the 29th of June, 2021. It’s more than a thousand pages long, 1,061 pages, and it has an overview of the current scientific knowledge we have and the expectations we have for biological signatures and technological signatures. The title of the book is “Life in the Cosmos” and it’s published by Harvard University Press. It is meant to be a textbook for scientific research, as a follow-up to my popular-level book, Extraterrestrial.

Lucas Perry: Pivoting a bit here, and we touched on this a little earlier when we were talking about whether aliens would be interested in us or compelled to reach out to us because of ethical concerns: do you think that advanced alien civilizations can converge on ethics and beneficence?

Avi Loeb: That’s an interesting question. It really depends on their value system. It also depends on Darwinian selection. The question is: what kind of civilizations will be most abundant? If you look at human history, very often the more aggressive, less ethical cultures survived, because they were able to destroy the others. It’s not just a matter of which values appear to be more noble. It’s a question of which set of values leads to survival and domination in the long run. Without knowing the full spectrum of possibilities, we can’t really assess that. Once again, I would say the smart thing for us to do is be careful. I mean, not transmit too much to the outside world until we figure out if we have neighbors. There was this joke, when I Love Lucy was being replayed again and again, that we might get a message from another planet saying, “If you keep replaying reruns of I Love Lucy, we will invade you.”

I think, well, it’s important for us to be careful and figure out first whether there are smarter kids on the block. But having said that, if we ever establish contact, or if we find equipment in our neighborhood, the question is what to do. It’s a policy question how to respond to that, and it really depends on the nature of what we find. How much more advanced is the equipment that we uncover? What were the intentions of those who produced it and sent it? These are fundamental questions that will guide our policy and our behavior, and until we find conclusive evidence, we should wait for that moment.

Lucas Perry: To push back a little bit on the Darwinian argument: that is, of course, a factor, where we have this kind of game-theoretic expression of genes, the selfish gene trying to propagate itself through generations, leading to behaviors, with the human being conditioned by evolution in that way. But there’s also the sense that over time humanity has become increasingly moral. We’re, of course, doing plenty of things right now that are wrong, but morality seems to be improving over time. This leads to a question: do you think that there is a necessary relationship between what is true and what is good? You need to know more and more true facts in order to, for example, spread throughout the universe. So, if there’s a necessary relationship between what is true and what is good, there would also be a convergence on what is good as truth continues to progress.

Avi Loeb: I was asked about this in a forum. When I joked about the fact that I seek intelligence in space, in the sky, because I don’t often find it here on Earth, a member of the audience chuckled and asked me: how do you define an intelligent civilization? The way I define it is by the guiding principles of science, which are sharing of, and cooperation on, evidence-based knowledge. The word cooperation is extremely important. I believe that intelligence is marked by cooperation, not by fighting each other, because that’s a sink for our energy, for our resources, that doesn’t do any good. Promoting a better future for ourselves through cooperation is a trademark of intelligence. It’s also the guiding principle of science.

The second component of these guiding principles is evidence-based knowledge. The way I view science is that it’s an infinite sum game. In economics, you have a zero sum game, where if someone makes a profit, another person loses. In science, when we increase the level of knowledge we have, everyone benefits. When a vaccine was developed for COVID-19, everyone on Earth benefited from it. Science aims to increase the territory of this island of knowledge that we have in the ocean of ignorance that surrounds it. It should be evidence-based, not based on our prejudice. That’s what I hope the future of humanity is. It will coincide with the guiding principles of science, meaning people will cooperate with each other, nations will cooperate with each other and try to share evidence-based knowledge, rather than the alternative.

The alternative is what we are doing right now: fighting each other, trying to feel superior relative to each other. If you look at human history, you find racism, you find attempts at supremacy or elitism, all kinds of phenomena that stem from a desire to feel superior relative to other people. That’s ridiculous in the big scheme of things, because we are such an unimportant player on the cosmic stage that we should all feel modest, not try to feel superior relative to each other. Because any advantage that we have relative to each other is really minuscule in the big scheme of things. The color of the skin is completely meaningless. Who cares what the color of the skin is? What significance could that have for the qualities of a person?

Yet, a lot of human history is shaped around that. This is not the intelligent way for us to behave as a species. We should focus on the guiding principles of science which are cooperation and sharing of evidence-based knowledge. Rather than ridiculing each other, rather than trying to feel superior relative to each other, rather than fighting each other, let’s work together towards a better future and demonstrate that we are intelligent so that we will acquire a place in the club of intelligent species in the Milky Way galaxy.

Lucas Perry: Do you see morality as evidence-based knowledge?

Avi Loeb: I think morality, if you listen to Kant, is the logical thing to follow if you consider principles that will promote the greater good of everyone around you. You’re basically taking others into consideration and shaping your behavior so that if other people follow the same principles, we will be in a better world. That, to me, is a sign of recognizing evidence, because the evidence is that you don’t live alone. If you were the only person on Earth, morality would lose its significance. It’s not just that there is nobody else for you to consider morality relative to; the issue is that it’s irrelevant. You don’t need to consider morality because you’re the only person. You can do whatever you want. It has no effect on other people, therefore morality is not relevant. But given that you look at the evidence and you realize that you’re not alone, that’s evidence. You shape your behavior based on that evidence, and I do think that’s evidence-based knowledge. Definitely.

Lucas Perry: How do you see axiom-based knowledge? There are, for example, axioms of morality and mathematics that build those structures, and there are also axioms of science, like this value of communication and evidence-based reasoning. Axioms in morality might be, for example, that value and disvalue are innate and intrinsically experienced in consciousness. Then there are axioms in mathematics which motivate and structure that field. We’ve talked a lot about science and evidence-based reasoning, but what about knowledge in the philosophical territory which is almost a priori true, the things which we rest fields upon? How do you see that?

Avi Loeb: I do believe that there is room for the humanities of the future. The way that philosophy was handled in past centuries should be updated. Let me illustrate that with an example related to your question. Suppose we want to decide on the principles of morality. One way to do that is to construct a simulation that includes a lot of people. In principle, you include all the ingredients that make people behave one way or another. It doesn’t need to be rational reasoning. You can include some randomness or some other elements that shape human behavior based on their environment. You can include that in the simulation.

Let’s just imagine this simulation where you put individual people and you have an algorithm for the way that they respond to their environment. It doesn’t need to be by rational reasoning. It could be emotional, it could be any other way that you find appropriate. You have the building blocks. Each of them is a person and you introduce the randomness that is in the population. Then, you run the simulation and you see what happens. This is just like trying to produce human history artificially. Then, you introduce principles for the behavior of people, guiding principles, just like moral principles.

First you let people behave in a completely crazy way, doing anything they want, and then you see people get killed as the outcome of the simulation. But if you introduce principles of morality, you can see the outcomes that come out of it. What I would say is that, in principle, in the future, if we have a sophisticated enough computer algorithm to describe the behavior of people, if we get a better sense of how people behave and respond to their environment, we can design the optimal code of conduct by which people should behave, such that we end up in a stable society that is intelligent, that follows the kind of principles I mentioned before, that is orderly, and that benefits everyone for a better future.

That’s one way of approaching it. Obviously in the past, philosophers could not approach it this way because they didn’t have the computer capabilities that we currently have. You can imagine artificial intelligence addressing this task in principle.
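A minimal sketch of the kind of simulation Loeb describes might look like the toy below: agents follow a behavioral rule, and each rule set is scored by how long the population survives, the criterion he goes on to propose. All of the rules, numbers, and the single "cooperation propensity" parameter are invented for illustration; none of them come from the interview.

```python
import random

def run_society(cooperate_prob: float, n_agents: int = 100,
                max_steps: int = 1000, seed: int = 0) -> int:
    """Return how many steps a population survives under a given propensity to cooperate."""
    rng = random.Random(seed)
    resources = [10.0] * n_agents
    for step in range(max_steps):
        # Pair agents at random; cooperation grows the shared pie, conflict shrinks it.
        rng.shuffle(resources)
        for i in range(0, len(resources) - 1, 2):
            if rng.random() < cooperate_prob:
                resources[i] += 1.0
                resources[i + 1] += 1.0
            else:
                resources[i] -= 2.0
                resources[i + 1] -= 2.0
        # Agents that run out of resources drop out of the population.
        resources = [r for r in resources if r > 0]
        if len(resources) < 2:
            return step
    return max_steps

if __name__ == "__main__":
    # Score different "moral codes" (here just a propensity to cooperate) by longevity.
    for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
        print(f"cooperation propensity {p:.1f}: survived {run_society(p)} steps")
```

Even this crude setup reproduces the qualitative point: rule sets that favor cooperation keep the simulated society alive far longer than ones that favor conflict, and one could in principle search over much richer behavioral codes in the same way.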

Lucas Perry: You can set moral principles and moral parameters for a system and then evolve the system, but the criteria for evaluating the success or failure of that system, those are more like moral axioms. As a scientist, I’m curious how you approach, for example, the moral axioms that you use for evaluating the evolution of a particular moral system.

Avi Loeb: My criterion, the one that guides me, is maintaining the longevity of the human species. Whatever will keep us around for the longest amount of time. Of course, bearing in mind that the physical conditions on Earth will change. Within a billion years, the sun will boil off all the oceans on Earth, but let’s leave that aside. Let’s just ask: suppose you put people in a box and, generation after generation, let them follow some principles. What would be the ideal principles to maintain the stability of society and the longevity of the human species? That’s what would guide me. I think survival is really the key for maintaining your ideas. That’s the precondition. In nature, things that are transient go away. They don’t survive, and so they have less value. Obviously in the short term they could have more value, but I care about the long term, and I define the principles based on how long they would allow us to survive.

Lucas Perry: But would you add expected value to that calculation? It’s not just time, but it’s actually like the expected reward or expected value over time. Because some futures are worse than others and so maybe we wouldn’t want to just have longevity.

Avi Loeb: There is the issue of being happy and pleased with the environment that you live in. That could be factored in. But I think the primary principle would be survival, because within any population you will always find a fraction of its members that are happy. That partly depends on the circumstances they live in, but partly on the way they accept those circumstances. You can live in a barn and be happy. You can be in a mansion and be unhappy. It’s complicated as to what makes you happy, and I would put that as a secondary condition. I would worry more about social structures that maintain longevity.

Lucas Perry: All right. On humanity’s longevity: we’re just beginning to become technologically advanced, and we’re facing existential risks in the 21st century from artificial intelligence, nuclear weapons, and synthetic biology. There are UFOs, there’s ʻOumuamua, and a lot of really interesting, crazy things are going on. I’m curious if you could touch on the challenge of humanity’s response, and the need for international governance for potentially communicating with and encountering alien life.

Avi Loeb: Well, I do think it’s extremely important for us to recognize that we belong to the same species. All the confrontations we often have in politics, between nations, should play a lesser role in guiding our behavior. Cooperation on the global scale, international cooperation, is extremely important. Let me give an example from recent history. There was a virus that came from Wuhan, China. If scientists had been allowed to gather all the information about how this virus emerged and what its characteristics are, then the vaccine could have been developed earlier, and it could have saved the lives of many people.

I would say, in the global world that we live in today, many of our problems are global, and therefore we should cooperate on the solutions. That argues against putting borders around our knowledge, against trying, again, to gain superiority of one nation relative to another, and instead for helping each other towards a better future. It’s really science that provides the glue that can bind us internationally. I realize, trying to be a realist, that it may not happen anytime soon that people will recognize the value of science as the international glue. But I hope that eventually we will realize that this is the only path that will bring us to survival and to a better future: acting based on cooperation on evidence-based knowledge.

Lucas Perry: In 2020, you wrote an article where you advocate for creating an elite scientific body to advise on global catastrophes. At the Future of Life Institute, we’re interested in reducing existential risks: ways in which technology can be misused or lead to accidents which lead to the extinction of life on Earth. Could you comment on your perspective on the need for an elite scientific body to advise on existential and global catastrophic risks?

Avi Loeb: Well, we noticed during the pandemic that we were not really prepared, especially in the Western world, because the last major pandemic of this magnitude took place a century ago, and nobody in politics or otherwise today was around back then. As a result, we were not ready. We were not prepared. I think it’s prudent to have an organization that will cultivate cooperation globally. It could be established by the United Nations. It could be a different body. But once again, it’s important for us to plan ahead and avoid catastrophes that could be more damaging than COVID-19. Preventing them would more than repay the investment of funds.

Just to give you another example: solar eruptions, solar storms, something like the Carrington Event. About 150 years ago, there was a big eruption on the sun that brought energetic particles to Earth, and back in the mid-19th century, there wasn’t much technological infrastructure. But if the same event happened today, it would cost trillions of dollars to the world economy, because it would damage power grids, satellites, communication, and so forth. It would be extremely expensive. It’s important for us to plan ahead. About seven years ago, there was a plume of hot gas that was ejected by the sun, and it just missed the Earth. We should be ready for that and build infrastructure that would protect us from such a catastrophe.

There are many more. One can go through the risks; some of them are bigger than others, and some of them are rarer than others. Of course, one of them is the risk of an asteroid hitting the Earth, and Congress has tasked NASA with finding all asteroids or rocks bigger than the size of ʻOumuamua, about 140 meters. They wanted NASA to find 90% of all of those that could potentially intercept and collide with Earth. The Pan-STARRS telescope that we started from, the one that discovered ʻOumuamua, was funded for finding such near-Earth objects. The Vera Rubin Observatory will most likely fulfill two-thirds of the Congressional task and find 60% of all the near-Earth asteroids bigger than 140 meters.

That shows that the human brain is actually much more useful for survival than the body of a dinosaur. The dinosaurs had huge bodies. 66 million years ago, they were very proud of themselves. They dominated their environment, they ate grass, and they were happy. Then, from the sky came this giant rock the size of Manhattan Island, and when it hit the ground, it ended their ego trip abruptly. Just to show you that the human brain, even though it’s much smaller than the dinosaur body, is much more precious for protecting us, because we can design telescopes that would alert us to incoming objects. That’s a catastrophe that we can obviously protect ourselves against by shifting the trajectories of objects heading our way.

Lucas Perry: As a final question, I’m curious, what are the most fundamental questions to you in life and what motivates and excites you from moment to moment as a human being on earth? I’ve read or heard that you were really interested in existentialism as a kid. What are the most foundational or important questions to you?

Avi Loeb: The fundamental issue is that we live for a finite time, a short time. The question is: what’s the meaning of our existence? Very often we forget that this trip that we’re having, which is very exciting and can be very stimulating and intriguing, is finite. When I realized that, when both my parents passed away over the past three years, I came to the realization that I shouldn’t give a damn about what other people think. Let’s focus on the substance. Let’s keep our eyes on the ball and not on the audience. That focused my attention on the important things in life that we should appreciate.

Then, there is this fundamental question of why is life worth living? What are we living life for? What is the meaning of our life? You know, it may well be that there is no meaning, that we just go through this interesting trip; that we are spectators of the universe. We should enjoy the play while it lasts. But that, again, argues that we should be modest and behave like spectators rather than trying to shape our immediate environment and feel a sense of deep arrogance as a result of that. That was the view of the dinosaurs before the rock hit them. In a way, what gives me a sense of a meaningful life is just looking at the universe and learning from it. I don’t really care about my colleagues.

Every morning I jog at 5:00 AM. I developed this routine during the pandemic. I enjoy the company of birds, ducks, wild turkeys, and rabbits. I really enjoy nature left to its own devices, much more than people, because there is something true in looking at it. Every morning, I see something different. Today, I saw a red bird. The sunrise was completely different than yesterday’s. Every day, you can learn new things; we just need to pay attention and not feel that we know everything. It’s not about us. It’s about what is going on around us that we should pay attention to. Once we behave more like kids, appreciating things around us and learning from them, we will feel happier. I was asked by the Harvard Gazette: what is the one thing I would like to change about the world? I said I would like my colleagues to behave more like kids, basically not being driven by promoting their image, but rather being willing to make mistakes, putting skin in the game, and regarding life as a learning experience. We might be wrong sometimes, but we are doing our best to figure out when we are wrong.

Lucas Perry: All right, Avi. Thank you very much for inspiring this childlike curiosity in science, for also helping to improve the cultural and epistemic situation in science, and also for your work on ‘Oumuamua and everything to do with extraterrestrials and astronomy. Thank you very much for coming on the podcast.

Avi Loeb: Thanks for having me. I had a great time.

 

Nicolas Berggruen on the Dynamics of Power, Wisdom, and Ideas in the Age of AI

  • What wisdom consists of
  • The role of ideas in society and civilization
  • The increasing concentration of power and wealth
  • The technological displacement of human labor
  • Democracy, universal basic income, and universal basic capital
  • Living an examined life

 

Check out Nicolas Berggruen’s thoughts archive here

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is with Nicolas Berggruen and explores the importance of ideas and wisdom in the modern age of technology. We explore the race between the power of our technology and the wisdom with which we manage it, what wisdom really consists of, why ideas are so important, the increasing concentration of power and wealth in the hands of the few, how technology continues to displace human labor, and we also get into democracy and the importance of living an examined life.

For those not familiar with Nicolas, Nicolas Berggruen is an investor and philanthropist. He is the founder and president of Berggruen Holdings, and is a co-founder and chairman of the Berggruen Institute. The Berggruen Institute is a non-profit, non-partisan, think and action tank that works to develop foundational ideas about how to reshape political and social institutions in the face of great transformations. They work across cultures, disciplines and political boundaries, engaging great thinkers to develop and promote long-term answers to the biggest challenges of the 21st Century. Nicolas is also the author, with Nathan Gardels, of Intelligent Governance for the 21st Century: A Middle Way between West and East as well as Renovating Democracy: Governing in the Age of Globalization and Digital Capitalism. And so without further ado, let’s get into our conversation with Nicolas Berggruen. 

So, again, thank you very much for doing this. And to set a little bit of stage for the interview and the conversation, I just wanted to paint a little bit of a picture of wisdom and technology and this side of ideas, which is not always focused on when people are looking at worldwide issues. And I felt that this Carl Jung quote captured this perspective well. He says that “indeed it is becoming ever more obvious that it is not famine, not earthquakes, not microbes, not cancer, but man himself who is man’s greatest danger to man for the simple reason that there is no adequate protection against psychic epidemics, which are infinitely more devastating than the worst of natural catastrophes.” So, I think this begins to bring us to a point of reflection where we can think about, for example, the race between the power of our technology and the wisdom with which we manage it. So, to start things off here, I’m curious if you have any perspective about this race between the power of our technology and the wisdom with which we manage it, and in particular what wisdom really means to you.

Nicolas Berggruen: So, I think it’s an essential question. And it’s becoming more essential every day, because technology, which is arguably something that we’ve empowered and accelerated, is becoming increasingly powerful, to a point where we might be at the cusp of losing control. Technology, I think, has always been powerful. Even in very early days, if you had a weapon as technology, well, it helped us humans on one side to survive, likely by killing animals, but it also helped us fight. So, it can be used both ways. And I think that can be said of any technology.

What’s interesting today is that technology is potentially, I think, at the point of risk, or in the zone of opportunity, where the technology itself takes on a life of its own. Go back to the weapon example. If the weapon is not being manned somehow, well, the weapon is inert. But today, AIs are beginning to have lives of their own. Robots have lives of their own. And networks are living organisms. So, the real question is: when these pieces of technology begin to have their own lives, or are so powerful and so pervasive that we are living within the technology, well, that changes things considerably.

So, going back to the wisdom question, it’s always a question. When technology is a weapon, what do you do with it? And technology’s always a weapon, for the good or for the less good. So, you’ve got to have, in my mind at least, wisdom, intention, an idea of what you can do with technology, what might be the consequences. So, I don’t think it’s a new question; I think it’s a question since the beginning of time for us as humans. And it will continue to be a question. It’s just maybe more powerful today than it ever was. And it will continue to become more potent.

Lucas Perry: What would you say wisdom is?

Nicolas Berggruen: I think it’s understanding and projection together. So, it’s an understanding of maybe a question, an issue, and taking that issue into the real world and seeing what you do with that question or that issue. So, wisdom is maybe a combination of thinking and imagination with application, which is an interesting combination at least.

Lucas Perry: Is there an ethical or moral component to wisdom?

Nicolas Berggruen: In my mind, yes. Going back to the question, what is wisdom? Do plants or animals have wisdom? And why would we have wisdom, and they not? We need to develop wisdom because we have thoughts. We are self-aware. And we also act. And I think the interaction of our thinking and our actions is what creates the need for wisdom. And in that sense, a code of conduct, ethical issues, moral issues become relevant. They’re really societal, they’re really cultural questions. So, they’ll be very different depending on when and where you are: if you are sitting, as it seems we both are, in America today, in 2021; or if we were sitting somewhere else 2,000 years ago; or even today, if we’re sitting in Shanghai or in Nairobi.

Lucas Perry: So, there’s this part of understanding and there’s this projection and this ethical component as well and the dynamics between our own thinking and action, which can all interdependently come together and express something like wisdom. What does this projection component mean for you?

Nicolas Berggruen: Well, again, to me, one can have ideas, can have feelings, a point of view, but then how do you deal with reality? How do you apply it to the real world? And what’s interesting for us as humans is that we have an inner life. We have, in essence, a life with ourselves. And then we have the life that puts us in front of the world, makes us interact with the world. And are those two lives in tune? Are they not? And how far do we push them?

Some people in some cultures will say that your inner life, your thoughts, your imagination are yours: keep them there. Other cultures, and other ways of being as individuals, will make us act out those emotions and that imagination in the real world. And there’s a big difference. In some thinking, action is everything. For some philosophers, you are what you do. For others, less so. And that’s the same with cultures.

Lucas Perry: Do you see ideas as the bridge between the inner and the outer life?

Nicolas Berggruen: I think ideas are very powerful because they activate you. They move you as a person. But again, if you don’t do anything with them, in terms of your life and your actions, they’ll be limited. What do you do with those ideas? And there, the field is wide open. You can express them or you can try to implement them. But I do think that unless an idea is shared in any way that’s imaginable, unless that idea is shared, it won’t live. But the day it’s shared, it can become very powerful. And I do believe that ideas have and will continue to shape us humans.

We live in a way that reflects a series of ideas. They may be cultural ideas. They may be religious ideas. They may be political ideas. But we all live in a world that’s been created through ideas. And who created these ideas? Different thinkers, different practitioners throughout history. And you could say, “Well, these people are very creative and very smart and they’ve populated our world with their ideas.” Or we could even say, “No, they’re just vessels of whatever was the thinking of the time. And at that time, people were interested in specific ideas and specific people. And they gained traction.”

So, I’m not trying to overemphasize the fact that a few people are smarter or greater than others and everything comes from them, but in reality, it does come from them. And the only question then is, were they the authors of all of this? Or did they reflect a time and a place? My feeling is it’s probably a bit of both. But because we are humans, because we attribute things to individuals one way or another, ideas get attributed to people, to thinkers, to practitioners. And they’re very powerful. And I think, undoubtedly, they still shape who we are and I think will continue to shape who we are at least for a while.

Lucas Perry: Yeah. So, there’s a sense that we’ve basically inherited hundreds of thousands of years of ideas and thinking from our ancestors. And you can think of certain key persons, like philosophers or political theorists and so on, who have contributed majorly. And so, you’re saying that they may have partially been a reflection of their own society, and that their thought may have been an expression of their own individuality and their own unique thinking.

So, just looking at the state of humanity right now and how effective we are at going after more and more powerful technology, how do you see our investment in wisdom and ideas relative to the status of and investment that we put into the power of our technology?

Nicolas Berggruen: To me, there’s a disconnect today between, as you say, the effort that we put into developing the technologies and the effort that’s being invested in understanding what these technologies might do and thinking ahead: what happens when these things come to life? How will they affect us and others? So, we are, rightly so, impressed and fascinated by the technologies. And we are less focused on the effects of these technologies on ourselves, on the planet, on other species.

We’re not unaware, and we’re getting more and more aware. I just don’t know if, as you say, we invest enough of our attention and our resources there, and also whether we have the patience, and you could almost say the wisdom, to take your word, to take the time. So, I see a disconnect. And going back to the power of ideas, you could maybe ask the question in a different way: ideas or technology, which one is more influential? Which one is more powerful? I would say they come together. But technologies alone are limited, or at least historically have been limited. They needed to be manifested, or let’s say empowered, by the owners or the creators of the technologies. They helped people. They helped the idea-makers or the ideas themselves enormously. So, technology has always been an ally to the ideas. But technology alone, without a vision, I don’t think ever got that far. So, without ideas, technology is a little bit like an orphan.

And I would argue that the ideas are still more powerful than the technologies, because if you think about how we think today, how we behave, we live in a world that was shaped by thinkers a few thousand years ago, no matter where we live. In the West, we are shaped by thinkers who lived in Greece two or three thousand years ago. We are shaped by beliefs that come from religions that were created a few thousand years ago. In Asia, the cultures have been shaped by people who lived two or three thousand years ago as well. And the technology, which has changed enormously in every way, East or West, may have changed the way we live, but not that much. The way we behave with each other, the way we use the technologies, still reflects thinking and cultures and ideas that were developed two or three thousand years ago. So, I would almost argue the ideas are more powerful than anything. The technologies are an ally, but they themselves don’t change the way we think, behave, feel, at least not yet.

It’s possible that certain technologies will truly… and this is what’s so interesting about living today, that I think some technologies will help us transform who we are as humans, potentially transform the nature of our species, maybe help us create a different species. That could be. But up to now, in my mind at least, the ideas shape how we live; the technologies help us live maybe more deeply or longer, but still in a way that reflects the same ideas that were created a few thousand years ago. So, the ideas are still the most important. So, going back to the question, do we need, in essence, a philosophy for technology? I would say yes. Technologies are becoming more and more powerful. They are powerful. And the way you use technology will reflect ideas and culture. So, you’ve got to get the culture and the ideas right because the technologies are getting more and more powerful.

Lucas Perry: So, to me, getting the culture and the ideas right sounds a lot like wisdom. I’m curious if you would agree with that. And I’m also curious what your view might be on why it is that the pace of the power of our technology seems to rapidly outpace the progress of the quality of our wisdom and culture and ideas. Because it seems like today we have a situation where we have ideas that are thousands of years old that are still being used in modern society. And some of those may be timeless, but perhaps some of them are also outdated.

Nicolas Berggruen: Ideas, like everything else, evolve. But my feeling is that they actually evolve quite slowly, much more slowly than they could. I think we, as humans, are still analog, even though we live increasingly in a digital world. Our processes and our ability to evolve are analog and still fairly slow. So, the changes that happened over the last few millennia, which are substantial, even in the world of ideas, things like the Enlightenment and other important changes, happened in a way that was very significant. They changed entirely the way we behave, but it took a long time. And technology helps us, but it’s so much a part of our lives that there’s a question at some point: are we attached to the technology? Meaning, are we driving the car, or is the car driving us? And we’re at the cusp of this, potentially. And it’s not necessarily a bad thing, but, again, do we have wisdom about it? And can we lose control of the genie, in some ways?

I would argue, for example, that social media networks have become very powerful. And the creators, even if they control the networks, and they still do in theory, have really lost control of them. The networks really have a life of their own. Could you argue the same about other times in history? I think you could. I mean, if you think of Martin Luther and the Gutenberg Bible, you could say, “Well, that relates ideas and technologies.” And in a way that was certainly less rapid than the internet, technology, in the case of printed material, really helped spread an idea. So, again, I think that the two come together. And one helps the other. In the example I just gave, you had an idea; the technology helped.

Here, what’s the idea behind, let’s say, social networks? Well, giving everybody a voice, giving everybody connectivity. It’s a great way to democratize access and a voice. Have we thought about the implications of that? Have we thought about a world where, in theory, everyone on earth has the same access and the same voice? Our political institutions, our cultures really are only now dealing with it. We didn’t think about it ahead. So, we are catching up in some ways. The idea of giving every individual an equal voice, maybe that’s a reflection of an old idea. That’s not a new idea. The instrument, meaning let’s say social media, is fairly new. So, you could say, “Well, it’s just a reflection of an old idea.”

Have we thought through what it means in terms of our political and cultural lives? Probably not enough. So, I would say half and half in this case. The idea of the individual is not new. The technology is new. The implications are something that we’re still dealing with. You could also argue that this is the nature of anything new: an idea, in this case helped by technology. And we don’t really know where the journey leads us. It was a different way of thinking. It became incredibly powerful. You didn’t know at the beginning how powerful it would be and where it would lead. It did change the world.

But it’s not the technology that changed the world; it’s the ideas. And here, the question is the technology, let’s say social networks, are really an enabler. The idea is still the individual. And the idea is democratizing access and voices, putting everybody on the same level playing field, but empowering a few voices, again, because of network. So, it’s this dance between technology and humans and the ideas. At the end, we have to know that technology is really just a tool, even though some of these tools are becoming potential agents themselves.

Lucas Perry: Yeah. The idea of the tools becoming agents themselves is a really interesting idea. Would you agree with the characterization then that technology without the right ideas is orphaned, and ideas without the appropriate technology is ineffectual?

Nicolas Berggruen: Yes, on both.

Lucas Perry: You mentioned that some of the technology is becoming an agent in and of itself. So, it seems to me then that the real risk there is that if that technology is used or developed without the wisdom of the appropriate ideas, that that unwise agentive technology amplifies that lack of wisdom because being an agent, its nature is to self-sustain and to propagate and to actualize change in the world of its own accord. So, it seems like the fact that the technology is becoming more agentive is like a calling for more wisdom and better ideas. Would you say that that’s fair?

Nicolas Berggruen: Absolutely. So, technology in the form of agents is becoming more powerful. So, you would want wisdom, you would want governance, you would want guidance, thinking, intention behind those technologies, behind those agents. And the obvious ones that are coming, everything around AI. But you could say that some of the things we are living with are already agents, even though they may not have been intended as agents. I mentioned social networks.

Social networks, frankly, are living organisms. They are agents. And no matter if they’re owned by a corporation and that corporation has a management, the networks today are almost like living creatures that exist for themselves or exist as themselves. Now, can they be unplugged? Absolutely. But very unlikely that they’ll be unplugged. They may be modified. And even if one dies, they’ll be replaced most likely. Again, what I’m saying is that they’ve become incredibly powerful. And they are like living organisms. So, governance does matter. We know very well from these agents that they are amazingly powerful.

We also know that we don’t know that much about what the outcomes are, where the journey may lead and how to control them. There’s a reason why in some countries, in some cultures, let’s say China or Russia or Turkey, there’s been a real effort from a standpoint of government to control these networks because they know how powerful they are. In the West, let’s say in the US, these networks have operated very freely. And I think we’ve lived with the real ramifications as individuals. I don’t know what the average engagement is for individuals, but it’s enormous.

So, we live with social networks. They’re part of us; we are part of them equally. And they’ve empowered political discourse and political leaders. I think that if these networks hadn’t existed, certain people may not have gotten elected. Certainly, they wouldn’t have gotten the voice that they got. And these are part of the unintended consequences. And it’s changed the nature of how we live.

So, we see it already. And this is not AI, but it is, in my mind. Social networks are living creatures.

Lucas Perry: So, following up on this idea of technology as agents and organisms. I’ve also heard corporations likened to organisms. They have a particular incentive structure and they live and die by their capacity to satisfy that incentive, which is the accumulation of capital and wealth.

I’m curious, in terms of AI, and I know you were at Beneficial AI 2017, what your view is of how ideas play a role in value alignment with regards to technology that is increasingly agentive, specifically artificial intelligence. There’s a sense that we need to train and imbue AI systems with the appropriate values and ideas and objectives, yet at the same time we’re dealing with something that is fundamentally alien, given the nature of machine learning and deep learning. So, yeah. I’m curious about your perspective on the relationship between ideas and AI.

Nicolas Berggruen: Well, you mentioned corporations. And corporations are very different than AIs, but at the same time, the way you mentioned corporations I think makes them very similar to AI. And they are a good example because they’ve been around for quite a while. Corporations, somebody from the outside would say, “Well, they have one objective: to accumulate capital, make money.” But in reality, money is just fuel. It’s just, if you want, the equivalent of energy or blood or water. That’s all it is. Corporations are organisms. And their real objective, as individual agents, if you want, as sort of creatures, is to grow, expand, survive. And if you look at that, I would say you could look at AIs very similarly.

So, any artificially intelligent agent, ultimately any robot, if you put it in an embodied form, if it’s well-made, if you want, or well-organized, if it’s going to be truly powerful, will be a bit like a corporation, which is really very powerful and has helped progress, has helped… if you think capitalism has helped the world, in that sense, it’s helped. Well, strong AIs will also have the ability over time to want to grow and live.

So, going back to corporations. They have to live within society and within a set of rules. And those change. And those adapt to culture. So, there’s a culture. When you look at some of the very old corporations, think of the East India Company, say, which employed slaves. That wouldn’t be possible today for the East India Company. Fossil fuels were really the allies of some of the biggest corporations that existed about 100 years ago, even 50 years ago. Probably not in the future. So, things change. And culture has an enormous influence. Will it have the same kind of influence over AI agents? Absolutely.

The question is, as you can see from criticism of corporations, some corporations are thought to have become too powerful, not under the control or governance of anyone, any country, any supranational body, if you want. I think the same thing could happen to AIs. The only difference is that I think AIs could become much more powerful because they will have the ability to access data. They’ll have the ability to self-transform in a way that hasn’t really been experienced yet. And we don’t know how far… it’ll go very far. And you could imagine agents being able to access all of the world’s data in some ways.

And the question is, what is data? It’s not just information the way we think of information, which is maybe sort of knowledge that we memorize, but it’s really an understanding of the world. This is how we, as creatures and animals, are able to function: we understand the world. Well, AIs, if they really get there, will sort of understand the world. And the question then is, can they self-transform? And could they, and this is the interesting part, begin to think and develop instincts and maybe access dimensions and senses that we as humans have a tough time accessing? And I would speculate that, yes.

If you look at AlphaGo, which is the DeepMind Google AI that beat the best Go players, the way that it beat the best Go players, and this is a complicated game that’s been around for a long time, is really by coming up with moves and strategies and a way of playing that the best human players over thousands of years didn’t think of. So, a different intuition, a different way of thinking. Is it a new dimension? Is it having access to a new sense? No, but it’s definitely a very creative, unexpected way of playing. To me, it’s potentially a window into the future, where AIs and machines become in essence more creative and access areas of thinking, creativity and action that we humans don’t see. And the question is, can it even go beyond?

I’m convinced that there are dimensions and senses that we, humans, don’t access today. It’s obvious. Animals don’t access what we access. Plants don’t access what animals do. So, there was change in evolution. And we are certainly missing dimensions and senses that exist. Will we ever access them? I don’t know. Will AIs help us access them? Maybe. Will they access them on their own by somehow self-transforming? Potentially. Or are there agents that we can’t even imagine, who we have no sense of, that are already there? So, I think all of this is a possibility. It’s exciting, but it’ll also transform who we are.

Lucas Perry: So, in order to get to a place where AI is that powerful and has senses and understanding that exist beyond what humans are capable of, how do you see the necessity of wisdom and ideas in the cultivation and practice of building beneficial AI systems? So, I mean, industry incentives and international racing towards more and more powerful AI systems could simply ruin the whole project because everyone’s just amplifying power and taking shortcuts on wisdom or ideas with which to manage and develop the technology. So, how do you mitigate that dynamic, that tendency towards power?

Nicolas Berggruen: It’s a very good question. And interestingly enough, I’m not sure that there are many real-world answers or that the real-world answers are being practiced, except in a way that’s self-disciplined. What’s interesting in the West is that government institutions are way, way behind technology. And we’ve seen it even in the last few years when you had hearings in Washington, D.C. around technology, how disconnected or maybe how naive and uninformed government is compared to the technologists. And the technologists have, frankly, an incentive and also an ethos of doing their work away from government. It gives them more freedom. Many of them believe in more freedom. And many of them believe that technology is freedom, almost blindly believing that any technology will help free us as humans. Therefore, technology is good, and that we’ll be smart enough or wise enough or self-interested enough not to mishandle the technology.

So, I think there’s a true disconnect between the technologies and the technologists that are being empowered and sort of the world around it because the technologists, and I believe it, at least the ones I’ve met, and I’ve met many, I think overall are well-intended. I also think they’re naive. They think whatever they’re doing is going to be better for humanity without really knowing how far the technology might go or whose hands the technology might end up in. I think that’s what’s happening in the West. And it’s happening mostly in the US. I think other parts of the West are just less advanced technologically. When I say the US, I include some of the AI labs that exist in Europe that are owned by US actors.

On the other side of the world, you’ve got China that is also developing technology. And I think there is probably a deeper connection, that’s my speculation, a deeper connection between government and the technologies. So, I think they’re much more interested and probably more aware of what technology can do. And I think they, meaning the government, are going to be much more interested and focused on knowing about it and potentially using it. The questions are still the same. And that leads to the next question. If you think of beneficial AI, what is beneficial? In what way, and to whom? And it becomes very tricky. Depending on cultures and religions and cultures that are derivatives of religions, you’re going to have a totally different view of what is beneficial. And are we talking about beneficial just to us humans or beyond? Who is it beneficial for? And I don’t think anybody has answered these questions.

And if you are one technologist in one lab or little group, you may have a certain ethos, culture, background. And you’ll have your own sense of what is beneficial. And then there might be someone on the other side of the world who’s developing equally powerful technology, who’s going to have a totally different view of what’s beneficial. Who’s right? Who’s wrong? I would argue they’re both right. And they’re both wrong. But they’re both right to start with. So, should they both exist? And will they both exist? I think they’ll both exist. I think it’s unlikely that you’re going to have one that’s dominant right away. I think they will co-exist, potentially compete. And again, I think we’re early days.

Lucas Perry: So, reflecting on Facebook as a kind of organism, do you think that Mark Zuckerberg has lost control of Facebook?

Nicolas Berggruen: Yes and no. No, in the sense that he’s the boss of Facebook. But yes, in the sense that I doubt that he knew how far Facebook and other, I would say, engines of Facebook would reach. I don’t think he or anyone knew.

And I also think that today, Facebook is a private company, but it’s very much under scrutiny, not just from governments, but actually from its users. So, you could say that the users are just as powerful as Mark Zuckerberg, maybe more powerful. If tomorrow morning, Mark Zuckerberg turned Facebook or Instagram or WhatsApp off, what would happen? If they were tweaked or changed in a way that’s meaningful, what would happen? It’s happening all the time. I don’t mean the switch-off, but the changes. But I think the changes are tested. And I think the users at the end have an enormous amount of influence.

But at the end of the day, the key is simply that the engine, or this kind of engine, has become so powerful that it’s not in the hands of Mark Zuckerberg. And if he didn’t exist, there would be another Facebook. So, again, the argument is that even though one attributes a lot of these technologies to individuals, a little bit like ideas are attributable to individuals and they become the face of an idea, and I think that’s powerful, incredibly powerful even with religions, the ideas are way beyond the founders, and the technologies are way beyond the founders. They reflect the capability of technology at the time when they were developed. There are a number of different social networks, not just one. And they reflect a culture, or a cultural shift in the case of ideas, of religions.

Lucas Perry: So, I have two questions for you. The first is, as we begin to approach artificial general intelligence and superintelligence, do you think that AI labs and the leaders of them like Mark Zuckerberg may very well lose control of the systems and the kind of inertia that it has in the world, like the kind of inertia that Facebook has as a platform for its own continued existence? That’s one question. And then the second is that about half the country is angry at Facebook because it deplatformed the president, among other people. And the other half is angry because it was able to manipulate enough people through fake news and information and allow Russian interference in advertising certain ideas.

And this makes me think of the Carl Jung quote from the beginning of the podcast about there not being adequate protection against psychic epidemics, kind of like there not being adequate protection against collectively bad ideas. So, I’m curious if you have any perspective, both on the leaders of AI labs losing control. And then maybe some antivirus malware for the human mind, if such a thing exists.

Nicolas Berggruen: So, let’s start with the second question, which is the mind and mental health. Humans are self-aware. Very self-aware. And who knows what’s next? Maybe another iteration, even more powerful. So, our mental health is incredibly important.

We live in our minds. We live physically, but we really live in our minds. So, how healthy is our mind? How healthy is our mental life? How happy or unhappy? How connected or not? I think these are essential questions in general. I think that in a world where technology and networks have become more and more powerful, that’s even more important for the health of people, nations, countries and the planet at the end. So, addressing this seems more important than ever. I would argue that it’s always been important. And it’s always been an incredibly powerful factor, no matter what. Think of religious wars. Think of crusades. They are very powerful sort of mental commitments. You could say diseases, in some cases, depending on who you are.

So, I would say the same afflictions that exist today that make a whole people think something or dream something healthy or maybe in some cases not so healthy, depressed, or the opposite, euphoric or delusional, these things have existed forever. The difference is that our weapons are becoming more powerful. This is what happened half a century ago or more with atomic power. So, our technology is becoming more powerful. The next one obviously is AI. And with it, I also think that our ability to deal with some of these is also greater. And I think that’s where we have, on one side, a threat, but, on the other side, I think an opportunity. And you could say, “Well, we’ve always had this opportunity.” And the opportunity is really, going back to your first question, around wisdom. It’s really mental. We can spend time thinking these things through, spending time with ourselves. We can think through what makes sense. Let’s say what’s moral in a broad sense. So, you could say that’s always existed.

The difference, in terms of mental health, is that we might have certain tools today that we can develop, that can help us be better. I’m not saying that it will happen and I’m not saying that there’s going to be a pill for this, but I think we can be better and we are going to develop some ways to become better. And these are not just AI, but also around biotechnology. And we’ll be able to affect our mental states. And we will be able to do it through… and we do already through drugs, but there’ll also be implants. There’ll maybe be editing. And we may one day become one, at least mentally, with the AIs that we develop. So, again, I think we have the potential of changing our mental state. And you could say for the better, but what is better? That goes back to the question of wisdom, the question of who we want to be, and what constitutes better.

And to your other question: do the developers or the owners of some of the AI tools control them? Will they continue to control them? I’m not sure. In theory, they control them, but you could argue, in some cases, “Well, they may have the technology, the IP. And in some cases, they have so much data that is needed for the AIs that there’s a great synergy between the data and the technology.” So, you need it almost to be in big places like a Facebook or Google or Tencent or an Alibaba. But you could very well say, “The technology’s good enough. And the engineers are good enough. You can take it out and continue the project.” And I would argue that at some point, if the agents are good enough, the agents themselves become something. They become creatures that, with the right help, will have a life of their own.

Lucas Perry: So, in terms of this collective mental health aspect, how do you view the project of living an examined life or the project of self-transformation, and the importance of this approach to building a healthy civilization that is able to use and apply wisdom to the creation and use of technology? And when I say “examined life” I suppose I mean it in a bit of the sense in the way the Greeks used it.

Nicolas Berggruen: The advantage that humans have is that we can examine ourselves. We can look at ourselves. And we can change. And I think that one of the extraordinary things about our lives, and certainly I’ve witnessed that in my life, is that it’s a journey. And I see it as a journey of becoming. And that means change. And if you are willing to self-examine and if you are willing to change, not only will life be more interesting and you will have a richer, fuller life, but you will also probably get to a place that’s potentially better over time. For sure, different. And at times, better.

And you can do this as an individual. You can do that as many individuals. And as we have longer lives now, we have the opportunity to do it today more than ever. We also have not only longer lives, but longer lives where we can do things like what we are doing now, discussing these things. At the time of Socrates, few people could do it; now many people can do it. And I think that that trend will continue. So, the idea of self-transformation, of self-examination, I think, is very powerful. And it’s an extraordinary gift.

My favorite book still today is a book by Hermann Hesse called Siddhartha, which, the way I look at it, one way to read is really as a journey of self-transformation through chapters of life, where each chapter is not necessarily an improvement, but each chapter is part of living and each chapter is part of what maybe constitutes a full life. And if you look at Siddhartha, Siddhartha had totally different lives all within one. And I think we have this gift given to us to be able to do a lot of it.

Lucas Perry: Do you think, in the 21st century, that given the rapid pace of change, of the power of our technology, that this kind of self-examination is more important than ever?

Nicolas Berggruen: I think it’s always important. It’s always been important as a human because it makes our lives richer on one side, but it also helps us deal with ourselves and our excitement, but also our fears. In the 21st century, I think it’s more important than ever because we have more time, not only in length, but also in quantity, within a quantum of time. And also because our effect on each other is enormous. Our effect on the planet is enormous. By engaging in social networks, by doing a podcast, by doing almost anything, you influence so many others, and not just others as humans, but you influence almost everything around you.

Lucas Perry: So, in this project of living an examined life in the 21st century, who do you take most inspiration from? Or who are some of the wisest people throughout history who you look to as examples of living a really full human life?

Nicolas Berggruen: Right. So, what is, let’s call it the best life, or the best example of an examined life? And I would argue that the best example that I know of, since I mentioned it, even though it’s an incredibly imperfect one, is the life, at least the fictional life, in the book of Hermann Hesse, Siddhartha, where Siddhartha goes through different chapters, in essence different lives, during his life. And each one of them is exciting. Each one of them is a becoming, a discovery. And each one of them is very imperfect. And I think that reflects the life of someone who makes it a mission to understand and to find themselves or find the right life. And it tells you how difficult it is. It also tells you how rich it can be and how exciting it can be and that there is no right answer.

On the other hand, there are people who may be lucky enough who never question themselves. And they may be the ones who live actually the best lives because, by not questioning themselves, they just live a life almost as if they were dealt a set of cards, and that’s the beginning and the end. And they may be the luckiest of all, or the least lucky because they don’t get to live all the potential of what a human life could be.

So, it’s a long-winded answer to say I don’t think there is an example. I don’t think there is a model life. I think that life is discovery, in my mind, at least for me. It’s living, meaning the experience of life, the experience of change, allowing change. And that means there will never be perfection. You also change. The world changes. And all of these become factors. So, you don’t have a single answer. And I couldn’t point to a person who is the best example.

That’s why I go back to Siddhartha because the whole point of the story of Siddhartha, at least the story by Hermann Hesse, is that he struggled going through different ways of living, different philosophies, different practices. All valid. All additive. And even the very end in the story, where in essence before his death he becomes one with the world is actually not the answer. So, there is no answer.

Lucas Perry: Hopefully, we have some answers to what happens to some of these technological questions in the 21st century. So, when you look at our situation with artificial intelligence and nuclear weapons and synthetic biology and all of the really powerful emerging tech in the 21st century, what are some ideas that you feel are really, really important for this century?

Nicolas Berggruen: I think what we’ve discovered through millennia now, but also through what the world looks like today, which is more and more the coexistence, hopefully peaceful coexistence, of very, very different cultures, is that we have two very powerful factors. We have the individual and the community. And what is important, and it sounds almost too simple and too obvious, but I think it is very difficult, is to marry the power, the responsibilities, of the individual with those of the community. And I’m mentioning it on purpose because these are totally different philosophies, totally different cultures. I see that there’s always been a tension between those two.

And the technologies you’re talking about will empower individual agents even more. And the question is, will those agents become sort of singular agents, or will they become agents that care about others or in community with others? And the ones who have access to these agents or who control these agents or who develop these agents will have enormous influence, power. How will they act? And will they care about themselves? Will they care about the agents? Will they care about the community? And which community? So, more than ever, I think we have those questions. And in the past, I think philosophers and religious thinkers had a way of dealing with it, which was very constructive in the sense that they always took the ideas to a community or the idea of a community, living the principles of an idea one way or another. Well, what is it today? What is a community today? Because the whole world is connected. So, some of these technologies are technologies that will have an application way beyond a single culture and a single nation or a single system.

And we’ve seen, as an example, what happened with the COVID pandemic, which, in my mind, accelerated every trend and also made every sort of human behavior and cultural behavior more prevalent. And we can see that with the pandemic, technology answered pretty quickly. We have vaccines today. Capital markets also reacted quickly, funded these technologies, distributed them to some extent. But where things fell down was around culture and governance. And you can see that everybody really acted for themselves in very different ways, with very little cooperation. So, at a moment when you have a pandemic that affects everyone, did we have global cooperation? Did we have sharing of information, of technology? Did we have global practices? No. Because we didn’t, we had a much worse health crisis, incredibly unevenly distributed. So, health, but also economic and mental health outcomes, were very different depending on where you were.

So, going back to the question of the powerful technologies that are being developed, how are we going to deal with them? When you look at what happened recently, and the pandemic is obviously a negative event, but powerful event. You could say it’s technology. It’s a form of technology that spread very quickly, meaning the virus. Well, look at how we behaved globally. We didn’t know how to behave. And we didn’t behave.

Lucas Perry: It seems like these really powerful technologies will enable very few persons to accumulate a vast amount of wealth and power. How do you see solutions to this problem of the wealth still being… more evenly shared and distributed with the rest of humanity, as technologies increasingly empower a few individuals to have control and power over that wealth and technology?

Nicolas Berggruen: In my mind, you’re right. I think that the concentration of power, the concentration of wealth will only be helped by technology. With technology, with intellectual property, you create more power and more wealth, but you need less and less people and less and less capital. So, how do you govern it? And how do you make it fair?

Some of the thinking that I believe in and that we’ve also been working on at the Institute is the idea of sharing the wealth, but sharing the wealth from the beginning, not after the fact. So, our idea is simply, from an economic standpoint, as opposed to redistribution, which is redistributing the spoils through taxes, which is not only toxic, but sort of means you’re transferring from the haves to the have-nots. So, you always have a divide.

Our thinking is: make sure that everybody has a piece of everything from the beginning. Meaning, let’s say tomorrow Lucas starts a company, and that company is yours. Well, as opposed to it being all yours, maybe it’s 80% yours, and 20% goes to a fund for everyone. And these days, you can attribute it to everyone as individuals through technology, through blockchain. You can give a piece of Lucas’ company to everyone on paper. So, if you become very successful, everybody will benefit from your success. It won’t make a difference to you, because whether you have 80% or 100%, you’ll be successful one way or another, but your success will be the success of everyone else. So, everyone is at least in the boat with Lucas’ success. And this kind of thinking I think is possible, and I think actually very healthy, because it would empower others, not just Lucas. And so, the idea is very much: as technology makes wealth even more uneven, make sure it’s shared. And as opposed to it being shared through redistribution, make sure everybody is empowered from the beginning, meaning everybody has a chance to access it economically or otherwise.
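(A minimal numerical sketch of the universal basic capital idea described above. The 20% pledge, the company valuation, and the population figure are purely illustrative assumptions, not numbers from the conversation.)

```python
# Illustrative sketch only: a fixed equity share pledged at founding flows into
# a citizens' fund, alongside the founder's stake. All numbers are hypothetical.

def citizen_share(company_value: float,
                  pledged_fraction: float = 0.20,
                  population: int = 330_000_000) -> float:
    """Value accruing to each citizen's capital account from one company."""
    return company_value * pledged_fraction / population

company_value = 10e9                     # suppose "Lucas' company" grows to $10B
founder_stake = company_value * 0.80     # the founder keeps 80%
per_citizen = citizen_share(company_value)

print(f"Founder stake: ${founder_stake:,.0f}")                     # $8,000,000,000
print(f"Per-citizen share of this company: ${per_citizen:,.2f}")   # roughly $6
```

The per-company amount is small, which is why the proposal reads as a broad fund accumulating stakes across many companies and compounding over time, rather than a one-off payout.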

The issue still remains governance. If whatever you’re creating is the most powerful AI engine in the world, what happens to it? Besides the economic spoils, which can be shared the way I described it, what happens to the technology itself, the power of the technology itself? How does that get governed? And I think that’s very early days. And nobody has a handle on it, because if it’s yours, you, Lucas, will design it and design the constraints or lack of constraints of the engine. And I do think that has to be thought through. It can’t just be negative; it also has to be positive. But they always come together. Nuclear power creates energy, which is beneficial, and empowers weapons, which is not. So, every technology has both sides. And the question is, you don’t want to kill the technology out of fear. You also don’t want to empower the technology to where it becomes a killer. So, we have to get ahead of thinking these things through.

I think a lot of people think about it, including the technologists, the people who develop it. But not enough people spend time on it and certainly not across disciplines and across cultures. So, technologists and policymakers and philosophers and humans in general need to think about this. And they should do it in the context of let’s call it Silicon Valley, but also more old-fashioned Europe, but also India and China and Africa, so that it includes some of the thinking and some of the cultural values that are outside of where the technology is developed. That doesn’t mean that the other ideas are the correct ones. And it shouldn’t mean that the technology should be stopped. But it does mean that the technology should be questioned.

Lucas Perry: It seems like the crucial question is how do we empower people in the century in which basically all of their power is being transferred to technology, particularly the power of their labor?

Right, so you said that taxing corporations and then using that money to provide direct payments to a country’s population might be toxic. And I have some sense of the way in which that is enfeebling, though I have heard you say in other interviews that you see UBI as a potential tool, but not as an ultimate solution. And so, it seems like this, what you call universal basic capital, where, say, 20% of my company is collectively owned by the citizens of the United States, puts wealth into the pockets of the citizenry, rather than them being completely disconnected from the companies and not having any ownership in them.

I’m curious whether this really confers the kind of power that would be really enabling for people because the risk seems like people lose their ability to perform work and to have power at workplaces and then they become dependent on something like UBI. And then the question is, is whether or not democracy is representative enough of their votes to give them sufficient power and say over their lives and what happens and how technology is used?

Nicolas Berggruen: Well, I think there’s a lot of pieces to this. I would say that the… let’s start with the economic piece. I think UBC, meaning universal basic capital, is much more empowering and much more dignified than universal basic income, UBI. UBI is, in essence, a handout to even things out. But if you have capital, you have participation and you’re part of the future. And very importantly, if you have a stake in all the economic agents that are growing, you really have a stake, not only in the future, but in the compounding, in terms of value, in terms of equity, of the future. You don’t have that if you just get a handout in cash.

The reason why I think that one doesn’t exclude the other is that you still need cash to live. So, the idea is that you could draw against your capital accounts for different needs, education, health, housing. You could start a business. But at times you just need cash. If you don’t have universal basic capital, you may need universal basic income to get you through it. But if it’s well done, I think universal basic capital does the job. That’s on the economic side.

On the side of power and on the side of dignity, there will be a question, because I think technology will allow us, that’s the good news, to work less and less in the traditional way. So, people are going to have more and more time for themselves. That’s very good news. 100 years ago, people used to have to work many more hours in a shorter life. And I think that the trend has gone the other way. So, what happens to all the free time? A real question.

And in terms of power, well, we’ve seen it through centuries, but increasingly today, power, and not just money, but power, is more concentrated. So, the people who develop or who control, let’s say, the technological engines that we’ve been talking about really have much more power. In democracies, that’s really balanced by the vote of the people because even if 10 people have much more power than 100 million people, and they do, the 100 million people do get to vote and do get to change the rules. So, it’s not like the 10 people drive the future. They are in a very special position to create the future, but in reality, they don’t. So, the 100 million voters still could change the future, including for those 10 people.

What’s interesting is the dynamics in the real world. You can see it with big tech companies. This is ironic. Big tech companies in the West are mainly in the US. And the bosses of the big tech companies, let’s say Google or Facebook, Amazon, really haven’t been disturbed. In China, interestingly enough, at Alibaba, Jack Ma was removed. And it looks like there’s a big transition now at ByteDance, which is the owner of TikTok. So, you can see, interestingly enough, that in democracies, where big changes could be made because voters have the power, they don’t make the changes. And in autocracies, where the voters have no power, actually the changes have been made. It’s an ironic fact.

I’m not saying it’s good. I am not saying that one is better than the other. But it’s actually quite interesting that in the case of the US, Washington frankly has had no influence, voters have had pretty much no influence, while on the other side of the world, the opposite has happened. And people will argue, “Well, we don’t want to live in China where the government can decide anything any day.” But going back to your question, we live in an environment where even though all citizens have the voting power, it doesn’t seem to translate to real power and to change. Voters, through the government or directly, actually seem to have very little power. They’re being consulted in elections every so often. Elections are highly polarized, highly ideological. And are voters really being heard? Are they really participants? I would argue in a very, well, manipulated way.

Lucas Perry: So, as we’re coming to the end here, I’m curious if you could explain a future that you fear and also a future that you’re hopeful for, given the current trajectory of the race between the power of our technology and the wisdom with which we manage it.

Nicolas Berggruen: Well, I think one implies the other. And this is also a philosophical point. I think a lot of people’s thinking sort of isolates one or the other. I believe in everything being connected. It’s a bit like if there is light, that means there is dark. And you’ll say, “Well, I’m being a little loopy.” It’s a bit like…

Lucas Perry: It’s duality.

Nicolas Berggruen: Yeah. And duality exists by definition. And I would say, in the opportunities that exist in front of us, what makes me optimistic… and my feeling is if you live, you have no choice but to be an optimist. But what makes me optimistic is that we can, if we want, deal with things that are planetary and global issues. We have technologies that are going to hopefully make us healthier, potentially happier, and make our lives more interesting. That gives us the chance, but also the responsibility, to use them well. And that’s where the dangers come in.

We have, for the first time, two totally different political and cultural powers and systems that need to coexist. Can we manage it?

Lucas Perry: China and the US.

Nicolas Berggruen: Yes. China and the US. Technologically, between AI, gene editing, quantum computing, we are developing technologies that are extraordinary. Will we use them for the common good and wisely? We have a threat from climate, but we also have an opportunity. The opportunity is to address those issues, a little bit like what happened with the pandemic, to sort of create the vaccines for the planet, if you want, because we are forced to do it. But then the question is, do we distribute them correctly? Do we do the fair thing? Do we do it in a way that’s intelligent and empowering? So, the two always come together. And I think we have the ability, if we’re thoughtful and dedicated, to construct, let’s say, a healthy future.

If you look at history, it’s never been in a straight line. And it won’t be. So, there’ll be, I hate to say, terrible accidents and periods. But over time I think our lives have become richer, more interesting, hopefully better. And in that sense, I’m an optimist. The technologies are irresistible. So, we’ll use them and develop them. So, let’s just make sure that we do it in a way that focuses on what we can do with them. And then what are the minimums, in terms of individuals, economically, in terms of power, voice and protection? And what are the minimums in terms of cooperation between countries and cultures, and in terms of addressing planetary issues that are important and that have become more front and center today?

Lucas Perry: All right. So, as we wrap up here, is there anything else that you’d like to share with the audience? Any final words to pass along? Anything you feel like might be left unsaid?

Nicolas Berggruen: I think your questions were very good. And I hope I answered some of them. I would say that the journey for us, humans, as a species, is only getting more exciting. And let’s just make sure that we are… that it’s a good journey, that we feel that we are at times the conductor and the passenger both, not so bad to be both, in a way that you could say, “Well, listen, we’re very happy to be on this journey.” And I think it very much does depend on us.

And going back to your very first question, it depends on some of our wisdom. And we do have to invest in wisdom, which means we have to invest in our thinking about these things because they are becoming more and more powerful, not just in the machines. We need to invest in the souls of the machines. And those souls are our own souls.

Lucas Perry: I really like that. I think that’s an excellent place to end on.

Nicolas Berggruen: Well, thank you, Lucas. I appreciate it. Very good questions. And I look forward to listening.

Lucas Perry: Yeah. Thank you very much, Nicolas. It was a real pleasure, and I really appreciated this.

Bart Selman on the Promises and Perils of Artificial Intelligence

  • Negative and positive outcomes from AI in the short, medium, and long-terms
  • The perils and promises of AGI and superintelligence
  • AI alignment and AI existential risk
  • Lethal autonomous weapons
  • AI governance and racing to powerful AI systems
  • AI consciousness

 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is with Bart Selman and explores his views on possible negative and positive futures with AI, the importance of AI alignment and safety research in computer science, facets of national and international AI governance, lethal autonomous weapons, AI alignment and safety at the Association for the Advancement of Artificial Intelligence, and a little bit on AI consciousness.

Bart Selman is a Professor of Computer Science at Cornell University, and previously worked at AT&T Bell Laboratories. He is a co-founder of the Center for Human-Compatible AI, and is currently the President of the Association for the Advancement of Artificial Intelligence. He is the author of over 90 publications and has a special focus on computational and representational issues. Professor Selman has worked on tractable inference, knowledge representation, stochastic search methods, theory approximation, knowledge compilation, planning, default reasoning, and the connections between computer science and statistical physics.

And so without further ado, let’s get into our conversation with Bart Selman.

So to start things off here, I’m curious if you can share with us an example of a future, or a few futures, that you’re really excited about, and an example of one or a few futures that you’re quite nervous about or which you fear most.

Bart Selman: Okay. Yeah. Thank you. Thank you for having me. So just let me start with an example of a future in the context of AI that I’m excited about: the new capabilities that AI brings should have the potential to make life for everyone much easier and much more pleasant. I see AI as complementing our cognitive capabilities. So I can envision household robots or smart robots that assist people living in their houses, living independently longer, including doing kinds of work that are sort of monotonous and not that exciting for humans to do. So, AI has the potential to complement our capabilities and to hugely assist us in many ways, including in areas you might not have thought of, like, for example, policymaking and governance. AI systems are very good at thinking in high-dimensional terms, about trade-offs between many different factors.

For humans, it’s hard to actually think in a multi-dimensional trade-off. We tend to boil things down to one or two central points and argue about a trade-off in one or two dimensions. Most policy decisions involve 10, 20 different criteria that may conflict or be somewhat contradictory, and in exploring that space, AI can assist us, I mean, in finding better policy solutions and better governance for everybody. So I think AI has this tremendous potential to improve life for all of us, provided that we learn to share these capabilities, that we have policies in place and mechanisms in place to make this a positive experience for humans. And to draw a parallel with physical labor: machines have freed us from heavy-duty physical labor. AI systems can help us with the sort of monotonous cognitive labor, or, as I mentioned, through household robots and other tools that will make our life much better. So that’s for the positive side. Should I continue with the negative?
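(To give one concrete, if simplified, sense of what exploring a high-dimensional trade-off space can mean computationally, here is a minimal sketch of Pareto filtering over policy options scored on several criteria. The policy names, criteria, and scores are invented for illustration and are not from the interview.)

```python
# Minimal sketch: keep only policy options that are not dominated on every
# criterion by some other option (a basic step in multi-criteria analysis).
# Options and scores are hypothetical; higher is better on every criterion.

options = {
    "policy_A": (0.9, 0.2, 0.5),   # e.g. (growth, equity, emissions) scores
    "policy_B": (0.6, 0.7, 0.6),
    "policy_C": (0.5, 0.6, 0.5),   # dominated by policy_B on all criteria
}

def dominates(x, y):
    """x dominates y if it is at least as good everywhere and better somewhere."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

pareto_front = [
    name for name, score in options.items()
    if not any(dominates(other, score)
               for other_name, other in options.items() if other_name != name)
]
print(pareto_front)   # ['policy_A', 'policy_B']
```

Real policy analysis involves far more criteria and uncertainty than this toy example, which is exactly where Selman suggests AI assistance could help.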

Lucas Perry: So before we get into the negative, I’m curious if you could explain a little bit more specifically what these possible positive futures look like on different timescales. So you explained AI assisting with cognitive capabilities with monotonous jobs. And so, over the coming decades, it will begin to occupy some of these roles increasingly, but there’s also the medium term, the long term and the deep future in which the positive fruits of AI may come to bear.

Bart Selman: Yeah. So, that’s an excellent point. I think one thing is that in any transition, as I say, these cognitive capabilities that will help us live better lives will also disrupt the labor force and the workforce. And this is a process that I can see play out over the next five, 10, maybe 15 years, a significant change in the workforce. And I am somewhat concerned about how that will be managed, because basically I feel we are moving to a future where people would have more free time. We’d have more time to be creative, to travel and to live independently. But of course, everybody needs to have the resources to do that. So there is an important governance issue of making sure that in this transition to a world with more leisure time, we find ways of having everybody benefit from this new future.

And this is really, I think, a 5, 10, 15 year process that we’re faced with now, and it’s important that it is done right. Further out in the future, my own view of AI is that machines will excel at certain specific tasks, as we’ve seen very much with AlphaGo and AlphaZero. So, very good at specific tasks, and those systems will come in first: self-driving cars, specialized robots for assisting humans. So we’ll first get these specialized capabilities. Those are not yet general AI capabilities. That’s not AGI. So the AGI future, I think, is more like 20, 25 years away.

So we first have to find ways of dealing with and incorporating these specialized capabilities, which are going to be exciting. As a scientist, you know, I already see AI transforming the way we approach science and do scientific discovery and really complementing our ways. I hope people get excited in the areas of creativity, for example, with computers or AI systems bringing a new dimension to these types of human activities, which will actually be exciting for people to be part of. And that’s an aspect that we’ve started to see emerge, but that people are not fully aware of yet.

Lucas Perry: So we have AI increasingly moving its way into specialized, kind of narrow domains. And as it begins to proliferate into more and more of these areas, it’s displacing all of the traditional human solutions for these areas, which basically all just include human labor. So there’s an increase in human leisure time. And then what really caught my attention was that you said AGI is maybe 20, 25 years away. Is that your sense of the timeline where you start to see real generality, or?

Bart Selman: Yeah. That’s in my mind a reasonable sense of a timeline, but we cannot be absolutely certain about that. And it’s sort of, for AI researchers it is a very interesting time. The hardest thing at this point in the history of AI is to predict what AI can and cannot do. I’ve learned as a professor, never to say that deep learning can’t do something because every time it surprises me and it can do it a few years later. So, we have a certain sense that, oh! the field is moving so fast that everything can be done. On the other hand, in some of my research, I look at some of these advances and if I can give you a specific example. So, my own research is partly in planning, which is a process of how humans plan out activities.

They have certain goals, and then they plan: what steps should I take to achieve those goals? And those can be very long sequences of actions to achieve complicated goals. So we worked on a sort of puzzle-style domain called Sokoban. Most people will not be familiar with it, but it’s a kind of game modeled after workers in a warehouse who have to move around boxes. So there is a little grid world and you push around the boxes to get them from a certain initial state to goal states somewhere else on the grid. And there are walls, and there are corners, and all kinds of things you have to avoid. And what’s amazing about the planning task is that for traditional planning, this was really a very challenging domain. We picked it because traditional planners could do maybe a hundred steps, a hundred pushes as we call them, but that was about it.

There were puzzles available on the web that required 1500 to 2000 steps. So it was way beyond any automated program. And AI researchers had worked on this problem for decades. So we of course used reinforcement learning, RL, with some clever curriculum training, some clever forms of training. And suddenly we could solve these 2000-step, 1500-step Sokoban puzzles. We were, and still are, very excited about that capability. And then we started looking: what did the deep net actually know about the problem? And our biggest surprise there was that although the system had learned very subtle things, things that are beyond human capabilities, it was also totally ignorant about other things that were trivial for humans. So in a Sokoban puzzle you don’t want to push your box into a corner, because once it’s in a corner, you can’t get it out of the corner. This is something that a human player discovers in the first, I would say the first minute of pushing some boxes around.

We realized, I guess, that the deep learning network never conceptualized the notion of a corner. So it would only learn about corners if it had seen something being pushed into a particular corner. And if it had never seen that corner being used or encountered, it would not realize it shouldn’t push the box in there. So we realized that this deep net had a capability that is definitely superhuman in terms of being able to solve these puzzles, but also holes in its knowledge of the world that were very surprising to us. And that’s, I think, part of what makes AI at this time very difficult to predict. Will these holes be filled in? Will we develop AI systems that also get these obvious things right?
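(To make the corner example concrete, here is a minimal sketch of the kind of rule a human player internalizes within a minute: a box pushed into a corner that is not a goal square can never be moved again. The grid encoding and function name are illustrative assumptions, not from the research described.)

```python
# Minimal sketch of a corner-deadlock check in Sokoban.
# A box on a non-goal square is dead if two perpendicular neighbors are walls.
# Grid encoding (hypothetical): '#' wall, '.' goal square, ' ' open floor.

def is_corner_deadlock(grid, row, col):
    """True if a box at (row, col) is stuck in a corner and not on a goal."""
    if grid[row][col] == '.':          # boxes parked on goal squares are fine
        return False
    wall = lambda r, c: grid[r][c] == '#'
    up, down = wall(row - 1, col), wall(row + 1, col)
    left, right = wall(row, col - 1), wall(row, col + 1)
    # Any pair of perpendicular walls makes the box immovable.
    return (up or down) and (left or right)

level = [
    "#####",
    "#   #",
    "#   #",
    "#####",
]
print(is_corner_deadlock(level, 1, 1))   # True: top-left corner, not a goal
print(is_corner_deadlock(level, 2, 2))   # False: open floor
```

A learned policy that never sees a box pushed into a given corner during training can miss exactly this kind of rule, which is the gap described here.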

Or will AI be at this amazing level of performance but do things in ways that are, to us, quite odd? And so I think there are hard challenges that we don’t quite know how to fill in, but because of the speed with which things are developing, it’s very hard to predict whether they will be solved in the next two years or the next few, or whether it will take another 20 years. But I do want to stress, there are surprising things about what I call “the ignorance of the learned models” that surprised us humans. Yeah.

Lucas Perry: Right. There are ways in which models fail to integrate really rudimentary parts of the world into their understanding that lead to failure modes that even children don’t encounter.

Bart Selman: Yeah. It’s… So the problem is, when we as humans interact with AI systems or think about AI systems, we anthropomorphize. We think that they think similarly to the way we do things, because that’s sort of how we look at complex systems; even animals are anthropomorphized. So, we think that things have to be done in a way similar to our own thinking, but we’re discovering that they can do things very differently and leave out pieces of knowledge that are sort of trivial to us.

I have discussions with my students and I point that out, and they’re always sort of skeptical of my claim. They say, “Well, it should know that somewhere.” And we actually do experiments, and the experiments say, “No. If it has never seen the box go into that corner, it will just put it in the corner next time.” And so they actually have to see it to believe it, because it sounds implausible: how can you be the world’s best Sokoban solver and not know what a human knows in the first minute? But that’s the surprise. And it also makes the field exciting, but it makes the challenges of superintelligence and general intelligence and the impacts on AI safety a particularly challenging topic.

Lucas Perry: Right. So, predicting an actual timeline seems very difficult, but if we don’t go extinct, then do you see the creation of AGI and superintelligence as inevitable?

Bart Selman: I do believe so. Yes. I do believe so. I think the path I see is that we will develop these specialized capabilities in more and more areas, in almost all areas, and then they start merging together into systems that do two or three or four, and then a thousand, specialized tasks. And so generality will emerge almost inevitably. My only hesitation is: what could go wrong? Why might it not happen? Perhaps if there is some aspect of cognition that is really beyond our capabilities of modeling. But I think that is unlikely. I think one of the surprises in the deep net world and the neural network world is that, before the deep learning revolution, if you can call it that, before it happened, a lot of people looked at artificial neural networks as being too simplistic compared to real neurons.

So, there was this sense that, yeah, these little artificial neural networks are nice models, but they’re way too simplistic to capture what goes on in the human brain. The big surprise was that apparently that level of simplification is okay, that you can get the functionality of a much more complex, real neural network. You get that level of performance and complexity using much simpler units. So that sort of convinced me that yes, with the digital approximations we make, the simplifications we make, as long as we connect things in sufficiently complex networks, we get emergent properties that match our human brain capabilities. So that makes me think that at some point we will reach AGI. It’s just a little hard to say exactly when, and I think it may not matter that much exactly when, because we’ll have challenges in terms of AI safety and value alignment that are already occurring today, before we have AGI. So we have to deal with the challenges right from the start; we don’t have to wait for AGI.

Lucas Perry: So in this future where we’ve realized AGI, do you see superintelligence as coming weeks, months, or years after the invention of AGI? And then what is beautiful to you about these futures in which we have realized AGI and superintelligence?

Bart Selman: Yeah. So, what’s exciting about these possible futures… I mean, there are certain risks that the superintelligence would go against humans. I don’t think that is inevitable. I think these systems, they will do things… They will show us aspects of intelligence that to us will look surprising, but will also be exciting. In some of my other work, we look at mathematical theorem proving. And we look at AI systems for proving new open conjectures in mathematics. The systems clearly do a very different kind of mathematics than humans do, and produce very different kinds of proofs, but it’s sort of exciting to see a system that can check a billion-step proof in a few seconds and generate a billion-step proof in an hour, and to realize that we can prove something to be true mathematically.

So it can find a mathematical truth that is beyond the human brain. But since we’ve designed the program and we know how it works and we use the technology, it’s actually a fun way to complement our own mathematical thinking. So that’s what I see as the positive sense in which superintelligence will actually be of interest to humans to have around, as a complement to us, assuming it will not turn on us. But I think that’s manageable. Yeah.

Lucas Perry: So how long after the invention of AGI do you see superintelligence arising, even though it's all kind of vague and fuzzy?

Bart Selman: Yeah. How long… So I think when I think of superintelligence, I think of it more as superintelligent in certain domains. So I assume you are referring to superintelligence as superseding AGI.

Lucas Perry: What I mean is like vastly more intelligent than the sum of humanity.

Bart Selman: I think that’s a super interesting question. And I have discussed that I can see capability at vastly more intelligent in areas like mathematical discovery, scientific discovery, thinking about problems with multiple conflicting criteria that has to be weighted against each other. So a particular task I can see superintelligence being vastly more powerful than our own intelligence. On the other hand, there is also a question in what sense superintelligence would manifest itself. That is, if I had to draw an analogy is if you meet somebody who is way smarter than you are, and everybody now meets such a person I’ve met a few in my life, these people will impress you about certain things and gets you insight. That, “Oh, this is exciting.” But when you go to have dinner with them and have a good meal, and they’re just like regular people.

So, superintelligence doesn’t necessarily manifest itself in all aspects. It will be surprising as certain kinds of areas and tasks and insights, but it will not… I do not believe it will come out. I guess If I draw an analogy like you can’t, if you go for dinner with that’s bad for dogs, but if you go for dinner with a dog, you will fully dominate all the conversations and be sort of superintelligence compared to the dog.

I’m not sure it’s not clear to me that there is an entity that will dominate our intelligence in all aspects. So there will be lots of activities, lots of conversations, lots of things we can have as a superintelligent being, that are quite understandable, quite accessible to us. So the analogy that there will be an entity that dominates our intelligence uniformly, that I’m not convinced exists. And it sort of, that goes back to the question, what human intelligence is, and human intelligence is actually quite general. So there’s an interesting question. What is meant by superintelligence? How would we recognize it? How would it manifest itself?

Lucas Perry: When I think of superintelligence, I think of like a general intelligence that is more intelligent than the sum of humanity. And so part of that generality is its capability to run an emulation of like maybe 10,000 human minds within its own general intelligence. And so the human mind becomes a subset of the intelligence of the superintelligence. So in that way, it seems like it would dominate human intelligence in all domains.

Bart Selman: Yeah, what I’m trying to say is I can see that, and that’s sort of, if you would play a game of chess with such a superintelligence they would beat you if they would give you a… It would not be fun if they… If he would do some maths with you, the mathematics and then show you some proofs of Fermat’s Last Theorem, it will be trivial for the superintelligence. So I can see a lot of specific task and domains where the superintelligence would indeed run circles around you and around any human, but how would it manifest it? So, yeah, on these individual questions, but you have to have the right questions as it is, I guess what I struggle is a little bit, you have to have the right questions to show to superintelligence.

So, take for example the question: what should we do about income inequality? A practical problem in the United States. Would a superintelligence necessarily have something superintelligent to say about that? That's not so clear to me. It's a tough problem, but it may just be as tough for the superintelligence as it is for any human. Would a superintelligent politician suddenly have solutions to all our problems, would it win every debate? I think, interestingly, the answer is probably no. Superintelligence manifests itself on tasks that require a high level of intelligence, like problem-solving tasks, mathematical domains, scientific domains, games. But daily life and governance? That's a little less clear to me. And that's what I mean by going to have dinner with a superintelligence: would you just be sitting there, unable to say anything useful about income inequality because the superintelligence will say much better things about it? I'm not so sure.

Lucas Perry: Maybe you’ve both had a wine or two, and you you ask the superintelligence and you know, why is there some thing rather than nothing? Or like, what is the nature of moral value? And they’re just like…

Bart Selman: What’s the purpose of life. I’m not sure the superintelligence is going to get me a better answer to that. So, yeah.

Lucas Perry: And this is where philosophy and ethics and metaethics merges with computer science, right? Because it seems like you’re talking about, there are domains in which AI will become superintelligent. Many of these domains, the ones that you listed sounded very quantitative. Ones which involve kind of the scientific method and empiricism, not that these things are necessarily disconnected from ethics and philosophy, but if you’re just working with numbers with a given objective, then there’s no philosophy that really needs to be done, if the objective is given. But if you ask about how do we deal with income inequality, then the objective is not given. And so you do philosophy about what is the nature of right and wrong? What is good? What is valuable? What is the nature of identity and all of these kinds of things and how they relate to building a good world. So I’m curious, do you think that there are true or false answers to moral questions?

Bart Selman: Yeah, I think there are clearly wrong answers here. Moral issues are a spectrum to me, and it's a very human kind of topic. I think we can probably, as humans, agree on certain basic moral values, but the hard part is that we also see, among people and among different cultures, incredibly different views of moral value. So saying which one is right and which one is wrong may actually be much harder than we would like it to be. This comes back to the value alignment problem. It's a very good research field and a very important research field, but the question always is: whose values? We now realize that even within a country, people have very different values that are actually hard to understand between different groups of people.

So there is a challenge there, which might be uniquely human. It feels like there should be universal truths in morality, think about equality, for example, but I'm a little hesitant, because I'm surprised at how much disagreement I see about what I would think are universal truths, which somehow are not universal truths for all people. So that's another complication. And if you tie that back to superintelligence: a superintelligence is going to have some position on these questions, but whatever it says, it may not agree with everybody, and there's no uniquely superintelligent position on them in my mind. So that's a whole area of AI and value alignment that is very challenging.

Lucas Perry: Right. So, it sounds like you have some intuition that there are universal moral truths, but it's complicated by why there is so much disagreement across different persons. So I'm curious about two things. The first is what you're excited about for the future and for positive outcomes from AGI: is it worlds in which AGI and superintelligence can help assist with moral and philosophical issues, like how to resolve income inequality and the truth around moral questions? And the second part of the question is: if superintelligences are created by other species across the universe, do you think they would naturally converge on certain ethics, whether those ethics are universal truths or relative game-theoretic expressions of how intelligence can propagate in the universe?

Bart Selman: Yeah, so two very good questions. As to the first one, I am quite excited about the idea that a superhuman level of intelligence, or an extreme level of intelligence, will help us better understand moral judgments and decisions and issues of ethics. I almost feel that humans are a little stuck in this debate. And a lot has to do, I think, with an inability to explain clearly to each other why certain values matter and other values should be viewed differently; it's often even a matter of, can we explain to each other what good moral judgments and good moral positions are? So I have some hope that smart AI systems would be better at actually sorting out some of these questions, and then convincing everybody, because in the end, we have to agree on these things. And perhaps these systems will help us find more common ground.

So that’s a hope I have for AI systems that truly understand our world, and are truly capable of understanding, because part of the alpha super smart AI would be understanding many different positions, and maybe something that limits humans in getting agreements on ethical questions, is that we actually have trouble understanding the perspective of another person that has a conflicting position. So superintelligence might be one way of modeling everybody’s mind, and then, being able to bring a consensus about … I have an optimistic view of, there may be some real possibilities there for superintelligence. Your second question of whether some alien form of superintelligence would come to the same basic ethical values as we may come to? That’s possible. I think it’s very hard to, yeah.

Lucas Perry: Yeah, sorry, whether those are ultimate truths, as in facts, or whether they’re just relative game theoretic expressions of how agents compete and cooperate in a universe of limited resources.

Bart Selman: Yes, yes. From a human perspective, you would hope there is some universal shared ethical perspective, or ethical view of the world. I'm really on the fence, I guess. I could also see that, in the end, very different forms of life, which we would hardly even recognize, would basically interact with us via a sort of game-theoretic competition mode, and that, because they're so different from us, we would have trouble finding shared values. So I see possibilities for both outcomes. If other life forms share some commonality with our life form, I'm hopeful for common ground. But that seems like a big assumption, because they could be so totally different that we cannot connect at a more fundamental level.

Lucas Perry: Taking these short and long term perspectives, what is really compelling and exciting for you about good futures from AI? Is it the short to medium term benefits? Are you excited and compelled by the longer term outcomes, the possibility of superintelligence allowing us to spread for millions or billions of years into the cosmos? What’s really compelling to you about this picture of what AI can offer?

Bart Selman: Yeah. I’m optimistic about the opportunities, both short term and longer term. I think it’s fairly clear that humanity is actually struggling with, there’s an incredible range of problems right now, sustainability, global warming, political conflicts. You could be quite pessimistic, almost, about the human future. I’m not, but these are real challenges. So I’m hopeful that actually AI will help humanity in finding a better path forward. Now, as I mentioned briefly, even in terms of policy and governance, AI systems may actually really help us there. So far this has never been done. AI systems haven’t been sufficiently sophisticated for that, but in the next five to 10 years, I could see systems starting to help human governance. That’s the short term. I actually think AI can have a significant positive impact in resolving some of our biggest challenges.

In the longer term, it's harder to anticipate what the world would look like, but of course, spreading out across the universe and over many different timescales, having AI continue the human adventure, is actually sort of interesting: we wouldn't be confined to our little planet. We would go everywhere. We'd go out there and grow. So that could actually be an exciting future. It's harder to imagine exactly what it is, but it could be quite a human achievement. In the end, whatever happens with AI, it is, of course, a human invention. Science and technology are human inventions, and that's almost what we can be most proud of, in some ways: things that we actually did figure out how to do well, aside from creating a lot of other problems on the planet. So we could be proud of that.

Lucas Perry: Is there anything else here in terms of the economic, political and social situations of positive futures from AI that you’d like to touch on, before we move on to the negative outcomes?

Bart Selman: Yeah. I guess the main thing, I’m hoping that the general public and politicians will become more aware, and will be better educated about the positive aspects of AI, and the positive potential it has. The range of opportunities to transform education, to transform health care, to deal with sustainability questions, to deal with global warming, scientific discovery, the opportunities are incredible.

What I would hope is that those aspects of AI will receive more attention from the broader public, and from politicians and journalists. It's so easy to go after the negative aspects. Those negative aspects and the risks have received disproportionate attention compared to the positive aspects. So that's my hope.

As part of the AAAI organization, the professional organization for artificial intelligence, part of our mission is to inform Washington politicians of these positive opportunities, because we shouldn’t miss out on those. That’s an important mission for us, to make that clear, that there’s something to be missed out on, if we don’t take these opportunities.

Lucas Perry: Yeah. Right. There’s the sense that all of our problems are basically subject to intelligence. As we begin to solve intelligence, and what it means to be wise and knowing, there’s nothing in the laws of physics that are preventing us from solving any problem that is, in principle, solvable within the laws of physics. It’s like intelligence is the key to anything that is literally possible to do.

Bart Selman: Yeah. Underlying that is rational thought, our ability to analyze things, to predict the future, to understand complex systems. That rationality underlies the scientific thought process. Humans have excelled at that, and AI can boost it further. That's an opportunity we have to grab, and I hope people recognize that more.

Lucas Perry: I guess, two questions here, then. Do you think existential risk from AI is a legitimate threat?

Bart Selman: I think it’s something that we should be aware of, that it could develop as a threat, yeah. The timescale is a little unclear to me, how near that existential threat is, but it’s something that we should be aware of that there is a risk of runaway intelligence systems not properly controlled. Now, I think that the problems will emerge much more concretely and earlier, for example, cybersecurity and AI systems that break into computer networks that are hard to deal with. So it will be very practical threats to us, that will take most of our attention. But the overall existential threat, I think, is indeed also there.

Lucas Perry: Do you think that the AI alignment problem is a legitimate, real problem, and how would you characterize it, assuming you think it’s a problem?

Bart Selman: I do think it’s a problem. What I like about the term, it sort of makes it crisp, that if we train a system for a particular objective, then it will learn how to be good at that objective. But in learning how to do that, it may violate basic human principles, basic human values. I think, as a general paradigm statement, that we should think of what happens to systems that we train to optimize a certain objective, that they need to achieve that in a way that aligns with human values, I think, is a very fundamental research question and a very valid question. In that sense, I’m a big supporter, in the research community, of taking the value alignment problem very serious.

As I said before, there is some hesitation about how to approach the problem. I think, sometimes, the value alignment folks gloss over the issue of what the common values are, and whether there are any common values. Solving value alignment assumes, "Okay, well, when we get the right values in, we're all done." What worries me a little bit in that context is that these common values are possibly not as common as we think they are. But that's the question of how to deal with the problem; the problem itself, as a research domain, is very valid. As I said early on with the little Sokoban example, it is an absolutely surprising aspect of the AI systems we train that they can achieve incredible performance while not knowing certain things that are obvious to us, in some very nonhuman ways. That's clearly coming out in a lot of AI systems, and it's related to the value alignment problem. The fact that we can achieve a super high level of performance, even when we train carefully with human-generated training data and things like that, and the system can still find ways of doing things that are very nonhuman, and potentially very non-value-aligned, makes it even more important to study the topic.

Lucas Perry: Do you think the Sokoban example, pushing the boxes into corners, can be translated into an expression of the alignment problem, like imagining that pushing boxes into corners was morally abhorrent to humans?

Bart Selman: Yes. Yeah, that’s an interesting way of putting it. It is an example of what I sort of think of as it’s a domain, and it’s a toy domain, of course, but there’s certain obvious truths to us that are obvious. In that case, pushing a box in a corner is not a moral issue, but it’s definitely something that is obvious to us. If you replace it with some moral truths to us that is obvious to us, it is an illustration of the problem. It’s an illustration of when we think of training a system, and even if you think of, let’s say, bringing up a child, or a human learner, you have a model of what that system will learn, what that human learns, and how the human will make decisions. The Sokoban example is sort of a warning of, with an AI system, it will learn the performance, the test, so it will pass the final test. But it may do so in ways that you would never have expected to achieve it.

With the corner example, it's a little strange, almost, to realize that you can solve this very hard Sokoban problem without ever knowing what a corner is. And it literally doesn't know. It's the surprise of getting to human-level performance while missing, and not quite understanding, how that's done. Another very good example for me is machine translation. We see incredible performance from machine translation systems, where they basically map strings in one language to strings in another, English to Chinese, or English to French, having discovered a very complex transformation function in the deep net, trained on hundreds of thousands of sentence pairs, but doing it without actually understanding. So it can translate an English text into a French text or a Chinese text at a reasonable level, without having any understanding of what the text is about. Again, to me, it's that nonhuman aspect. Now, researchers might push back and say, "Well, the network has to understand something about the text, deep in the network."

I actually think that we’ll find out that the network understands next to nothing about a text. It just has found a very clever transformation that we initially, when we started working on a natural language translation didn’t think would exist. But I guess it exists, and you can find it with a gradient descent deep network. Again, it’s an example of showing a human level cognitive ability achieved in a way that is very different from the way we think of intelligence. That means, when we start using these systems, we are not aware. So if people in general are not aware that your machine translation app has no idea what you’re talking about.

Lucas Perry: So, do you think that there’s an important distinction here to be made between achieving an objective, and having knowledge of that particular domain?

Bart Selman: Yes, yes. I think that's a very good point. By boiling tasks in AI down too much to an objective, in machine learning, the objective is to do well on the test set, by boiling things down too much to a single measurable objective, we are losing something: we're losing the underlying knowledge, the way in which the system actually achieves it.

We’re losing an understanding, and we’re losing the attention to that aspect of the system. That’s why interpretability of deep nets has become sort of a, so it’s definitely a hot area.

But it’s trying to get back to some of that issue is, what’s actually being learned here? What’s actually in these systems? But if you focus just on the objective, and you get your papers published, you’re actually not encouraged to think about that.

Lucas Perry: Right. And there’s the sense, then, also, that human beings have many, many, many different objectives and values that are all simultaneously existing. So when you optimize for one, in a kind of unconstricted way, it will naturally exploit the freedom in the other areas of things that you care about, in order to maximize achieving that particular objective. That’s when you begin to create lots of problems for everything else that you value and care about.

Bart Selman: Yeah, yeah. No, exactly. That's the single-objective problem. And you lay out a potential path, which is to say, "Okay, I should not focus on a single-objective task. I actually have to focus on multiple objectives."

And I would say, go one step further. Once you start achieving objectives, or sets of objectives, and your system performs well, you actually should understand, to some extent, at least, what knowledge is underlying, what is the system doing, and what knowledge is it extracting or relying on, to achieve those objectives? So that’s a useful path.
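A minimal sketch of the difference between the two approaches, with made-up action names and scores: an agent scored on a single objective happily picks the option with a hidden side effect, while adding a second, explicitly weighted objective changes the choice.

```python
# Toy example: candidate actions with a task score and a side-effect
# cost. All names and numbers are invented for illustration.
actions = {
    "aggressive_plan": {"task_score": 10.0, "side_effect_cost": 8.0},
    "careful_plan":    {"task_score": 7.0,  "side_effect_cost": 1.0},
}

# Single-objective selection: only the task score counts.
best_single = max(actions, key=lambda a: actions[a]["task_score"])

# Multi-objective selection: scalarize with a weight on the side effect.
weight = 1.0
best_multi = max(
    actions,
    key=lambda a: actions[a]["task_score"] - weight * actions[a]["side_effect_cost"],
)

print(best_single)  # aggressive_plan -- the side effect is invisible to the objective
print(best_multi)   # careful_plan   -- the second objective changes the choice
```

Understanding what knowledge the system relies on, as Bart suggests, goes beyond even this: the weight and the list of side effects here are themselves things a designer has to get right.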

Lucas Perry: Given this risk of existential threat from AI, and also the AI alignment problem as its own kind of issue, which in the worst of all possible cases leads to existential risk, what is your perspective on futures that you fear, or futures that have quite negative outcomes from AI, particularly in light of the risk of existential threat and the reality of the alignment problem?

Bart Selman: Yeah, I think the risk is that we continue on a path of designing systems with a single objective in mind, just measuring the achievement there, and ignore the alignment problem. People are starting to pay attention to it, but paying attention to it and actually solving it are two different things. There is a risk that these systems just become so good, so useful, and so commercially valuable that the alignment problem gets pushed to the background as being not so relevant, as something we don't have to worry about.

So I think that’s sort of the risks that AI is struggling with. And it’s a little amplified by the commercial interest. I think you had a clear example, there is the whole social network world, and how that has spread fake news, and then got people into different groups of people to think totally different things, and to believe totally different facts. In that, I see a little warning sign there for AI. Those networks are driven by tremendous commercial interests. It’s actually hard for society to say there’s something wrong about these things, and maybe we should not do it this way. So that’s a risk, it works too well to actually push back and say, “We have to take a step back and figure out how to do this well.”

Lucas Perry: Right? So you have these commercial interests, which are aligned with profit incentives, and attention becomes the variable which is trying to be captured for profit maximization. So attention becomes this kind of single objective that these large tech companies are training their massive neural nets and algorithms to try and capture the most of, from people. You mentioned issues with information.

And so people are more and more becoming aware of the fact that if you have these algorithms that are just trying to capture as much attention as possible, then things like fake news, or extremist news and advertising, are quite attention-capturing. I'm curious if you could explain more of your perspective on how the problem of social media algorithms attempting to capture, and also commodify, human attention, as a kind of single objective that commercial entities are interested in maximizing, represents the alignment problem?

Bart Selman: Yeah, so I think it’s a very nice analogy. First, I would say, to some extent the algorithms that try to maximize the time spent online, basically, are getting most attention. Those are not particularly sophisticated. Those are actually, very basic sort of, you can sample little TikTok videos. How often are they watched by some subgroup? And if they’re watched a lot, you give them out more. If they’re not watched, you start giving them out less. So the algorithms are actually not particularly sophisticated, but they do represent an example of what can go wrong with this single objective optimization.

What I find intriguing about it is that it's not that easy to fix, I think. Because the companies, of course, their business model is user engagement, is advertising, so you would have to tell the companies not to make as much money as they could. If there were an easy solution, it would have happened already. I think we're actually in the middle of trying to figure out whether there is a balance between making profits from a particular objective and societal interests, and how we can align those: it's a value alignment problem between society and the companies that profit from it. Now, I should stress, and that's, I think, what makes the problem so intriguing, there are incredible positive aspects to social networks, to people exchanging stories and interacting. That is what makes it complex. It's not that it's only negative; it's not. There are tremendous positive sides to having interesting social networks and exchanges between people. People, in principle, could learn more from each other.

Of course, what we’ve seen is actually, strangely, people seem to listen less to each other. Maybe it’s too easy to find people that think the same way as you do, and the algorithms encourage that. In many ways, the problems with the social networks and the single objective optimization are a good example of a value alignment challenge. It shows that the solution, finding a solution to that is probably, it will require way more than just technology. It will require society and governance companies to come together and find a way to manage these challenges. It will not be an AI researcher in an office that finds a better algorithm. So it is a good illustration of what can go wrong. To me, it’s a good illustration, of what can go wrong. And in part, because, if people didn’t expect this, actually. They saw the positive sides of these networks, and they’re bringing people closer together, and that no one actually had thought of fake news, I think. It’s something that emerged, and that shows how technology can surprise you. That’s of course, in terms of AI, one of the things we have to watch out for, the unexpected things that we did not think would happen, yeah.

Lucas Perry: Yeah, so it sounds like the algorithms that are being used are simpler than I might have thought, but I guess maybe that seems like it accounts for the difficulty of the problem, if really simple algorithms are creating complete chaos for most of humanity.

Bart Selman: Yeah. No, no, exactly. I think that that’s an excellent point. So yeah, you don’t have to create very complicated … You might think, “Oh, this is some deep net doing reinforcement learning.”

Lucas Perry: It might be closer to statistics that gets labeled AI.

Bart Selman: Yeah. Yeah, it gets labeled AI, yeah. So it’s actually just plain old simple algorithms, that now do some statistical sampling, and then amplify it. But you’re right, that maybe the simplicity of the algorithm makes it so hard to say, “Don’t do that.”

It’s like, if you run a social network, you would say, “Let’s not do that. Let’s spread the posts that don’t get many likes.” That’s almost against your interests. But it is an example of, the power is partly also, of course, the scale on which these things happen.

With social networks, I think what I find interesting is why it took a while before people became aware of this phenomenon: because everybody had their own personalized content. It's not like a shared news channel, where there's one channel, everybody watches it, and you see what's on it.

I have no idea what’s in my newsfeed of the person who’s sitting next to me. So there was also certain things like, “Ah, I didn’t know you got all your news articles with a certain slant.”

So not knowing what other people would see, and having a huge level of personalization, was another factor in letting this phenomenon go unnoticed for quite a while. But luckily, people are now at least aware of the problem. We haven't solved it yet.

Lucas Perry: I think two questions come up for me. One thing that I liked, that Yuval Noah Harari has said, is that he highlighted the importance of knowledge and awareness and understanding in the 21st century. Because, as you said, this isn't going to be solved by someone in Big Tech creating an algorithm that perfectly captures the collective values of all of the United States or planet earth and how content should be ethically distributed to everyone. It requires some governance, as you said, but then also some degree of self-awareness about how the technology works, and how your information is being biased and constrained and for what reasons. The first question is, I'm curious how you see the need for collective education on technology and AI issues in the 21st century, so that we're able to navigate it as people become increasingly displaced from their jobs and it begins to really take over. Let's just start there.

Bart Selman: So, I think that’s a very important challenge that we’re facing. And I think education of everyone is a key issue there. So, AI should not be, or these technologies should not be presented as some magic boxes. I think it’s much better for people to get some understanding of these technologies. And, I think that’s possible in, in our educational system. It has to start fairly early that people get some idea of how AI technologies were. And most importantly, perhaps people need to start understanding better what we can do and what we cannot do and what AI technologies are about. A good example to me is something like the data privacy initiative in Europe, which I think is a very good initiative.

But for example, there’s a detail in it, is where you have a right. I think- I’m not sure whether it’s part of the law, but there’s definitely discussions and how you have a right to get an explanation of a decision by an AI system. So there’s a right to an explanation. And what I find interesting about it, that sounds like, oh, that’s a very good thing to get. Until you’ve worked with AI system and machine learning systems, and you realize, you can make up pseudo explanations pretty easily, and you can actually ask your systems to explain it without using the word gender or race, and they will come up with good explanation.

So the idea that a machine learning algorithm has a crisp explanation that is the true explanation of the decision is actually far from trivial, and such requirements can easily be circumvented. It's an example, to me, of policymakers coming up with regulations that sound like they're making progress, but that miss something about what AI systems can and cannot do. That's another reason why I think people need much better education and insight into AI technologies, and should at least hear from different perspectives about what's possible and what's not possible.
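A small synthetic demonstration of why "the model never used gender or race" can be a hollow explanation: a feature that is merely correlated with the protected attribute lets a second model reproduce almost the same decisions without ever seeing the attribute itself. The data, feature names, and coefficients below are all invented for illustration, and assume numpy and scikit-learn are available.

```python
# Drop a protected attribute, keep a correlated proxy, and the
# decisions barely change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, n).astype(float)   # stand-in for a protected attribute
proxy = protected + rng.normal(0, 0.3, n)         # strongly correlated proxy (e.g. a neighborhood score)
income = rng.normal(0, 1, n)                      # an innocuous feature
# An outcome that (unfairly) depends on the protected attribute.
y = (0.5 * income + 2.0 * protected + rng.normal(0, 0.5, n) > 1.0).astype(int)

with_attr = LogisticRegression().fit(np.column_stack([income, protected]), y)
without_attr = LogisticRegression().fit(np.column_stack([income, proxy]), y)

agree = (
    with_attr.predict(np.column_stack([income, protected]))
    == without_attr.predict(np.column_stack([income, proxy]))
).mean()
print("agreement between the two models:", agree)
# Typically well above 90%: omitting the sensitive word from the
# "explanation" says little about what actually drove the decisions.
```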

Lucas Perry: So, given that AI and algorithms are increasingly playing a role in society, but also doing this single-objective optimization, and that we as humanity have to collectively face the negative outcomes and negative externalities from widely deployed algorithms that are single-objective maximizing: in light of this, what are the futures that you most fear in the short term, 5, 10, 15, 20 years from now, where we've really failed at AI alignment and at working on these ethics issues?

Bart Selman: Yeah, so one thing that I do fear is increased income inequality, and it's as simple as that: the companies that are the best at AI, that have the most data, will get such an advantage over other organizations that the benefits will be highly concentrated in a small group of people. And that, I think, is real, because AI technology, in some sense, amplifies your ability to do things. It's like in finance: if you have a good AI trading program that can mine text and a hundred or a thousand different indicators, you could build a very powerful financial trading firm. And of course trading firms are working very hard on that, but it concentrates a lot of the benefits in the hands of a small group of people. That, in my mind, is the biggest short-term risk of AI technology.

It’s a risk any technology has, but I think AI sort of amplifies it. So that, has to be managed and that comes back to what I mentioned fairly early on, the benefits of AI. It has to be ensured that it will benefit everyone, and maybe not all to the same extent, but at least everyone should benefit to some extent, and that’s not automatically going to happen. So that’s a risk I see in development of AI and then more dramatic risks. I think short term cybersecurity issues, smart tax on our infrastructure. AI programs could be quite dangerous, deep fakes and so sophisticated, some deep fakes. There are some specific risks that we have to worry about because they are going to be accelerated with AI technology. And then there’s of course the military autonomous weapon risk.

There’s an enormous pressure… Since it’s a competitive world of developing systems that use as much automation as possible. So, it’s not so easy to tell a military or country not to develop autonomous weapon systems. And so I’m really hoping that people start to realize, and this is again, an educational issue, partly of people, the voters basically that there is a real risk there just like nuclear weapons was a real risk and we have to get together to make agreements about at least a management of nuclear weapons. So we have to have agreements, global agreements about autonomous weapons and smart weapons, and what can be developed or what should at least be controlled somehow that will benefit older players. And that’s one of the short-term risks I see.

Lucas Perry: So if we imagine in the short term that there’s just all of these algorithms, proliferating that are single objective maximizing, that are aligned with whatever corporation that is using them, there is a lack of international agreement on autonomous weapons systems. Income inequality is far higher due to the concentration of power in particular individuals who control vast amounts of AI. So, if you have the wealth to accumulate AI, you begin to accumulate most of the intelligence on earth, and you can use that to create robots or use robotics so that you’re no longer dependent on human labor. So there’s increase in income and power inequality, and lack of international governance and regulation. Is that as bad as the world gets in the short term? Or is there anything else that makes it even worse?

Bart Selman: No, I think that’s about as bad as it gets. And I assumed I would be a very strong reaction in almost every country of the regular person as of the voter or the person in the street. There would be a strong reaction to that. And it’s real.

Lucas Perry: Could that reaction be effective in any way, though, if lethal autonomous weapons have proliferated?

Bart Selman: Well, so lethal autonomous weapons, yeah. There are two different aspects to this. One aspect is what happens within a country: do the people accept extreme levels of income inequality and power distribution? I think people will push back, and there will be a backlash against that. As for lethal autonomous weapons, when they start proliferating, I just have some hope that countries will realize that this is in nobody's interest, and that countries are able to manage risks that are unacceptable to everyone. So I'm hopeful that in the area of lethal autonomous weapons, we will see a movement by countries to say, "Hey, this is not going to be good for any one of us."

Now, I’m being a little optimistic here, but with nuclear weapons. We did see it’s always a struggle and it remains a struggle today. But so far, countries have sort of managed these risks reasonably well. It’s not easy, but it can be done. And I think it’s partly done because everybody realizes nobody will be better off if we don’t manage these risks. So legal autonomous weapons, I think there has to be first a better understanding that these are real risks. And if you let that get out of hand, like let small groups develop their own autonomous weapons, for example, that that could be very risky to the global system. I’m hoping that countries will realize this and start developing a strategy to manage it, but it’s a real risk. Yeah.

Lucas Perry: So should things like this come to pass, or at least some of them, in the medium to long-term, what are futures that you fear in the time range of fifty to a hundred years or even longer?

Bart Selman: Yeah, so the lethal autonomous weapons risk could be just as bad as nuclear weapons being used at some point, and that could wipe out humanity. So the worst-case scenario is that we would go down in flames. There are some other scenarios, and this is more about the inequality issue, where a relatively small group of people grabs most of the resources, enabled to do so by AI technology, and the rest can live reasonable lives but are limited by their resources.

So that’s, I think a somewhat dark scenario that I could see happen if we don’t pay attention to it right now. That could play out in 20, 30 years. It’s a little hard to… Again, one thing that’s a little difficult to predict is how fast the technology will grow and you combine it with advances in biology and medicine. I’m always a little optimistic. We could be living in just a very different and very positive world too, if that’s what I’m hoping that we’ll choose. So I am staying away a little bit from too dark a scenario.

Lucas Perry: So, a little bit about AI alignment in particular. It seems like you've been thinking about this since at least 2008, perhaps even earlier, you can let us know. How have your views shifted and evolved? It's been, what, about 13 years?

Bart Selman: Yeah. No, very good question. So in 2008, Eric Horvitz and I co-chaired a AAAI presidential panel on the risks of AI. It's very interesting because at that time, this was before the real deep learning revolution, people saw some concerns, but the general consensus, and this was a group of about 30 or 40 very good AI researchers, was that it was almost too early to worry about value alignment and the risks of AI. And I think it was true that AI was still a very academic discipline, and talking about, oh, what if this AI system starts to work, and then people start using it, and what's going to happen, seemed premature, and was premature at the time. But it was good for people to get together and at least discuss the issue of what could happen. Now, that really dramatically changed over the last 10 years, particularly the last five years.

And that's in part thanks to people like Stuart Russell and Max Tegmark, who basically brought these concerns about AI systems to the forefront, combined with the fact that we see the systems starting to work. Now we see these incredible investments and companies really going after AI capabilities, and suddenly these questions that were quite academic early on are now very real, and we have to deal with them and think about them. And the good thing is, if I look, for example, at NSF and the funding in the United States, but also around the world, in Europe and in China, people are starting to fund AI safety, AI ethics, and work on value alignment. You see it in conferences, and people are starting to look at those questions. So I think that's the positive side.

So I’m actually quite encouraged, but it was how much was achieved in a fairly short time. You know, FLI played a crucial role in that too, in bringing awareness to the AI safety issues. And now I think among most AI researchers, maybe not all, but most AI researchers, these are viewed as legitimate topics for study and legitimate challenges that we have to address. So it’s not, sometimes I feel good about that aspect. Of course, the questions remain urgent and the challenges are real, but at least I think the research community has found the attention. And in Washington, I was actually quite pleased if I look at the significant investments being planned for AI and the development of the AI R&D in the United States and yeah, safety, fairness, a whole range of issues that touches on how AI will affect society are getting serious attention, so they are being funded. And that happened last five years, I would say. So, that’s a very positive develop in this context.

Lucas Perry: So given all this perspective on the evolution of the alignment problem and the situation in which we find ourselves today, what are your plans or intentions as the president of AAAI?

Bart Selman: Yeah, so as part of AAAI, we've definitely stepped up our involvement with the Washington policy-making process to try to better inform policymakers about the issues. We actually did a roadmap for AI research in the United States, which also involved planning topics 20 years ahead. A key component we proposed there was to build a national AI infrastructure, as we called it: an infrastructure for AI research and development that would be shared among institutions and be accessible to almost every organization. And the reason is that we don't want AI research and development to be concentrated in just a few big private companies. We would like to make it accessible to many more stakeholders and many more groups in society.

And to do that, you need an AI infrastructure with capabilities to store and curate large data sets, and large cloud computing facilities, to give other groups in society access to build AI tools that are good for them and useful for them. So as AAAI, we are pushing to make AI R&D generally available and to boost the level of funding, keeping in mind these issues of fairness and value alignment as valid research topics that should be part of anybody's research proposal. People who write research proposals should have a component where they consider whether their work is relevant in that context, and if it is, what contributions they can make. So that's what our society is doing, and this is of course a good time to be doing it, because Washington is actually paying attention; not just the US, every country is developing AI R&D initiatives. Our goal is to provide input and to steer it in a positive way, and that's actually a good process to be part of.

Lucas Perry: So you mentioned alignment considerations being explicitly, at least covered in the publication of papers, is that right?

Bart Selman: So there are papers purely on the alignment problem, but also, if I look at, say, the reinforcement learning world, people are aware that value alignment is an issue. And to me, it feels closely related to interpretability and understanding. We talked about this a little bit before: you are not just getting to a certain quantitative, single objective and optimizing for that, you are actually understanding the bounds of your system, safety bounds, for example. In the work on cyber-physical systems and self-driving cars, a key issue is, how do I guarantee that whatever policy has been learned is safe? So it's getting more attention now. As for the pure value alignment problem, when it gets to ethics, there we talked about values, and there's a whole issue of how you define values and what the basic core values are.

And these are partly ethical questions. There, I think, is still room for growth. But I also see that, for example, at Cornell, the ethics people in the philosophy department who think about ethics are starting to look at this problem again, looking at the way AI is going in these directions. So I'm partly encouraged by an increase in collaborations between disciplines that traditionally have not collaborated much, and by the fact that ethics is now relevant to computer science students. Five years ago, nobody even thought of mentioning that, and now most departments realize, yes, we actually should tell our students about ethical issues and educate them about algorithmic bias. Value alignment is a more challenging thing, because you have to know a little bit more about AI, but most AI courses will definitely cover that now.

So, I think there’s great progress and I’m hoping that we just keep continuing to make those connections and make it clear that when we train students to be the next generation of AI engineers, that they’re very aware, they should be very aware of these ethical components. And that’s, I think is, is- it might even be somewhat unique in engineering. I don’t think engineering normally would touch on ethics too much, but I think AI is forcing us to do so.

Lucas Perry: So you see understanding of, and a sense of taking seriously, the AI alignment problem at, for example, AAAI as increasing?

Bart Selman: Yes. Yes, yes. It's definitely increasing. It takes time for people to become familiar with the terminology, but people are much more familiar with the questions, and we've even had job candidates talk about AI alignment, and then the department has to learn what that means. So it's partly an educational mission: you actually have to understand how reinforcement learning, optimization, and decision-making work, you have to understand a little bit of how things work. But I think we're starting to educate people, and definitely people are much more aware of these problems, so that's good.

Lucas Perry: Yeah. Does global catastrophic or existential risk from AI fit into AAAI?

Bart Selman: I would say that at this point, yeah. Actually, it's hard to say, because we have something like 10,000 submissions, and I think there's room at AAAI for those kinds of papers. I just haven't personally seen them. But as president of AAAI, I would definitely encourage us to branch out, and if somebody has an interesting paper, it could be a position paper or another type of paper that we now accept, to say, okay, let's have a serious paper on existential risks, because there is room for it. It just hasn't happened much so far, I think, but it fits into our mission. So I would encourage that.

Lucas Perry: So you mentioned that one of the things that AAAI was focusing on was collaborating with government and policy making decisions, offering comments on documents and suggestions or proposals. Do you have any particular policy recommendations for existing AI systems or the existing AI ecosystem that you might want to share?

Bart Selman: Yeah, I think my sense there is more of a meta-level comment. What we want is for people designing systems with a significant AI component, at the big tech companies, for example, and our main input there, is for them to pay serious attention to things like bias, fairness, these kinds of criteria, and AI safety. So I wouldn't have a particular recommendation for any particular system. But with the AAAI submissions, we now ask for a sort of impact statement. That's what we're asking from researchers: when you do research that touches on something like value alignment or AI safety, you should actually think about the societal component and the possible impact of the work. So we're definitely asking people to do that.

In companies, I would say it's more that we encourage companies to have those discussions and make their engineers aware of these issues. And there's one organization, the Global Partnership on AI, that's now also very actively trying to do this on an international scale. So it's a process, and it's partly, as you mentioned earlier, an educational process, where people have to learn about these problems and start incorporating them into their daily work.

Lucas Perry: I’m curious about what you think of AI governance and the relationship needed between industry and government. And one facet of this is for example, we’ve had Andrew Critch on the podcast and he makes quite an interesting point that some number of subproblems in the overall alignment problem will be naturally solved via industry incentives. Whereas, some of them won’t be. The ones that are, that will naturally be solved by industry incentives are those which align with whatever industry incentives are, so profit maximization. I’m curious, your view on the need for AI governance and how it is that we might cover these areas of the alignment problem that won’t naturally be solved by industry.

Bart Selman: That’s a good question. I think not all these problems will be solved by industry. So their objectives are sometimes a little too narrow to just solve them, a broad range of objects. So I really think it has to occur in a discussion, in a dialogue between policymakers, government, and public and private organizations. And it may require whether it requires regulation or at least form of self regulation that may be necessary to even level the playing field. Very early on, earlier we talked about social networks are spreading, fake news. You might actually need regulations to tell people not to do certain things because it will be profitable for them to do it. And so then you have to have regulations to limit that.

On the other hand, I do think a lot of things will happen through self-regulation. Self-driving cars are a very circumscribed area. There's a clear interest among all the participants, all the companies working on self-driving cars, to make them very safe. So for some kinds of AI systems, the objectives are sort of self-reinforcing: you need safety, otherwise people will not accept them. In other areas, I'm thinking for example of the finance industry, the competitive advantage often lies in proprietary systems, and it's actually hard to know what these systems do. I don't have a good solution for that. One of my worries is that financial companies develop technologies that they will not want to share, because that would be detrimental to their business, but that actually create risks we don't even know of.

So society actually has to come to grips with, are risks being created by AI systems that we don’t know of? So it has to be a dialogue and interaction between public and private organizations.

Lucas Perry: So in the current AI ecosystem, how do you view and think about narratives around a international race towards more and more powerful AI systems, particularly between the United States and China?

Bart Selman: Yeah. So, I think that's a bit of an unfortunate situation right now. In some sense, the competition between China and the US, and also Europe, is good from an AI perspective in terms of investments in AI R&D, which also addresses some of the AI safety and alignment issues. So that's a benefit of these extra investments. The competition aspect is less positive. As AI scientists, we interact with AI scientists in China, we enjoy those interactions, and a lot of good work comes out of that. When things become proprietary, when people have data sets that other people and other organizations don't have, and some countries have them and others don't, the competition is not as positive. And, again, my hope is that we bring out the potentially positive aspects of AI much more strongly. To me, for example, AI can transform the healthcare system.

It can make it much more efficient and much more widely available, with remote healthcare delivery and things like that, and better diagnosis systems. So there's an enormous upside to developing AI for healthcare. I've actually interacted with people in China who work on AI for healthcare. Whether it gets developed in China or it gets developed here doesn't actually matter; it would benefit both countries. So I really hope that we can keep these channels open instead of having totally separate developments in these two countries. There is a bit of a risk, because the situation has become so competitive, but, again, I'm hoping people see that improving healthcare in both countries is probably the right way to go, and that we shouldn't be too isolationist in this regard.

Lucas Perry: How do you feel the sense of these countries competing towards more and more powerful AI systems, how do you feel that that affects the chances of successful value alignment?

Bart Selman: Yeah, so that could be an issue. If countries really start not sharing their technology and not sharing potential advances, it is harder, I think, to keep value alignment and AI safety issues under control. I think we should be open about the risk of countries going at it by themselves, because the more researchers look at different AI systems from different angles, the better. An example is that I always thought it would be nice if AlphaZero were available to the AI research community, so we could probe the brain of AlphaZero, but it's not. So there are already systems in industry that would benefit from study by a much broader group of researchers. And there's a risk there.

Lucas Perry: Do you think there’s also a risk with sharing? It would seem that you would accelerate AGI timelines by sharing the most state-of-the-art systems with anyone, right? And then you can’t guarantee that those people will use it in value-aligned ways.

Bart Selman: Yeah. No, that's the flip side. It's good you brought that up. There is a flip side to sharing even the latest deep learning code or something like that: other people, malicious actors, could use it. In general, though, I think openness is better in terms of keeping an eye on what gets developed. Openness allows different researchers to develop common standards and common safeguards. So I see that risk of sharing, but I do think the international research community can set standards overall. We see that in synthetic biology and other areas, where openness in general leads to better management of risks. But you're right, there is the effect that it accelerates progress. Still, the countries are big enough that even if China and the US completely separated their AI developments, both would do very well in their development of the technology.

Lucas Perry: So I'm curious, do you think that AI is a zero-sum game? And how do you see an understanding of AI alignment and existential risk at the highest levels of the Chinese and US governments affecting the extent to which there is international cooperation for the beneficial development of AI? There's this sense of racing because we need to capture the resources and power, but there's the trade-off with the risks of misalignment and existential risk.

Bart Selman: So yeah. I firmly believe that it's not a zero-sum game. Absolutely not. I give the example of the healthcare system. Both China and the US have an interest in more accessible, more available, and lower-cost healthcare. So actually the objectives are very similar there, and AI can make an incredible difference for both countries. Similarly in education, you can improve things with AI-assisted education, adult education, and continuous learning. So there are incredible opportunities and both countries would benefit. So definitely AI is not a zero-sum game, and I hope countries realize that. When China declared they want to be a leading AI nation by 2030, I think there's room for several leading nations.

So I don't think one nation being better at AI than everyone else is the best outcome. The better outcome is if AI gets developed and used by many nations and shared. So I hope that politicians and governments see that shared interest. Now, as part of that shared interest, they may actually realize that the existential risk from bad actors, and that can be small groups of people, a company, or an organization using AI for negative goals, is a global risk that, again, should be managed by countries collaborating. So I'm hoping there is actually some understanding that there are global benefits, that this is not a zero-sum game, that we all can gain, and that the risk is a global risk, so we should have a dialogue about some of these risks. The one component that is tricky, I think, is always the military component. But even there, as I mentioned before, the risk of lethal autonomous weapons is, again, something that affects every nation. So I can see countries realizing it's better to collaborate and cooperate in these areas than to treat it as pure competition.

Lucas Perry: So you said it's not a zero-sum game and that we can all benefit. How would you respond to the perspective that the relative benefits of racing are still higher for me personally, even if it's not a zero-sum game, and therefore I'm going to race anyway?

Bart Selman: Yes. I mean, yeah. There may be some of that, except that… I almost look at it a little differently. I can see a race where we still share technology. It's almost like we're competing with each other but we're all trying to get better together. You can have a race and it can still be beneficial for progress, as long as you don't try to keep everything to yourself. And I think what's interesting, and this is the story of scientific discovery and the way scientists operate, is that in some sense scientists compete with each other because we all want to discover the next big thing in science. So there's some competition. But there is also a sense that we have to share, because if I don't share, I don't get the latest from what my colleague is doing. So there's a mutual understanding that yes, we should share, because it actually helps me, even individually. That's how I see it.

Lucas Perry: So how do you convince people to share the thing which is like the final invention? Do you know what I mean? If the reason I need to share is that otherwise I won't get the next thing my colleague makes, but I've just made the last invention, then I will never have to look to my colleague again for another invention.

Bart Selman: Yeah. So that's a good one. But in science, we don't think there's an endpoint. There will always be something novel.

Lucas Perry: Yeah, of course there's always something novel, but you've made the thing that will discover every other new novel thing more quickly than any other agent on the planet. How do you get someone to share that?

Bart Selman: So, well, I think partly the story still is that even if one country gets so dominant, there is still the question of whether that is actually beneficial for that country. There are many different capabilities that we have. There are still nuclear weapons and things like that. So you might get the best AI and somebody might say, "Okay, I think it's time to terminate you." So there are a lot of different forces. I think it's a sufficiently complex interaction game that thinking of it as a single-dimension issue is probably not quite the way the world will work. And I hope politicians are aware of that. I think they are.

Lucas Perry: Okay. So in the home stretch here, we've brought up lethal autonomous weapons a few times. What is your position on the international and national governance of lethal autonomous weapons? Do you think a red line should be drawn such that life-or-death decisions are never delegated to machine systems?

Bart Selman: That's a reasonable goal. I do think there are practical issues in specifying exactly in what sense and how such systems should work. Decisions that have to be made very quickly, how are you going to make those if there's no time for a human to be in the loop? So I like it as an objective that there should always be a human in the loop, but the actual implementation, I think, needs further work, and it might even come down to looking at actual systems and saying, "Okay, this one has sufficient safeguards, and this one doesn't." Because there's this issue of how quickly we have to react and whether this can be done.

And of course, you can see that a defensive system may have to make a very quick decision, which could endanger the life of, I don't know, incoming pilots, for example. So there are some issues, but I'd like it as a principle that lethal autonomous systems should not be developed and that there should always be this human decision making as part of it, but it probably has to be figured out for each individual system.

Lucas Perry: So would you be in favor of, for example, international cooperation in limiting, I guess, having treaties and governance around autonomous weapons?

Bart Selman: Oh, definitely. Yeah. Yeah, definitely. And I think people are sometimes skeptical or wonder whether it's possible, but I actually think it's one of those things that is probably possible, because the real tricky part is when militaries start to develop those systems: once these systems are being developed or start being sold, they can end up in the hands of any group. So I think countries actually have an interest in treaties and agreements on regulating or limiting any kind of development of such systems. So I'm a little hopeful that people will see it would be in nobody's interest to have countries competing on developing the most deadly lethal autonomous weapon. That would actually be a bad idea. And I'm hopeful that people will actually realize that. That is, again, partly an educational thing. People should be more aware of it and will then directly ask their governments to reach agreements.

Lucas Perry: Do you see the governance of lethal autonomous weapons as a deeply important issue for the international regulation and governance of AI, a kind of first key issue as we begin to approach AGI and superintelligence? In other words, is our ability to come up with beneficial standards and regulation for autonomous weapons really important for long-term beneficial outcomes from things like AGI and superintelligence?

Bart Selman: Yeah, I think it would be a good exercise, in some sense, of seeing what kind of agreements you can put in place. Lethal autonomous weapons, I think, is a useful starting place because the issue is fairly clear. I also think there are some complications. You can say, "Oh, we'd never do this." But what about when you have to decide in a fraction of a second what to do? So there are things that have to be worked out. But in principle, I think countries can agree that this needs collaboration between countries, and then that same kind of discussion and those same kinds of channels, because these things take time to form the right channels and the right groups of people to discuss these issues, could then be put towards other risks that AI may pose. So I think it's a good starting point.

Lucas Perry: All right. A final question here, and this one is just a bit more fun. At Beneficial AGI 2019, I think you were on a panel about whether we want machines to be conscious. On that panel, you mentioned that you thought AI consciousness was both inevitable and adaptive. I'm curious whether you think about the science and philosophy of consciousness and if there's a particular view you subscribe to.

Bart Selman: No, it's a fun topic. When I thought more about consciousness and whether it will emerge, I was reminded of an area of AI, because I've been in the field a long time, generally called knowledge representation and reasoning. It's about how knowledge is represented in an AI system and how an AI system can reason with it. One big sub-area there was the notion of self-reflection, including in multi-agent systems. Self-reflection means that not only do you know certain things, you also know about what you know, and you know about what you don't know. Similarly, in multi-agent systems, you have to know not only what you know, but you have to have some idea of what others may know, and you might have some idea of what other agents don't know, and that is to facilitate interactions with other agents.

So this whole notion of reflection on your own knowledge and other agents' knowledge, in my mind, is somewhat connected to consciousness of yourself and, of course, your environment. That led to my comment that if you build sufficiently complex systems that behave intelligently, they will have to develop those capabilities. They have to know about what they know, what they don't know, and what others know and don't know. And knowing about what others might know about you goes to arbitrary levels of interaction. So I think it's going to be a necessary part of developing intelligent systems. That's why my sense is that some notion of consciousness will emerge in such systems, because it's part of this reflection mechanism.

And what I think is exciting about it is that in consciousness research there's now also a lot of work on the neurological basis of consciousness, on what in the brain points at consciousness. Well, now we can approach that from the AI side as well. We already see how deep reinforcement learning interacts with neuroscience, and we're looking for analogies between deep reinforcement learning approaches in AI and what insights they give about actual brains, actual biological neurological systems. So perhaps when we see things like reflection and consciousness emerge in AI systems, we will get new insights into what happens in the brain. So there's a very interesting potential there.

Lucas Perry: My sense of it is that it may be possible to disentangle constructing a self model, a model of both what I am and of what it is that I know and don't know, plus a world model, from consciousness itself. These things seem to be correlated with consciousness, with the phenomenal experience of being alive, but it seems to me they could come apart, because it seems conceivable that I could be a sentient being with conscious awareness that doesn't have a self model or a world model. You can imagine just bare awareness of a wall that's the color green, with no sense of duality there between self and object. So philosophers and computer scientists come at the problem a bit differently. There's the computational aspect, of course, the modeling that's happening, but it seems like the consciousness part could perhaps become disentangled from the modeling. I'm curious if you have any perspective or opinion on that, and on how we could ever know whether an AI is conscious, given that they may come apart.

Bart Selman: No, you raise an interesting possibility, that maybe they can come apart. And then the question is, can we investigate that? Can we study that? That's a question in itself. I was coming at it more from the sense that when a system gets complex enough and starts having these reflections, it will be hard for it not to be conscious. But you're right, it could still be otherwise, although I would be a little surprised. So my point in part is that the deep reinforcement learning approach, or whatever deep learning framework we use to get these reflective capabilities, I'm hoping might give us new insights into how to look at it from the brain perspective and a neural perspective, because these things might carry over. And is consciousness a computational phenomenon? My guess is it is, of course, but that still needs to be demonstrated.

Lucas Perry: Yeah. I would also be surprised if sophisticated self and world modeling didn't most of the time, or all of the time, carry conscious awareness along with it. But even prior to that, as we have domain-specific systems, it's a little bit sci-fi to think about, but there's the risk of proliferating machine suffering if we don't understand consciousness. If we're running all of these kinds of machine learning algorithms that don't have sophisticated self models or world models, but the phenomenal experience of suffering still exists, then that could… We had factory farming of animals, and then maybe later in the century we have the running of painful deep learning algorithms.

Bart Selman: No, that's indeed a possibility. It argues that we actually have to dig deeper into the questions of consciousness. So far, I think, most AI researchers have not studied it; I'm just starting to see some possibility of us studying it again as AI researchers. And it brought me back a little bit to this notion of reflection. Topics go in and out of fashion, but reflection used to be quite seriously studied, including with philosophers, looking at what it means to know what you know, and what it means to know what you don't know, for example. And then there are the things you don't know that you don't know. So we thought about some of these issues, and now consciousness brings in a new dimension. You're quite right, it could be quite separate, but it could also be related.

Lucas Perry: So as we wrap up here, is there a final comment you’d like to make or anything that you feel like is left unsaid or just a parting word for the audience about alignment and AI?

Bart Selman: My comment to the audience is that the alignment question, value alignment, and AI safety are super important topics for AI researchers, and there are many research challenges there that are far from solved. In terms of the development of AI, there are tremendous positive opportunities if things get done right. One concern I have as an AI researcher is that we get overwhelmed by the concerns and the risks and decide not to develop the positive capabilities of AI. So we should keep in mind that AI can really benefit society if it's done well, and we should take that as our primary challenge while managing the risks along the way.

Lucas Perry: All right, Bart, thank you very much.

Bart Selman: Okay. Thanks so much. It was fun.

Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century

  • Intelligence and coordination
  • Existential risk from AI, synthetic biology, and unknown unknowns
  • AI adoption as a delegation process
  • Jaan’s investments and philanthropic efforts
  • International coordination and incentive structures
  • The short-term and long-term AI safety communities

1:02:43 Collective, institutional, and interpersonal coordination

1:05:23 The benefits and risks of longevity research

1:08:29 The long-term and short-term AI safety communities and their relationship with one another

1:12:35 Jaan’s current philanthropic efforts

1:16:28 Software as a philanthropic target

1:19:03 How do we move towards beneficial futures with AI?

1:22:30 An idea Jaan finds meaningful

1:23:33 Final thoughts from Jaan

1:25:27 Where to find Jaan

 

Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures

  • Understanding the universe through digital physics
  • How human consciousness operates and is structured
  • The path to aligned AGI and bottlenecks to beneficial futures
  • Incentive structures and collective coordination

You can find FLI’s three new policy focused job postings here

1:06:53 A future with one, several, or many AGI systems? How do we maintain appropriate incentive structures?

1:19:39 Non-duality and collective coordination

1:22:53 What difficulties are there for an idealist worldview that involves computation?

1:27:20 Which features of mind and consciousness are necessarily coupled and which aren’t?

1:36:40 Joscha’s final thoughts on AGI

Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

  • Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI
  • The relationship between AI safety, control, and alignment
  • Virtual worlds as a proposal for solving multi-multi alignment
  • AI security

You can find FLI’s three new policy focused job postings here

 

Papers discussed in this episode:

On Controllability of AI

Unexplainability and Incomprehensibility of Artificial Intelligence

Unpredictability of AI

 

Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Lethal Autonomous Weapons

  • The current state of the deployment and development of lethal autonomous weapons and swarm technologies
  • Drone swarms as a potential weapon of mass destruction
  • The risks of escalation, unpredictability, and proliferation with regards to autonomous weapons
  • The difficulty of attribution, verification, and accountability with autonomous weapons
  • Autonomous weapons governance as norm setting for global AI issues

You can check out the new lethal autonomous weapons website here

Beatrice Fihn on the Total Elimination of Nuclear Weapons

  • The current nuclear weapons geopolitical situation
  • The risks and mechanics of accidental and intentional nuclear war
  • Policy proposals for reducing the risks of nuclear war
  • Deterrence theory
  • The Treaty on the Prohibition of Nuclear Weapons
  • Working towards the total elimination of nuclear weapons

4:28 Overview of the current nuclear weapons situation

6:47 The 9 nuclear weapons states, and accidental and intentional nuclear war

9:27 Accidental nuclear war and human systems

12:08 The risks of nuclear war in 2021 and nuclear stability

17:49 Toxic personalities and the human component of nuclear weapons

23:23 Policy proposals for reducing the risk of nuclear war

23:55 New START Treaty

25:42 What does it mean to maintain credible deterrence

26:45 ICAN and working on the Treaty on the Prohibition of Nuclear Weapons

28:00 Deterrence theoretic arguments for nuclear weapons

32:36 Reduction of nuclear weapons, no first use, removing ground based missile systems, removing hair-trigger alert, removing presidential authority to use nuclear weapons

39:13 Arguments for and against nuclear risk reduction policy proposals

46:02 Moving all of the United States' nuclear weapons to bombers and nuclear submarines

48:27 Working towards and the theory of the total elimination of nuclear weapons

1:11:40 The value of the Treaty on the Prohibition of Nuclear Weapons

1:14:26 Elevating activism around nuclear weapons and messaging more skillfully

1:15:40 What the public needs to understand about nuclear weapons

1:16:35 World leaders’ views of the treaty

1:17:15 How to get involved

 

Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year

  • FLI’s perspectives on 2020 and hopes for 2021
  • What our favorite projects from 2020 were
  • The biggest lessons we’ve learned from 2020
  • What we see as crucial and needed in 2021 to ensure and make improvements towards existential safety

54:35 Emilia Javorsky on the importance of returning to multilateralism and global dialogue

56:00 Jared Brown on the need for robust government engagement

57:30 Lucas Perry on the need for creating institutions for existential risk mitigation and global cooperation

1:00:10 Outro

 

Future of Life Award 2020: Saving 200,000,000 Lives by Eradicating Smallpox

  • William Foege’s and Victor Zhdanov’s efforts to eradicate smallpox
  • Personal stories from Foege’s and Zhdanov’s lives
  • The history of smallpox
  • Biological issues of the 21st century

18:51 Implementing surveillance and containment throughout the world after success in West Africa

23:55 Wrapping up with eradication and dealing with the remnants of smallpox

25:35 Lab escape of smallpox in Birmingham, England and the final natural case

27:20 Part 2: Introducing Michael Burkinsky as well as Victor and Katia Zhdanov

29:45 Introducing Victor Zhdanov Sr. and Alissa Zhdanov

31:05 Michael Burkinsky’s memories of Victor Zhdanov Sr.

39:26 Victor Zhdanov Jr.’s memories of Victor Zhdanov Sr.

46:15 Mushrooms with meat

47:56 Stealing the family car

49:27 Victor Zhdanov Sr.’s efforts at the WHO for smallpox eradication

58:27 Exploring Alissa’s book on Victor Zhdanov Sr.’s life

1:06:09 Michael’s view that Victor Zhdanov Sr. is unsung, especially in Russia

1:07:18 Part 3: William Foege on the history of smallpox and biology in the 21st century

1:07:32 The origin and history of smallpox

1:10:34 The origin and history of variolation and the vaccine

1:20:15 West African “healers” who would create smallpox outbreaks

1:22:25 The safety of the smallpox vaccine vs. modern vaccines

1:29:40 A favorite story of William Foege’s

1:35:50 Larry Brilliant and people central to the eradication efforts

1:37:33 Foege’s perspective on modern pandemics and human bias

1:47:56 What should we do after COVID-19 ends

1:49:30 Bio-terrorism, existential risk, and synthetic pandemics

1:53:20 Foege’s final thoughts on the importance of global health experts in politics

 

Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress

  • Important intellectual movements and their merits
  • The evolution of metaphysical and epistemological views over human history
  • Consciousness, free will, and philosophical blunders
  • Lessons for the 21st century

Mohamed Abdalla on Big Tech, Ethics-washing, and the Threat on Academic Integrity

 Topics discussed in this episode include:

  • How Big Tobacco used its wealth to obfuscate the harm of tobacco and appear socially responsible
  • The tactics shared by Big Tech and Big Tobacco to perform ethics-washing and avoid regulation
  • How Big Tech and Big Tobacco work to influence universities, scientists, researchers, and policy makers
  • How to combat the problem of ethics-washing in Big Tech

 

Timestamps: 

0:00 Intro

1:55 How Big Tech actively distorts the academic landscape and what counts as big tech

6:00 How Big Tobacco has shaped industry research

12:17 The four tactics of Big Tobacco and Big Tech

13:34 Big Tech and Big Tobacco working to appear socially responsible

22:15 Big Tech and Big Tobacco working to influence the decisions made by funded universities

32:25 Big Tech and Big Tobacco working to influence research questions and the plans of individual scientists

51:53 Big Tech and Big Tobacco finding skeptics and critics of them and funding them to give the impression of social responsibility

1:00:24 Big Tech and being authentically socially responsible

1:11:41 Transformative AI, social responsibility, and the race to powerful AI systems

1:16:56 Ethics-washing as systemic

1:17:30 Action items for solving Ethics-washing

1:19:42 Has Mohamed received criticism for this paper?

1:20:07 Final thoughts from Mohamed

 

Citations:

Where to find Mohamed’s work

The Future of Life Institute AI policy page

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s conversation is with Mohamed Abdalla on his paper The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity. We explore how big tobacco has used and still uses its wealth and influence to obfuscate the harm of tobacco by funding certain kinds of research, conferences, and organizations, as well as influencing scientists, all to shape public opinion in order to avoid regulation and maximize profits. Mohamed explores in his paper and in this podcast how big technology companies engage in many of the same behaviors and tactics as big tobacco in order to protect their bottom line and appear to be socially responsible. 

Some of the opinions presented in the podcast may be controversial or inflammatory to some of our normal audience. The Future of Life Institute supports hearing a wide range of perspectives without taking a formal stance as an institution. If you’re interested to know more about FLI’s work in AI policy, you can head over to the policy page on our website at futureoflife.org/ai-policy, link in the description. 

Mohamed Abdalla is a PhD student in the Natural Language Processing Group in the Department of Computer Science at the University of Toronto and a Vanier scholar, advised by Professor Frank Rudzicz and Professor Graeme Hirst. He holds affiliations with the Vector Institute for Artificial Intelligence, the Centre for Ethics, and ICES, formerly known as the Institute for Clinical and Evaluative Sciences.

And with that, let’s get into our conversation with Mohamed Abdalla.

So we’re here today to discuss a recent paper of yours titled The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity. To start things off, I’m curious if you could paint in broad brush strokes how you view big tech as actively distorting the academic landscape to suit its needs, and how these efforts are modeled on and similar to what big tobacco has done. And if you could also expand on what you mean by big tech, I think that would be helpful for setting up the conversation.

Mohamed Abdalla: Yeah. So let’s define what big tech is. I think that’s the easiest of the things we’re going to tackle, although in itself it’s actually not a very easy label to pin down. It’s unclear what makes a company big and what makes a company tech. So for example, is Yahoo still a big company, or would it count as big tech? Is Disney a tech company? They clearly have a lot of technical capabilities, but I think most people would not consider them to be big tech. So what we did was we had a lot of conversations with researchers in our department and asked for a list of companies that they viewed as big tech. We ended up with a list of 14 companies, most of which we believe will be agreeable: Google, Facebook, Microsoft, Apple, Amazon, Nvidia, Intel, IBM, Huawei, Samsung, Uber, Alibaba, Element AI, and OpenAI. This is a very restrictive set of what we believe the big tech companies are. For example, a clear missing one here is Oracle. There are a lot of other big companies that are missing, but we’re not presenting this as a prescriptive or definitive list of what big tech companies are.

But adding more companies to this list would only strengthen the conclusions we’ve drawn in our paper, because it would show how much more influence these companies have. So by limiting it to a small group, we’re actually underestimating the amount of influence that they have. That’s what we define as big tech.

Then the question becomes, what do we mean when we say they have an outsized influence, and how do they go about influencing policy? We will get into specific examples, but I think the best way of demonstrating why there should be cause for concern is through a very simple analogy.

So imagine there was a health policy conference with tens of thousands of researchers, and among the topics they discussed was how to deal with the negative effects of increased tobacco usage, and its largest funding bodies were all big tobacco companies. Would this be socially acceptable? Would the field of health policy accept this? No. In fact, there are guidelines such as Article 5.3 of the World Health Organization’s Framework Convention on Tobacco Control, which states that if you are developing public health policies with respect to tobacco control, you are required to act to protect those policies from the commercial and other vested interests of the tobacco industry. So they are aware of the fact that industrial funding has a very large negative effect on the types of research, the types of conclusions, and how strong the conclusions are that can be drawn from research.

But flip that around. Instead of health policy, it’s machine learning policy or AI policy. Instead of big tobacco, it’s big tech. And instead of the negative effects of increased tobacco usage, it’s the ethical concerns of increased AI deployment. Would this be accepted? And this is not even a hypothetical, because all of the big machine learning conferences count these big tech companies among their top funding bodies. If you look at NeurIPS or at FAccT, the Fairness, Accountability, and Transparency conference, their platinum or gold sponsors, whatever their highest level is depending on the conference, are all of these companies. Even if one wants to say it’s okay because these companies are not the same as big tobacco, this should be justified. There is no justification for why we allow big tech to have such influence. I haven’t proven that this influence exists yet in my speaking so far, but there is precedent to believe that industrial funding warps research, and there has been no critical thought about whether big tech, as computer science’s industrial funding, warps research. I argue in the paper that it does.

Lucas Perry: All right. So with regards to understanding how industry involvement in the research and development of a field can have a lot of influence, you take big tobacco as a historical example, looking at its strategies and playbook to see if big tech might be doing the same things. Can you explain, in the broadest brush strokes, how big tobacco became involved in shaping industry research on the health effects of tobacco, and how big tech is using all or most of the same moves to shape the research landscape so that big tech has a public image of trust and accountability when that may not be the case?

Mohamed Abdalla: The history is that shortly after World War II, in the mid-1950s, there was a pronounced decrease in demand for their product. What they believed caused, or at least in part caused, this drop in demand was a Reader’s Digest article called Cancer by the Carton, which discussed the scientific links between smoking and lung cancer.

It was later revealed through litigation that big tobacco actually knew about these links, but they also knew that admitting it would result in increased legislation and decreased profits. So they didn’t want to publicly agree with the conclusions of the article, despite having internal research which showed that this indeed was the case.

After this article was published, it was read by a lot of people, and people were getting scared about the health effects of smoking. The tobacco companies employed a PR firm, and their first strategy was to publish a full-page ad in the New York Times that was seen by, I think, approximately 43 million people or so, which is a large percentage of the population. In it they would go on to state, and I quote, that they “accept an interest in people’s health as a basic responsibility, paramount to every other consideration in our business.” So despite having internal research that showed these links were conclusive, in their full-page ad not only did they state that they believed these links were not conclusive, they also lied and said that people’s health was paramount to every other consideration, including profit. So clear, blatant lies, but dressed up really nicely. And it reads really well.

Another action the PR firm instructed them to take was to fund academic research that would not only call into question the conclusiveness of these links, but also add noise, cause controversy, and slow down legislation. And I’ll go into the points specifically. But the idea to fund academia was actually the PR firm’s idea, and it was under the instruction of the PR firm that they funded academia. Despite their publicly stated goal of funding independent research because they wanted the truth and they cared about health, it was revealed, again after litigation, that internal documents showed their true purpose was to sow doubt about the research that showed conclusive links between smoking and cancer.

Lucas Perry: Okay. And why would an industry want to do this? Why would they want to lie about the health implications of their products?

Mohamed Abdalla: Well, because there’s a profit motive behind every company’s actions. And that is basically unchanged to this day. While they may say a lot of nice-sounding stuff, it’s just a simple fact that the strongest driving factor behind any company’s decisions is the profit motive. Especially if they’re publicly traded, they have a legal obligation to their shareholders to maximize profit. It’s not that they’re evil per se; they’re just working within the system that they’re in.

We see sort of the exact same thing with big tech. People can argue about when the decline of opinion regarding these big tech companies started, especially if you’re American-centric; I’m in Canada, you’re in the States. I think the Cambridge Analytica scandal with Facebook can be seen as a highlight, although Google has its own thing with Project Maven or Project Dragonfly.

And the Pew Research Center shows that the share of people who view big tech as having a net positive social impact on the world started decreasing around the mid-2010s. I’m only going to quote Facebook here, or Mark Zuckerberg specifically, in his testimony to Congress, where he stated, “It’s clear that we didn’t do enough and we didn’t focus enough on preventing abuse.” He stressed that he admitted fault and that they would double down on preventing abuse, and that the problem was simply that they didn’t think about how people could do harm. This statement did not mention the leaked internal emails, which showed that they were aware of companies breaking their policies: they explicitly knew that Cambridge Analytica was breaking their scraping policies and chose to do nothing about it.

Even recently, there have been leaks. Buzzfeed News leaked Sophie Zhang’s resignation letter, which basically stated that unless they thought they were going to catch flak in terms of PR, they would not act to moderate a lot of the negative events that were happening on their platforms.

So this is a clear profit-incentive thing, and there’s no reason to think that these companies are different. So then the question is, how much of their funding of AI ethics or AI ethicists is driven by a benevolent desire to see goodness in the world? And I’m sure there are people who work there who have this sort of desire. But these are the four reasons, identified in the case of big tobacco, for which a company might fund academia: to reinvent yourself as socially responsible, to influence the events and decisions made by funded universities, to influence the research questions of individual scientists, and to discover receptive academics.

So while we can assume that some people there may actually have the social good in mind and may want to improve society, we need to also consider that these are the reasons big tobacco funded academia, and we need to check: does big tech also fund academia? And are the effects of their funding academia the same as the effects of big tobacco funding academia? If so, we as academics, or the public, or government, or whoever, need to take steps to minimize the undue influence.

Lucas Perry: So I do want to spend the main body of this conversation on how big tobacco and big tech engage in these four points that you just illustrated. One, reinventing themselves in the public image to be seen as socially responsible, even when they may not actually be trying to be socially responsible; it’s about the image and the perception. Two, trying to influence the events and decisions made by funded universities. And three, influencing the research questions and plans of individual scientists. This helps funnel research into areas that will make you look benevolent or socially responsible, or funnel people away from topics that will lead to regulation.

And then the last one is to discover receptive academics who can be leveraged. If you’re in the oil industry and you can find a few scientists who have some degree of reputability and are willing to cast doubt on the science of climate change, then you’ve found some pretty good allies in your fight for your industry.

Mohamed Abdalla: Yep, exactly.

Lucas Perry: So before we jump into that, do you want to go through each of these points and say what big tobacco did in each of them and what big tech did in each of them? Or do you want to just start by saying everything that big tobacco did?

Mohamed Abdalla: Since there are four points and there’s a lot of evidence for each of them, I think it’s probably better to take them one at a time: for each point, here’s what big tobacco did, and then here’s what big tech did.

Lucas Perry: Let’s go ahead and start then with this first point. From the perspective of big tobacco and big tech, what have they done to reinvent themselves in the public image as socially responsible? And again, you can just briefly touch on why it’s their incentive to do that, and not actually be socially responsible.

Mohamed Abdalla: So the benefit of framing yourself as socially responsible, without having to actually take any actions to become socially responsible or to earn that label, is basically increased consumer confidence, increased consumer counts, and a decreased chance of legislation. If the general public, and thereby the average politician, believes that you are socially responsible and that you do care about the public good, you are less likely to be regulated as an industry. So that’s a big driving factor for trying to appear socially responsible, and we actually see this in both industries. I’ll cover it later, but a lot of the leaked material basically shows, spoiler, that a lot of the AI research being done, especially in AI ethics, is seen as a way to either delay or prevent the legislation of AI, because they’re afraid that it will eat into their profit, which is against their profit motive, which is why they do a lot of the stuff that they do.

So first I’ll go over what big tobacco did, and then we’ll draw parallels to what big tech did. To appear socially responsible, their PR firm, Hill+Knowlton Strategies, suggested they fund academics and create research centers. The biggest one they created was the CTR, the Council for Tobacco Research. And when they created the CTR, they created it in a very academically appealing way. What I mean by that is that the CTR was advised by distinguished scientists who served on its scientific advisory board. They went out of their way to recruit these scientists so that the research center gained academic respectability and was trusted not only by the layperson but by academics in general, as a respectable organization despite being funded by big tobacco.

And then what they would do is fund research questions. They’d act essentially as a pseudo granting body and provide grants to researchers who were working on specific questions decided by this council. At surface level, that seems okay. I’m not 100% sure how it works in the States, but at least in Canada we have research funding bodies, such as NSERC, the Natural Sciences and Engineering Research Council, and CIHR, the Canadian Institutes of Health Research, which decide who gets grants from the government’s research money. We have these for all the different fields. And in theory, the grants should be awarded based on the validity of the research, its potential impact, and other academically relevant considerations.

But what ended up being shown, again after litigation, was that at least in the case of big tobacco, there were more lawyers than scientists involved in the distribution of money. And the lawyers were aware of what would and would not be likely to hurt the bottom line of these companies. So, quoting previous work, their internal documents showed that they would simply refuse to fund any proposal that acknowledged that nicotine was addictive or that smoking was dangerous.

They basically went out of their way to fund research that was unrelated to tobacco use, so that they got good PR while minimizing the risk that said research would harm their profits. And during litigation, for example during a cigarette product liability trial, the lawyers presented a list of all the universities and medical schools supported by the Council for Tobacco Research as proof that they cared about social responsibility and about people’s wellbeing. They used this money as proof.

Basically, at first glance, all of their external-facing actions made it seem that they cared about people’s wellbeing. But it was later revealed through internal documents that this was not the case, and that this was a very calculated move to prevent legislation, beat litigation, and serve other self-serving goals in order to maximize profit.

In big tech, we see similar things happening. In 2016, the Partnership on AI to Benefit People and Society was established to “study and formulate best practices on AI technologies and to study AI and its influences on people and society.” Again, a seemingly very virtuous goal, and a lot of people signed up for this: non-profit organizations, academic bodies, and a lot of industry. But it was later leaked that despite sounding rosy, the reality on the ground was a little bit darker. Reports from those involved, in a piece published at The Intercept, demonstrated how neither prestigious academic institutions such as MIT nor civil liberty organizations like the ACLU had much power over the direction of the partnership. So they ended up serving a legitimating function for big tech’s goals. Basically, railroading other institutions while having their brand on your work helps you appear socially responsible. But if you don’t actually give them power, it’s only the appearance of social responsibility that you’re getting; you’re not actually being forced to be socially responsible.

There are other examples relating to litigation specifically. During his testimony to Congress, Mark Zuckerberg stated that in order to tackle these problems, they would work with independent academics, and these independent academics would be given oversight over their company. It’s unclear how an academic who is chosen by Facebook, theoretically compensated by Facebook, and could be fired by Facebook would be independent of Facebook after being chosen, receiving compensation, and knowing that they can lose that compensation if they do something to anger Facebook.

Another example, almost word for word from big tobacco’s showing off to jurors, is that Google boasts that it releases more than X research papers on topics in responsible AI in a year to demonstrate social responsibility. This is despite arm’s-length involvement with military-minded startups. To build on that: Alphabet/Google faced a lot of internal backlash over Project Maven, which was basically their work on image recognition algorithms for drones. They faced a lot of backlash, so publicly they appeared to have stopped and promised to stop working with the military. However, Gradient Ventures, which is basically the venture capital arm of Alphabet, still funds military startups and provides them with researchers and data. So despite their promise not to work with the military, and despite their research in responsible AI, they still work in areas that don’t necessarily fit the label of being socially responsible.

Lucas Perry: It seems there’s also this dynamic here where in both tobacco and in tech, it’s cheaper to pretend to be socially responsible than to actually be socially responsible. In the case of big tobacco, that would’ve actually meant dismantling the entire industry and maybe bringing e-cigarettes on 10 years before that actually happened. Yet in the case of big tech, it would seem to be more like hampering short term profit margins and putting a halt to recommender algorithms and systems that are already deployed that are having a dubious effect on American democracy and the wellbeing of the tens of millions of human brains that are getting fed garbage content by these algorithms.

So this first point seems pretty clear to me. I’m not sure if you have anything else that you’d like to add here. Public perception is just important. And if you can get policymakers and government to also think that you’re being socially responsible, they’ll hold off on regulating you in the ways that you don’t want to be regulated.

Mohamed Abdalla: Yeah, that’s exactly it. Yeah.

Lucas Perry: Are these moves that are made by big tobacco and big tech also reflected in other contentious industries like in the oil industry or other greenhouse gas emitting energy industries? Are they making generally the same type of moves?

Mohamed Abdalla: Yeah, 100%. This is simply industry’s reaction to any sort of possible legislation, whether it’s big tobacco and smoking legislation, big tech and some sort of legislation on AI, or oil companies and legislation on greenhouse gas emissions and clean energy, and so on and so forth. Even a lot of the food industry: I’m not sure what the proper term for it is, but a lot of nutritional science research is heavily corrupted by funding, whether from Kellogg’s, or the meat industry, or the dairy industry. So that’s what industry does. They have a profit motive, and this is a profitable action to take. So it’s everywhere.

Lucas Perry: Yeah. So I mean, when the truth isn’t in your favor, and your incentive is profit, then obfuscating the truth is your goal.

Mohamed Abdalla: Exactly.

Lucas Perry: All right. So moving on to the second point then, how is it that big tobacco and big tech, in your words, work to influence the events and decisions made by funded universities? And why does influencing the decisions made by funded universities even matter for large industries like big tobacco and big tech?

Mohamed Abdalla: So there are multiple reasons to influence events, and influencing events can mean a variety of actions. You could hold events, you could stop holding events, or you could change how events that are being held operate. By events here, at least in the academic sense, I mean conferences, and although they’re not always necessarily funded by universities, they are academic events. So why would you want to do this? Let’s talk about big tobacco first and show by example what they gained from doing this.

First, I’ll just go over some examples. At my home university, the University of Toronto, Imperial Tobacco, which is one of the companies that belongs to big tobacco, withheld its funding from U of T’s Faculty of Law conference as retribution for the fact that U of T law students were influential in having criminal charges laid against Shoppers Drug Mart for selling tobacco to a minor. As one of their spokespersons said, the students were biting the hand that feeds them. If university events such as this annual U of T law conference rely on funding from industry, then industry has an outsized say in what you as an institution will do, or what people working for you can and cannot do, because you’ll be scared of losing that consistent money.

Lucas Perry: I see. So you feed as many people as you can, knowing that if they ever bite you, the retraction of your money, of what you’re feeding them, is an incentive for them not to hurt you?

Mohamed Abdalla: Exactly. But it’s not even the retraction, it’s the threat of retraction. If in the back of your mind 50% of your funding comes from some industry, can you afford to live without 50% of your funding? Most people would say no, and that causes worry and will cause you to self-censor. And that’s not a good thing in academia.

Lucas Perry: And replacing that funding is not always easy?

Mohamed Abdalla: It’s very difficult.

Lucas Perry: So not only are you getting the public image of being socially responsible by investing in institutions or conferences which appear to be socially responsible or which contain socially responsible workshops and portions, but you also have, in the back of the mind of the board that organizes these conferences, the worry and the knowledge that, “We can’t take a position on this because we’d lose our funding. Or this is too spicy, so we know we can’t take a position on this.” So you’re getting both of these benefits: the constraining of what they may do in the name of what is ethically and socially responsible, which may be to go against your industry in some strong way, and the appearance of flat out being socially responsible while suppressing what would be socially responsible free discourse.

Mohamed Abdalla: 100%. And to build on that a little bit, since we brought up boards, the people who decide what happens: an easier way of influencing what happens is to actually plant or recruit friendly actors within academia. There is a history, at least at my home university again, where a former president and dean of law of U of T was the director of a big tobacco company, and someone on the board of Women’s College Hospital, which is a teaching hospital affiliated with my university, was the president and chief spokesperson for the Canadian Tobacco Manufacturers’ Council. Although there is no proof that they necessarily went out of their way to change the events held by the university, if a large percentage of your net worth is held in tobacco stocks, even if you’re a good human being, just because you’re a human being, you will have some sort of incentive not to hurt your own wellbeing. And that can influence the events that your university holds, the type of speakers that you invite, the types of stances that you allow your university to take.

Lucas Perry: So you talked about how big tobacco was doing this at your home university. How do you believe that big tech is engaging in this?

Mohamed Abdalla: The first thing that we’ll cover is the funding of large machine learning and AI conferences. They may say they fund these conferences for academic innovation or whatever other reason, and I believe a large portion of this is indeed about academic innovation. But you can see that the amount of funding they provide also helps give them a say, or at least puts them in the back of the organizers’ minds. NeurIPS, the biggest machine learning and AI conference, has always had at least two big tech sponsors at the highest tier of funding since 2015, and in recent years the number of big tech companies has exceeded five. This also carries over to workshops, where over the past five years only a single ethics-related workshop did not have at least one organizer belonging to big tech, and that was 2018’s Robust AI in Financial Services workshop, which instead featured the heads of AI branches at big banks, which is not necessarily better. It’s not to say that those working at these companies should not have any say. But to have no venue that doesn’t rely on big tech in some way, or isn’t influenced by big tech in some way, is worrying.

Lucas Perry: Or whose existence is tied up in the incentives of big tech. Because whatever big tech’s incentives are, that’s generating the profit which is funding you. So you’re protecting that whole system when you accept money from it. And then your incentives become aligned with the incentives of the company that is suppressing socially responsible work.

Mohamed Abdalla: Yeah. Fully agree. In the next section, where we talk about individual researchers, I’ll go more into this. There’s a very reasonable framing of this issue where big tech isn’t purposely doing this and academia is not purposely being influenced, but the influence is taking place anyway. But basically exactly as you said. Even if these companies are not making any explicit demands or requests of the conference organizers, it’s only human nature that these organizers would be worried or uncomfortable doing anything that would hurt their sponsors. And this type of self-censorship is worrying.

The example that I just gave was for NeurIPS, which is largely a technical conference, so Google does have an incentive to fund technical research, because a really good optimization algorithm will help their work and their products. But even when it comes to conferences that are not technical in their goal, for example the FAccT conference, the Fairness, Accountability, and Transparency conference, there has never been a year without big tech funding at the highest level: Google three out of three years, Microsoft two out of three years, and Facebook two out of three years.

FAccT has a statement regarding sponsorship and financial support where they say that you have to disclose this funding. But it’s unclear how disclosure alone helps combat direct and indirect industrial pressures. A reaction that I often get is basically that those who are involved are very careful to disclose the potential conflicts of interest. But that is not a critical understanding of how conflicts of interest actually work. Disclosing a conflict of interest is not a solution; it simply highlights the fact that a problem exists.

In the public health sphere, researchers push for resources to be devoted to the problems associated with sequestration, which is the elimination of relationships between commercial industry and professionals in all cases where it’s remotely feasible. That shows how this field realizes that simply disclosing is not actually debiasing yourself.

Lucas Perry: That’s right. So you said that succinctly, disclosing is not debiasing. Another way of saying that is it’s basically just saying, “Hey, my incentives are misaligned here.” Full-stop.

Mohamed Abdalla: Exactly.

Lucas Perry: And then, okay, everyone knows that now, and that’s better than not. But your incentives are still misaligned with the general good.

Mohamed Abdalla: Yeah, exactly. And it’s unclear why we think that AI ethicists are different from other kinds of ethicists in their incorruptibility. Or maybe it’s a view, unfounded in the research, that we would be able to post-hoc adjust for the biases of researchers; that’s simply not supported by the evidence. So yeah, one of the ways that they influence these events is simply by funding the events and funding the people organizing these events.

But there are also examples where some companies in big tech are knowingly manipulating events. And I’ll quote here from my paper: “As part of a campaign by Google executives to shift the antitrust conversation, Google sponsored and planned a conference to influence policy makers, going so far as to invite a token Google critic capable of giving some semblance of balance.” So it’s clear that these executives know what they’re doing, and they know that by influencing events, they will influence policy, which will influence legislation, and in turn litigation. It’s clear from the leaks that this is not simply needless worrying; this is an active goal of industry in order to maximize profit.

There is some work that has been done on big tobacco that has not been done on big tech. I don’t think it can be done on big tech, and I’ll speak about why. But basically, when it comes to influencing events, there is research showing that events sponsored by big tobacco, such as symposiums or workshops about secondhand smoking, are not only skewed but also of poorer quality compared to events not sponsored by big tobacco. Whether the mechanism is subconscious or conscious, the causation might not be perfectly clear, but the results are: if an event is funded by big tobacco, the research presented about the effects of smoking or secondhand smoking is more skewed and of poorer quality.

We can’t do this sort of research on big tech, because there isn’t an event that isn’t sponsored by big tech. And that should be worrying. If we know from other fields that sponsorship leads to lower quality work, why are we not trying to have them divest from funding events directly anyway?

Lucas Perry: All right. Yeah. So this very clearly links up with the first example you gave at the beginning of our conversation: imagine having a health care conference where all of the main sponsors are big cigarette companies. Wouldn’t we have a problem with that?

So these large industries, which are having detrimental effects on society and civilization, first have the incentive to portray a public image of social responsibility without actually being socially responsible. And then they also have an incentive to influence events and the decisions made by funded universities: one, to align the incentives of those universities and events with their own, because their funding depends on these industries; and two, to thereby constrain what can and may be said at these conferences in order to protect that funding. So the next one here is: how is it that big tobacco and big tech influence the research questions and plans of individual scientists?

Mohamed Abdalla: So in the case of big tobacco, we know, especially from leaked documents, that they actively sought to fund research that placed the blame for lung cancer on anything other than smoking. So there’s the classic example that owning a bird is more likely to increase your chance of getting lung cancer, and that’s the reason why you got lung cancer, not smoking.

Lucas Perry: Yeah, I thought that was hilarious. They were like, “Maybe it’s pets. I think it’s pets that are creating cancer.”

Mohamed Abdalla: Yeah, exactly. And when they choose to fund this research question, not only do they get the positive PR, but they also get the ability to say that the science is not conclusive. “Because look, here are these academics in your universities that think that it might not be smoking causing the cancer. So let’s hold off on litigation until this type of research is done.” There was also a steering of funds: instead of exploring the effects of tobacco on lung cancer, funded researchers would study just the basic science of cancer instead. And this would limit the amount of negative PR that they get. So that’s one reason for doing it. But number two, it allows them to sow doubt and say that there’s confusion, or that we haven’t arrived at some sort of consensus.

So that’s one of the ways that they did it: finding researchers who they termed critics or skeptics, and funding them and amplifying their voices. And they had specific pools of money set aside for such people, especially if they were smokers. They actively sought out people who were smokers because they felt they’d be more sympathetic to these companies. They would purposely steer funds towards these people, and that would change the research sphere.

There are also very egregious actions that they took. So for example, Professor Stanton Glantz. He’s at UCSF I think, the University of California, San Francisco. They would take out ads against him in the newspapers where they would print lies purporting to point out flaws in his studies. And these flaws aren’t really flaws. It’s just a twisting of the truth. It’s basically: if you go against us, we’re going to attack you. We’re going to make it very hard for you to get further funding. You’re going to have a lot of bad PR. It’s just sort of disincentivizing anyone else from doing critical research on them.

They would work with elected politicians as well to block funding for scientists with opposing viewpoints. So it’s not like they didn’t have their fingers in government as well. During litigation, an email was uncovered saying that the HHS appropriations continuing resolution, HHS being the U.S. Department of Health and Human Services, would include language to prohibit funding for Glantz, that same scientist, Stanton Glantz. So it’s clear that through intimidation, but also through acting as a funding body, they’re able to change what researchers work on.

Big tech works in essentially the same way. The first thing to note here is that when it comes to ethical AI or AI ethics, big tech in general has a very specific conception of what it means for an algorithm to be ethical, perhaps inspired by their insular culture, a very echo-y place where everyone sort of agrees with each other, where there’s an agreed-upon culture. There is previous work on “owning ethics” that discusses the main logics behind Silicon Valley’s definition of AI ethics, the core values of Silicon Valley: meritocracy, trust in the market, and I forgot the last one. And basically, their definition is simply different from the one that the rest of the world, or the rest of the country, generally has.

So Silicon Valley has a very specific view of what AI ethics is or should be, and that is not necessarily shared by everyone outside of Silicon Valley. That is not necessarily in itself a bad thing. But when they act as a pseudo-granting body, that is, when they provide grants or money to researchers, it becomes an issue. Because say, for example, you are a professor. As a professor, one of your biggest roles is simply to bring in money to do research. And what if your research question does not agree with the underlying logics of Silicon Valley’s funding bodies, whoever makes these decisions?

Lucas Perry: Like if you question the assumption about trust in the market?

Mohamed Abdalla: Yeah, exactly. Or you question meritocracy, and whether that’s how we should be basing our societal values. Even if the people granting the money are not lawyers like they were in big tobacco, even if they were research scientists, the fact that they’re at a high enough level to be choosing who gets money likely means that they’ve been there for a while. And there’s an increased chance that they personally believe or agree with the views that their companies hold. Not necessarily always the case, but the probability is a lot higher.

So if you believe in what your company believes in, and there is a researcher who is working from a totally different set of foundations, whose assumptions do not match your assumptions, then it’s only human nature that you’re less likely to believe in this research. You’re less likely to believe that it’s successful or on the true path. So you’re less likely to fund it. And that requires no malicious intent on your side. It’s just that you are part of an industry that has a specific view. And if a researcher does not share this view, you’re not going to give them that money.

And if you switch it over to the researcher side: if I want to get tenure... well, I’m not a professor, so I can’t even get tenure. But if I want to get hired, I have to show that I can bring in money. If I’m a professor and I want to get tenure, I have to show that I can bring in even more money. And if I see that these companies are giving away vast sums of money, it is in my best interest to ask a research question that will be funded by them. Because what good am I if I get no money and I don’t get hired? Or what good am I if I get no money and I don’t get tenure?

So what ends up happening is a cyclical thing: researchers see that these companies fund specific types of research, or researchers whose work is based on fundamental assumptions they may not necessarily agree with. And in order to maximize their opportunities to get this money, they will change their research question, whether it’s a complete change, a slight adjustment, or a change of assumptions to match what will get them the money. And the cycle will just keep happening until there’s no difference.

Lucas Perry: So there’s less opportunity for researchers and institutions that fundamentally disagree with some axiom that these industries hold about ethics and accountability, or whatever else?

Mohamed Abdalla: Exactly. 100%. And the important thing to note here is that for this to happen, no one needs to be acting maliciously. The people in big tech probably believe in what they’re pushing for. At least I like to make this assumption. And I think it makes for the easiest sell, especially for those within computer science departments, because there’s a lot of pushback to this type of thought. Even if the people deciding who gets the money are completely disinterested researchers who have very agreeable goals, and they love society, and they want the general good, the fact that they are in a position to be deciding who gets the money means that they’re likely higher up in these companies. You don’t get to be higher up and stay at these companies long enough unless you agree with the viewpoint.

Lucas Perry: And that viewpoint though, in a market has to be in some sense, aligned with the impersonal global corporate objective of maximizing the bottom line. There’s this values filtration process internally in a company where maybe you’ll have all the people who are against Project Maven, but none of them are high enough. Right?

Mohamed Abdalla: Exactly.

Lucas Perry: You need to sift those people out for the higher positions because the higher positions are the ones which have to be aligned with the bottom line, with maximizing profits for shareholders. Those people could authentically think that maximizing the profit of some big industrial company is a good thing, because they really trust in the market and in how the market serves people.

Mohamed Abdalla: I think there are people that actually believe this. So I know you say it kind of disbelievingly, but I think that people actually believe this.

Lucas Perry: Yeah, people really do believe this. I don’t actually think about this stuff a lot. But yeah, I mean, it makes sense to me: we buy all your stuff, so you’re serving me, I’m making a transaction with you. But given this sifting of values towards the top to align with profit maximization, those are the values that remain when deciding the funding of researchers at institutions. So no one has to be evil in the process. You just have to be following the impersonal incentives of a global capitalist industry.

Mohamed Abdalla: Yeah. I do not aim to shame anybody involved, from either side. Certain executives I shame and certain attorneys I shame. But I work under the assumption that all computer scientists, AI researchers, ethicists, whatever you want to call them, are well-intentioned. And the way the system is set up is that even well-intentioned researchers can have a negative impact on the research being done, and can have a limiting impact on the types of questions being considered.

And I hope by now you agree that, at least theoretically, by acting as a pseudo-granting body, there’s a chance for this influence to occur. But then in my work, what I did was actually count how many people were looking to big tech as a pseudo-granting body. So I looked at the CVs of all computer science faculty at four schools: University of Toronto, Massachusetts Institute of Technology, Stanford, and Berkeley. Two private schools, two public schools. Two east coast universities, two west coast. And for each CV that I could find, I looked to answer a certain number of questions: whether or not a specific faculty member works on AI; whether or not they work on the ethics of AI, which I very loosely defined as having at least one paper about any sort of societal impact of AI; whether or not they have ever received faculty funding from big tech, that is, grants or awards from companies; whether they have received graduate funding from big tech, that is, whether any portion of their graduate education was funded by big tech; and whether or not they are or were employed by big tech. So, whether at any time they have had any sort of previous or current financial relationship with big tech.

What the research shows is that 52% of all computer science faculty, so at least half, view big tech as a funding body. That means that, as professors, they have received a grant or an award from big tech to do research. And universities are technically here not to maximize profit for these companies, but to do science and, in theory, public-good kinds of things. Yet at least half of the researchers are looking to these companies as granting bodies.

If you narrow that down to computer science faculty that work in AI, that percentage goes up to 58%. If you limit it to computer science faculty who work in the ethics of AI or who are AI ethicists, it remains at 58%. Which means that 58% of the people looking to answer the really hard questions about AI and society, whether it’s short-term or long-term, view these companies as a funding body. Which in turn, as we discussed, opens them up to influence whether it’s subconscious or conscious.

Lucas Perry: So then if you’re Mohamed Abdalla and you come out with a paper like you came out with, is the thought here that it’s very much less likely that in the future you will receive grants from big tech?

Mohamed Abdalla: So it’s unclear. There’s a meta game to play here as well. A classic example here is Michael Moore. The filmmaker, political activist. I’m not sure the title you want to give him. But a lot of his films are funded by Fox or some subsidiary of Fox.

Lucas Perry: Yeah. But they’re all leftist views.

Mohamed Abdalla: Exactly. So as in the example that I gave previously, where Google would invite a token critic to their conferences to give them some semblance of balance, simply disagreeing with them will not disqualify you from their funding. It’s just that they will likely limit the number of people who are publicly disagreeing with them by choosing whom to fund. Again, it seems too self-serving to say, “I’m a martyr. I’ve sacrificed myself.” I don’t view that as the case, although I did get some feedback saying, “Maybe you shouldn’t push this now until you get a job,” kind of thing. But what I’m pushing for is that it shouldn’t be individual researchers deciding who they get money from. This is a higher level issue.

If you go into a pure hypothetical, and for the listeners, this is not what I actually believe: let us consider big tech to be evil, right? And publicly minded researchers who refuse to take money from big tech as good. Again, good here is not being used in the prescriptive sense, but just in our hypothetical. If all of the good researchers refuse to take money from these evil corporations, then what you’re going to end up with is that these researchers will not get jobs, will not get promoted. Their viewpoints will die out. But also, the people who are not good will have no problem taking this money. And they will be less likely to challenge these evil corporations. So from a game-theoretic perspective, if you go from a pure utility perspective, it makes sense for you as a good researcher to take this bad money.

So that’s why I state in the paper that whatever our fix to this is, it can’t be done at the individual researcher level. You have to assume that all researchers are good, and you have to come up with a system-level solution. Whether that’s legislation from governments, whether that’s a funding body solution, or a collection of institutions that come up with an institutional policy that applies to all of these top schools or all computer science departments all over the world. Whoever we can get to agree together. So that’s basically what I’m pushing for. But there are also ways that you can influence research questions without directly funding researchers. And the way that you do this is by repeated exposure to your ideas of ethics, or your ideas of what is fair and what is not fair, however you want to phrase it.

I got a lot of puzzled looks when I told people that I also looked at whether or not a professor was funded during graduate school by these companies. And there is some rightful questioning there. Because am I saying, or am I assuming, that the fact that they got a scholarship from, let’s say, Microsoft during their PhD is going to impact their research questions 20 years down the line when they’re a professor? I do not think that’s actually how this works. But the reason for asking this was to show how much exposure these faculty members have had to big tech’s values, or Silicon Valley values, however you want to say it.

Even if these companies are not actively going out of their way to give money to researchers to affect their research questions, if every single person who becomes a faculty member at these prestigious schools has at one point done some term in big tech in Silicon Valley, whether that’s a four-month internship, a one-year stint, or a multi-year stint, it’s only human to worry that repeated exposure to such views will impact whatever views you end up developing yourself. Especially if you’re not going into that environment trying to critically examine their views, you’re just likely to adopt them internally, subconsciously, before you have to think about it.

And what we show here is that 84% of all computer science faculty have had some sort of financial connection with big tech, whether that’s receiving funding as a graduate student or as a faculty member, or having previously been employed there.

Lucas Perry: We know what the incentives of these industries are all about. So why would they even be interested in funding the graduate work of someone, if it wasn’t going to groom them in some sense? Are there tax reasons?

Mohamed Abdalla: I’m not 100% sure how the tax side works in the United States. That sort of incentive exists in Canada, but I’m not sure if it does in the U.S.

Lucas Perry: Okay.

Mohamed Abdalla: So there are multiple reasons for doing this. There are, of course, as usual, the PR aspects of it: “We are helping students pay off their student loans,” in the States, I guess. There’s also the fact that if you fund someone’s graduate studies, you’re building connections, possibly making them easier to hire.

Lucas Perry: Oh yeah. You win the talent war.

Mohamed Abdalla: Yeah, exactly. If you win a Microsoft Fellowship, I think you also get an internship at Microsoft, which makes you more likely to work for Microsoft. So it’s also a semi-hiring thing. There are a lot of reasons for them to do this, and I can’t say that influence is the only reason. But the fact is that if you limit it to CS faculty who work in AI ethics, 97% of them have had some sort of financial connection to big tech. 97% of them have had exposure to Silicon Valley’s dominant views of ethics. What percentage of that 97% is going to subconsciously accept these views, or adopt these views, because they haven’t been presented with another view, or haven’t been presented with the opportunity to consider a critical view that disagrees with their fundamental assumptions? It’s not to say that it’s impossible. It’s just to ask: should they be having such a large influence?

Lucas Perry: So we’ve spent a lot of time here then on this third point, on influencing the research questions and plans of individual scientists. It seems largely, again, that by giving money, you can help align their incentives with your own. You can help direct research towards the kinds of questions you care about. You can help give the impression of social responsibility when actually you’re constraining and funneling research interest and research activity into places which are beneficial to you. You’re also, I think you’re arguing here, exposing researchers to your values and your community.

Mohamed Abdalla: Yeah. Not everyone’s view of ethics is fully formed when they get a lot of exposure to big tech. And this is worrying, because if you’re a blank slate, you’re much more easily drawn upon. So they’re more likely to impart their views on you. And if 97% of the people are exposed, it’s only safe to assume that some percentage will absorb those views. And that will artificially inflate the number of people that agree with big tech’s viewpoints, and therefore further push academia, or the academic conversation, into alignment with something they find favorable.

Lucas Perry: All right. So the last point here then is how big tobacco and big tech discover receptive academics who can be leveraged. So this is the point about finding someone who may be a skeptic or critic of some widely held scientific view in a community, and then funding and propping them up so that they introduce some level of fake, constructed, artificial doubt and skepticism and obfuscation of the issue. So would you like to unpack how big tobacco and big tech have done this?

Mohamed Abdalla: When it comes to big tobacco, we did cover this a tiny bit before. For example, when we talked about how they would fund research that questions whether it’s actually keeping birds as pets that causes lung cancer. And so long as this research is still being funded and has not yet been published, they can, strictly speaking, say that the science is not yet conclusive: there is research being done, and there are other possible causes being studied.

This is despite having internal research showing that it is not true. If you go from a pure logic standpoint, where “conclusive” is defined as there existing no oppositional research, they’ve satisfied the conditions such that it is not conclusive, and there is fake doubt.

Lucas Perry: Yeah. You’re making logically accurate statements, but they’re epistemically dishonest.

Mohamed Abdalla: Yeah. And that’s basically what they do when they leverage these academics to sow doubt. But they knew, especially in Europe a little after this, that there was a lot of concern regarding the funding of academics by big tobacco. So they would purposefully search for European scientists who had no previous connection to them, whom they could leverage to testify. And this was part of a larger project that they called the White Coat Project, which resulted in infiltrations of governing bodies, heads of academia, and editorial boards to help with litigation and legislation. And that’s actually why I named my paper The Grey Hoodie Project. It’s an homage to the White Coat Project. But since computer scientists don’t actually wear white coats, we’re more likely to wear gray hoodies. That’s where the name of the paper comes from. So that’s how big tobacco did it.

When it comes to big tech, we have clear evidence that they have done the same, although it’s not clear the scope at which they have done this, because there haven’t been enough leaks yet. This is not something that’s usually publicly facing. But Eric Schmidt, previously the CEO of Google, was, and I quote from an Intercept article, “advised on which academic AI ethicists his private foundation should fund.” I think Eric Schmidt has very particular views regarding the place of big tech and its impact on society that would likely not be agreed with by the majority of AI ethicists. However, if they find an ethicist that they agree with, and they amplify him and give him hundreds of thousands of dollars a year, he is basically pushing his viewpoint on the rest of the community by way of funding.

In another example, Eric Schmidt was again advised to fund a certain professor, and this professor later served as an expert consultant to the Pentagon’s innovation board. And Eric Schmidt now has a military advisory role in the U.S. government. That’s a clear example of how those from big tech are looking to leverage receptive academics. We don’t have a lot of examples from other companies. But given that it is happening and this one got leaked, do we have to wait until other ones get leaked to worry about this?

There’s an interesting example that I personally view as quite weak. I don’t like this example, but the irony will show in a little while. There is a professor at George Mason University who had written academic research that was funded indirectly by Google, and his research criticized the antitrust scrutiny of Google shortly before he joined the FTC, the Federal Trade Commission. And after he joined the FTC, they dropped their antitrust suit. They’ve picked it up again now. But this raises the question of whether Google funded him because of his criticism of antitrust scrutiny of Google. That is a possible reason they chose to fund him. There’s another unstated question in this example: did he choose to criticize antitrust scrutiny of Google because they fund him? So which direction does this flow? It’s possible that it flows in neither direction. But when he joined the FTC, did they drop their case because they had essentially hired a compromised academic?

I do not believe this, and I have no proof of any of it. But Google’s response to this insinuation was that the exposé was pushed by the Campaign for Accountability, and Google said that this evidence should not be acceptable because this nonprofit, the Campaign for Accountability, is largely funded by Oracle, which is another tech company.

So if you abstract this away, what Google is saying is that for claims made regarding societal impacts, or legislation, or anything to do with AI ethics, if the researcher is funded by a big tech company, we should be worried about what they’re saying, or we should be very skeptical about what they’re saying. Because they’re essentially saying that you should not trust this because it’s funded by Oracle, it’s largely backed by Oracle. You know, you abstract that away: it’s largely backed by big tech. Does that not apply to everything that Google does, or everything that big tech in general does? So it is clear that they themselves know that industry money has a corrupting influence on the type of research being done. And that just supports my entire piece.

Lucas Perry: Yeah. I mean, in some sense, none of this is mysterious. They couldn’t not be doing this. We know what industry wants and does, and they’re full of smart people. So, I mean, if someone from industry were participating in or listening to this conversation, they would be like, “You’ve woken up to the obvious. Good job.” And that’s not to downplay the insight of your work, though. It also makes me think of lobbying.

Mohamed Abdalla: 100%.

Lucas Perry: We could figure out all of the machinations of lobbying and it would be like, “Well yeah, they couldn’t not be doing this, given their incentives.”

Mohamed Abdalla: So I fully agree. If you come into this knowing all of the incentives, what they’re doing is the logical move. I fully agree that this is obvious, right?

Lucas Perry: I don’t think it’s obvious. I think it naturally follows from first principles, but I feel like I learned a lot from your paper. Not everyone knows this. I would say not even many people know this.

Mohamed Abdalla: I guess obvious wasn’t the correct word. But I was going to say that the points that I raise show that there’s a clear concern here. And I think that once people hear the points, they’re more likely to believe this. But there are people in academia who push back. A common criticism I get is that people know who pays them. So they say that it’s unfair to assume that someone funded by a company cannot be critical of that company or big tech in general, and that several researchers who work at these companies are critical of their employer’s technology. So the point of my work is to lay this out flat, to show that it doesn’t matter if people know who pays them; the academic literature shows that this has a negative effect, and therefore disclosure isn’t enough. I don’t want to name the person who said this criticism, but they’re pretty high up. The idea that a conflict of interest is okay simply because it’s disclosed seems to be a uniquely computer science phenomenon.

Lucas Perry: Yeah. It’s a weird claim to be able to say, “I’m so smart and powerful, and I have a PhD, so if you give me dirty money, or money that carries with it certain incentives, I’m just free of that.”

Mohamed Abdalla: Yeah. Or it’s the incorrectly perceived ability to self-correct for these biases. That’s the current that I’m trying to fight against, because the mainstream current in academia is sort of like, “Yeah, but we know who pays us, so we’re going to adjust for it.” And although I think the conclusions I draw are intuitive, with big tobacco everyone has an intuitively negative gut feeling, so it’s very easy for them to agree. It’s a little bit more difficult to convince them that even if you believe that big tech is a force for good, you should still be worried.

Lucas Perry: I also think that the phrase here that is better than obvious is self-evident once it’s been explained. It’s not obvious, because if it were obvious, then you wouldn’t have needed to write this paper, and I already would’ve known about this, and everyone would have. So if you were just to wrap up and summarize in a few bullet points here this last point on discovering receptive academics and leveraging them, how would you do that?

Mohamed Abdalla: I kind of summarize this for policymakers. When policymakers try to make policy, they tend to converse with three main parties: they converse with industry, they converse with academics, and they converse with the public. And they believe that getting this wide range of viewpoints will help them arrive at the best compromise to help society move in the way that it should. However, given the very deliberate way that big tech is trying to leverage academics, a policymaker will talk to industry, then to the very specific researchers who are handpicked by industry and therefore basically in agreement with industry, and then to the public. So two thirds of the voices they hear are industry-aligned voices, as opposed to the previous one third. And that’s something that I cover in the paper.

And that’s the reason why you want to leverage receptive academics: because it shapes the majority of whatever a policymaker hears, and they’re really busy people who don’t have the time to do the research themselves. If two out of every three people are pushing policy or views that are in alignment with whatever’s good for big tech’s profit motive, then the policymaker is more likely to believe that viewpoint. As opposed to having an independent academia, where if the right decision is to agree with big tech, you assume they would, and if the right decision is to disagree, you assume they would. But if industry leverages the academics, this is less likely to happen. Therefore, academia is not playing its proper role when it comes to policy-making.

Lucas Perry: All right. So I think this pretty clearly lays out then how industries in general, whether it be big tobacco, big tech, oil companies, greenhouse gas emitting energy companies, you even brought up the food industry, I mean, just anyone really who has the bottom line as their incentive. These strategies are just naturally born of the impersonal incentive structure of a corporation or industry.

This next question is maybe a bit more optimistic. All these organizations are made up of people, and these people are all I guess more or less good or more or less altruistic. And you expect that if we don’t go extinct, these industries always get caught, right? Big tobacco got caught, oil industries are in the midst of getting caught. And next we have big tech. And I mean, the dynamics are also a little bit different because cigarettes and oil can be booted. But we’re kind of married to the technology of big tech forever. Literally.

Mohamed Abdalla: I would agree with that.

Lucas Perry: Yeah. So the strategy for those two seems to be to obfuscate the issue for as long as possible so your industry exists as long as possible, and then you will die. There is no socially responsible version of your industry. That’s not going to happen with big tech. I mean, technology is here to stay. So does big tech have any actual incentives for genuine social responsibility, or are they just playing the optimal game from their end, where you obfuscate for as long as possible, and you bias all of the events and the researchers as much as possible? Eventually, there’ll be enough podcasts like this and minds changed that they can’t do that any longer without incurring a large social cost in opinion, and perhaps market. So is it always simply the case that promoting the facade of being socially responsible is cheaper and better than actually becoming socially responsible?

Mohamed Abdalla: So there’s a thing that I have to say, because the people that I worked with who still work in health policy regarding tobacco would be hurt if I didn’t say it. Big tobacco is still heavily investing in academia, and they’re still heavily pushing research and certain viewpoints. And although the general perception has shifted regarding big tobacco, they’re not done yet. So although I do agree with your conclusion that it is a matter of time until they’re done, to think that the fight is already over is simply not true. There are still a lot of health policy folks who are pushing as hard as they can to completely get rid of them. Even within the United States and Europe, they create new institutions that do other research. They’ve become maybe a little bit more subtle about it. But declaring victory, I think, correctly describes what will happen, but it has not yet happened. So there’s still work to be done.

Regarding whether or not big tech has an actual incentive to do good, I like to assume the best of people. I assume that Mark Zuckerberg actually founded Facebook because he actually cared about connecting people. I believe that in his heart of hearts, he does have at least generally speaking, a positive goal for society. He doesn’t want to necessarily do bad or be wrecking democracies across the world. So I don’t think that’s his goal, right?

So I think that starting from that viewpoint is helpful because, one, it will make you heard. But it also shows how this is a largely systemic issue. Because despite his well-intentioned goals, which we’re assuming exist, and I actually do believe that at some level it’s true, the incentives of the system in which he plays add a caveat to everything he says that we aren’t putting there.

So for example, when Facebook says they care about social responsibility, or that they will take steps to minimize the amount of fake news, whatever that means, all of the statements made by any company in any industry, because of the fact that we’re in a capitalist system, carry the implicit condition “provided it does not hamper profits,” right? So when Facebook wants to deal with fake news, they will turn to automated AI algorithms. And they say we’re doing this because it’s impossible to moderate the number of stories that we get.

From a strictly numeric perspective, this is true. But what they’re not saying is: it is not possible for us to use humans to moderate all of these stories while staying profitable. So that is to say, the starting point of their action may be positive, but the fact that it has to be warped to fit the profit motive ends up largely negating, if not completely negating, the effects of the actions they take.

So for example, you can take Facebook’s content moderation on the continent of Africa. They used to have none, and until recently they had only one content moderation center for the entire continent. Given the number of languages spoken on that continent alone, how many people do you have to hire for that one continent’s moderation? How many people per language are you hiring? Sophie Zhang’s resignation letter basically showed that they were aware of all of these issues and had employees, especially at the lower levels, who were passionate about the social good. So it’s clear that they are trying to do a social good. But the fact that everything is conditioned on whether or not it will result in money hurts the end result of their actions. So I believe, and I agree with you, that this industry is different, and I do believe that they have an incentive for the social good. But unless this incentive is forced upon everyone else, they are hurting themselves if they refuse to take profit that they could take, if that makes sense.

But if you choose not to do something profitable because avoiding it is the socially good choice, some other company is going to do that thing. And they will take the profits and eat into your market share, until you can find a way to account for it in the stock price.

Lucas Perry: People value you more now that you are being good.

Mohamed Abdalla: Yeah. But I don’t think we’re at a stage where that’s possible, or where it’s even well-defined what that means. So I agree that even if this research is well-intentioned, the road to hell is paved with good intentions.

Lucas Perry: Yeah. Good intentions lead to bad incentives.

Mohamed Abdalla: Or the good intentions are required to be forced through the lens of the bad incentive. They have to be aligned with the bad incentive to actually manifest. Otherwise, they will always get blocked.

Lucas Perry: Yeah. By that you mean the things which are good for society must be aligned with the bad incentives of maximizing profit share, or they will not manifest.

Mohamed Abdalla: Exactly. And that’s the issue when it comes to funding academia. Because it is possible to change society’s viewpoint on, one, what is possible, and two, what is preferable, to match the profit incentives of these companies. So you could shape what ethical AI is, what it covers, what sort of legislation is feasible, what sort of legislation is desirable, in what contexts it applies and does not apply, in what jurisdiction, and so on and so forth. These are all still open questions. And it is in the interest of these companies to help mold these answers such that they have to change as little as possible.

Lucas Perry: So when we have benefits that are not accruing from industry, or where we have negative externalities or negative effects from the incentives of industry leading to detrimental outcomes for society, the thing that we have for remedying that is regulation. And I would guess that libertarian attitudes are more common at big tech companies than in the general population. Which in this sense I would summarize as socially liberal or left leaning, but against regulation, so valuing the free market. So there’s this value resistance. We talked about how the people at the top are going to be sifted through. You’re not going to have people at the top of big tech companies who really love regulation, or think that regulation is really good for making a beautiful world, because regulation is always hampering the bottom line.

Yet it’s the tool that we have for trying to mitigate negative externalities and negative outcomes from industry maximizing their bottom line. So what do you suggest that we do? Is it just that we need good regulation? We need to find some meaningful regulatory system and effective policy? Because otherwise nothing will happen. They’ll just keep following their incentives, and they have so much power, and they’ll just keep doing the same thing. And the only way to break that is regulation.

Mohamed Abdalla: So I agree. The solution is basically regulation. The question is, how do we go about getting there? Or what specific rules do we want to use or laws do we want to create? And I don’t actually answer any of this in my work. I answer a question that comes before the legislation or the regulation. Which is basically, I propose that AI ethics should be a different department from computer science. So that in the same way that bioethics is no longer in the same department as biology or medicine, AI ethics should be its own separate department. And in that way, anyone working in this department is not allowed to have any sort of relationship with these companies.

Lucas Perry: You call that sequestration.

Mohamed Abdalla: It’s not my own term. But yeah, that’s what it’s called.

Lucas Perry: Yeah. Okay. So this is where you’re just removing all of the incentives. Whether you’re declaring conflict of interest or not, you’re just removing the conflict of interest.

Mohamed Abdalla: Yes. Putting myself on the spot here, it’s very difficult to assume that I myself have not been corrupted by repeated exposure. As much as I try to view myself as a critical thinker, the research shows repeated exposure will influence what you think and what you believe. I’ve interned at Google for example, and they have a very large amount of internal propaganda pointed at their employees.

So I can’t barge in here saying that I am a clean slate, or, “I’m a clean person. You should listen to my policies.” But I think that academia should try to create an environment where it is possible, or dare I say encouraged, to be a clean person, where clean means having no financial involvement with these companies.

That said, there are a lot of steps that can be taken when it comes to regulation. Slightly unrelated, but kind of not unrelated, is fixing the tax code in the U.S. and Canada and around the world. A large reason why a lot of computer science faculty, and computer scientists in general, look to industry for funding is because governments have been cutting, or at least not increasing in line with the rate of research being done in these fields, the amount of money available for research funding. And why do they not have as much money? This is probably in part because these companies are not paying their fair share when it comes to taxes, which is how a lot of researchers get their funding. That’s one way of doing it. If you want to go into specifics, it’s more difficult, and specific policies are much harder to sell. I don’t think regulation of specific technologies would be effective, because the technologies change very fast.

I think creating a governmental body whose role is to sue these companies when they do things that violate our social norms is probably the way to go about it. But I don’t know. It’s hard for me to say. It’s a difficult question that I don’t have an answer for. We don’t even know who to ask for legislation, because every computer scientist is sort of corrupted. And then it’s like, okay, do we not use computer scientists at all? Do we rely only on economists and moral philosophers to write this sort of legislation? I don’t know.

Lucas Perry: So I want to talk a little bit about transformative AI, and the role that all of this plays in that transition. There is a meme, and I think it needs to be combated: the race between China and America on AI, with the end goal being AI systems that are increasingly powerful.

So there’s some sense that any kind of regulation used to try to fix any of these negative externalities from these incentives is just shooting ourselves in the foot, while the evil other is racing to beat us.

Mohamed Abdalla: That’s the Eric Schmidt argument.

Lucas Perry: So we can’t be implementing these kinds of regulations in the face of the geopolitical and international problem of racing to ever more powerful AI systems. So you already said this is the Eric Schmidt argument. What is your reaction to this kind of argument?

Mohamed Abdalla: There are multiple possible reactions, and I don’t like to state which one I believe in personally, but I’d like to walk through them. First off, let us assume that the U.S. and China are racing for an artificial general intelligence, AGI. Would you not then increase government funding and nationalize this research such that it belongs to the government and not to a multinational corporation? In the same way that if, for example, Google, Facebook, Microsoft, Alibaba, and Huawei were in a race to develop nukes, would you say, leave these companies alone so they can develop nuclear weapons, and once they develop a nuke, we’ll be able to take it? Or would you not nationalize these companies? Or not nationalize them, but basically require that they work only for the U.S., that they cannot have any interests in any other country. That is a form of legislation or regulation.

Governments would have to have a much bigger say in the type of research being done, who’s doing it, what can be done. For example, in the aerospace industry, you can’t employ non-U.S. citizens. Is this what you’re pushing for in artificial intelligence research? Because if not, then you’re conceding that it’s not likely to happen. But if you do believe that this is likely to happen, then you would be pushing for some sort of regulation. You could argue about what the regulation should be, but I don’t buy the viewpoint that we should leave these companies alone to compete with the Chinese companies because they’re going to create this thing that we need to beat the Chinese to. If you believe that this is going to happen, you’d still be in support of regulation. It’d just be different regulation.

Lucas Perry: I mean, obviously I can’t speak for Eric Schmidt. But the kind of regulation that stops the Chinese from stealing the AGI secrets is good regulation, and anything else that slows the power of our technology is bad regulation.

Mohamed Abdalla: Yes. But consider, for example, when Donald Trump banned H-1B visas. Or not banned, he put a limit or a pause, I’m not sure of the exact thing that happened.

Lucas Perry: Yes. He’s made it harder for international students to be here and to do work here.

Mohamed Abdalla: Yes, exactly. That is the type of regulation that you would have if you believed AI was a threat, like we are racing the Chinese. If you believed that, you would be for that sort of regulation, because you don’t want these companies training foreign nationals in the development of this technology. Yet this is not what these companies are going for. They are not agreeing with the legislation or the regulation that limits the number of foreign workers they can bring in.

Lucas Perry: Yeah. Because they just want all the talent.

Mohamed Abdalla: Exactly. But if they believed that this was a matter of national security, would they not support this? You can’t make the national security argument, saying, “Don’t regulate us, because we need to develop as much as we can, as fast as we can,” while also pushing against the regulation that would follow if this were truly dangerous. If we truly needed to leave you unregulated internally, we should limit who can work for you, in the same way that we do for rocketry. Who can work on rockets, who can work at NASA? They have to be U.S. citizens.

Lucas Perry: Why is that contradictory?

Mohamed Abdalla: Because they’re saying, “Don’t regulate us in terms of what we can work on,” but they’re also saying, “Do not regulate us in terms of who can work for us.” If what you’re working on is a matter of national security and you care about national security, then by definition you want to limit who can work on it. If you welcome anyone, or say there should be no limit on who can work for you, then you are basically admitting that this is not a matter of national security, or that it’s profits over everything else. Google, Facebook, Microsoft: when possible legislation comes up, the Eric Schmidt argument gets played. And it’s like, “If you legislate us, if you regulate us, you are slowing down our progress towards this technology.”

But if any sort of regulation of tech development will slow down the arrival of AGI, which we assume the Department of Defense cares about, then these companies are essentially striving towards something for which they should be protected from infiltration by foreign workers. So this is where the companies hold two opposing viewpoints, depending on who they’re talking to: don’t regulate us, because we’re working towards AGI and you don’t want to stop us; but at the same time, don’t regulate immigration, because we need these workers. But if what you’re working on is that sensitive, then you shouldn’t even be able to take these workers.

Lucas Perry: Because it would be a national security risk.

Mohamed Abdalla: Exactly. Especially when a lot of your researchers come from another country and are likely to go back to that country, or at least have friends and conversations in other countries.

Lucas Perry: Or just be an agent.

Mohamed Abdalla: Yeah, exactly. So if this is actually your worry that this regulation will slow down the development of AGI, how can you at the same time be trying to hire foreign nationals?

Lucas Perry: All right. So let’s do some really rapid fire here.

Mohamed Abdalla: Okay.

Lucas Perry: Is there anything else that you wanted to add to this argument about incentives and companies actually just being good? And we are walking through this Eric Schmidt argument.

Mohamed Abdalla: Yeah. So the thing I want to highlight is that this is a system-level problem. It’s not a problem with any specific company, despite some being in the news more than others. It’s also not a problem with any specific researchers or institutions. This is a systemic issue. And since it’s a high-level problem, the solution needs to be at a high level as well, whether that’s at the institutional level or the national level, some sort of legislation. It’s not something that researchers individually can solve.

Lucas Perry: Okay. So let’s just blast through the action items here then for solving this problem. You argue that everyone should post their funding information online, including historical information. This increases transparency on conflicts of interest. But as we discussed earlier, the conflicts actually just need to be removed. You also argue that universities should publish documents highlighting their position on big tech funding for researchers.

Mohamed Abdalla: Yeah. Basically I want them to critically consider the risks associated with accepting such funding. I don’t think that it’s a consideration that most people are taking seriously. And if they are forced to publicly establish a position, they’ll have to defend it. And that will I believe lead to better results.

Lucas Perry: Okay. And then you argue that more discussion on the future of AI ethics and the role of industry in this space is needed. Can’t argue with that. That computer science should explore how to actively court antagonistic thinkers.

Mohamed Abdalla: Yeah. I think there’s a lot of stuff that people don’t say because it’s either not in the zeitgeist, or it’s weird, or it seems like an attack on a lot of researchers.

Lucas Perry: Stigmatized.

Mohamed Abdalla: Yeah, exactly. So instead of trying to find people who simply list on their CV that they care about AI ethics or AI fairness, you should find people who are willing to disagree with you. If they’re able to make points worth disagreeing with, it doesn’t matter if you don’t agree with their viewpoint.

Lucas Perry: Yeah. I mean, usually the people that are saying the most disruptive things are the most avant garde and are sometimes bringing in the revolution that we need. You also encourage academia to consider splintering AI ethics into a different department from computer science. This would be analogous to how bioethics is separated from medicine and biology. We talked about this already as sequestration. Are there any other ways that you think the field of bioethics can help inform the development of AI ethics on academic integrity?

Mohamed Abdalla: If I’m being honest, I’m not an expert on bioethics or the history of the field of bioethics. I only know it in relation to how it has dealt with the tobacco industry. But I think largely, more historical knowledge needs to be used by people deciding what we as computer scientists do. There’s a lot of lessons learned by other disciplines that we’re not using. And they’ve basically been in a mirror situation. So we should be using this knowledge. So I don’t have an answer, but I think that there’s more to learn.

Lucas Perry: Have you received any criticism from academics in response to your research, following the publication that you want to discuss or address?

Mohamed Abdalla: For this specific publication, no. But it may be because of the COVID pandemic. I have raised these points previously, and I have received some pushback, but not for this specific piece. Although this piece was covered in WIRED, and there are some criticisms of the piece in the WIRED article, I’ve kind of addressed them in this conversation.

Lucas Perry: All right. So as we wrap up here, do you have anything else that you’d like to just wrap up on? Any final thoughts for listeners?

Mohamed Abdalla: I just want to stress, if they’ve made it this far without hating me, that this work is not meant to call into question the integrity of researchers, whether they’re in academia or in industry. And I think these are critical conversations to be had now. It may be too late for the initial round of AI legislation, but for the future it will do good. And for longer-term problems, I think it’s even more important.

Lucas Perry: Yeah, there’s some meme I think going around, that one of the major problems in the world is good people who are running the software of bad ideas on their brains. And I think similar to that is all of the good people who are caught up in bad incentives. So this is just sort of echoing your non-critical, non-judgmental stance: the universality of the human condition is that we all get caught up in these systemic negative incentive structures that lead to behavior that is harmful for the whole.

So thank you so much for coming on. I really learned a lot in this conversation. I really appreciate that you wrote this article. I think it’s important, and I’m glad that we have this thinking early, and that we can hopefully do something to make the transformation of big tech into something more positive happen more rapidly than it has historically with other industries. So if people want to follow you, or look into more of your work, or get in contact with you, where are the best places to do that?

Mohamed Abdalla: I’m not on any social media, so email is the best way to contact me. It’s on my website. If you search my name and add the University of Toronto at the end of it, I should be near the top. It’s cs.toronto.edu/msa. And that’s where all my work is also posted.

Lucas Perry: All right. Thanks so much, Mohamed.

Mohamed Abdalla: Thank you so much.

Maria Arpa on the Power of Nonviolent Communication

 Topics discussed in this episode include:

  • What nonviolent communication (NVC) consists of
  • How NVC is different from normal discourse
  • How NVC is composed of observations, feelings, needs, and requests
  • NVC for systemic change
  • Foundational assumptions in NVC
  • An NVC exercise

 

Timestamps: 

0:00 Intro

2:50 What is nonviolent communication?

4:05 How is NVC different from normal discourse?

18:40 NVC’s four components: observations, feelings, needs, and requests

34:50 NVC for systemic change

54:20 The foundational assumptions of NVC

58:00 An exercise in NVC

 

Citation:

The Center for Nonviolent Communication’s website 

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on Youtube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s conversation is with Maria Arpa on nonviolent communication, which will be referred to as NVC for short throughout the episode. This podcast continues to explore the theme of wisdom in relation to the growing power of our technology and our efforts to mitigate existential risk, which was covered in our last episode with Stephen Batchelor. Maria and I discuss what nonviolent communication consists of and its four components of observations, feelings, needs, and requests; we discuss the efficacy of NVC, its core assumptions, and Maria’s experience using NVC in the British prison system; and we also do an on-the-spot NVC exercise towards the end of the episode.

I find nonviolent communication to be a powerful upgrade in relating, resolving conflict, and addressing the needs and grievances we find in ourselves and others. It cuts through many of the bugs of normal human discourse around conflict and makes communication far more wholesome and collaborative than it otherwise might be. I honestly view it as quite a powerful and essential skill, or way of being, that has had a transformative impact on my own life, and may very well do the same for others. It’s a paradigm shift in communication and human relating that I would argue is an essential part of the project of waking up and growing up. 

Maria joined the Center for Nonviolent Communication as the Executive Director in November 2019. She was introduced to NVC by UK trainer Daren de Witt, who invited her to see Marshall Rosenberg, the founder of NVC, speak in London, a moment that changed her life. She was inspired to attend one of the Special Sessions on Social Change in Switzerland in 2005 and then invited Marshall to Malta, where she organised a conference between concentration camp survivors and the multi-national corporation that had bought the site formerly used as a place of torture. Since then she has worked with marginalised, hard-to-reach individuals and communities and taken her work into prisons, schools, neighbourhoods and workplaces.

And with that, let’s get into our conversation with Maria Arpa. 

I really appreciate you coming on, and I’m excited to learn more about NVC. A lot of people in my community, myself included, find it to be quite a powerful paradigm, so I feel excited and grateful that you’re here. So I think we can kick things off here with just a pretty simple question. What is nonviolent communication?

Maria Arpa: Thank you. Yes, and it’s really good to be here. So nonviolent communication is a way or an approach or a system for putting words to living a nonviolent life. So if you’ve chosen nonviolence as a philosophy or as a way of life, then what Marshall Rosenberg proposed in creating nonviolent communication in the 1960s is a way of communicating interpersonally based on the idea that we need to connect as human beings first.

So NVC, which is short for nonviolent communication, is both a spiritual practice and a set of concrete skills. And it’s based on the idea of listening deeply to ourselves and to others, in order to establish what the real needs are that we’re trying to meet. And from the understanding of the needs that comes through empathic listening, we can begin to build strategies.

Lucas Perry: Alright, so can you juxtapose what NVC does, compared to what usually happens in normal discourse and conflict resolution between people?

Maria Arpa: Yeah, lovely. Thank you. I like that question. In society, we have been brought up, and I would go so far as to say indoctrinated, with taking adversarial positions. And we do that because we think about things like the legal profession, academia, science, the military, and all of those disciplines, which are highly prized in society, but they all use a debate model of discourse.

And that’s wonderful if you want to prove a theory or expand the knowledge between people, but if we’re actually trying to just build a relationship in order that we can coexist, do things, envision, create something new in society, debate just doesn’t work. So at the very micro level, in the family system, what I experienced with couples and families is a sort of table tennis match. And while you’re speaking, I am preparing my counter argument. I would say that we’ve been indoctrinated with a debate model of conversation that even extends into our entertainment, because in my understanding, if you go to scriptwriting school, they will tell you that to make a Hollywood blockbuster, you need to leave a conflict in every scene.

So that has become the way in which we communicate with each other, which is played out in our legal systems, in the justice system, and in education. When I position that against nonviolent communication, nonviolent communication says we need to build a relationship first. What is the relationship between us? So if we even take this podcast, Lucas, you and I had built a relationship. We didn’t just make an appointment, come on, and do this, we got to know each other a bit. Is that a helpful answer?

Lucas Perry: That’s a good question. Yeah, I think that is a helpful answer. I’m also curious if you could expand a bit more on what the actual communicative features of this debate style of conversation focus on. So where NVC focuses on talking about needs and feelings, feelings being evidence of needs, and also making observations, observations as opposed to judgments, what do the normal debate style or adversarial kinds of conversations that we’re having focus on? What is the structure and content of that kind of conversation that is not needs, feelings, and observations?

To me, it often seems like it deals with much more constructed and synthetic concepts. Where in NVC you boil concepts down to things which are very simple, basic, and core, the adversarial relationship deals with more complex, constructed concepts like respect and abandonment, and is less simple in a sense. So do you have anything else you’d add here about what actually constitutes the adversarial kind of conversation?

Maria Arpa: Yeah, definitely. So in an adversarial conversation we have two levels. At the first level, because I’m really thinking about how we’ve been programmed, we’re battling backwards and forwards, and what we’re out to do is win the argument. So what I want is for my argument to prevail over yours. And in families, it’s generally who’s the worst off person, who’s the most tired, who’s the most unresourced, who does most of the work, who’s earning most of the money, that level. It’s a competition. It’s a competitive conversation.

At the worst end of the spectrum, and when we think about school and education, it’s a debate which also includes the prospect of enforcement, which is punitive. So we could simply be competing to win an argument, or we could be actually out to provide evidence of the other person’s wrongness in order to punish them, maybe not even physically punish them, but punish them with the withdrawal of our love.

Lucas Perry: Right, so if we’re not sufficiently mindful, there is this kind of default way that we have been conditioned into communicating, where I notice a sense of strong ego identification. It’s a bit selfish. It’s not collaborative in any sense. As you said, it’s kind of like, if I can make points which are sufficiently strong, then the person will see that I’m the most tired, I’m the most overworked, I have contributed the most, and then my needs will be satisfied. And my needs being satisfied stays implicit; it’s never made explicit. And so NVC is more about making the needs and feelings explicit and central, and cutting out the argument.

Maria Arpa: Yes, I really agree with that. The problem with that debate model, that adversarial model, is that I might get my needs met, I might be able to bulldoze or bully my way, or I might be able to play the victim and get my needs met, but usually I have induced the other person to respond to me out of fear, guilt, or shame, not out of love. Someone, somewhere will pay for that. It’s that whole thing, you may have won the battle, but you haven’t won the war. While at a very micro level, day by day, I can scrape by and score points and win this and get my need met, in the long term, I’m actually still feeding the insecurity that nothing can come without struggle.

Lucas Perry: That’s a really good point.

Maria Arpa: And when you say it cuts out the argument, the argument bit which is not necessary, I’m saying that we will go into the dialogue with the idea that once we’ve established the needs, then we can actually build agreements. Now, that’s not to say that I’m going to get everything I want, and you’re going to get everything you want. But there’s a beauty in the negotiation that comes from the desire to contribute.

Lucas Perry: Yeah, so I found what you said quite beautiful and powerful, about how you may gain short-term benefits from participating in what one might call violent communication, or this kind of adversarial debate format. You might get your needs met, but there’s a sense in which you’re conditioning in yourself and others this toxic communicative framework, where first, you’re more deeply instantiating this duality between you and other people, which you’re reifying by participating in this kind of conversation. And that’s leading to, I think, a belief in an unwanted world, a world that you don’t want to live in, a world where my needs will only be met if I take on this unhappy, defensive, self-centered stance in communication. And so that’s a contradicted belief: if I don’t do the debate format, then I will not be safe, or I will not get my needs met, or I will be unhappy. You don’t want that to be true, and so it doesn’t have to be true if you pivot into NVC.

Maria Arpa: Yes, and I’ve got two things really to say about that. One is that we are often counting the cost of something, and one of the things that we very rarely factor in is the emotional cost of getting our needs met, the emotional cost of taking on the task. And sometimes when I work with people, we find that the price is too high. I may be getting my fame and fortune or whatever it is I’m after, but actually, the emotional price to my soul is just too high.

And most of us have been taught not to imagine that as being of any importance. In fact, most of us have been taught, rather like scientific experiments, that when I’m looking at a situation, even when I’m not taking a self-centered point of view, even when I’m looking at it and wanting to be generous and benevolent, I don’t factor myself into that situation, as if I don’t matter, or I don’t count, and the truth is everybody matters, and everybody counts.

So that’s one thing, and then when I heard you talking about this idea of adopting the adversarial approach to life and how that will feed the system, the best example I have of that is in the prison that I’ve been working in for the last four years. Prisons are pretty mean places, and so most people come into prison in the belief that they need to develop a very strong set of armor in order to defend themselves, and sometimes attack is the best form of defense, in order to survive this new world. And what I’ve been able to demonstrate with the guys that I’ve been training is that actually, if you throw away the armor, and if you actually come to be able to work out what your needs are, and be able to help other people find their needs, actually, it becomes a different place.

And the proof of that has been 25 men that I have trained to do what I do within the prison, working with other prisoners across the prison, and turning the prison into one of the safest jails in the UK.

Lucas Perry: Wow.

Maria Arpa: So the first thing that happens is I deliver training, and it’s intensive, it’s grueling, there’s a lot of work to do, and something happens to the guys as they’re doing the training. Because, see, I believe that the people that cause the problems are the ones who have the answers to the problems. We should go to the people that cause the problems and say, “How do we not end up here again?” So as they do this training they realize the potential to make their prison sentence go better, what a nice thing to be able to do, and then they realize that actually, I can’t do this for anyone else until I’ve done it for myself. So that’s part of the transformative bit where they go, “Well hang on, I’ve got so much conflict, or I’ve got so much inner conflict or dislike of myself, or whatever chaos going on inside, I can’t possibly do this for anyone else. So, right, now we can begin, because now we have to begin as a team.”

And so what’s been remarkable for me is 25 men who, on the outside of prison would never ordinarily come into contact with each other, from completely different walks of life, different areas, different ages, different crimes, we’ve got everything from what in America you call homicide, through sexual offenses, to fraud, and that type of thing.

So these 25 men have been through a process to be able to work together as a team and to be able to understand each other, and they are completely blown away by the idea that they have a process, so that when they feel the system they’re using is getting overrun with casework, what I’ve taught them to do is to put themselves first. So when that happens, you actually need to stop everything and come back as a team, work out what are the petty conflicts that are arising, and use this nonviolent communication process to come back to center. So for me, there’s a huge difference in being able to live this way, and nowhere have I proved it more than in a place called prison.

Lucas Perry: Yeah, exactly. I bet that’s quite liberating for them, and I bet that there’s a deeper sense of security that is unavailable when you’re taking the adversarial, aggressive stance.

Maria Arpa: There’s a deeper sense of security. There’s a huge amount of gratification for somebody who has literally been thrown away by society and told that they’re worthless and have no place, and I don’t want to get into the crime or whether they did it or didn’t do it, but they’ve been thrown away by society, to actually find that they can be in service of others, and they can actually start to love themselves. So there’s a really huge gratification in being able to do that and not do it in a self sacrificing way, to be able to do it in a way that enriches the other person and enriches themselves. That for me is monumental.

And the second thing is that in places like prisons and families, and I often compare schools to prisons, you can be overwhelmed by the power of enforcement and the misuse of authority. So often, one person in a position of power may dish out the rules one way today and a different way tomorrow, and may treat one person differently to another. And so what we’ve been able to establish is that if the guys sit in circle and invite officers to those circles, they can clear up some of the things that just create unnecessary conflict.

Now, obviously, in prison, some topics are non-negotiable, and we don’t go there, right? And if you don’t think you should be imprisoned, then you need to take the appropriate steps through your legal advisors. But if it’s a case of the laundry’s messed up every week and there are fights starting over it, or it’s a case of exactly who’s collecting the slips for the lunches and that’s creating conflict, or there’s an argument when people are queuing up for their food, these things can be sat down and had out, and people can talk about their different experiences, and we can clear up those gray areas by the guys coming up with a policy. And they do this using needs.

Lucas Perry: All right, it’s encouraging to see it as effective and liberating in the case of the prison system. I’m interested in talking with you more about how this might apply to, for example, the effective altruism movement, or altruism in general, and also to working on really big problems in the world that involve both industry and government, which at times function and act rather impersonally. Where, for example, the collective incentives of some corporation are to maximize some bottom line, it’s unclear how one could NVC with something where the aggregate of all the behavior is rather impersonal.

So right before we get to that, I just want to more concretely lay out the four components of NVC practice, just so listeners have a better sense of what it actually consists of. So could you take us through how NVC consists of observations, feelings, needs, and requests?

Maria Arpa: Yeah, I’d love to, yes, thank you. So usually, in our heads, if we’re indoctrinated in that adversarial model and we don’t even know it, what’s happening is when we see something, we’re busy judging it, evaluating it, deciding whether we like it or don’t like it, imposing our diagnosis, and generally having an opinion about it, good or bad. And then, of course, we live in a world now where people can take to social media and destroy other people if they choose. So we can actually just act out of what we think we’re seeing.

In nonviolent communication, what we do is we try to get to what we call the observation without the evaluation. So what is it that’s actually happening? And that is really trying to separate the reality from the perception. A really good example of that was I saw a demonstration once, a woman came down from the audience, and she wanted to talk about how angry she was with her flatmate. And she gave out this whole story and the trainer would say, “Well, we need to get to the observation, we need to get to the observation.” And out of this whole mass, the only two observations that she could come up with were that her flatmate occasionally leaves a dirty plate and a dirty mug and dirty cutlery in the sink without washing it up. And on occasion, her flatmate when she leaves the flat or the apartment, allows the door to slam and make a very loud noise behind her. And those are the only two observations that she could come up with that were actually really happening. Everything else was what she made up around it.

Lucas Perry: Yeah. So there’s this sense in which we’re telling ourselves stories on top of what may be just kind of simple, brute facts about the world. And then she’s just suffering so much over the stories that she’s telling herself.

Maria Arpa: Yeah, so the story attached to someone leaving the dirty plates in the sink is, she’s doing it on purpose, doing it to get at me. Those are the sorts of things we might tell ourselves. Or we might be the opposite and say, “She’s just so selfish.” And I really love what you just said. Of course, the person that I’m causing the most grief and pain to is myself. I’m cutting myself off from my own channel of love.

So that’s how we get to the observation, and it’s a really important part of the NVC process, because it helps us to identify that which is what we are telling ourselves and that which is actually happening in front of us. And the way that I could tell if it’s an observation or an evaluation, is I could record it on a video camera and show it to you and you would see the same thing.

Lucas Perry: Yeah, that makes sense. You’re trying to describe things as more factual without the story elements. This happened, then this happened, rather than, “My asshole roommate decided to leave her dirty shit everywhere because she just doesn’t care and she sucks.”

Maria Arpa: Yeah, so that’s the first step. And then what we’re looking to do is check in with ourselves on how we feel. This is a really important step, because in nonviolent communication, what we propose is that our feelings are the red warning light on the dashboard of the car that tells you to pull over and look under the hood, as you would say, or the bonnet, as we would say, and check what else is going on. So feelings are a gateway. They’re our doorway. They are our barometer.

So it’s really important to develop a really good vocabulary around feelings. And it’s really important to get to the feeling itself, whether it’s sadness, or anger, or upset, or despair or grief, or joy, or happiness, it’s really important to develop a vocabulary of feelings because if I ask someone how they feel, and I get, “I feel like,” no feeling is coming after the word “like.” I feel like jumping off a cliff. I feel like just going to bed and never getting up again. I feel like running away. That’s not a feeling. People can use those kinds of metaphors for us to try and guess the feelings, but actually what I want is the feeling.

Lucas Perry: Right, those are overly constructed. They need to be deconstructed into core feelings, which is a skill in itself that one learns. So you could say for example, “I feel abandoned,” but saying, “I feel abandoned,” needs to be deconstructed. Being abandoned is being afraid and feeling lonely.

Maria Arpa: So if we say, “I feel abandoned,” and I’m particularly referring to the ED at the end, or, “I feel disappointed,” rather than disappointment, then what I’m doing is I’m actually, by the backdoor, I’m accusing somebody of doing it to me.

Lucas Perry: Yeah, that’s right. It’s a belief about the world actually, that other people have abandoned you or are capable of abandoning you.

Maria Arpa: Yeah, exactly. So there’s a skill in the language. However, if you go to the cnvc.org website, we have a free feelings list and a needs list that’s downloadable for anybody that wants to go and get it. And that helps you to really get closer to the language.

Lucas Perry: Okay, so I do have an objection here that I don’t want to spend too much time on, but I’m curious what your reaction is. So I think that what you would call the “abandoned” word is a faux feeling, is that the right word you guys use? There’s a sense in which it needs to be further deconstructed, which you mentioned, because it’s a belief about the world. Yet is there not also some reality that we need to respect and engage with, where abusers or toxic people may actually be doing the kind of thing that seems like merely a belief about the world?

Maria Arpa: That’s where I would get back to the observation. Because those things do happen. I work a lot in domestic violence. I understand this. And there are two things, and one that we’ll go on to later. There’s getting back to the observation, because if I heard you say, “I feel abandoned,” what I would want to do is go back to figure out what’s the observation that brings you to that sense. Because actually, if you’re telling yourself you’ve been abandoned, or if somebody has abandoned you, and we can see that in an observation, then I’m guessing you’re feeling a huge amount of misery, grief, and despair, or loneliness, those would be the feelings.

And then later on, when we get to it, I’ll talk about the use of protective force. Because it isn’t all happy, dippy, and let’s all get on our hippie barge and have a great life. Without putting too fine a point on it, shit has happened, shit is happening, and shit’s always going to happen. And that’s the way of the world. What I’m talking about is how we respond to it.

Lucas Perry: Yeah, that’s right. You can try and NVC Hitler, and when you realize it’s not going to work, that’s when you mobilize your armies.

Maria Arpa: That’s a very interesting thing, you could try to work with Hitler, because actually, I don’t know if you’ve seen, I have a copy of it somewhere, Gandhi actually wrote a letter to Hitler.

Lucas Perry: Yeah, it didn’t work.

Maria Arpa: It didn’t work. And actually, if you look at the letter, it’s a shame because there was nothing in there for me that I recognized as NVC that may have generated at least a response.

Lucas Perry: Alright, so we have feelings, and we want to be sure to deconstruct them into simple feelings, which is a skill that one develops. And the thing here that you said that the feelings are like the warning to check the engine of the car, which is a metaphor to say feelings are a signal giving you information about needs being unmet, or at least even the impression or ignorance or delusion that you think your needs are not being met. Whether it’s actually your needs not being met, or a kind of ignorance to your needs being met, either way, they are a signal of that kind of perception.

Maria Arpa: Yes, absolutely. They’re a signal for something. And so when we talk about feelings, what I’m trying to do is capture the real emotion here and name it.

Lucas Perry: And so then there’s a sense that when you communicate needs to other people, they cannot be argued with and they’re also universally shared. So you can recognize the unmet needs of another person as a reflection or a copy and paste of your own needs.

Maria Arpa: So, this is a really interesting part of the conversation, when we get to needs, because that sits in something called needs-based theory. And Marshall Rosenberg does not have the monopoly on needs-based theory. I mean, most people will have heard of Maslow’s hierarchy of needs. There’s a Chilean economist called Manfred Max-Neef who boiled all the needs down to just nine and said that everything else is just ways, or satisfiers, to try and meet those needs.

For me, needs-based theory is an art, not a science. And so again, you could go on the cnvc.org website, and you can pull off a list of needs, and you’ll recognize them. Now, when I say it’s an art not a science, on there could be, say, the need for order and structure. Okay, so let’s say I have a need for order and structure.

Lucas Perry: That seems like it needs to be deconstructed.

Maria Arpa: Yes. So I would then say, “Maybe that is a strategy to get to a deeper need of inner peace, but at the moment, that seems to be the very present need for me. I come downstairs, my desk looks like a bomb’s hit it, I’ve got calls to get on, and I just don’t feel like I can get my day started until I’ve created a sense of order around myself.”

It is a simple need in that moment, but the idea is that when we look at the fundamental needs like air, and movement, and shelter, and nutrition, and water, those are universal. I mean, I don’t think anyone could disagree with that. And then we get into more spiritual needs and social needs. Things like discovery and creativity and respect, what a big word, respect is. And the way I like to look at it is, you see, all the arguments we ever have can never be over needs, because I can recognize that need in myself, as well as in others, but they’re over the strategies that we’re trying to use to meet the need that may be at a cost to someone else’s needs, or to my own deeper needs.

So a really good example is if you take our need for air. There’s only one strategy to meet our need for air and that’s to breathe. How many arguments do people have over the need for air and the strategy of breathing?

Lucas Perry: Zero.

Maria Arpa: Right. Now, let’s take a really big word that gets bandied about everywhere, respect. How many arguments do we have over the strategies we’re using to meet a need for respect?

Lucas Perry: A million. And another million.

Maria Arpa: Exactly, exactly. And so the arguments are only ever about strategy. And once you’ve understood it, and practiced it, and embodied that, and you can see the world through that lens, everything changes. And that’s why I can do what I do.

Lucas Perry: Yeah, well, so let’s stick with air. Some people have a strategy for meeting their needs by polluting the air. So there’s some strategy to meet needs where the air gets worse, and everyone has this more basic need to breathe clean air, and so the government has to step in and make regulations so that the more basic need is protected. So there’s this sense that strategies may harm other people’s needs, and a sense in which sometimes the strategies are incompatible. But there’s this assumption that I think is brought in, that the world is sufficiently abundant to meet everyone’s needs, and that’s a way, I think, of subverting or getting around this contradictory strategy problem, where it would suggest: okay, oil companies, we can meet your needs some other way, as long as you change your strategy, and we’ll help you do that so that we have clean air. Does this make sense?

Maria Arpa: It makes total sense, and I’ve got two parallel answers, maybe even three. So the first one: there are people in the world who don’t mind, or maybe they do mind secretly, it doesn’t matter, who will pollute the air for profit. And we’ve reached that point because we have been using an adversarial system with each other that means that as long as I can turn someone into the enemy, I can justify doing whatever I want. So we create bad people and good people.

So in this adversarial system, one of the things we can do is justify what we’re doing by holding up other people as being in the way. So we’ve created that system and actually what we’re finding, is that the system is failing. I don’t know, I don’t want to predict things, I’m not an economist or a politician, but it seems to me that the system is failing rapidly, more and more. More harm is being visited on the planet than is necessary and lots of people are waking up to that.

So now we’re hitting some kind of tipping point where in giving people things like the internet and all this stuff to self soothe them, actually, a lot of people got educated and started to ask better quality questions about the world they’re living in. And I think there’s a bit of an age difference between us, the wrong way for my end, but people of your generation are definitely asking better quality questions, and they’re less willing to be fobbed off.

So now we’ve got to figure out, how do we change things? And while I understand that from time to time, we need to go out and we need to actually put our foot down and make a protest and make a stand and say, “We’re not putting up with this,” and use protective force, and nonviolent resistance, and civil disobedience, while we need to do those things, we will never change things if we’re only operating at the incident level. If you try to do everything and fix it at the incident level without somebody working long-term on the system… People need to organize, and work out how people like you could get into positions of power.

I mean, I did a lovely piece of work with a Somalian community many years ago, and they’d arrived in the UK as refugees, and when they first arrived, they thought they were only going to be around for a few years and that the war would sort itself out and they’d all go back home. So they kept to themselves and they were very excluded and left out of society, and some of the sons were getting into trouble with the police because they hadn’t really worked out how to live in this society, and after they realized that actually they weren’t going back, “We’re here, this is our home,” what they’ve realized is they needed to start organizing. They needed to become teachers and doctors and lawyers and actually start to help their own community in that way. And I found that very moving and very empowering, and I loved doing the work with them. And the work we were doing was literally around the mothers and the sons. So that’s changing things at a system level.

Lucas Perry: Okay, so the final point here is about making requests. And I think this is a good way to pivot into talking about how you can’t make requests to make systemic change, because the power structures are sufficiently embedded and the incentives are structured such that, “Hey, excuse me, please stop having all that power and money, my needs are not being met,” isn’t going to work.

So let’s talk about the efficacy of NVC and how it’s effectively used. I think it’s quite obvious how NVC is excellent for personal relationships, where there’s enough emotional investment in one another, and authentic care, and community where everyone’s clearly invested in actually trying to NVC if they can. Then the question becomes, for bigger problems in the world like existential risk, whether NVC can be effective in social movements, or with policy makers, or with politicians, or with industry or other powerful actors whose incentives aren’t aligned with our needs and who function impersonally. What is your reaction to NVC as applied to systemic problems and impersonal, large, powerful actors who have historically never cared about individual needs?

Maria Arpa: That’s a really interesting question because in my experience of the world, nothing happens without some kind of relationship between people. I mean, you can talk about powerful actors that don’t care, but bring me a powerful actor that doesn’t care and let me have a conversation with them. So for me, I agree that there’s a place for NVC in a group of people who care. There’s also a place for NVC in making the conversation irresistible, finding that place in somebody, because if we work on the basis that there are human beings in the world that have no self-love, or no love at all, if we work on the basis that there are human beings that walk the planet that are just all selfish and dangerous and nothing else, then of course, we’re doomed.

But I don’t believe that, you see. I believe that we are all selfish, greedy, kind, and considerate. And I know this from doing this work in prisons, that often what’s happened is the kind and considerate has just gone to sleep, or it’s paralyzed, or it’s frozen, but it is there to be woken up. And that’s the power of this work, when the person has sufficiently embodied it, has practiced this, and really understands that this involves seeing the world through a different lens. My role in the work I do, and I work on the front line of some of the worst things that go on in society, is to wake up the part of a person that is kind and considerate, and nurture it and bring it to life and grow it and work with it. And that doesn’t happen in one conversation. I don’t do that because I want something, I do that because I genuinely care about how that person is destroying themselves.

I can give you an example of somebody I met in prison who had been imprisoned for being part of a very, very violent gang, been in violent gangs for most of his life, done a lot of time in prison, and the judge called him evil, and greedy, or whatever. And he came on one of my trainings in around 2013. And he kept coming to talk to me in the breaks, it was like he really wanted some kind of connection or some affirmation or something, and he said, “I did a restorative justice training last month, and I really have to think about the harm I’ve done to my victims.” And I said, “You also have to think about the harm you’ve done to yourself.” And that was the first moment of engagement. And actually, now this man will be out of prison in I think 2022. He has put himself through a therapeutic prison for six years. I’ve never seen a life change to such an extent or such a degree. We’re thinking about employing him when he comes out of prison.

And that’s the thing: how do you engage a person to look at themselves and at how they may be destroying themselves in the pursuit of whatever it is they think they need? So, bring me somebody who is a powerful actor, who doesn’t care about anyone else, and we’ll open the conversation. That’s how I see it. The reason that I can do this and I can have these conversations is that I don’t have an agenda for another human being. I simply want to understand what is going on, what the motivations are, what the needs are, and work out with that person: is that strategy actually working for you? And if you’re meeting your need for power or growth or structure or whatever, is it costing you in some other need that’s actually killing you slowly?

Lucas Perry: Yeah, I mean, I think that, at risk of becoming too esoteric, non-dual wisdom traditions would see this kind of violent satisfaction of one’s own needs as also a form of, first of all, self-harm, because your needs extend beyond what the conceptual, egoistic mind would expect to be your needs. I’m thinking of someone who owns a cigarette company, who’s selling them and knows that he’s basically helping to lie about the science of it, and also promoting that kind of harm. There’s a sense in which it’s spiritually corrupting, and leads to the violation of other needs that you have, when you engage in the satisfaction of your needs through these toxic methodologies.

Maria Arpa: Absolutely, and it’s a kind of addiction, it’s a kind of habit, or obsession. One of the things that I’m really interested in is, at the end of the day, when we get to the request part of NVC, the real request is change. Whatever it is I’m asking for, whether I’m making a request of myself or the person in front of me, I’m requesting change. And change isn’t easy for most people. People need to go through a change process. And so it’s not just about the use of NVC as an isolated tool that is going to change the world, it is about contextualizing the use of NVC within other structures and systems like change processes, understanding group dynamics, understanding psychology, and all of those things, and then it has its place.

Lucas Perry: Yeah, it has a place amongst many other tools in which it becomes quite effective, I imagine. I suppose my only reaction here then is, you have this perspective, like, “Bring me someone in one of these positions of power, or who has sufficient leverage on something that looks to be extremely impersonal, and let’s have a conversation,” those conversations don’t happen. And no one could bring you that person really, and the person probably wouldn’t want to even talk to you, or anyone really who they know is coming at them from some kind of paradigm like this.

Maria Arpa: Oh, I don’t know about that. I mean, in the work I do, it’s a very small world, and I’m not trying to affect global change. I would love to, but I’m not. But in the prison work I was doing, we managed to get the prisons minister to come and see the work, and I managed to then have a meeting with him. And I managed to convince him on one or two things that had an effect at the time. So I don’t know that these things don’t happen. I think it’s about the courage and determination of the people to get those meetings, not coming from having an agenda for that person, but coming from really wanting to understand what the thinking is.

Again, in my experience, having been around the block a few times, the people making policies would be absolutely horrified if they saw how those policies are being delivered on the ground. There’s a huge gap between people sitting somewhere making a policy, and then how it gets translated down hierarchical systems, and then how it gets delivered. I like to think that policy makers aren’t sitting around the table going, “How can we make life worse for everybody, because we hate everybody.” Policy makers are sometimes very misguided and detached and unable to connect, but I don’t think policy makers are sitting there going, “We hate everybody, let’s just make life difficult.” They really genuinely believe they’re solving problems. But the issue with solving problems is that we’re addicted to strategy before understanding the needs.

Lucas Perry: We’re addicted to strategy before understanding needs.

Maria Arpa: Yeah. Our whole mentality is, “Problem? Fix it.”

Lucas Perry: So I mean, the idea here then is that the recognition of needs, along with some other assumptions that we can talk about shortly, relaxes this adversarial communicative paradigm into a needs-based one, where you take people on good faith and you recognize the universality of human needs. And there’s this authentic care and empathy, which is born not of something you’re fabricating, but of something which, by participating in it, actually serves a need you already have for authentic human connection, or maybe that boils down to love. And so NVC can be an expression of love, in which NVC becomes something spiritual. And then this kind of process is what leads to a reexamination of strategy.

Maria Arpa: Yeah, so the idea is that because we have a problem, fix it mentality, we are skipping over the main part which is to sit with the pain of not knowing. So what we do is we jump to strategy, whether that’s in our daily lives, “I feel bad, I’ll go and get a haircut or buy myself a new wardrobe, or I’ve got a problem, and it’s going to create a big PR problem, so I’m going to do this,” and what we’re missing is the richness of understanding that when you do that, you’re acting out of fear, you’re jumping, because you’ve got triggered or stimulated in some way, and you’re acting out of fear to prevent yourself from the feelings that you don’t believe you’re going to be able to cope with.

And what I’m saying is that we understand, we get to the observation, we identify, is this an issue? Is it not an issue? And then we go within, in a group, and we sit with the pain, the mourning, of the mistakes we’ve made, or the problem we haven’t solved, or the world we’ve created, whatever it is, and it’s in sitting together with that, and being willing to say, “I don’t know what the answer is right now, or today. Maybe I just need to breathe,” in being able to do that, we reach our creativity.

So we’re coming out of a place of absolute creativity and love, not jumping out of fear. And there’s a tremendous difference in operating in the world in this way. But it requires us to be willing to be vulnerable, and I think that’s what I think you’re talking about when you talk about people being detached. They’re so far away from their vulnerability, and when people are so far away from their vulnerability, they can do terrible things to other people or themselves.

Lucas Perry: Yeah, I mean, this is a sense of vulnerability in which, it’s a vulnerability of the recognition and sensitivity of your needs, but there’s a kind of stable foundation and security in that vulnerability. It’s a kind of vulnerability of self-knowing, it seems.

Maria Arpa: It’s vulnerability, plus trust.

Lucas Perry: It seems to me then, NVC’s place in the repertoire of an effective altruist, or someone interested in systemic change or existential risk, is that it becomes a tool in your toolkit for having a kind of discourse with people that may be surprising. I definitely believe in the capacity for enlightened or awakened people to exist as an example of what might be possible. And so if you come at someone with NVC who’s never experienced NVC before, I agree with you that that is where, “Oh, just have the conversation with the person,” might lead to some kind of transformative change. Because if you exist as a transformative example of what is possible, then there is the capacity for other people to recognize the goodness in you that is something that they would want and that leads to peace and freedom. NVC is obviously not the perfect solution to conversation, or the perfect solution to the problem of strategy, for example, and I guess, broadly, strategy can also be understood as game theory, where you’re going to have lots of different actors with different risk tolerances and incentives, but it is a much, much better way of communicating, full stop.

Maria Arpa: I notice I feel a slight discomfort when you call NVC a tool, because I don’t see it as a tool, I see it as a way of life.

Lucas Perry: Yeah, I hear that.

Maria Arpa: When I’m in that frame, because I look at the person I was 20 years ago, and I look at the person I am now and I see the transformation, but it’s because of the embodiment of something. It’s because it’s really helped me to look at all aspects of my life. It’s helped me to understand things that I wasn’t understanding, it helped me to wake up and become functional, and mindful, and all of those things, but that’s who I am now. I mean, I’m not saying that I’m some perfect person, and of course, occasionally, the shadows always there, but I’ve learned not to act on my shadow. I’ve learned to play with it. But when I am that embodiment or that person, then I’m bringing a new perspective into any conversation I have. And sometimes people find that disarming in an engaging way.

Lucas Perry: Yeah, that’s right. It can be disarming and engaging. I like that you use the word waking up. We just had a podcast with a former Buddhist monk and we talked a lot about awakening. And I agree with you that calling it a tool is an instrumentalisation of it, which lends itself to the problem-solving mindset, which is kind of adversarial with relation to the object which is being problem solved, which in this case, could be a person. So if it becomes a kind of non-conceptual embodied knowing or being, then there is the spiritual transformation and growth that is born of adopting this upgraded way of being. If you download the software of NVC, things will run much better.

Maria Arpa: So then I wanted to comment on the strategy. NVC is a way of unlocking something, okay. Now, once I’ve unlocked it, and once I’ve got to the part where we’re looking at strategies that will satisfy needs, we might need a different way of conversing. Now it might be very robust, it might come from the point of negotiation, and that negotiation may be very gentle and sensitive, but it can also be very boundaried. And so yeah, NVC for me is the way to unlock something, to bring people into a consciousness that asks: what’s the point of making strategies if we don’t understand the needs we’re trying to meet? And then we use those needs as the measurement for whether the strategy is going to satisfy them or not.

Lucas Perry: Okay. And I think I do also want to put a flag here, and you can tell me if this is wrong, that even those negotiations can fail. And that comes back to this kind of assumption that the world has sufficient abundance, that everyone’s needs can be met. So I mean, my perspective is I think that the negotiations can fail, and that when they fail, then something else is needed.

Maria Arpa: So if the negotiation has failed, in my experience, it’s because somebody wanted something, even if it was just speed, that wasn’t available. And so a really big deal for me is understanding where we want to get to, having that shared vision that we’re all trying to get to this place, and working towards it at the speed and tolerances of the whole group, and yet not allowing it to go at the slowest person’s pace. And that’s an art. There’s a real skill to, “We’re not going at the slowest person’s pace, but we’re also not going to take people out of their tolerances.”

Lucas Perry: But it seems like often with so many different people, tolerances are all over the place and can be strictly incompatible.

Maria Arpa: So that means we didn’t do enough work, and our shared vision isn’t right, and maybe we need to go back and look at the deeper needs. One of the things I talk about in this work is you’re never going to undo 30 years of poor communication in one conversation. It’s a process, and what I’m looking for is progress. And sometimes progress is literally just the agreement that the person will have another conversation with me, rather than slam the door in my face.

I’ve done neighbor disputes where I have knocked on someone’s door, they haven’t responded to the letters or the phone calls, and I have knocked on someone’s door, and I’ve got 30 seconds before they slam the door in my face and tell me, in no uncertain terms, tell me to whatever. And so for me at that moment, just then giving me two minutes, and then just getting to the agreement, I’m not going to try and do any business with you right now, just to get to the agreement that you will have another conversation with me, is progress.

And so it’s really about expectations and how quickly we think we can undo things or change things. And change processes are complex. How many times did you wake up and say, “I want to get fit or eat healthier food or lose weight or stop smoking or drink less,” or whatever it was, and then did you execute it straight away? No, you fluctuated. You probably relapsed, and relapse is a really important part of change. But then do we give up? Do we say, “Well that’s it, it’s over, we can’t negotiate,” or do we say, “Well, okay, that didn’t work. What else could we try?”

So in my world, and what I’ve understood, is that the art, or the trick, to life is not constantly searching to get your needs met. The trick to life is understanding that I have many needs, and on any day, week, month, or year, some get met and some go unmet, and I’m okay with that. It’s just looking on balance. Because if the aim of the game is to go, “Yeah, my need for this, this, this and this are all not being met, so therefore I’m going to just make it my mission to get my needs met,” you’re still in the adversarial paradigm.

So I have lots of needs that go unmet, and you know what, it’s fine. It doesn’t mean I can’t express gratitude for what I do have. It doesn’t mean I don’t love everybody and everything in the way it is. It’s fine. I have no expectation that all my needs will get met.

Lucas Perry: Yeah, so you’re talking here about some of your experience, which I think boils down to some axioms or assumptions that one makes in NVC that I think are quite wholesome and beneficial. And I’ll just read off a bunch of them here and if you have any reactions or want to highlight any of them, then you can.

So the first is that all human beings have the capacity for compassion and empathy. I believe that. Behavior stems from trying to meet needs, so everything that we do is an expression of trying to meet needs. You said earlier there are no bad or good people, just people trying to meet needs. Needs are universal, shared, and never in conflict. I think that one’s maybe 99.9% true; I don’t know how psychopaths, like Jeffrey Dahmer, fit in there.

Maria Arpa: Well, I mean, I’ve worked with people in prisons who have been labeled as psychopaths, and on that very clear basis that people are selfish, greedy, kind, and considerate, but that the kind and considerate is either not on show, not available today, put in a box, or paralyzed, I have woken up the kind and considerate.

Lucas Perry: You don’t think that there are people that are sufficiently neurodivergent and neuroatypical, that they don’t fit into these frameworks? It seems clearly physically possible.

Maria Arpa: It only runs out when I run out of patience, love, and tolerance to try. It only ends at that point, when I run out of patience, love, and tolerance to try, and there might be many reasons why I would say, “I’m no longer going to try,” don’t get me wrong. We’re not asking everybody to just carry on regardless. But yeah, when I say I’ve had enough and I don’t want to do this anymore, that’s when it ends. The trouble is, I think we do that far too quickly with most people.

Lucas Perry: Yeah. All right. And so kind of moving a bit along here. The world has enough resources for meeting everyone’s basic needs, we mentioned this.

Maria Arpa: I do want to just comment on the idea that the world is abundant, and that it has enough abundance to meet everybody’s needs.

Lucas Perry: Yeah.

Maria Arpa: The issue is, if it hasn’t, what’s the conversation we’re going to have? Or do we just want to inflict more and more unnecessary human suffering on each other? If, as is predicted, there’s going to be climate change on a scale that renders parts of the planet uninhabitable and there’s going to be mass migration, what are we going to do? Are we going to just keep killing them? Are we going to have a race to the bottom?

Lucas Perry: Are we going to leave it to power dynamics?

Maria Arpa: Yeah, or are we going to say, “Actually, things are getting tighter now, so we need to figure out how to collaborate so that we don’t kill each other.”

Lucas Perry: And then, so I was saying, if feelings point to needs being met or unmet, I would argue this is more like feelings point to needs being met or unmet, or the perception that they are being met or unmet.

Maria Arpa: So I just say, being able to identify the feeling and name the emotion is an alarm system. It’s our body’s natural alarm system, and we can use that to our advantage.

Lucas Perry: And I’ll just finish with the last one here that I think is related to the spiritual path, which says that the most direct path to peace is through self connection. So through self connection to needs, one becomes, I think, increasingly empathetic and compassionate, which leads to a deepening and an expression of NVC, which leads to more peace.

Maria Arpa: Yeah, the first marriage is this one. The first marriage is the one between I and I, and if that one ain’t working, nothing else is going to work.

Lucas Perry: All right. So can we engage in an NVC exercise now as an example of the format, so moving from observations to feelings to needs to request?

Maria Arpa: Okay, so I think it would be really useful if you could tell me about a situation that’s on your mind right now. And Marshall Rosenberg would say, “Can you tell me in 40 words or less what’s alive in you right now?”

Lucas Perry: So let’s NVC about this morning, then? I was late, and I kept you waiting for me, and I also showed up to the Zoom call late because… I guess the because doesn’t matter. Unless that’s part of the chronology of what happened. But yeah, I showed up to the Zoom call late and then my microphone wasn’t working, so I had to get another microphone. And I feel… how do I feel? I feel bad that I mean, bad isn’t the right word, though, right?

Maria Arpa: Mm-hmm (affirmative). But you just say it, just say it, and then we’ll go over it together.

Lucas Perry: Yeah, so I guess I regret and I feel bad that I wasn’t fully prepared, and then it took me 15, 20 minutes to get things started. And this probably relates to some need I have that the podcasts go well, but also that I not damage my relationship with guests, which relates to some kind of need about… I mean, this probably goes all the way down to something like a need for love or self-love, very quickly, it would seem. And yeah, it’s unclear that we’ll have another conversation like this anytime soon, so it’s unclear to me what the kinds of requests are, though maybe there’s some request for understanding and compassion for my failure to completely show up and arrive for the interview perfectly. How’s that for a start?

Maria Arpa: Yeah, it’s really very sweet, and so I’d love to just tell you some of what I’ve heard, first. So I heard you say that you’re feeling bad because you turned up late and unprepared for our interview, and that feeling bad or regretful is linked to some kind of need, at the end of the day you went there, and you said, “It’s probably a need for self-love.” And it’s hard to know what a request would be like, but I guess what I heard you say is you’d like to request some understanding.

Lucas Perry: Yeah, that’s right.

Maria Arpa: Before I respond to you, I would really love to just break down how you did observation, feeling, need, request, and then work with you a little on each of those. Would that be okay?

Lucas Perry: Yeah, that sounds good.

Maria Arpa: So the biggest judgment in your observation was the word late. It takes a bit of understanding, but who defines late?

Lucas Perry: Yeah, it started 15 minutes after the agreed upon time, is more of a concrete observation.

Maria Arpa: Well, a concrete observation actually, is that we got online at the time we agreed, and we didn’t start the interview until 15 minutes later.

Lucas Perry: Well, I was five minutes late to getting to the Zoom call.

Maria Arpa: Okay. Well, yeah, that word late again.

Lucas Perry: Sorry, I arrived five minutes…

Maria Arpa: After the agreed time.

Lucas Perry: Thank you.

Maria Arpa: Because “late” can be a huge weapon for self-punishment. So the observation is, you came on the call five minutes after the agreed time, and we didn’t begin the interview until 15 minutes after the agreed time. So “unprepared,” “late,” and all of those things, they’re what you’re telling yourself. It’s part of the story, because from my perspective, since the example you gave includes me in that narrative, we didn’t have an agreement about what to expect at 10:00 AM your time, 3:00 PM my time. So how do I know you were unprepared? I’ve never done this before with you.

Lucas Perry: Okay.

Maria Arpa: Does that make sense? Can you see that you put a huge amount of judgment into what you thought was your observation, when in actual fact, who knows?

Lucas Perry: Yeah, so introducing other observations now here, is the observation that, I believe I pushed this back twice.

Maria Arpa: Once.

Lucas Perry: I pushed this back once?

Maria Arpa: Mm-hmm (affirmative).

Lucas Perry: Okay, so I pushed this back once. And then there was also this long period where I did not get back to you about scheduling after we had an initial conversation. So the feelings are things that need to be deconstructed. They’re around messiness, or disorganization, or not doing a good job, which are super synthetic and evaluative, and would need to be deconstructed further, but that’s what I have right now.

Maria Arpa: So on some level, you didn’t meet your own standards.

Lucas Perry: No, I did not meet my own standards.

Maria Arpa: Right. So on some level, you didn’t meet your own standards, and that’s giving rise to a number of superficial feelings, like you’re feeling bad and guilty, and all of those things. And I can only guess, I’ve told myself, but perhaps you’re feeling regret, possibly some shame, and I don’t know why the word loneliness comes up for me. Isolation or loneliness, disconnection, something around, that you’ve screwed up, and now you have to sit in there and now there’s some shame and some regret and some embarrassment. Embarrassment, that’s it. I’m telling myself the feeling is embarrassment.

Lucas Perry: I’m trying to focus and see how accurate these are.

Maria Arpa: Yeah, and I could be completely wrong. I can only guess.

Lucas Perry: Yeah, I mean, you’re trying to guide me towards explaining my own feelings.

Maria Arpa: Yeah, so does anything resonate with embarrassment, or shame, or regret, or mourning?

Lucas Perry: Mostly regret. This kind of loneliness thing is interesting, because I think it’s related to the feeling of… If there was sufficient understanding and compassion and self-love, then these feelings wouldn’t arise because there would be understanding. And so the loneliness is born out of the self-rejection of the events as they transpired. And so there’s this need for wholeness, which is a kind of self-love and understanding and compassion and knowledge. It’s just an aligned state of being, and I’ve become unaligned by creating this evaluative story that’s full of judgment. Because all this could happen and I could feel totally fine, right? That roughly captures the feelings.

Maria Arpa: Okay. And then it really resonated for me and I heard you say that this need for wholeness, and definitely for understanding and love, and a deep need for mutuality.

Lucas Perry: Yeah. There’s a sense that I can’t fully meet you when I haven’t fully accepted myself for what I have done. Or if there’s a kind of self consciousness around the events and how it has impacted you.

Maria Arpa: Yeah, so I’m telling myself that we made an agreement, and actually, part of it is a story that you’re telling yourself, and part of it has some reality in it, that you didn’t meet the terms of our agreement.

Lucas Perry: Yeah.

Maria Arpa: And then what that’s doing to you is, when you didn’t meet the terms of the agreement, I’m telling myself that now what happens to you is you worry, I think that’s the word. Ah, that’s it, maybe the feeling’s worry, or anxiety, that then any connection that we might have made is disconnecting or breaking, or we’re losing mutuality, because I may be now looking at you differently for not having met the terms of the agreement.

Lucas Perry: Yeah, and there’s also a level of self-rejection. And someone who is self-rejecting is in contradiction and cannot fully connect with someone else. If you find that you are unlovable or that you do not love yourself, then it’s like impossible to love someone else. So I think there’s a sense also, then, of, if you’re creating a bad narrative and that’s leading to a sense of self-rejection, then there’s an inability to fully be with the other person. So then I think that’s why you were pointing to the sense of loneliness, because this kind of self-rejection leads to isolation and an inability to fully meet the person as they are.

Maria Arpa: Yeah, so we went over the observation, we got to some of the feelings, we got to some of the needs, now do you have a request? And I think I heard you say you have a request for understanding.

Lucas Perry: Yeah, understanding and compassion, probably mostly from myself, but also from you.

Maria Arpa: Yeah, so I’m wondering if you’d like me to respond to that request now. Would that be helpful for you?

Lucas Perry: Yeah, sure.

Maria Arpa: So I guess when I hear your request for understanding and compassion, and that you’re also recognizing you need to give it to yourself, and that’s a relief for me that you know you need to give it to yourself, and yet on some level, we do have a situation over an agreement that was broken. I would love for you to be able to hear where I am. And I’m just wondering, would you be willing to hear where I am in that, to support you in your request?

Lucas Perry: Yeah, and probably some need for love in the sense of, I mean, there are different kinds of love. So whatever kind of love exists between you and I, as human beings that is coworker love, or colleague love, or whatever kind of relationship we have, I don’t know how to explain our relationship, but whatever kind of love is appropriate in that context.

Maria Arpa: So I guess where I’m coming from, is, I feel deeply privileged and honored to be asked to do this podcast. I’ve heard some of your other podcasts, and I think they’re masterpieces. So to be invited to do this at all and for us to have met and for you to have actually said, “Yeah, let’s go ahead and do this,” went a long way for me to believe in myself as well.

So you may be having your own moment of self punishment, and so did I. “What happens if he doesn’t like me at the end of our interview, and doesn’t want to do it?” And in terms of our agreement, as far as I’m concerned, you got online roughly at the time we said, and we didn’t have an agreement about when the interview would then start, and so in terms of being alongside you while you made preparations and whatever, actually, it helped me to see you as human. So it actually increased my love for you.

Lucas Perry: Oh, nice.

Maria Arpa: Because I saw in that first meeting, you were kind of interviewing me and seeing if there was suitability for a podcast. And of course, I know I know my stuff, right. That’s not the issue. But there was a sense of me wanting to be on best behavior. But now I come to this call, and there you were just being human and expressing it, and I was able to say a few things to you. And I felt that 15 minutes was very connecting, it was very connecting for me. And so I just wonder when you hear that, does it change anything for you?

Lucas Perry: Yeah, I feel much better. I feel more capacity to connect with you and I appreciate the honesty and transparency of how you felt with regards to the first time we talked, where, because I couldn’t find any of your content online, I didn’t really know anything about you, so we had this first conversation where you felt almost as if there was a kind of evaluative relationship going on, which was then, I guess, dissolved by having this conversation in particular, but also the beginning of the podcast where I was being human and my microphone wasn’t working, and my electricity was out this morning and things weren’t working out. So yeah, I appreciate that. I feel much better and warm, and have more capacity for self-love and connection. So thanks.

Maria Arpa: Yeah.

Lucas Perry: I think that means the NVC was successful.

Maria Arpa: Yeah. And then just to add one thing, during the interview, I heard you say something like, “I don’t know what my requests would be because the opportunity for us to connect again like this,” or whatever, you said something about that we probably wouldn’t speak to each other again in a hurry. I actually felt really sad when I heard that. I felt such sadness, “Oh, no, I’ve connected with Lucas now.” So I hope that there’ll be other opportunities to just chat or stay in touch or whatever, because there’s something about you that I feel really resonates. And I love where you’re coming from, and I love what you’re trying to do. It’s really important.

Lucas Perry: Well, thank you, that’s really sweet. I appreciate it.

Maria Arpa: Thank you. And look at that, we’re bang on time.

Lucas Perry: So that means you escaped question eight, which is my criticisms of NVC.

Maria Arpa: We could come back and add that on another time, but I can’t do it now. If you want to do another bit, we can do another bit. I’m really happy to do that.

Lucas Perry: Yeah. Great. So as we wrap up, if people want to follow you or to learn more about NVC, or the Center for Nonviolent Communication, where are the best places to follow you or get more information or check out more of Marshall’s work?

Maria Arpa: So obviously, CNVC has a website, which is CNVC, Center for Nonviolent Communication, .org. That’s your first port of call. Marshall’s books are published by Puddle Dancer Press. I know Meiji reasonably well, and he’s a really wonderful guy, so buy some books from Puddle Dancer Press, because Marshall’s books are amazing. There are 700 NVC trainers across the world, and you can find those on the website if you go to the right bit and search, so if you want to find someone local in your area, you can; they all work differently and specialize in different things. If you put NVC into Facebook, you will find countless NVC pages. And if you’re looking for me, Google my name, Maria Arpa, and I will come up. Thank you.

Lucas Perry: All right. Thanks, Maria.

Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism

 Topics discussed in this episode include:

  • The projects of awakening and growing the wisdom with which to manage technologies
  • What might be possible from embarking on the project of waking up
  • Facets of human nature that contribute to existential risk
  • The dangers of the problem solving mindset
  • Improving the effective altruism and existential risk communities

 

Timestamps: 

0:00 Intro

3:40 Albert Einstein and the quest for awakening

8:45 Non-self, emptiness, and non-duality

25:48 Stephen’s conception of awakening, and making the wise more powerful vs the powerful more wise

33:32 The importance of insight

49:45 The present moment, creativity, and suffering/pain/dukkha

58:44 Stephen’s article, Embracing Extinction

1:04:48 The dangers of the problem solving mindset

1:26:12 Improving the effective altruism and existential risk communities

1:37:30 Where to find and follow Stephen

 

Citations:

Stephen’s website

Stephen’s teachings and courses

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today, we have a special episode for you with Stephen Batchelor. Stephen is a secular and skeptical Buddhist teacher and practitioner with many years under his belt in a variety of different Buddhist traditions. You’ve probably heard often on this podcast about the dynamics of the race between the power of our technology and the wisdom with which we manage it. This podcast is primarily centered around the wisdom portion of this dynamic: how we might cultivate wisdom, and how that relates to the growing power of our technology. Stephen and I get into discussing the cultivation of wisdom, what awakening might entail or look like, and also his views on embracing existential risk and existential threats. As for a little bit more background, we can think of ourselves as contextualized in a world of existential threats that are primarily created by the kinds of minds that people have and how we behave, particularly how we decide to use industry and technology and science, and the kinds of incentives and dynamics that are born of that. And so cultivating wisdom, here in this conversation, is about seeking to understand how we might better gain insight into and grow beyond the worst parts of human nature, things like hate, greed, and delusion, which motivate and help to cultivate the manifestation of existential risks. The flipside of understanding the ways in which hate, greed, and delusion motivate and lead to the manifestation of existential risk is also uncovering and being interested in the project of human awakening and developing into our full potential. So, this just means that whatever idealized version of yourself you think you might want to be, or that you might strive to be, there is a path to getting there, and this podcast is primarily interested in that path, how it relates to living in a world of existential threat, and how we might relate to existential risk and its mitigation. This podcast contains a bit of Buddhist jargon. I do my best to define those words as they come up. I’m not an expert, but I think that these definitions will help to bring a bit of context and understanding to some of the conversation.

Stephen Batchelor is a contemporary Buddhist teacher and writer, best known for his secular or agnostic approach to Buddhism. Stephen considers Buddhism to be a constantly evolving culture of awakening rather than a religious system based on immutable dogmas and beliefs. Through his writings, translations and teaching, Stephen engages in a critical exploration of Buddhism’s role in the modern world, which has earned him both condemnation as a heretic and praise as a reformer. And with that, let’s get into our conversation with Stephen Batchelor. 

Thanks again so much for coming on. I’ve been really excited and looking forward to this conversation. I just wanted to start it off here with a quote by Albert Einstein that I thought would set the mood and the context. “A human being is a part of the whole called by us universe, a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest, a kind of optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest to us. Our task must be to free ourselves from this prison by widening our circle of compassion to embrace all living creatures, and the whole of nature and its beauty. Nobody is able to achieve this completely. But the striving for such achievement is in itself a part of the liberation and a foundation of inner security.”

This quote to me is compelling because, one, it comes from someone who is celebrated as one of the greatest scientists who have ever lived. In that sense, it’s a calling for the spiritual journey, it seems, from someone whose credentials carry weight for people who are skeptical of something like the project of awakening, or whatever a secular Dharma might be or look like. I think it sets up the project well. I mean, he talks about here how this idea of separation is a kind of optical delusion of his consciousness. He sets it up as the problem of trying to arrive at experiential truth and this project of self-improvement. It’s in the spirit of this, I think, of seeking to become and live an engaged and fulfilled life, that I am interested and motivated in having this conversation with you.

With that in mind, the problem, it seems, that we have currently in the 21st century is what Max Tegmark and others have called the race between the power of our technology and the wisdom with which we manage it. I’m basically interested in discussing and exploring how to grow wisdom and about how to grow into and develop full human potential so that we can manage powerful things like technology.

Stephen Batchelor: I love the quote. I think I’ve heard it before. I’ve come across a number of similar statements that Einstein has made over the years of his life. I’ve always been impressed by that. As you say, this is a man who’s not regarded remotely as a religious or a spiritual figure. Yet, obviously, a highly sensitive man, a man who has plumbed the depths of physics in a way that has transformed our world. Clearly, someone with enormous insight and understanding of the kind of universe we live in. Yet, at the same time, in these sorts of passages, we realize that he’s not just the stereotyped, detached scientist separated out from the world looking at things clinically and trying to completely subtract his own subjectivity.

This, I think, is often the problem with scientific approaches. The idea is that you have to get yourself out of the way in order to somehow see things as they really are. Einstein breaks that stereotype very well and recognizes the need that if we are to evolve as human beings and not just as scientists who get increasingly clear and maybe very deep understandings into the workings of the universe, something else is needed. Of course, Einstein himself does not seem to really have any kind of methodology as to how that might be achieved. He seems to be calling upon something he may consider to be an innate human capacity or quality. His words resonate very much in terms of certain philosophies, certain spiritual and religious traditions, but we don’t really see any kind of program or practice that would actually lead to what he recognizes to be so crucial.

I found the final comment he makes a bit deflating: he seems to think it has to do with inner security, which is a highly subjective, and I would think rather limited, goal to achieve, given what he’s just made out as his vision.

Lucas Perry: Yeah, that’s wonderfully said. You can help unpack and show the depths of your skepticism, particularly about Buddhism, but also your interest in creating what you call a secular Dharma. We’re on a planet with ancient wisdom traditions, which have a lot to say about human subjectivity. Einstein is setting up this project of seeing through what he says and takes to be a kind of delusion of consciousness, the sense of experiencing oneself and thoughts and feelings as something separate and this restricting us to caring most about our personal desires and to affection.

I mean, here, he seems to be very explicitly tapping into concepts that have been explored in Buddhism, like non-self and emptiness and non-duality. Hey, this is post-podcast Lucas here, and I just wanted to try to explain a few terms that were introduced here, like “non-self,” “emptiness,” and “non-duality.” I’ll do my best to explain them, but I’m not an expert, and other people who think about this kind of stuff might have a different take or give a different explanation. So, I think it’s best to first think about the universe, over its 13.7 billion years, as an unfolding continuous process, and contextualized within that unfolding is the process of evolution which has led to human beings. There’s this deep, grounded connection of the human mind and human nature, and just being human, with the very ground of being and the unfolding of everything. Yet in that unfolding there is this construction of a dualistic world model, which is very fitness enhancing, where you are constructing this model of self and a model of the world. This self-other dualistic conceptual framework born of this evolutionary process is fitness enhancing, it’s helpful, and it’s useful, yet it is a fabrication, an epistemic construction which is imposed upon a process for survival reasons. And so non-duality comes in and simply rejects this dualistic construction and says things are not separate, things are not two: one, undivided, without a second. This means that once one sees all this dualistic construction and fabrication as it is, then one enters what I might say is something more like a don’t-know mind, where one isn’t relying on this conceptual dualistic fabrication to ultimately know, or one doesn’t take it as what reality ultimately is, as divided into all of these things like self and other and tables and chairs and stars and galaxies. Non-self and emptiness are both very much related to this.

Non-self is the view that the self is also this kind of construction or fabrication, which under experiential and conceptual analysis just falls apart and reveals that there is no core or essence to you, that there is nothing to find, that there is no self, but merely the continual unfolding of empty, ephemeral, conditioned phenomena. Emptiness here means that the self and all objects that you think exist are empty of intrinsic existence and are merely these kinds of ephemeral appearances based on causes and conditions; when those causes and conditions no longer sustain a thing’s appearance in its current form, the thing dissolves. In non-duality there’s this sense of no coming, no going; there’s no real start, beginning, or end to anything; there is this continual unfolding process where something like birth and death are abstractions or constructions imposed on a non-dual continuous process. And so the claims of non-self, emptiness, and non-duality are all both ontological claims about how the universe actually is and experiential claims about how we can shift our consciousness into a more awake state, where we have insight into the nature of our experience and into the nature of things, and we’re able to shift into a clear, non-conceptual seeing of something like non-dual awareness, or emptiness, or non-self. This might entail something like noticing the sense of self becoming an object of witnessing: there’s no longer an identification with self, and so there’s space from it. There might eventually be a dropping away of the sense of self, where all that’s left is consciousness and its contents without a center, where there isn’t a distance between witnesser and what is perceived. All there is is consciousness, and everything is perceived infinitely close, where everything is basically just made of consciousness and where consciousness is no longer structured by this dualistic framework.

And so a layer of fabrication drops away and there’s just consciousness and this deep sense of interconnectivity and being. This is what I think Einstein is pointing to when he says that our experience of ourselves as separated from the rest of the universe is a kind of optical delusion of our consciousness. I think he is pointing towards the constructed sense of self and the dualistic fabrication of a self-other world model populated by objects and things and other people, and how we buy into these constructions as a kind of ultimate representation of how things are, where there are all these things with intrinsic, independent existence, with a kind of essence, rather than this non-dual, undifferentiated, unfolding, continuous process, which can be said to be neither same nor different, of which there can neither be said to be a self nor not be a self. I think this is what he is pointing in the direction of, and why I point out that there are other wisdom traditions which have been thinking about and practicing and cultivating these kinds of insights and awareness for many years. So, back to the conversation. As you said, he doesn’t have a practice or a system for arriving at an experiential understanding of these things. Yet, there are traditions which have long studied and practiced this project.

Stephen Batchelor: Yes, this is absolutely correct. Myself and many of my peers and colleagues and friends have spent their lives exploring these wisdom traditions. In my own case, this has been various forms of Buddhism primarily. I think we also find these traditions within our own culture. I think we find something very similar in the Socratic tradition. We find something likewise in the Hellenistic philosophies, which also recognize that human flourishing, which is a term I very much like, is essentially an ethical practice. It’s a way of being in which we take our own subjective assumptions to task.

We don’t just assume that everything we think and feel is the way things actually are, but we begin to look more critically. We pursue what Socrates calls an examined life. Remember, an unexamined life is not worth living, he said. Then, what is this examined life? Perhaps like yourself and others, I found, in a way, more richness in Asian traditions because they don’t just talk about these things, but they have actual living methodologies and practices that, if followed, can lead us to a radical change of mind and can begin to unfold different layers of human experience that are often blocked, somehow ignored, and cut off from what we experience from moment to moment.

Lucas Perry: Yeah, exactly. There’s this valid project then of exploring, I think, the internal and the subjective point of view in a rigorous way, which leads to a project of something like living an examined life. From that perspective, one can come to experiential kinds of wisdom, and I think can get in touch with kinds of skillfulness and wisdom which an overreliance, or a sole reliance, on the conceptualization of the dualistic mind would fail at: things like compassion, or discovering something like Buddha nature, which I had been very skeptical of for a long time, but less so now, and heart-mind, and heart wisdom.

And that awakening is, I think, a valid project, and something that is real and authentic. I think that that’s what Einstein helps to explain. His credentials, I think, helped to beef this up a bit for people who may be skeptical of that project. I mean, I view this partially as, for thousands and thousands of years, people have been struggling just to meet their basic needs. As these basic needs, even just material needs, keep getting met, we have more and more space for authentic human awakening. Today, in the 21st century, we’re better positioned than ever to have the time, space and the information to live deeply and to live an examined life and to explore what is possible of the depths of human compassion and well-being and connection.

Think about the best moment of the best day of your life, and whether it’s possible to stabilize in that kind of introspection and compassion and way of being.

Stephen Batchelor: I broadly go along with exactly what you’re saying. It almost seems self-evident, I think, for those of us who have been involved in this process for a number of years. On the other hand, it’s very easy to sort of talk about emptiness and non-duality and Buddha nature and so on. What really makes the difference is how we actually internalize those ideas, both conceptually, since I think it is important that we have a clear, rational understanding of what these ideas convey, and also, of course, through actual forms of spiritual practice, by performing spiritual exercises, as we find already in the Greeks, that will hopefully lead to significant changes in how we experience life and experience ourselves.

What we’ve said so far is still leaving this somewhat at a level of abstraction. I’ve been involved in Buddhism now full time for the last 45 years. If I’m entirely honest with myself, I have to acknowledge that at many levels, my consciousness seems to be much the same. There are moments in my practice in which I’ve gained what I would consider to be insights. I like to think that my practice of Dharma has also made me more sensitized, more empathetic to the needs of others. I like to think that I’ve committed myself to a way of life in which I put aside the conventional ambitions of most people of my generation.

Yet, I can also see that I still suffer from anxieties and high moods and low moods and I get irritated and can behave very selfishly at times. I see, in many ways, that what this practice leads me to is not a transcendence of these limiting factors that Einstein refers to, but let’s say a greater clarity and a greater humility in acknowledging and accepting these limitations. That I think is, if anything, where the practice of awakening goes to. Not so much to gaining a breakthrough into some transcendental reality, but rather to gain a far more intimate and real encounter with the limitations of one’s own experience as a human being.

I think we can sometimes lose touch with the need for this moment-to-moment humility, this recognition that we are, I think, to a considerable degree, built as biological organisms to maintain a certain kind of consciousness that will, I suspect, be with us largely until we die. I would say the same also about the Buddhist teachers that I’ve met over the decades: however insightful their teachings may be, and however fine examples they are of what a human life can be, I have spent a lot of time with them on a day-to-day basis, with Tibetan lamas and with Zen masters, and I’ve got to know them quite well.

I discover not so much a person who is almost, as it were, out of this world, but rather someone who still carries with them the same kinds of human traits and quirkiness and has good days and bad days like the rest of us. I would be cautious, in a way, about setting up another kind of divide, another kind of duality, between the unenlightened and the enlightened. One of the things I like very much about Zen is that it’s quite conscious of this problem. One of my favorite citations is from the Platform Sutra of the Sixth Patriarch, Hui-neng, who says, “When an ordinary person becomes awakened, we call them a Buddha. When a Buddha becomes deluded, we call them an ordinary person.”

This is a way of thinking that is perhaps closer to Taoism than it might be to the Indian Buddhist traditions, which, to me, tend to operate rather more explicitly within the domain of there being enlightenment on the one hand and delusion on the other, awakening on the one hand and non-awakening on the other. Then there’s a kind of step-by-step path that leads from the one to the other, from unenlightened to enlightened. The Zen tradition is suspicious of that kind of mental description, of that kind of frame, and recognizes that awakening is not something remote or transcendent. Awakening is a capacity that is open to us in each moment.

It’s not so much about gaining insight into some deep ontology, into the nature of how things really are. It’s understood far more ethically. In other words, if I respond to life in a more awake way, that can occur for me, as well as for the Buddha or anyone else, in any given moment. In the next moment, I may have slipped back into my old neurotic self and recognize, in fact, that I don’t respond appropriately to the situations I face. I’m a little bit wary of this language. A lot of the language we find in the Indian Buddhist traditions, which we find also in Advaita Vedanta and so on, does tend to set up a kind of polarity. That’s kind of unavoidable, because given what we’re talking about, we need to have some idea of what it is that we are aspiring to achieve.

The danger is we set up another dualism and that’s problematic. I feel that we need a discourse that is able to affirm awakening and enlightenment in the midst of the every day, in the midst of the messiness of my own mind now. The challenge is not so much to become an enlightened person, but it’s to live a more awake and responsive existence in each situation that I find myself dealing with in my life. At the moment, my practice is talking to you, is having this conversation. What matters is not how deeply I may have understood emptiness, but how appropriately, given our conversation, I can take this conversation forward in a way that may be a benefit to others.

I might even learn something myself in this conversation; maybe you will too. That’s where I would like to locate the idea of awakening, the idea of enlightenment: in how I respond to the given situation I find myself in at any moment.

Lucas Perry: Yeah. One thing that comes to mind that I really like and I think might resonate with you, given what you said, I think, Ram Dass said something like, “I’ve become a connoisseur of my neuroses.” I think that there’s just this distinction here between the immersion in neuroses, and then this more awake state with choice where you can become a connoisseur of your neuroses. It’s not that you’ve gotten rid of every bad possible thought that you can have, but that there’s a freedom of non-reactivity in relation to them. That gives a freedom of experience and a freedom of choice.

I think you very much well set up a pragmatic approach to whatever awakening might be. Given the project of living in a world with many other actors with much pain and with much suffering and with much ignorance and delusion, I’m curious to know how you think one might approach spreading something like a secular Dharma. There’s kind of two approaches here, one where we might make the wise more powerful and one where we might be making the powerful more wise.

If listeners take anything away today, in terms of wisdom, how would you suggest that wisdom be shared and embodied and spread in the world, given these two directions of either making the wise more powerful or making the powerful more wise?

Stephen Batchelor: Okay. To answer that question, I think, I have to maybe flesh out more clearly what I understand by awakening. My understanding of awakening is rooted in the conclusion to the Buddha’s first discourse, or what’s regarded as the Buddha’s first discourse. There, he says very clearly that he could not consider himself to be fully awake until he had recognized, performed, and mastered four tasks. The first task is that of embracing life fully. The second task is about letting reactivity be, or letting it go; I think letting it be is probably better. The third is about seeing for oneself the stopping of reactivity, or seeing for oneself a nonreactive space of mind. Then, from that nonreactive space of mind, being able to respond to life in such a way that you actually open up another way of being in the world.

That fourth task is called creating a path, or actualizing a path. That path is understood not just as a spiritual path, but as one that engages how we think and how we speak and how we work and how we are engaged with the world. What’s being presented here as awakening is not reducible to gaining a privileged insight into, say, the nature of emptiness or into the nature of the divine, or into something transcendent or into the unconditioned. The Buddha doesn’t speak that way. These early Buddhist texts on which I base what I’m saying have somehow been relegated to the sidelines. Instead, we find a Buddhist tradition today speaking of awakening, or sometimes they’ll use the word enlightenment, as basically a kind of mystical breakthrough into seeing a transcendent or truer reality, which is often called an absolute truth or an ultimate truth.

Again, these are words the Buddha never used. In my own approach, I’m very keen to try to recover what seems to me to have been sidelined, and I admit that this is my own project. I don’t think it’s a widespread view. I find this very, very helpful, because awakening is now understood not as a state that some people have arrived at and other people haven’t, which I think sets up too harsh a split, a duality, but rather, awakening begins to be understood as a process. It begins to be understood as part and parcel of how we lead our lives from moment to moment in the world. We can look at this in four phases: embracing life, letting our reactivity be, seeing the stopping of reactivity, and then responding in an appropriate way.

In practice, this process is going on so rapidly that it’s effectively a single task. It’s a single task with what we might call four facets. Here, I would come to what you talk about as wisdom. Wisdom, in this sense, is not reducible to some cognitive understanding. It has to do with the way in which we engage with life as a whole, as an embodied, enacted person. Again, it reflects somewhat the Zen quotation I already mentioned. It has to do with the whole of ourselves, the whole of the way we are in the world as an embodied creature.

I feel that to make the wise more powerful, by wise I would mean people who have actually given the totality of their life, not just in terms of years, but in terms of the varying skill sets they have as human beings: their emotional life, their intuitive life, their physical life, their cognitive life. All of these elements begin to become more integrated; the person becomes less divided between the spiritual part of themselves and the material part of themselves, for example.

The whole of one’s being is then drawn into the unfolding of this process within the framework of this fourfold task. I feel that if we are to make the powerful more wise, one would require the powerful not just to make changes in how they might think about things, or even to gain mystical insights, but actually to have the courage to embark on another way of being in the world at all levels. That is a much greater challenge, I feel. I think we also need the humility to recognize that although, as you say, we do have in the 21st century access to traditions of practice and philosophies, we have leisure, we have the times and places to pursue these sorts of practices, we should be wary of the hubris of thinking that, by mastering these different approaches, we can thereby be in a far better position to solve the problems that the world presents to us.

I think that may be true in a very general sense. But I feel that what’s really called for is a fundamental change in our perspective, with regard to how we not only think of ourselves, but how we actually behave in relationship to the life that we share on this planet with other beings and this planet that we are endangering through our activities. The other dimension, of course, is this is not going to be something that any particular individual alone will be able to accomplish but it requires a societal shift in perspective. It requires communities. It requires institutions to seek to model themselves more on this kind of awake way of living, so that we can more effectively collaborate.

I think the Buddhists and the Advaitists and the Sufis and the Taoists and so on certainly have a great deal to offer to this conversation. I feel that if we’re really to make a difference, their insights have to be incorporated into a much wider and more comprehensive rethinking of the human situation. That’s not something I can see taking place in a short term. I feel that the degree of change required will probably require generations, if I’m really honest about this.

Lucas Perry: You see living an awake life to be something more like a way of being, a way of engaging with the world, a kind of non-reactivity and a freedom from reactivity and habitual conditioned modes of being and a freedom to act in ways which are ethical and aligned and which are conducive to living an examined and fulfilling life.

Stephen Batchelor: Exactly. I couldn’t have put it better myself, Luke. That’s kind of my perspective. Yes.

Lucas Perry: I wanted to put up a little bit of a defense here of insight because you said that it’s not about pursuing some kind of transcendent experience. Having insight can be such a paradigm shift that it is seemingly transcendent in some way. Or it can also be ordinary and not contain something like spiritual fireworks, but still be such a paradigm shift that you’ll never be the same ever again. I don’t expect you’ll disagree with this. In defense of insight into impermanence, it’s like our brains are hooked up to the world where there’s just all this form and sense data and we’re engaging with many things as if they were to bring us lasting satiation and happiness but they can never do that.

Insight into impermanence, I mean, impermanence, conceptually, is so obvious and trivial to everyone. Everything is impermanent; everyone understands this. But if, at a deep, intuitive, experiential, and non-conceptual level, one embodies impermanence, one doesn’t grasp or interact with the world in the same way, because you just can’t, because you see how things are. Then similarly, if one is living their life from within immersion in conceptual thought and ego identification, there is the capacity to drop back into witnessing the self, for that to dissolve and drop away, and then for all that remains to be consciousness and its contents without a center. Hey, this is post-podcast Lucas again, and I just wanted to unpack a little bit here what I meant by “immersion in conceptual thought” and “ego-identification.” By ego identification I mean this kind of immersion in and identification with thought, where one generates and reifies the sense of self as being in the head, as the haver of an experience, as someone having an experience in the head, thinking the thoughts and executing all of the commands of the mind and body. And this stands in distinction to the capacity to unhook awareness from that process and to witness the self as an object of perception, an object to be witnessed, rather than as the center of identity, and for that to then create a distance between the foundation of witnessing and the process of being identified with the ego, or reifying and constructing an ego, which is the beginning of shifting towards a perception of consciousness and its contents without a center. So, back to the conversation. That kind of insight, and stabilization in it, can be a foundation for loving-kindness and openness and connection.

People experience an immense sense of relief, like, oh, my God, I was a poor little self in this world, and I thought I was going to die. Now, I’ve had this insight into non-self, which can be practiced and stabilized, even in a normal day-to-day life, through the practices of, for example, Dzogchen and Mahamudra. This is just my little defense of insight as, I think, also offering a lot of freedom and capacity for, like, an authentic shift in consciousness, which I think is part of awakening, and that it’s likely not just a way of being in the world. Do you have any reactions to that?

Stephen Batchelor: I have plenty of reactions to that.

Lucas Perry: Yeah.

Stephen Batchelor: I don’t disagree with you, clearly. Of course, there are moments in people’s lives, whether they’re practicing Buddhism or whatever it is, and sometimes they’re not doing any kind of formal spiritual practice whatsoever, when life itself is a great teacher and shows them something about themselves or about life that hits them very powerfully and does have a transformative effect. Those moments are often the keynotes that define how we then are in the world. Perhaps my work has tended to be, recently at least, somewhat of a reaction against the over-privileging of these special moments.

And an attempt to recover a much more integrated understanding: that being awake is about being awake moment to moment in our daily lives. Let me give you a couple of examples. With hand on heart, I can say that I have had one experience which would fit your definition, probably, of what we might call a mystical experience. This occurred when I was a young Tibetan Buddhist monk; I was probably 22 or 23 years old. And I was living in Dharamsala, where the Dalai Lama has his residence. I was studying Tibetan Buddhism. I was very deeply involved in it. One day, I was out in the forest, near the huts where I lived, and I went to get some water. Coming back to my hut with a bucket full of water, I suddenly was stopped in my tracks by this extraordinary realization that there was anything at all, rather than just nothing, a sense of total and utter astonishment that this was happening.

It was a moment that maybe lasted a few minutes in all of its intensity. What it revealed to me was not some ultimate truth, and certainly not anything like emptiness or pristine awareness. What it revealed to me was the fundamentally questionable nature of experience itself. That experience, and here I will fully accord with what you have said, has changed my life, and it continues to do so now. Yes, I do feel very strongly that deep personal experience of this sort of nature can have a profoundly transformative effect on the way we then inhabit the value system, what we regard as really being important in life.

As for impermanence, for me, the most effective meditations on impermanence were those on death. Again, this is from my Tibetan Buddhist training. Impermanence is of key significance regarding the fact that I am impermanent, that you are impermanent, that the people I love are impermanent. When I was a young monk, every day, I had to spend at least 30 minutes meditating on what they call, in Tibetan, chiwa’i mitagpa: the impermanence which is death. You contemplate reflectively the certainty of death, the uncertainty of its time, and the fact that since death is certain and its time is uncertain, then how should I live this life? The paradox I found with death meditation was not that it made me feel gloomy or pessimistic or anything like that.

In fact, it had the very opposite effect. It made me feel totally alive. It made me realize that I was a living being. This kind of meditation is just one I did for many, many years. That likewise did have a very transformative effect on my life, and it continues to do so. Yes, I agree with you. I think that it is important that we experience ourselves, our existence, our life, our consciousness, from a perspective that suddenly reveals it to be quite other than what we had thought. It’s more than just as we thought; it’s also as we felt. Once these kinds of insight become internalized and they become part and parcel of who we are, that, I feel, is what contributes very much to enabling this being-in-the-world version of awakening.

In the Korean Zen tradition in which I trained, they used to speak about sudden awakening followed by gradual practice. In other words, they understood that this process we’re involved in, what we loosely call the spiritual path or awakening, includes moments of deep insight that may be very brief in terms of their duration. But if those moments are not somehow followed through with a gradual, moment-to-moment commitment to living differently, then they can have relatively little impact. The danger, I feel, of making too big a deal out of these moments of insight is that they come to be regarded as the summum bonum of what it’s all about.

Whereas I don’t think it is; I think that they are moments within a much richer and more complex process of living. I don’t think they should be somehow given excessive importance in that overall scheme of things.

Lucas Perry: That makes a lot of sense. This very much rings true from the conversation you had with Sam Harris and the back and forth that you had there: you put a certain kind of emphasis on the teachings. Many of the things that I might respond with over the next hour, you would probably agree with. You see them not as the end goal; you would deemphasize them in relation to living an authentic, fulfilled, awakened life as a mode of being.

Stephen Batchelor: I think that is broadly correct. I honor the insights that come from all traditions, really. I don’t think Buddhism has a monopoly on these things at all.

Lucas Perry: You’re talking about death. I’ve been listening to a lot of Thích Nhất Hạnh recently. I mean, even insight will change one’s relationship with death. He talks a lot about no coming, no going, no birth, no death, and the wave-like nature of things. Insight into that, plus this daily Mahamudra practice I find of dropping back into pristine awareness, or what I think in Dzogchen they call rigpa, and glimpses of non-duality and the nature of things. All this coming together can lead to a very beautiful life where I feel like these peak spiritual experience moments are a part of the ordinary, part of the mystery, and part of checking and seeing how things are.

I guess, just the last thing I’m trying to emphasize here is that I think the project does lead to a totally new way of being new paradigm shifts much more well-being and capacity for loving-kindness and discovering parts of you that you never knew existed or that the possible. Like, if you’ve spent your whole life using conceptual thought to know everything and you’ve been within ego identification, and you didn’t know that there was anything else, changing from that and arriving at something like heart mindfulness changes the way you’re going to live the whole rest of your life and with much greater ease and well-being.

Hey, it’s post-podcast Lucas back again, and I just wanted to define a few words here that were introduced that might be interesting and helpful to know. The first two are pristine awareness and rigpa; rigpa is a Dzogchen word for this. I think they both point toward the same thing, which is this original ground of consciousness: this pure witnessing, or this original wakefulness of personal experience, in which all content and form, perceptions, and even the sense and experience of self appear in relation to this witnessing or pure knowing, which is the ground of consciousness. One can be caught up in ego identification, or can be obscured and lost in thought, or anything like this, but this pristine awareness, or rigpa, or this witnessing is always there underneath whatever form and phenomena are obscuring it. This is related to glimpses; I mentioned glimpses here, and these are pointing-out instructions for noticing this aspect of mind. And if that’s something that you’re interested in doing, I highly recommend the work and books of Loch Kelly. He teaches very skillful pointing-out instructions which, I think, help to demonstrate and point towards pristine awareness or rigpa, which he calls awake awareness. And I also brought up heart-mind here. Heart-mind is something that one arrives at by unhooking from conceptual thinking and dropping awareness down into the center of the chest, where one finds non-conceptual knowing, effortless loving-kindness, a sense of okay-ness, non-judgment, and a place of continuous intuition from which to operate, which can use conceptual dualistic thought but doesn’t need to, and also understands when conceptual dualistic thought is useful and when it is not. So, alright, back to the episode.

Stephen Batchelor: I don’t disagree with what you have been saying. I feel somehow that we haven’t really found the right language to talk about it in. We’re still falling back on ideas that are effectively jargon terms, in many ways, that people who are involved in Buddhism and Eastern spirituality will understand. If you haven’t had exposure to those traditions, a lot of this, I think, will sound a little bit obscure, maybe very tantalizing. So many of these words are never really very clearly defined.

I feel that there’s a risk there that we create a kind of spiritual bubble, in which a certain kind of privileged group of initiates, as it were, are able to discuss and talk around these things. It’s a language that, as it stands at present, I think, excludes a great many people. This is what brings me to my other point. Again, you were talking in terms of well-being, in terms of living at ease, in terms of being more fulfilled, but what does that mean? Words that haven’t yet come up in our conversation are those of imagination and those of creativity; we haven’t touched upon the arts.

I’m always rather surprised, to be honest, in these kinds of discussions to hear very little about the arts and imagination and creativity. For myself, my practice is effectively my art. I do work as an artist; that’s been my vocation since I was a teenager. It got sidetracked by Buddhism for about 20 years. To me, with the creative process, you were saying that we come to experience ourselves in ways we’ve never suspected before, that we have a much less central insistence on our ego, that we’re less preoccupied with concepts. This is all very good. To me, that’s only, in a way, establishing a foundation or a ground for us to be able to actively and creatively imagine another world, another future, another way in which we could be.

For me, ethics is not about adhering to certain precepts; it’s about becoming the kind of person one aspires to be. And you can extend that socially as well: what kind of society do I wish there to be on this earth?

Lucas Perry: There’s this emphasis you come at this with, it’s about this mode of being and acting and living an ethical life, which is like awakened being. Then, I’m like, well, the present moment is so much better. There’s this sense where we want to arrive in the present moment without being extended into the past or the future, experientially, so that right now is the point. Also, you’re emphasizing this way of being where we’re deeply ethically mindful about the kind of world that we’re trying to bring into being.

I just want to, as we pivot into your article on extinction, unify this. The present moment is the point and there’s a way to arrive in it so fully and with such insight that you’re tapping into depths of well-being and compassion that you always wish you would have known were there. Also, with this examined and ethical nature, where you are not just sitting in your cave, but you’re helping to liberate other people from suffering using creativity to imagine a better world and helping to manifest moments to come that are beautiful and magnificent and worthy of life. That doesn’t have to mean that you’re an anxious little self caught up in your head worried about the future.

Stephen Batchelor: I don’t actually believe in the present moment.

Lucas Perry: Okay.

Stephen Batchelor: Quite seriously, nor does Nagarjuna. I’ve never been able to find the present moment; I’ve looked and looked and looked for a long time.

Lucas Perry: It’s very slippery, it’s always gone when you check.

Stephen Batchelor: Arguably, it’s only a conceptual device to basically describe what is neither gone nor what is yet to come. There’s no point, there’s no actual present moment, there is only flux and process and change. It’s continuous. It is ongoing. I’m a little bit wary of actually even using the term present moment. I would use it as a useful tool in meditation instruction, come back to the present moment, everyone knows pretty much what that means.

I wouldn’t want to make it into an axiom of how I understand this process as something highly privileged and special. It’s to me more important to somehow engage with the whole flow of my temporality with everything that has gone, with everything that is to come and I’d rather focus my practice really within that flow, rather than singling out any particular moment, the present or any other, as having a kind of privileged position. I’m not so sure about that.

Lucas Perry: Okay.

Stephen Batchelor: Also, creativity, I don’t think, is just some sort of useful way whereby we might think of a better world in the future. To me, creativity is built into the very fabric of the practice itself. It’s the capacity in each moment to be open to responding to this conversation, for example, in a way that I’m not held back by my fears and my attachments, and so on and so forth, but have found in this flow an openness to thinking differently, to imagining differently, to communicating, to embodying what I believe in ways that I cannot necessarily foresee, that I can only work towards.

That’s really where I feel most fully alive. I’d much rather use that expression, a sense of total aliveness. That’s really what I value and what I aspire to: what are the moments in which I really feel that I’m totally alive? That’s what is of such great value to me. I’m also not sure that by doing all these practices you find deep happiness and so forth and so on. I would not say that for myself. I’ve certainly experienced periods of great sadness, sometimes of something close to depression, anxiety. These, again, are part and parcel of what it is to be human.

I like Ram Dass’s expression of becoming a connoisseur of one’s neuroses. I think that’s also very true. I’m afraid that the language of enlightenment and so forth often tends to give you the impression that if you get enlightened, you won’t feel any of these things anymore. Arguably, you’ll feel them more acutely, I think; particularly as we talk about compassion or loving kindness or bodhichitta, we are effectively opening ourselves to a life of even greater suffering. When we truly empathize with the suffering of maybe those close to us, or the suffering that we are inflicting upon the planet, this is not something that is going to make us feel happy or even at ease. Hey, it’s post-podcast Lucas here, and I just wanted to jump in to define a term here that Stephen brings in, which is bodhicitta: a mind that is striving for awakening or enlightenment for the benefit of all sentient beings, so that they also achieve freedom, awakening, and liberation from suffering. Alright, back to the conversation.

I feel that these kinds of forms of compassion are actually inseparable from experiencing a deep pain, something that’s very hard to bear. I’m afraid that that side of things can easily be somehow marginalized in favor of these moments of deep illumination and insight and so forth and so on.

Lucas Perry: Yeah. I mean, pain and pleasure are inevitable. I think it’s very true that suffering is optional and … Okay, yeah.

Stephen Batchelor: Again, what you’ve just said is one of the cliches that we get a lot. A lot of this has come out of the mindfulness world. The pain is somehow unavoidable but suffering is optional. I find that very difficult to understand.

Lucas Perry: The direction that I’m going is that there’s this kind of loving kindness that is always accessible, and I think this fundamental sense of okayness, so that there can be mental anguish and pain and all these things, but they don’t translate into suffering, where I would call suffering the immersion inside of the thing. If there is always a witnessing of the content of consciousness from, for example, heart-mind, there is this okayness and maybe at worst a bittersweet sadness and compassion, which transforms these things into something that is not what I would call suffering.

You also gain a degree of skillfulness to work with the mud of pain and suffering and transform it into what Thích Nhất Hạnh would call a lotus.

Stephen Batchelor: Again, we might be on a semantic thing here.

Lucas Perry: I see.

Stephen Batchelor: If we go back to the early Buddhist texts, or to most Buddhist texts, they have this one word, dukkha; they don’t have a separate word for pain or for suffering. This is an intervention that’s come along more recently, in the last 20 or 30 years, I think, this distinction. There is dukkha. The first task of the four tasks is to embrace dukkha. Dukkha includes pain, it includes suffering, it includes anything. It has to do with being capable of embracing the tragic nature of our existence. It has to do with being able to confront and be open to death, to sickness, to aging, to extinction, as we’re going to go on to talk about.

I find it difficult personally, to somehow imagine we can do all of that without suffering. I don’t know what you mean by suffering but it looks to me as though you’ve defined it in a fairly narrow way, in order to separate it off from pain. In other words, suffering becomes mental anguish. They often talk of this image of the second arrow. The first arrow is the physical pain. Then, you add on to that all of the worries about it and all of the oh, how poor me, and all that kind of stuff. That’s psychologically true. I accept that.

That’s a way too narrow way of talking about dukkha. There is a grandeur and a beauty in dukkha. I know that sounds strange. For me, it’s really, really important not to feel that these spiritual practices can somehow alleviate human suffering in the way that it’s often presented, and that we all become smiling and happy, which you get, too, in Thích Nhất Hạnh’s approach. There’s a kind of saccharine sweetness in this approach, which I find kind of false. That’s one of the reasons I also like a lot of the Christian tradition: the image of Christ on the cross is not the image of a happy, at-ease kind of person. There’s a deep tragedy in this dimension of love that I’m very wary of somehow discounting in favor of a kind of enlightened mind that really is happy and at ease all the time.

Of all the different teachers and people I’ve met, I’ve never met anyone like that. It’s a nice idea. I don’t know whether it’s terribly realistic or whether it actually corresponds to how Buddhists, Hindus, Jains, and others have lived over the last centuries.

Lucas Perry: All right. Your skepticism is really refreshing and I love it. I wish we could talk about just this part forever, but let’s move on to extinction. You have an article that you wrote called Embracing Extinction. You talk a lot about these three poisons leading to everything burning, everything being on fire. Would you like to unpack a little bit of your framing here for this article? How is it that everything in the world is burning? What is it that it’s all consuming? How does this relate to extinction?

Stephen Batchelor: Okay. I start this article, which was published in the summer edition of Tricycle, this year, by quoting the famous statement of the Buddha, we find in what’s called the Fire Sermon, where he says, the world is burning, the eyes are burning, the ears are burning, et cetera, et cetera, the senses are burning, the mind is burning. Then he asked, burning with what? The answer is burning with greed, burning with hatred, burning with confusion. That’s his way of speaking about what I would call reactivity.

In other words, when the organism encounters its environment, it’s a bit like a match encountering a matchbox. That causes certain reactive patterns to flare up. These are almost certainly the result of our evolutionary biology: we have managed to survive as a race, as a species, so successfully because we’ve been very good at getting what we want. We’ve been very good at getting rid of things that have gotten in our way. We’ve been very good at stabilizing our sense of me and us at the expense of others, by having a very strong sense of ego, a very strong sense of me.

These are understood as fires in the earliest texts, and then later Buddhism begins to think of them more as toxins, as viruses, as poisons that contaminate the whole system, as it were, once they have taken hold. What I find quite striking is that this metaphor of fire was probably spoken by the Buddha about 500 B.C., a long, long time ago. Yet, when we read it today, it’s very difficult not to hear it as a rather prescient insight into the literal heating up of the physical environment that comes through living a life of industrial technology, whereby we have managed very successfully to develop, as we call it, industries and great cities and systems of transport and electricity, all this kind of stuff.

The consequence has been that we’re actually now poisoning the very environment that we depend upon in order to live. For that reason, I feel that there’s something in the Buddhist Dharma that recognizes the heating up that occurs when we lead a life that is driven by our reactive habits, our reactive patterns. The second of the four tasks is to let those be, is to let them go, is to find a way of leading a life that is not conditioned by greed and by hatred and by egoism and confusion. That’s the challenge.

Of course, on an individual level, we can do the best we can. If it’s going to have any lasting impact on the condition of life on earth, then this has to be a societal, cultural movement. This comes back to something we already talked about before. If we’re going to make a difference to our future, if we’re going to stave off what might turn out to be rapid extinction, not only of other species, but possibly even of the human species, and not within billions of years, but possibly within the next century, then we have, as a human community, a global one, to really alter the ways in which we live.

I do think that spiritual traditions, Buddhism and others, offer us a framework in which we can work with these destructive emotions. Hopefully, that work in our own lives, maybe in the lives of those we’re able to affect closely, maybe in the lives of people who are listening to this podcast, can ripple out and maybe, in the long term, diminish the kinds of powers that are at work, which in many ways seem unstoppable. At one level, I can be optimistic. I can see that we do have the understanding of what’s creating the problem. There is, amongst more and more people, I think, a genuine commitment to lead lives that do not contribute to such a crisis.

I’m also aware, both in myself and in many others I know, that we are complicit in this process. Each time we take a plane, each time we put on our heating system. I had a mango last night and I realized it came from the Ivory Coast. I mean, that’s entirely unnecessary, yet I still go out and get these things. Again, it takes humility to recognize that I can have all these very high-minded ecological ideas, but how am I actually changing the way I live? What am I doing in my life that will help others to likewise take those steps? I feel the power of evolution, the powers of greed, hatred, and delusion, which I think are really just the instinctual forces that have got human beings to where they are, are very, very forceful.

They’re the armies of Mara, the Buddha used to call them. He says, there’s nothing in this world as powerful as the armies of Mara. Mara being the demonic or the devil. I wonder for many reasons, whether in fact, we are capable as a human community of restraining such instincts and impulses. I hope so. I’m not totally optimistic.

Lucas Perry: Right. Wisdom, as we would have understood it from the beginning of our conversation, would be an understanding of these three poisons of hate, greed, and delusion. It’s coming to understand them from this mind of non-reactivity and awareness: one can see their arising and, by witnessing, disidentify with them, and then have choice over what qualities will be expressed in the world. One thing that I really liked about your article was how you talk about this problem solving mode of being. Many of our listeners, myself included, and I think this especially comes from the computer science mindset, have this very strong reliance on conceptual, dualistic thought as the only mode of knowing.

From the head, one is embedded in this duality with the world where one is in problem solving mode. You talk about how the world becomes an object of problems to be solved by conceptual thinking. This isn’t, as you say, to vilify conceptual thinking or to vilify and attack something like technology, but to become aware of where it is skillful and where it is unskillful, and to use it in ways which will bring about better worlds. I’m wondering if you can help to articulate the danger of being in the problem solving mode of being, where we lack connection with and interdependence with the outside world, and where there’s perhaps a strong sense of self.

Just to finish this off, I’m quoting you here, you say, “Such alienation allows us to regard the world either as a resource for the gratification of our longings or as a set of problems to be solved for the alleviation of our discontents.”

Stephen Batchelor: Yes, okay. To me, this is a very important point. I’m inspired in this thinking by Martin Heidegger, who’s a very controversial thinker, but someone who, I feel, did have some considerable insight into this process long before anybody else. I cite him in the article. His point, which I completely agree with, actually, is that the problem with technology is not the technological machines and computers and so forth in themselves, but the mindset that, in a way, justifies and enables those kinds of technological behaviors to happen.

As you said, this is effectively a mindset that is cut off from the natural world. I think we can see this beginning in about the 18th century with Descartes and others, whereby we set up the idea that there is a world out there and that there is an internal subject, a consciousness that is able to distance itself from the natural world in order to have the objectivity and the clarity to be able to then manipulate it to suit our particular desires and to ward off our particular fears. Now, one of the things that often disturbs me is that this technological language is often used to describe these spiritual practices as spiritual technologies.

This is a term I hear quite a lot, actually, or rather our unthinking and uncritical use of the word technique: the technique of mindfulness, the techniques of meditation, meditational techniques. As long as we’re not thinking critically around that term technique, I think very often we are unconsciously perpetuating precisely the distinction that, in another part of our mind, we’re trying to overcome, namely this notion of separation. We see this in meditation: if you see your mind as it is, if you recognize what the destructive emotions are, then you can get to the root of them, get rid of them, and then you’ll be happy.

That, again, carries with it a certain mindset, which is so much part, not only of our modern western culture, but I feel is part of the human condition. I think, we’re very deeply primed to think of the world as something out there and ourselves as something in here. We find in eastern religions, for example, the idea of rebirth that when we die, we don’t really die, our mind will sort of go on somewhere else, which again, I think reinforces this notion that there is a duality. There’s a spiritual inside and there is a material outside. That’s just simply the way things are. Many of the people who teach Dzogchen and Vipassana and Mahamudra believe very strongly in there being a mind that is not part of the physical world, that somehow transcends the physical world, that gives us the opt-out clause, that when we die, we don’t really die.

Something mysterious will carry on. To that extent, I feel that, and I’ll stick to Buddhism, because it’s the one I know, Buddhism can actually, again, reinforce this technological mindset as an inner technology. I think that’s a very dangerous idea. If I go back to that experience I had in the woods in Dharamsala when I was 22 years old, it was that idea that was really overthrown. It was a recognition of the mystery that I am part and parcel of; I cannot meaningfully separate my experience from what is going on around me. Again, it’s easy to say that; it’s another thing altogether to really feel it in your bones.

I think that requires a lifelong practice, a refinement of sensitivity. It also requires, I think, a much more critical way of thinking about so many of the ideas that we take on board without really examining to see whether they are in fact tacitly reinforcing certain mindsets that we will probably not be happy to endorse. I think all of this goes together. If we are to engage with this environmental crisis, which undeniably is the consequence of our industrial technologies, then we have to also see to what extent we are complicit, not just as consumers in buying mangoes from the Ivory Coast, but also as subjects, as subjective, conscious beings who are at one level still buying into the mind-matter split.

I get into a lot of trouble with Buddhists because I reject the idea of reincarnation, and of the mind going on somewhere after death, precisely because I feel it is a dualism that actually undergirds our sense of the core difference, I would say, that separates us from being participants in the natural world. I cannot think of birth, sickness, aging and death as problems to be overcome. Yet, that is quite clearly the goal of Buddhism. It’s to bring the end of suffering, which doesn’t mean just the ending of mental anguish, which is in a sense just scratching at the surface. It’s the ending of birth, the ending of sickness, the ending of aging, and the ending of death. It’s a total transcendence of an embodied life.

For this reason, I feel that it’s very helpful to replace the idea of solving a problem with the idea of penetrating a mystery. Because birth, sickness, aging and death are not problems, they’re mysteries and they’re mysteries because I cannot separate myself from death. I am the one who is going to die. I cannot separate myself from aging. I am the one who is aging, and so on. To do that is to acknowledge these things cannot be solved in the way that problems are solved. Likewise, confusion and greed and hatred, these are not problems to be solved, as Buddhism would often make us believe, they are mysteries too because I am greedy, I am hateful, I am confused.

They’re part and parcel of the kind of being that evolution has brought about, of which I am one of millions of examples. That change, for me, was put into practice by doing Zen meditation, primarily the meditation of asking the question, what is this? That is a koan or a hwadu, literally. It’s a practice that I trained in for four years in Korea. I did something like seven three-month retreats, just asking the question, what is this? In other words, getting myself to experience, in an embodied, in an emotive way, the fact that I am inseparable from the mystery that is life.

That, for me, is the kind of foundation that can lead us into a profoundly different relationship with the natural world. Again, I need to emphasize that this is working against very profoundly rooted human attachments and beliefs. I think the Four Noble Truths are, again, a problem solving paradigm. Suffering is the problem, ignorance is its cause; get rid of the cause and you get rid of the problem. That’s Nirvana. I think that shows that this problem solving mindset is not just a matter of modern technology from 18th-century Europe, as Heidegger seems to think. It goes back to something way deeper. It seems to be built into the human consciousness itself, maybe even into the structures of our neurology. I can’t really speak with any authority on this. That’s my sense.

Lucas Perry: I can also hear the transhumanists and techno-optimists screaming, who want to problem solve the bad things that evolution has given us, like hate, anger and greed. You just find those in the genetics and the conditioning of evolution and snip them out and replace them with awakening or enlightenment; that sounds much better. Sorry, can you more fully unpack this mystery mode of being and what it solves, that is, being embedded as a subjective creature who is witnessing things as a mystery rather than viewing them as problems to be solved?

Stephen Batchelor: Again, my emphasis in what I just said was effectively to swing the pendulum back to a perspective that’s usually ignored. In practice, we need both obviously. It would be absurd to be just an out and out technophobe and to reject technology and to reject problem solving per se. That would be silly. Technologies have been enormously beneficial to us in so many ways. Look at the current pandemic, it’s quite amazing how we’ve been able to identify the virus so quickly, how we’ve been able to then proceed towards developing vaccines. This is all because of our extraordinary medical technologies. That’s great. I’ve no problem with that at all.

The real issue is when we start to think that a technological way of thinking is the only way of thinking, in the same way that you said earlier that we tend to think that conceptuality and duality and egos are the only ways of being. I think we have to add the technological mindset to that. I think that is just as much part of the problematic, along with greed, hatred and delusion. I think it is a form of delusion. It’s a very primary form of delusion. In practice, the challenge is to differentiate between those areas of our life where it is useful to stand apart from, let’s say, a novel coronavirus and look at it under a microscope, which is very useful, very necessary, and not to let that way of thinking become normative for the whole way we lead our lives, but to open ourselves to the possibility of encountering the world and ourselves, our mental states, other people, not as problems but as mysteries.

To be able to value that dimension of our experience without reducing it to a technological kind of thinking, but to honor it for what it is, as something that cannot be captured by concepts, by language.

Lucas Perry: Yet, they go together. I think the distinction here is subtle. People might be reacting to this and be maybe a little bit confused about the efficacy of relating to things as a mystery, as I am a little bit. Let me see if I am capturing this correctly. I can sense myself suffering right now by taking the attitude and view that my neuroses and my sufferings and my pain are problems to be solved. It creates a duality between me and them. It creates this adversarial relationship. I’m not willing to be with them or to experience them. It’s this sense of striving or craving for them to go away or to be other than what they are. I think that’s why I am sensing myself suffering right now, taking on the problem solving point of view.

If I disidentify with that and I begin witnessing it, and I shift to these being mysteries, there is this sense of beauty that is compelling to you, which I can sense, and this kindness and compassion and ease towards them. This doesn’t mean that the problem solving sense goes away. There’s more of a dropping into heart, into being, into a willingness to be with them and explore them and to be skillful in their unfolding and change. That is, in a sense, still a kind of problem solving. I mean, there are parts of me that are unwanted, but there is a way of coming to issues which are unwanted and seeing them as mysteries and being with them in an experiential way, other than the industrial, 20th-century, ego-identification, conceptual-thought problem solving mode of being, which feels quite samsaric, like you’re in a hell realm of hungry ghosts in the mind, where everything needs to be different.

How do I think all the right thoughts to change the atoms to make the problems go away? Hey, it’s post-podcast Lucas here. What I mean by that is a conditioned pattern of thought which is motivated and structured by ignorance or confusion, as well as craving. And so I see this kind of structure also applying to the problem solving mode of thought, which has this element of craving and the confusion of separateness that leads to this sense of suffering or dis-ease. It seems to me subtle like that; does this capture what you’re pointing toward?

Stephen Batchelor: I think it is very subtle. Again, I would also concur that yes, there are parts of our inner life, our psychology, that can be effectively dealt with by inner techniques. Like, for example, if we’re extremely distracted all the time, if we train ourselves to be more focused, if we do concentration exercises, do Shamatha practice, over time, we can get better at not being distracted. That’s the application of a technique. There are aspects of spiritual practice, not a term I’m terribly fond of, but let’s stick with it.

I think the cultivation of mindfulness, the cultivation of concentration, the cultivation of application, for example, all of these things have a technical aspect to them. If I do therapy because I’ve got some neurosis, like chronic anxiety, I’m not going to resolve that by saying, how mysterious, wow, this is wonderful, being in the mystery of anxiety. That’s not what I meant. What I meant is that that is something we can recognize as being a problem, a legitimate problem.

Lucas Perry: It’s unwanted.

Stephen Batchelor: Yeah, it’s unwanted. It’s unwanted for good reasons because it prevents us from living fully, from being fully alive. It constrains us from living. It keeps us locked up in a little bubble of our own neurotic thoughts. We can find technologies, psychotherapies, that if we apply them can actually effectively help get rid of that problem. Although, as both Freud and Jung were quite clear, the problem will not just evaporate, it’ll still be there, but we’ll be able to live with it better. Jung’s idea was that we get to the point where instead of the neurosis having you, you have the neurosis.

In some ways, I think, a lot of these neuroses are going to be around, whether we like it or not. We can, in a way, have them rather than them having us. That is a form of therapy. That is a form of cure. When we come to these deeper spiritual values, let’s say wisdom, or compassion, or love, I find it very difficult to understand how these are qualities that we can arrive at by simply pursuing a set of technological procedures. I think, and I’ve again witnessed this in myself and in others, colleagues, friends, monks and whatnot, people who’ve dedicated years and years and years and years and years to cultivating these qualities of mind, but in some ways don’t really seem to have become significantly wiser or more loving.

I really question whether wisdom or love is something that can be produced by becoming an expert in certain meditation techniques. I think these are qualities that are meta-technical; they’re beyond the reach of technique. I think suffering in the deepest sense, existential suffering, which is effectively what I think the Buddha is primarily concerned with, is birth, sickness, aging and death. Birth, sickness, aging and death, likewise, I do not think can be resolved by finding a solution that renders them no longer problematic. Even if you follow the traditional Buddhist way of describing this, that’s effectively what happens: it’s only when you’re dead that you are freed from birth, sickness and aging.

Birth, sickness, aging and death are mysteries, but a great amount of what we suffer from within our inner lives, within our social lives, within our world, are problems that, if correctly identified as such, can be dealt with through applying techniques. The challenge, and this is, I think, perhaps where you talk of subtlety, is to be able to differentiate between what is actually a mystery and cannot be solved and what is a problem and can be solved. Western technological society particularly, really, has no room at all for this mystery-focused way of life.

We might get it in church on Sundays, a little bit of it, but we seem to have almost disconnected from that whole side of life. I feel that one of the reasons we’re drawn to some of these eastern spiritualities is because they seem to bring us back to that quality of awareness. If you don’t like the word mystery, and a lot of people feel a little bit uncomfortable with it, just think of it, as I do a lot of the time, as the fact that we live in an incredibly strange world, that it is extremely weird that you and I are having this conversation.

I never cease to be utterly astonished and amazed by the most banal things. I think it’s to be able to recover a sense of the extraordinary within the utterly ordinary that enables us to begin to have a very different relationship to the natural world that we’re threatening. I feel that if we haven’t embodied that sense of strangeness of … not only strangeness but also the recognition that I cannot separate myself from these things, I cannot distance myself from these things, they are infinitely close. That’s another definition of mystery.

Lucas Perry: The ground of your being.

Stephen Batchelor: Yeah, if you want. Remember that this is a term coined by Paul Tillich, the Christian theologian, in the 1960s. He understood the ground of being to be a groundless ground, which is beautiful. A ground which is, literally in German, an abyss. If we talk of the ground of being, be very careful not to make the ground too solid; it’s a ground which is no ground. That, again, is very close to Buddhist thinking.

Lucas Perry: Yeah, it seems subtle in the way that you’re still solving problems from this way of being. From embodying this experiential relationship and subjectivity in the world, it changes and modifies, perhaps, the three poisons in skillful ways, and it allows you to be more skillful, is what you’re saying. It’s not like you pretend problems don’t exist. It’s not like you stop solving problems. It’s that there’s a lot of skillfulness in the way that this modification of your own subjectivity leads to your own being in the world. I’d love to wrap up here with you, then, by talking about effective altruism in this field.

The Future of Life Institute is concerned with all kinds of different existential risks. We’re contextualized in the effective altruism movement, which is interested in helping all sentient beings everywhere, basically, by doing whatever is most effective in that pursuit and whatever leads to the alleviation of suffering and the promotion of well-being, as potentially narrowly construed, though that might not be the only ethical framework by which you might decide what would be effective interventions in the world. What this has led to is what we’ve already talked about here, which is this extremely problem solving kind of mind. People are very in their heads and interested in and reliant on conceptual thought to basically solve everything.

Ethics is a problem to be solved. If you can just get everyone to do the right things, the animals will be better off, there will be fewer factory farms, we’ll get rid of existential threats, we can work on global poverty and do things that are really effective. This has been very successful to a certain degree. With this approach, tremendous suffering has already been alleviated and hopefully still will be. But it lacks many of these practices that you talked about; perhaps it suffers from some of the unskillfulness of the problem solving mindset. There isn’t any engagement in finding the natural loving kindness which already exists in us, or in cultivating loving kindness in our activities.

There’s not much emotional connection to the beneficiaries of the altruism. There’s not sufficient, perhaps, emotional satisfaction felt from the good deeds that are performed. There are also lots of biases that I could mention that exist in general in the human species, like the fact that we care about people who are closer to us rather than people who are far away. That’s a kind of bias. Children are drowning in shallow ponds all over the world, and no one’s really doing anything about it, shallow ponds being places of easy intervention, where you could easily save that child.

This conversation we’re having about wisdom, I think, for me, means that if effective altruism were able to have its participants shift into a non-conceptual, experiential embodying of the kinds of insights or the way of being that you might support, living an examined life as a method of awakening, and perhaps insight into emptiness and impermanence and not-self and suffering, I think this could lead to transformative growth. That might upgrade our ethics and our experience of the world and our way of being, and could de-bias some of these biases which lead to ineffective altruism in the world.

I think that seeing through non-self really kind of annihilates the bias of caring about people closer to you rather than far away from you, or, for those who are interested in existential threats, people who are far away in time. I’m curious if you have any reactions or perspective here about how the insights and wisdom of wisdom traditions, and perhaps a secular Buddhism and secular Dharma, could contribute to this community.

Stephen Batchelor: I have to confess that when confronted with these kinds of problems, the ones you just very clearly present, I really see considerable shortcomings in both the Buddhist community and in this broader spiritual community that we might feel we’re part of. Because in the end, a lot of these practices are effectively things we do on our own and we may do them within a small Sangha or small community. We may write books. We might get more and more people practicing mindfulness. That is all very well. I’m not actually convinced that simply by changing individual minds, and if we change enough individual minds, we’ll suddenly find ourselves in a much healthier world.

I think the problems are systemic. They are built into the structures of our human societies. They’re not intelligible purely as the collective number of individual deluded or undeluded minds. I think we’re going into the sort of territory of systems theory, whereby groups and systems do not behave in ways that can be predicted by analyzing the behavior of the individual members of that system, I think, if I’m getting that correct. Again, I’ll just speak about the Buddhist community, but it of course probably applies to others as well.

I think the great challenge of the Buddhist community is that it has to come up with a social theory. It has to come up with a way of thinking that goes beyond the person and that is able to think more systemically. Now, there are Buddhist thinkers who are trying to do that; people like David Loy would be a very good example. Nonetheless, I don’t feel that we’ve really grappled with this question adequately. I have to admit to my own confusions and limitations in this area too. I feel that my writing, which is my main work, is slowly evolving in this direction. What really pushed me in this direction was an essay by Catherine Ingram, who you may have heard of, called Facing Extinction. I borrowed it, effectively; my essay called Embracing Extinction is an acknowledgement of my debt to her.

I had been part of the green movement for the last 30-odd years or so. It was only on reading Catherine’s piece that I was suddenly struck viscerally by the fact of our creating a world that could well lead to the extinction of all species within the next century or so. I think we thereby need to be able to respond to these dilemmas at the same pitch and at the same level. In other words, the visceral level at which these questions are beginning to emerge in ourselves. Again, I go back to Zen; one of the favorite sayings of my teacher was: great questioning, great awakening; little questioning, little awakening; no questioning, no awakening.

In other words, our capacity to be awake is correlated to our capacity to ask questions in a particular way. If we have intellectual questions or let’s say, problem solving questions, then we can resolve those questions by coming up with solutions. They’ll be at one level operating at the same pitch. In other words, they are conceptual problems, they’re intellectual problems. Great awakening arises because we’re able to ask questions at a deeper level. If you take the Legend of the Buddha, the young prince who goes out of the palace, he encounters a sick person, an aging person and a corpse, and that is what triggers within him what in Zen is called great questioning, or great doubt, great perplexity.

The practice of Zen is actually to stay with those great questions and to embody them, to get them to actually penetrate into your flesh and bones. Then, within such a perspective, one creates the conditions for a comparable level of visceral awakening. That, I now feel, has to be extended to a communal level. We have, as a community, whether it’s a small, intentional community of Buddhists or a larger human community, to be able to actually ask these questions at a visceral level. The kind of empathy you speak of, I feel, also has to come from this degree of questioning.

I think there’s often too much of an understandable sense of urgency in a lot of these questions. That urgency often just causes us to immediately try to go out and figure out what we can do. That’s probably a good thing, but we maybe do not allow enough time to really let these questions land at a deep, visceral level within ourselves, such that answers can then begin to emerge from that same depth. That is the kind of depth, I feel, in which a more systemic philosophy, a social theory, maybe an economic theory must be grounded, so that it will perhaps be able to guide us more effectively towards being effectively altruistic.

That’s kind of really where I’m at with this at the moment. My work is evolving in this direction in what I’m writing now; for example, I’m writing a book called The Ethics of Uncertainty, where I’m trying to flesh this out more fully. This is where I feel my life is going. I don’t know whether I’ll live long enough to actually do more than climb a few more steps, if I’m lucky. I’m very moved by my colleagues and friends who have been very much involved in the Extinction Rebellion demonstrations, particularly in London. I have a number of close friends who are very involved with that.

That, likewise, I have found a great source of inspiration, and something towards which I would very much hope for my writing and my philosophy to be able to contribute. That’s kind of where I’m going. I think that humanity does face an existential crisis of a major order at the moment. I see all kinds of forces that are arrayed not in our favor, not the least of which is the four-year election cycle. I just wonder about national governments, who are in effect beholden to electorates whose needs are probably largely about: can I get work? Can my kids get a good school and a good health care system? That’s going to be the priority for most people, frankly.

It’s all very well talking about saving the environment. When push comes to shove, again, your bias will be basically my kids, my immediate community, or my nation. We have to get beyond that. We can’t think in national terms anymore. There are transnational movements. I think that they certainly need to be developed and further strengthened. Can such transnational movements ever achieve the kinds of power that will enable changes to occur on a global level? I can’t see that happening in our current world, I’m afraid. I find myself very distraught by that.

When you see some of these right-wing populists, they’re effectively pushing back in the other direction, and that is, unfortunately, on the ascendant, so I do not feel at all optimistic, given our situation. As a person who tries to lead a life governed by care and compassion and altruism, I cannot but seek ways of embodying those feelings in actions. As a writer, that’s what I’m probably best at doing. I’m very glad I’ve had the opportunity to be able to speak to you and to the Future of Life community about my ideas, though I don’t know whether I really have a great deal to say that’s really going to change the paradigm. We are, I think, all of us, working towards another paradigm altogether.

Lucas Perry: Thank you, Stephen. I’ve really, really enjoyed this conversation. To just close things off here, instead of making powerful people more wise or wise people more powerful, maybe we’ll take the wise people and get them to address systemic issues, which lead to and help manifest things like existential risk and animal suffering and global poverty.

Stephen Batchelor: That would be great. That would be wonderful. Thank you very much, Luke. It’s been a lovely conversation. I really wish you all the best and all of those of you who are listening to this likewise.

Lucas Perry: Yeah, thanks so much, Stephen. I’ve really enjoyed your books on Audible. If people want to follow you or find more of your work, where are the best places to do that?

Stephen Batchelor: I have a website, which is www.stephenbatchelor.org, and the main institution I’m involved with is called Bodhi College, B-O-D-H-I, hyphen college.org. There, you’ll find information on the courses that I lead through them. Next year, in 2021, I’m leading a series of 12 seminars on Secular Dharma, in which I’ll be addressing a lot of the questions that have come up in this podcast. It will be an online course, once a week for 12 weeks, 12 three-hour seminars.

It’ll be publicized in the next few weeks. We’re just finalizing that program as of now. Thank you.

Lucas Perry: All right. Thank you, Stephen. It’s been wonderful.