
FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church

Progress in synthetic biology and genetic engineering promises to bring advancements in human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents which could pose global catastrophic and existential risks to life on Earth. George Church, a titan of synthetic biology, joins us on this episode of the FLI Podcast to discuss the benefits and risks of our growing knowledge of synthetic biology, its role in the future of life, and what we can do to make sure it remains beneficial. Will our wisdom keep pace with our expanding capabilities?

Topics discussed in this episode include:

  • Existential risk
  • Computational substrates and AGI
  • Genetics and aging
  • Risks of synthetic biology
  • Obstacles to space colonization
  • Great Filters, consciousness, and eliminating suffering

You can take a survey about the podcast here

Submit a nominee for the Future of Life Award here

 

Timestamps: 

0:00 Intro

3:58 What are the most important issues in the world?

12:20 Collective intelligence, AI, and the evolution of computational systems

33:06 Where we are with genetics

38:20 Timeline on progress for anti-aging technology

39:29 Synthetic biology risk

46:19 George’s thoughts on COVID-19

49:44 Obstacles to overcome for space colonization

56:36 Possibilities for “Great Filters”

59:57 Genetic engineering for combating climate change

01:02:00 George’s thoughts on the topic of “consciousness”

01:08:40 Using genetic engineering to phase out involuntary suffering

01:12:17 Where to find and follow George

 

Citations: 

George Church’s Twitter and website

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play, and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today we have a conversation with Professor George Church on existential risk, the evolution of computational systems, synthetic-bio risk, aging, space colonization, and more. We’re skipping the AI Alignment Podcast episode this month, but I intend to have it resume again on the 15th of June. Some quick announcements for those unaware, there is currently a live survey that you can take about the FLI and AI Alignment Podcasts. And that’s a great way to voice your opinion about the podcast, help direct its evolution, and provide feedback for me. You can find a link for that survey on the page for this podcast or in the description section of wherever you might be listening. 

The Future of Life Institute is also in the middle of its search for the 2020 winner of the Future of Life Award. The Future of Life Award is a $50,000 prize that we give out to an individual who, without having received much recognition at the time of their actions, has helped to make today dramatically better than it may have been otherwise. The first two recipients of the Future of Life Institute award were Vasili Arkhipov and Stanislav Petrov, two heroes of the nuclear age. Both took actions at great personal risk to possibly prevent an all-out nuclear war. The third recipient was Dr. Matthew Meselson, who spearheaded the international ban on bioweapons. Right now, we’re not sure who to give the 2020 Future of Life Award to. That’s where you come in. If you know of an unsung hero who has helped to avoid global catastrophic disaster, or who has done incredible work to ensure a beneficial future of life, please head over to the Future of Life Award page and submit a candidate for consideration. The link for that page is on the page for this podcast or in the description of wherever you might be listening. If your candidate is chosen, you will receive $3,000 as a token of our appreciation. We’re also incentivizing the search via MIT’s successful red balloon strategy, where the first to nominate the winner gets $3,000 as mentioned, but there are also tiered pay outs to the person who invited the nomination winner, and so on. You can find details about that on the page. 

George Church is Professor of Genetics at Harvard Medical School and Professor of Health Sciences and Technology at Harvard and MIT. He is Director of the U.S. Department of Energy Technology Center and Director of the National Institutes of Health Center of Excellence in Genomic Science. George leads Synthetic Biology at the Wyss Institute, where he oversees the directed evolution of molecules, polymers, and whole genomes to create new tools with applications in regenerative medicine and bio-production of chemicals. He helped initiate the Human Genome Project in 1984 and the Personal Genome Project in 2005. George invented the broadly applied concepts of molecular multiplexing and tags, homologous recombination methods, and array DNA synthesizers. His many innovations have been the basis for a number of companies including Editas, focused on gene therapy, Gen9bio, focused on Synthetic DNA, and Veritas Genetics, which is focused on full human genome sequencing. And with that, let’s get into our conversation with George Church.

So I just want to start off here with a little bit of a bigger picture about what you care about most and see as the most important issues today.

George Church: Well, there’s two categories of importance. One is things that are very common and so affect many people. And then there are things that are very rare but very impactful nevertheless. Those are my two top categories. They weren’t when I was younger. I didn’t consider either of them that seriously. So examples of very common things are age-related diseases, infectious diseases. They can affect all 7.7 billion of us. Then on the rare end would be things that could wipe out all humans or all civilization or all living things: asteroids, supervolcanoes, solar flares, and engineered or costly natural pandemics. So those are things that I think are very important problems, and then we have the research to enhance wellness and minimize those catastrophes. The third category, which is somewhat related to those two, is things we can do to, say, get us off the planet, so things that would be highly preventative against total failure.

Lucas Perry: So in terms of these three categories, how do you see the current allocation of resources worldwide and how would you prioritize spending resources on these issues?

George Church: Well the current allocation of resources is very different from the allocations that I would set for my own research goals and what I would set for the world if I were in charge, in that there’s a tendency to be reactive rather than preventative. And this applies to both therapeutics versus preventatives and the same thing for environmental and social issues. All of those, we feel like it somehow makes sense or is more cost-effective, but I think it’s an illusion. It’s far more cost-effective to do many things preventatively. So, for example, if we had preventatively had a system of extensive testing for pathogens, we could probably save the world trillions of dollars on one disease alone with COVID-19. I think the same thing is true for global warming. A little bit of preventative environmental engineering for example in the Arctic where relatively few people would be directly engaged, could save us disastrous outcomes down the road.

So I think we’re prioritizing a very tiny fraction for these things. Aging and preventative medicine is maybe a percent of the NIH budget, and each institute sets aside about one to 5% on preventative measures. Gene therapy is another one. Orphan drugs are very expensive therapies, millions of dollars per dose, versus genetic counseling, which is now in the low hundreds and soon will be double-digit dollars per lifetime.

Lucas Perry: So in this first category of very common widespread issues, do you have any other things there that you would add on besides aging? Like aging seems to be the kind of thing in culture where it’s recognized as an inevitability so it’s not put on the list of top 10 causes of death. But lots of people who care about longevity and science and technology and are avant-garde on these things would put aging at the top because they’re ambitious about reducing it or solving aging. So are there other things that you would add to that very common widespread list, or would it just be things from the top 10 causes of mortality?

George Church: Well, infection was the other one that I included in the original list of common diseases. Infectious diseases are not so common in the wealthiest parts of the world, but they are still quite common worldwide; HIV, TB, and malaria are still quite common, with millions of people dying per year. Nutrition is another one that tends to be more common in the poorer parts of the world, where it still results in death. So the top three causes of death would be aging-related.

And even if you’re not interested in longevity and even if you believe that aging is natural (in fact, some people think that infectious diseases and nutritional deficiencies are natural), putting that aside, if we’re attacking age-related diseases, we can use preventative medicine and aging insights to reduce those. So even if you want to set aside longevity as unnatural, if you want to address heart disease, strokes, lung disease, falling down, infectious disease, all of those things might be more easily addressed by aging studies and therapies and preventions than by a frontal assault on each micro disease one at a time.

Lucas Perry: And in terms of the second category, existential risk, if you were to rank order the likelihood and importance of these existential and global catastrophic risks, how would you do so?

George Church: Well, you can rank their probability based on past records. So, we have some records of supervolcanoes, solar activity, and asteroids. So that’s one way of calculating probability. And then you can also calculate the impact. So it’s a product of the probability and the impact for the various kinds of recorded events. I mean, I think they’re similar enough that I’m not sure I would rank order those three.

And then pandemics, whether natural or human-influenced, probably a little more common than those first three. And then climate change. There are historic records but it’s not clear that they’re predictive. The probability of an asteroid hitting probably is not influenced by human presence, but climate change probably is and so you’d need a different model for that. But I would say that that is maybe the most likely of the lot for having an impact.

Lucas Perry: Okay. The Future of Life Institute, the things that we’re primarily concerned about in terms of this existential risk category would be the risks from artificial general intelligence and superintelligence, also synthetic bio-risk coming up in the 21st century more and more, and then accidental nuclear war would also be very bad, maybe not an existential risk. That’s arguable. Those are sort of our central concerns in terms of the existential risk category.

Relatedly the Future of Life Institute sees itself as a part of the effective altruism community which when ranking global priorities, they have four areas of essential consideration for impact. The first is global poverty. The second is animal suffering. And the third is long-term future and existential risk issues, having to do mainly with anthropogenic existential risks. The fourth one is meta-effective altruism. So I don’t want to include that. They also tend to make the same ranking, being that mainly the long-term risks of advanced artificial intelligence are basically the key issues that they’re worried about.

How do you feel about these perspectives or would you change anything?

George Church: My feeling is that natural intelligence is ahead of artificial intelligence and will stay there for quite a while, partly because synthetic biology has a steeper slope, and I’m including enhanced natural intelligence in the synthetic biology. That has a steeper upward slope than totally inorganic computing now. But we can lump those together. We can say artificial intelligence writ large to include anything that our ancestors didn’t have in terms of intelligence, which could include enhancing our own intelligence. And I think it especially should include corporate behavior. Corporate behavior is a kind of intelligence which is not natural, is widespread, and is likely to change, mutate, evolve very rapidly, faster than human generation times, probably faster than machine generation times.

Nukes I think are aging and maybe are less attractive as a defense mechanism. I think they’re being replaced by intelligence, artificial or otherwise, or collective and synthetic biology. I mean I think that if you wanted to have mutually assured destruction, it would be more cost-effective to do that with syn-bio. But I would still keep it on the list.

So I agree with that list. I’d just like nuanced changes to where the puck is likely to be going.

Lucas Perry: I see. So taking into account and reflecting on how technological change in the short to medium term will influence how one might want to rank these risks.

George Church: Yeah. I mean, I just think that a collective human enhanced intelligence is going to be much more disruptive potentially than AI is. That’s just a guess. And I think that nukes will just be part of a collection of threatening things that people do. Probably it’s more threatening to cause the collapse of an electric grid or a pandemic or some other economic crash than nukes.

Lucas Perry: That’s quite interesting and is very different from the story that I have in my head, and I think it will also be very different from the story that many listeners have in their heads. Could you expand and unpack your timelines and beliefs about why you think that collective organic intelligence will be ahead of AI? Could you say, I guess, when you would expect AI to surpass collective bio intelligence, and some of the reasons again for why?

George Church: Well, I don’t actually expect silicon-based intelligence to ever surpass natural intelligence in every category. I think it’s already super good at storage, retrieval, and math. But that’s subject to change. And I think part of the assumptions have been that we’ve been looking at a Moore’s law projection while most people haven’t been looking at the synthetic biology equivalent, and haven’t noticed that Moore’s law might finally be plateauing, at least as it was originally defined. So that’s part of the reason, I think, for the excessive optimism, if you will, about artificial intelligence.

Lucas Perry: The Moore’s law thing has to do with hardware and computation, right?

George Church: Yeah.

Lucas Perry: That doesn’t say anything about how algorithmic efficiency and techniques and tools are changing, and the access to big data. Something we’ve talked about on this podcast before is that many of the biggest insights and jumps in deep learning and neural nets haven’t come from new techniques but have come from more and more massive amounts of compute on data.

George Church: Agreed, but those data are also available to humans as big data. I think maybe the compromise here is that it’s some hybrid system. I’m just saying that humans plus big data plus silicon-based computers, even if they stay flat in hardware, are going to win over either one of them separately. So maybe what I’m advocating is hybrid systems. Just like in your brain you have different parts of your brain that have different capabilities and functionality, in a hybrid system we would have the wisdom of crowds, plus compute engines, plus big data, but available to all the parts of the collective brain.

Lucas Perry: I see. So it’s kind of like, I don’t know if this is still true, but I think at least at some point it was true, that the best teams at chess were AIs plus humans?

George Church: Correct, yeah. I think that’s still true. But I think it will become even more true if we start altering human brains, which we have a tendency to try to do already via education and caffeine and things like that. But there’s really no particular limit to that.

Lucas Perry: I think one of the things that you said was that you don’t think that AI alone will ever be better than biological intelligence in all ways.

George Church: Partly because biological intelligence is a moving target. The first assumption was that the hardware would keep improving on Moore’s law, which it isn’t. The second assumption was that we would not alter biological intelligence, that there was one moving target, which was silicon, and biology was not moving, when in fact biology is moving at a steeper slope both in terms of hardware and algorithms and everything else, and we’re just beginning to see that. So I think that when you consider both of those, it at least sows the seed of uncertainty as to whether AI is inevitably better than a hybrid system.

Lucas Perry: Okay. So let me just share the kind of story that I have in my head and then you can say why it might be wrong. AI researchers have been super wrong about predicting how easy it would be to make progress on AI in the past. So taking predictions with many grains of salt, if you interview say the top 100 AI researchers in the world, they’ll give a 50% probability of there being artificial general intelligence by 2050. That could be very wrong. But they gave like a 90% probability of there being artificial general intelligence by the end of the century.

And the story in my head says that I expect there to be bioengineering and genetic engineering continuing. I expect there to be designer babies. I expect there to be enhancements to human beings of increasing capacity and quality further and further as we get into the century. But there are computational and substrate differences between computers and biological intelligence: the clock speed of computers can be much higher, so they can compute much faster. And then there’s also this idea that the computational architectures in biological intelligences aren’t privileged or uniquely available to biological organisms, such that whatever we think is really good or skillful, whatever gives biological intelligences a big edge over computers, could simply be replicated in computers.

And then there is an ease of mass manufacturing compute and then emulating those systems on computers such that the dominant and preferable form of computation in the future will not be on biological wetware but will be on silicon. And for that reason at some point there’ll just be a really big competitive advantage for the dominant form of compute and intelligence and life on the planet to be silicon based rather than biological based. What is your reaction to that?

George Church: You very nicely summarized what I think is a dominant worldview of people that are thinking about the future, and I’m happy to give a counterpoint. I’m not super opinionated, but I think it’s worthy of considering both, because the reason we’re thinking about the future is we don’t want to be blindsided by it. And this could be happening very quickly by the way, because both revolutions are ongoing, as is the merger.

Now clock speed, my guess is that clock speed may not be quite as important as energy economy. And that’s not to say that both systems, let’s call them bio and non-bio, can’t optimize energy. But if you look back at sort of the history of evolution on earth, the fastest clock speeds, like bacteria and fruit flies, aren’t necessarily more successful in any sense than humans. They might have more bio mass, but I think humans are the only species with our slow clock speed relative to bacteria that are capable of protecting all of the species by taking us to a new planet.

And clock speed is only important if you’re in a direct competition in a fairly stable environment where the fastest bacteria win. But worldwide most of the bacteria are actually very slow growers. If you look at energy consumption right now, which both of them can improve, there are biological compute systems that are arguably a million times more energy-efficient, even at tasks the biological system wasn’t designed or evolved for but can kind of match. Now there are other things where it’s hard to compare, because of the intrinsic advantage that either the bio or the non-bio system has, but where they are sort of on the same framework, it takes 100 kilowatts of power to run, say, Jeopardy! and Go on a computer, and the humans that are competing are using considerably less than that, depending on how you calculate all the things that are required to support the 20-watt brain.

Lucas Perry: What do you think the order of efficiency difference is?

George Church: I think it’s a million fold right now. And this is largely a hardware thing. I mean, there are algorithmic components that will be important. But I think that one of the advantages that biochemical systems have is that they are intrinsically atomically precise. While Moore’s law seems to be plateauing somewhere around 3 nanometer fabrication resolution, that’s off by maybe a thousand fold from atomic resolution. So that’s one thing, that as you go out many years, they will either be converging on or merging in some ways so that you get the advantages of atomic precision, the advantages of low energy, and so forth. So that’s why I think that we’re moving towards a slightly more molecular future. It may not be recognizable as either our silicon von Neumann or other computers, nor totally recognizable as a society of humans.
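For a rough sense of the numbers cited here, the following back-of-envelope sketch compares the figures from the conversation. The 20 W brain and 100 kW machine figures come from the transcript; treating atomic spacing as roughly 0.3 nanometers is an added round-number assumption for illustration.

```python
# Back-of-envelope comparison of the figures discussed above.
# The 20 W / 100 kW figures come from the conversation; the ~0.3 nm atomic
# spacing is an added round-number assumption.

brain_power_w = 20            # rough power draw of a human brain (watts)
machine_power_w = 100_000     # ~100 kW cited for Jeopardy!/Go-era systems

power_ratio = machine_power_w / brain_power_w
print(f"Raw power ratio (machine / brain): ~{power_ratio:,.0f}x")  # ~5,000x

fab_resolution_nm = 3.0       # where Moore's-law fabrication is said to plateau
atomic_spacing_nm = 0.3       # assumed typical interatomic distance

linear_gap = fab_resolution_nm / atomic_spacing_nm
volumetric_gap = linear_gap ** 3
print(f"Linear gap to atomic precision:     ~{linear_gap:.0f}x")
print(f"Volumetric gap to atomic precision: ~{volumetric_gap:.0f}x")  # ~1,000x
```

Read volumetrically, which is one interpretation rather than something stated explicitly in the conversation, the 3 nanometer plateau sits roughly a thousand fold from atomic resolution; the million-fold energy figure cited elsewhere refers to specific tasks such as copying bits, not to this raw power ratio.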

Lucas Perry: So is your view that we won’t reach artificial general intelligence, like the kind of thing which can reason as well as humans across all the domains that humans are able to reason about? We won’t reach that on non-bio methods of computation first?

George Church: No, I think that we will have AGI in a number of different substrates: mechanical, silicon, quantum computing. Various substrates will be capable of doing artificial general intelligence. It’s just that the ones that do it in the most economic way will be the ones that we will tend to use. There’ll be some cute museum that will have a collection of all the different ways, like the tinker toy computer that did Tic-Tac-Toe. Well, that’s in a museum somewhere next to Danny Hillis, but we’re not going to be using that for AGI. And I think there’ll be a series of artifacts like that, but in practice it will be a very pragmatic collection of things that make economic sense.

So just for example, take the ease of making a copy of a brain. That’s one thing that appears to be an advantage of non-bio computers right now: you can make a copy of even large data sets for a fairly small expenditure of time, cost, and energy, while to educate a child takes decades, and in the end you don’t have anything totally resembling the parents and teachers. I think that’s subject to change. For example, we now have storage of data in DNA form, which is about a million times denser than any comparable non-chemical, non-biological system, and you can make a copy of it for hundreds of joules of energy and pennies. So you can hold an exabyte of data in the palm of your hand and you can make a copy of it relatively easily.

Now that’s not a mature technology, but it shows where we’re going. If we’re talking 100 years, there’s no particular reason why you couldn’t have that embedded in your brain and input and output to it. And by the way, the cost of copying that is very close to the thermodynamic limit for making copies of bits, while computers are nowhere near that. They’re off by a factor of a million.
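To make the “exabyte in the palm of your hand” and thermodynamic-limit claims concrete, here is a hedged back-of-envelope calculation. The encoding density (about 1 bit per nucleotide) and the molecular weight used are round-number assumptions, not figures from the transcript.

```python
import math

# Illustrative assumptions (not figures from the conversation):
# ~1 bit stored per nucleotide and ~330 g/mol per nucleotide.
AVOGADRO = 6.022e23
BITS_PER_EXABYTE = 8e18

bits_per_nt = 1.0
nt_mass_g_per_mol = 330.0

nucleotides = BITS_PER_EXABYTE / bits_per_nt
dna_mass_mg = nucleotides / AVOGADRO * nt_mass_g_per_mol * 1000
print(f"DNA mass to store 1 exabyte: ~{dna_mass_mg:.1f} mg")  # a few milligrams

# Landauer limit: minimum energy to (re)write one bit at room temperature.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # kelvin
landauer_j = BITS_PER_EXABYTE * k_B * T * math.log(2)
print(f"Landauer minimum for 1 exabyte of bit operations: ~{landauer_j:.2f} J")  # ~0.02 J
```

Under these assumptions, a few milligrams of DNA is comfortably palm-of-the-hand scale, and the hundreds of joules quoted for copying an exabyte sit within roughly four orders of magnitude of the Landauer floor, whereas conventional electronics sit much further above it.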

Lucas Perry: Let’s see if I get this right. Your view is that there is this computational energy economy benefit, there is this benefit of atomic precision, and that because there are advantages to biological computation, we will want to merge the best aspects of biological computation with non-biological in order to sort of get the best of both worlds. So while there may be many different AGIs on offer on different substrates, the future looks like hybrids.

George Church: Correct. And it’s even possible that silicon is not in the mix. I’m not predicting that it’s not in the mix. I’m just saying it’s possible. It’s possible that an atomically precise computer is better at quantum computing or is better at clock time or energy.

Lucas Perry: All right. So I do have a question later about this kind of thing and space exploration and reducing existential risk via further colonization which I do want to get into later. I guess I don’t have too much more to say about our different stories around here. I think that what you’re saying is super interesting and challenging in very interesting ways. I guess the only thing I would have to say is I guess I don’t know enough, but you said that the computation energy economy is like a million fold more efficient.

George Church: That’s for copying bits, for DNA. For doing complex tasks, for example Go, Jeopardy! or Einstein’s annus mirabilis, those kinds of things, we’re typically competing a 20-watt brain plus support structure with a 100-kilowatt computer. And I would say at least in the case of Einstein’s 1905 we win, even though we lose at Go and Jeopardy!. Which is another interesting thing: humans have a great deal more variability. And if you take the extreme values, like one person in one year, Einstein in 1905, as the representative rather than the average person and the average year for that person, well, if you make two computers, they are likely going to be nearly identical, which is both a plus and a minus in this case. Now if you make Einstein in 1905 the average for humans, then you have a completely different set of goalposts for the AGI than just being able to pass a basic Turing test where you’re simulating someone of average human interest and intelligence.

Lucas Perry: Okay. So two things from my end then. First is, do you expect AGI to first come from purely non-biological silicon-based systems? And then the second thing is no matter what the system is, do you still see the AI alignment problem as the central risk from artificial general intelligence and superintelligence, which is just aligning AIs with human values and goals and intentions?

George Church: I think the further we get from human intelligence, the harder it is to convince ourselves that we can educate them, and the better they will be at fooling us. It doesn’t mean they’re more intelligent than us. It’s just that they’re alien. It’s like a wolf can fool us when we’re out in the woods.

Lucas Perry: Yeah.

George Church: So I think that with exceptional humans it’s just as hard to guarantee that we really understand their ethics. If you have someone who is a sociopath or high-functioning autistic, we don’t really know after 20 years of ethics education whether they actually are thinking about it the same way we are, or even in a way compatible with the way that we are. We being in this case neurotypicals, although I’m not sure I am one. But anyway.

I think that this becomes a big problem with AGI, and it may actually put a damper on it. Part of the assumption so far is we won’t change humans because we have to get ethics approval for changing humans. But we’re increasingly getting ethics approval for changing humans. I mean gene therapies are now approved and increasing rapidly, all kinds of neuro-interfaces and so forth. So I think that that will change.

Meanwhile, the silicon-based AGI, as we approach it, will change in the opposite direction. It will be harder and harder to get approval to do manipulations in those systems, partly because there’s risk, and partly because there’s sympathy for the systems. Right now there’s very little sympathy for them. But as you get to the point where computers have an AGI level of, say, an IQ of 70, something like that for a severely mentally disabled person, so it can pass the Turing test, then they should start getting the rights of a disabled person. And once they have the rights of a disabled person, that would include the right to not be unplugged and the right to vote. And then that creates a whole bunch of problems that we won’t want to address, except as academic exercises or museum specimens, where we can say, hey, 50 years ago we created this artificial general intelligence, just like we went to the Moon once. They’d be stunts more than practical demonstrations, because they will have rights and because it will represent risks that will not be true for enhanced human societies.

So I think more and more we’re going to be investing in enhanced human societies and less and less in the uncertain silicon-based ones. That’s just a guess. It’s based not on technology but on social criteria.

Lucas Perry: I think that it depends what kind of ethics and wisdom we’ll have at that point in time. Generally I think that we may not want to take conventional human notions of personhood and apply them to things where they might not make sense. Like, if you have a system that doesn’t mind being shut off, but it can be restarted, why is it so unethical to shut it off? Or if shutting it off doesn’t make it suffer. Suffering may be some sort of high-level criterion.

George Church: By the same token you can make human beings that don’t mind being shut off. That won’t change our ethics much I don’t think. And you could also make computers that do mind being shut off, so you’ll have this continuum on both sides. And I think we will have sympathetic rules, but combined with the risk, which is the risk that they can hurt you, the risk that if you don’t treat them with respect, they will be more likely to hurt you, the risk that you’re hurting them without knowing it. For example, if you have somebody with locked-in syndrome, you could say, “Oh, they’re just a vegetable,” or you could say, “They’re actually feeling more pain than I am because they have no agency, they have no ability to control their situation.”

So I think creating computers that could have the moral equivalent of locked-in syndrome or some other pain without the ability to announce their pain could be very troubling to us. And we would only overcome it if that were a solution to an existential problem or had some gigantic economic benefit. I’ve already called that into question.

Lucas Perry: So then, in terms of the first AGI, do you have a particular substrate that you imagine it coming online on?

George Church: My guess is it will probably be very close to what we have right now. As you said, it’s going to be algorithms and databases and things like that. And it will probably at first be a stunt, in the same sense that Go and Jeopardy! are stunts. It’s not clear that those are economically important. A computer that could pass the Turing test will make nice chatbots and phone answering machines and things like that. But beyond that it may not change our world, unless we solve energy issues and so on. So I think, to answer your question, we’re so close to it now that it might be based on an extrapolation of current systems.

Quantum computing I think is maybe a more special case thing. It’s good at encryption, and encryption has a lot of societal utility. But I haven’t yet seen encryption described as something that’s mission critical for space flight or curing diseases, other than the social components of those. And quantum simulation may be beaten by building actual quantum systems. So for example, atomically precise systems that you can build with synthetic biology are quantum systems that are extraordinarily hard to predict, but they’re very easy to synthesize and measure.

Lucas Perry: Is your view here that the first AGI will be on the economic and computational scale of a supercomputer? Such that we imagine we’re still just leveraging really, really big amounts of data, and we haven’t made algorithmic advancements so efficient that the efficiency jumps a lot, but rather the current trends continue and it’s just more and more data and maybe some algorithmic improvements, so the first system is just really big and clunky and expensive, and then that thing can self-recursively try to make itself cheaper, and the direction that it would move in would be increasingly creating hardware which has synthetic bio components?

George Church: Yeah, I’d think that that already exists in a certain sense. We have a hybrid system that is self-correcting, self-improving at an alarming rate. But it is a hybrid system. In fact, it’s such a complex hybrid system that you can’t point to a room where it can make a copy of itself. You can’t even point to a building, possibly not even a state where you can make a copy of this self-modifying system because it involves humans, it involves all kinds of fab labs scattered around the globe.

We could set a goal to be able to do that, but I would argue we’re much closer to achieving that goal with a human being. You can have a room where you only can make a copy of a human, and if that is augmentable, that human can also make computers. Admittedly it would be a very primitive computer if you restricted that human to primitive supplies and a single room. But anyway, I think that’s the direction we’re going. And we’re going to have to get good at doing things in confined spaces because we’re not going to be able to easily duplicate planet Earth, probably going to have to make a smaller version of it and send it off and how big that is we can discuss later.

Lucas Perry: All right. Cool. This is quite perspective shifting and interesting, and I will want to think about this more in general going forward. I want to spend just a few minutes on this next question. I think it’ll just help give listeners a bit of overview. You’ve talked about it in other places. But I’m generally interested in getting a sense of where we currently stand with the science of genetics in terms of reading and interpreting human genomes, and what we can expect on the short to medium term horizon in human genetic and biological sciences for health and longevity?

George Church: Right. The short version is that we have gotten many factors of 10 improvement in speed, cost, accuracy, and interpretability: a 10-million-fold reduction in price, from $3 billion for a poor-quality, non-clinical sort of half a genome, in that each of us has two genomes, one from each parent. So we’ve gone from $3 billion to $300. It will probably be $100 by the middle of the year, and then will keep dropping. There’s no particular second law of thermodynamics or Heisenberg stopping us, at least for another million fold. That’s where we are in terms of technically being able to read, and for that matter write, DNA.

But on the interpretation side, certainly there are genes that we don’t know what they do, and there are diseases that we don’t know what causes them. There’s a great vast amount of ignorance. But that ignorance may not be as impactful as sometimes we think. It’s often said that common diseases, or so-called complex multigenic diseases, are off in the future. But I would reframe that slightly for everyone’s consideration: many of these common diseases are diseases of aging. Not all of them, but many, many of them that we care about. And it could be that attacking aging as a specific research program may be more effective than trying to list all the millions of small genetic changes that have small phenotypic effects on these complex diseases.

So that’s another aspect of the interpretation, where we don’t necessarily have to get super good at so-called polygenic risk scores. We will; we are getting better at it. But in the end, a lot of the things that we got so excited about in precision medicine, and I’ve been one of the champions of precision medicine since before it was called that, have a potential flaw, which is the tendency to work on reactive cures for specific cancers and inherited diseases and so forth, when the preventative form of it, which could be quite generic and less personalized, might be more cost-effective and humane.

So for example, taking inherited diseases, we have a million to multiple millions of dollars spent per individual on people with inherited diseases, while a $100 genetic diagnosis could be used to prevent that. And generic solutions like aging reversal or aging prevention might stop cancer more effectively than trying to stop it once it gets to the metastatic stage, into which a great deal of resources are put. That’s my update on where genomics is. There’s a lot more that could be said.

Lucas Perry: Yeah. As a complete layperson in terms of the biological sciences, stopping aging to me sounds like repairing and cleaning up human DNA and the human genome such that information that is lost over time is repaired. Correct me if I’m wrong or explain a little bit about what the solution to aging might look like.

George Church: I think there are two kind of closely related schools of thought. One is that there’s damage that you need to go in there and fix, the way you would fix a pothole. And the other is that there’s regulation that informs the system how to fix itself. I believe in both. I tend to focus on the second one.

If you take a very young cell, say a fetal cell, it has a tendency to repair much better than an 80-year-old adult cell. The immune system of a toddler is much more capable than that of a 90-year-old. This isn’t necessarily due to damage. This is due to the so-called epigenetic regulation of the system. So one cell is convinced that it’s young; I’m going to use some anthropomorphic terms here. So you can take an 80-year-old cell, actually up to 100 years has now been done, and reprogram it into an embryo-like state through, for example, Yamanaka factors, named after Shinya Yamanaka. And that reprogramming resets many, not all, of the features such that it now behaves like a young, non-senescent cell, while you might have taken it from a 100-year-old fibroblast that would only replicate a few times before it senesced and died.

Things like that seem to convince us that aging is reversible and you don’t have to micromanage it. You don’t have to go in there and sequence the genome and find every bit of damage and repair it. The cell will repair itself.

Now there are some things, like if you delete a gene, it’s gone unless you have a copy of it, in which case you could copy it over. But those cells will probably die off. And the same thing happens in the germline when you’re passing from parent to kid; those sorts of things can happen, and the process of weeding them out is not terribly humane right now.

Lucas Perry: Do you have a sense of timelines on progress against aging throughout the century?

George Church: There’s been a lot of wishful thinking for centuries on this topic. But I think we have a wildly different scenario now, partly because of this exponential improvement in technologies, reading and writing DNA, and the list goes on and on in cell biology and so forth. So I think we suddenly have a great deal of knowledge of the causes of aging and ways to manipulate those to reverse it. And I think these are all exponentials and we’re going to act on them very shortly.

We already are seeing some aging drugs, small molecules that are in clinical trials. My lab just published a combination gene therapy that will hit five different diseases of aging in mice and now it’s in clinical trials in dogs and then hopefully in a couple of years it will be in clinical trials in humans.

We’re not talking about centuries here. We’re talking about the sort of time that it takes to get things through clinical trials, which is about a decade. And there’s a lot of stuff going on in parallel, which then, after one decade of parallel trials, would be merging into combined trials. So a couple of decades.

Lucas Perry: All right. So I’m going to get in trouble in here if I don’t talk to you about synthetic bio risk. So, let’s pivot into that. What are your views and perspectives on the dangers to human civilization that an increasingly widespread and more advanced science of synthetic biology will pose?

George Church: I think it’s a significant risk. Getting back to the very beginning of our conversation, I think it’s probably one of the most significant existential risks. And I think that preventing it is not as easy as nukes. Not that nukes are easy, but it’s harder. Partly because it’s becoming cheaper and the information is becoming more widespread.

But it is possible. Part of it depends on having many more positive, societally altruistic do-gooders than people who do bad. It would be helpful if we could also make a big impact on poverty, diseases associated with poverty, and psychiatric disorders. The kind of thing that causes unrest and dissatisfaction is what tips the balance, where one rare individual or a small team will do something that would otherwise be unthinkable even for them. But if they’re sociopaths, or they are representing a disadvantaged category of people, then they feel justified.

So we have to get at some of those core things. It would also be helpful if we were more isolated. Right now we are a very well mixed pot, which puts us at risk for both natural and engineered diseases. So if some of us lived in sealed environments on Earth that are very similar to the sealed environments that we would need in space, that would prepare us for going into space, and some of them would actually be in space. And so the further we are away from the mayhem of our wonderful current society, the better. If we had a significant fraction of the population that was isolated, either on Earth or elsewhere, it would lower the risk of all of us dying.

Lucas Perry: That makes sense. What are your intuitions about the offense/defense balance on synthetic bio risk? Like if we have 95% to 98% synthetic bio do gooders and a small percentage of malevolent actors or actors who want more power, how do you see the relative strength and weakness of offense versus defense?

George Church: I think as usual it’s a little easier to do offense. It can go back and forth. Certainly it seems easier to defend yourself from an ICBM than from something that could be spread in a cough. And we’re seeing that in spades right now. I think the fraction of white hats versus black hats is much better than 98%, and it has to be. It has to be more like a billion to one. And even then it’s very risky. But yeah, it’s not easy to protect.

Now you can do surveillance so that you can restrict research as best you can, but it’s a numbers game. It’s a combination of removing incentives, adding strong surveillance, and whistleblowers that are not fearful of false positives. The suspicious package in the airport should be something you look at, even though most of them are not actually bombs. We should tolerate a very high rate of false positives. But yes, surveillance is not something we’re super good at. It falls in the category of preventative medicine. We would far prefer to be reactive, to wait until somebody releases some pathogen and then say, “Oh, yeah, yeah, we can prevent that from happening again in the future.”

Lucas Perry: Is there an opportunity for boosting or beefing up the human immune system, or for public early-warning detection systems for powerful and deadly synthetic bio agents?

George Church: Well so, yes is the simple answer. If we boost our immune systems in a public way, which it almost would have to be (there’d be much discussion about how to do that), then pathogens that get around those boosts might become more common. In terms of surveillance, I proposed in 2004 that we had an opportunity, and still do, of doing surveillance on all synthetic DNA. I think that really should be 100% worldwide. Right now it’s 80% or so. It is relatively inexpensive to fully implement, and the fact that we’ve done 80% already gets us closer to this.

Lucas Perry: Yeah. So, funny enough, I was actually just about to ask you about that paper that I think you’re referencing. In 2004 you wrote A Synthetic Biohazard Non-proliferation Proposal, in anticipation of a growing dual-use risk of synthetic biology, which proposed in part the sale and registry of certain synthesis machines to verified researchers. If you were to write a similar proposal today, are there elements of it you would change or add, especially since the ability to conduct synthetic biology research has vastly proliferated since then? And just generally, are you comfortable with the current governance of dual-use research?

George Church: I probably would not change that 2004 white paper very much. Amazingly, the world has not changed that much. There still are a very limited number of chemistries and devices and companies, so that’s a bottleneck which you can regulate, and it is being regulated by the International Gene Synthesis Consortium, IGSC. I did advocate back then, and I’m still advocating, that we get closer to an international agreement. Two secretaries-general of the United Nations have said casually that they would be in favor of that, but we need essentially every level from the UN all the way down to local governments.

There’s really very little pushback today. There was some pushback back in 2004, where the companies’ lawyers felt that they would be responsible or that there would be an invasion of their customers’ privacy. But I think eventually the rationale of high-risk avoidance won out, so now it’s just a matter of getting full compliance.

One of the unfortunate things is that the better you are at avoiding an existential risk, the fewer people know about it. In fact, we did so well on Y2K that it’s uncertain whether we needed to do anything about Y2K at all, and hopefully the same thing will be true for a number of disasters that we avoid without most of the population even knowing how close we were.

Lucas Perry: So the main surveillance intervention here would be heavy monitoring, regulation, and tracking of the synthesis machines? And then also a watchdog organization which would inspect the products of said machines?

George Church: Correct.

Lucas Perry: Okay.

George Church: Right now most of the DNA is ordered. You send your order over the internet, and they’ll send back the DNA. Those same principles have to apply to desktop devices. There has to be some kind of approval to show that you are qualified to make a particular DNA before the machine will make that DNA. And it has to be protected against hardware and software hacking, which is a challenge. But again, it’s a numbers game.
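As a purely illustrative sketch of the kind of order screening being described, here is a toy k-mer check against a local list of sequences of concern. Everything in it (the database contents, the k-mer length, the flagging threshold) is invented for illustration; real screening pipelines, such as those run under IGSC protocols, use curated databases and far more sophisticated alignment and human review steps.

```python
# Toy sketch: flag a synthesis order that shares many k-mers with any
# "sequence of concern". Database, k-mer length, and threshold are invented.

def kmers(seq: str, k: int = 20) -> set[str]:
    """Return the set of all k-length substrings of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq: str, concern_db: dict[str, str],
                 k: int = 20, threshold: float = 0.1) -> list[str]:
    """Return names of concern sequences sharing more than `threshold`
    of the order's distinct k-mers."""
    order_kmers = kmers(order_seq, k)
    if not order_kmers:
        return []
    hits = []
    for name, concern_seq in concern_db.items():
        overlap = len(order_kmers & kmers(concern_seq, k)) / len(order_kmers)
        if overlap > threshold:
            hits.append(name)
    return hits

# Hypothetical usage; a real database would be curated and access-controlled.
toy_concern = "ACGT" * 15                   # stand-in for a real sequence of concern
concern_db = {"example_sequence_of_concern": toy_concern}
order = "TTTT" + toy_concern[:40] + "GGGG"  # an order that partially matches
print(screen_order(order, concern_db))      # -> ['example_sequence_of_concern']
```

In a deployed system the same check would have to run locally on a desktop synthesizer before it prints anything, with the flagged orders held for human review rather than silently rejected.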

Lucas Perry: So on the topic of biological risk, we’re currently in the context of the COVID-19 pandemic. What do you think humanity should take as lessons from COVID-19?

George Church: Well, I think the big one is testing. Testing is probably the fastest way out of it right now. The geographical locations that have pulled out of it fastest were the ones that were best at testing and isolation. If your testing is good enough, you don’t even have to have very good contact tracing, but that’s also valuable. The longer shots are cures and vaccines, and those are not entirely necessary, and they are long-term and uncertain. There’s no guarantee that we will come up with a cure or a vaccine. For example, HIV, TB, and malaria do not have great vaccines, and most of them don’t have great stable cures. HIV has had a full series of treatments over time, but not even cures; they’re more maintenance, management.

I sincerely hope that coronavirus is not in that category of HIV, TB, and malaria. But we can’t do public health based on hopes alone. So, testing. I’ve been requesting a bio-weather map, and working towards improving the technology to do so, since around 2002, which was before SARS in 2003. Part of the inspiration for the Personal Genome Project was this bold idea of a bio-weather map. We should be at least as interested in what biology is doing geographically as we are in what the low-pressure fronts are doing geographically. It could be extremely inexpensive, certainly relative to the multi-trillion-dollar cost of one disease.

Lucas Perry: So given the ongoing pandemic, what has COVID-19 demonstrated about human global systems in relation to existential and global catastrophic risk?

George Church: I think it’s a dramatic demonstration that we’re more fragile than we would like to believe. It’s a demonstration that we tend to be more reactive than proactive or preventative. And it’s a demonstration that we’re heterogeneous, that there are geographical regions and political systems that are better prepared. And I would say at this point the United States is probably among the least prepared, and that was predictable by people who thought about this in advance. Hopefully we will be adequately prepared that we will not emerge from this as a third world nation. But that is still a possibility.

I think it’s extremely important to make our human systems, especially global systems more resilient. It would be nice to take as examples the countries that did the best or even towns that did the best. For example, the towns of Vo, Italy and I think Bolinas, California, and try to spread that out to the regions that did the worst. Just by isolation and testing, you can eliminate it. That sort of thing is something that we should have worldwide. To make the human systems more resilient we can alter our bodies, but I think very effective is altering our social structures so that we are testing more frequently, we’re constantly monitoring both zoonotic sources and testing bushmeat and all the places where we’re getting too close to the animals. But also testing our cities and all the environments that humans are in so that we have a higher probability of seeing patient zero before they become a patient.

Lucas Perry: The last category that you brought up at the very beginning of this podcast was preventative measures and part of that was not having all of our eggs in the same basket. That has to do with say Mars colonization or colonization of other moons which are perhaps more habitable and then eventually to Alpha Centauri and beyond. So with advanced biology and advanced artificial intelligence, we’ll have better tools and information for successful space colonization. What do you see as the main obstacles to overcome for colonizing the solar system and beyond?

George Church: So we’ll start with the solar system. Most of the solar system is not pleasant compared to Earth. It’s a vacuum and it’s cold, including Mars and many of the moons. There are moons that have more liquid water than Earth, but it typically requires some drilling to get down to it. There’s radiation. There’s low gravity. And we’re not adapted to any of that.

So we might have to do some biological changes. They aren’t necessarily germline, but they’ll be the equivalent. There are things that you could do: you can simulate gravity with centrifuges, and you can simulate the radiation protection we have on Earth with magnetic fields and thick shielding, the equivalent of 10 meters of water or dirt. But there will be a tendency to try to solve those problems. There’ll be issues of infectious disease, which ones we want to bring with us and which ones we want to quarantine away from. That’s an opportunity more than a uniquely space-related problem.

A lot of the barriers I think are biological. We need to practice building colonies. Right now we have never had a completely recycled human system. We have completely recycled plant and animal systems, but none that include humans, and that partly has to do with social issues, hygiene and eating practices and so forth. I think that can be done, but it should be tested on Earth, because the consequences of failure on a moon or a non-Earth planet are much more severe than if you test it out on Earth. We should have thousands, possibly millions, of little space colonies on Earth; one of my pet projects is making that economically feasible on Earth. Only by heavy testing at that scale will we find the real gotchas and failure modes.

And then the final barrier, which is more in the category that people usually think about, is the economics. If you do the physics calculation of how much energy it takes to raise a kilogram into orbit or out of orbit, it’s much, much less, orders of magnitude less, than the cost per kilogram of what we currently do. So there’s some opportunity for improvement there. So that’s in the solar system.
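A rough version of the physics calculation alluded to here, with assumed round numbers (LEO orbital speed of about 7.8 km/s, a ~400 km altitude, electricity at roughly $0.10 per kWh, and a historical launch price on the order of $2,000 per kilogram):

```python
# Back-of-envelope: ideal energy to lift 1 kg to low Earth orbit vs. launch prices.
# All specific numbers here are round-number assumptions for illustration.

m_kg = 1.0
v_orbit = 7_800.0        # m/s, approximate LEO orbital speed
g = 9.81                 # m/s^2
altitude_m = 400_000.0   # rough LEO altitude

kinetic_j = 0.5 * m_kg * v_orbit ** 2      # ~30 MJ
potential_j = m_kg * g * altitude_m        # ~4 MJ (constant-g approximation)
total_kwh = (kinetic_j + potential_j) / 3.6e6

electricity_usd = total_kwh * 0.10         # assume ~$0.10 per kWh
launch_usd_per_kg = 2_000.0                # assumed historical order of magnitude

print(f"Ideal energy: ~{(kinetic_j + potential_j) / 1e6:.0f} MJ "
      f"(~{total_kwh:.1f} kWh, ~${electricity_usd:.2f} of electricity)")
print(f"Gap vs. ~${launch_usd_per_kg:,.0f}/kg launch price: "
      f"~{launch_usd_per_kg / electricity_usd:,.0f}x")
```

Under these assumptions the ideal energy cost is around a dollar of electricity per kilogram, roughly three orders of magnitude below typical launch prices, which is the opportunity for improvement referred to above.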

Outside of the solar system let’s say Proxima B, Alpha Centauri and things of that range, there’s nothing particularly interesting between here and there, although there’s nothing to stop us from occupying the vacuum of space. To get to four and a half light years either requires a revolution in propulsion and sustainability in a very small container, or a revolution in the size of the container that we’re sending.

So, one pet project that I’m working on is trying to make a nanogram-sized object that would contain the information sufficient for building a civilization, or at least building a communication device. It’s much easier to accelerate and decelerate a nanogram than it is anything at the scale of the space probes we currently use.

Lucas Perry: Many of the issues that human beings will face within the solar system and beyond seem like things that machines, or the synthetic computation that exists today, would be more robust to. Again, there are the things which you’ve already talked about, like the computational efficiency and the precision for self-repair and other kinds of things that modern computers may not have. So I think just a little bit of perspective on that would be useful: why we might not expect that machines would take the place of humans in many of these endeavors.

George Church: Well, so for example, we would be hard-pressed to even estimate, and I haven’t seen a good estimate yet of, a self-contained device that could make a copy of itself from dirt, or whatever chemicals are available to it on a new planet. But we do know how to do that with humans or hybrid systems.

Here’s a perfect example of a hybrid system: a human can’t just go out into space; it needs a spaceship. A spaceship can’t go out into space either; it needs a human. So making a replicating system seems like a good idea, both because we are replicating systems and because it lowers the size of the package you need to send. So if you want to have a million people in the Alpha Centauri system, it might be easier just to send a few people and a bunch of frozen embryos or something like that.

Sending an artificial general intelligence is not sufficient. It has to also be able to make a copy of itself, which I think is a much higher hurdle than just AGI. I think we will achieve AGI before we achieve AGI plus replication. It may not be much before, but it will probably be before.

In principle, a lot of organisms, including humans, start from single cells and mammals tend to need more support structure than most other vertebrates. But in principle if you land a vertebrate fertilized egg in an aquatic environment, it will develop and make copies of itself and maybe even structures.

So my speculation is that there exists a nanogram cell that’s about the size of a lot of vertebrate eggs; there exists a design for a nanogram that would be capable of dealing with a wide variety of harsh environments. We have organisms that thrive everywhere between the freezing point of water and the boiling point, or 100-plus degrees at high pressure. So you have this nanogram that is adapted to a variety of different environments and can reproduce, make copies of itself, and built into it is a great deal of know-how about building things. The same way that building a nest is built into a bird’s DNA, you could have programmed into it an ability to build computers or radio or laser transmitters so it could communicate and get more information.

So a nanogram could travel at close to the speed of light and then communicate at close to the speed of light once it replicates. I think that illustrates the value of hybrid systems, with, in this particular case, a high emphasis on the biochemical, biological components capable of replicating as the core thing that you need for efficient transport.
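To see why payload mass dominates the interstellar calculus, here is a toy kinetic-energy comparison. The 20%-of-light-speed cruise speed and the 1,000 kg reference probe are assumptions for illustration only; real mission energetics would also have to account for propulsion inefficiency and deceleration.

```python
# Toy comparison: kinetic energy to reach an assumed cruise speed of 0.2c.
# Non-relativistic formula; at 0.2c the relativistic correction is only a few percent.

C = 3.0e8                  # speed of light, m/s
v = 0.2 * C                # assumed cruise speed

def kinetic_energy_j(mass_kg: float) -> float:
    return 0.5 * mass_kg * v ** 2

nanogram_kg = 1e-12
probe_kg = 1_000.0         # assumed mass of a conventional interstellar probe

print(f"Nanogram payload: ~{kinetic_energy_j(nanogram_kg):.1e} J (a couple of kilojoules)")
print(f"1,000 kg probe:   ~{kinetic_energy_j(probe_kg):.1e} J")
print(f"Ratio:            ~{kinetic_energy_j(probe_kg) / kinetic_energy_j(nanogram_kg):.0e}x")
```

Under these assumptions, a nanogram payload needs about fifteen orders of magnitude less kinetic energy than a tonne-scale probe to reach the same cruise speed.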

Lucas Perry: If your claim about hybrid systems is true, and we extrapolate it to, say, the deep future, then if there are any other civilizations out there, the form in which we meet them will likely also be hybrid systems.

And this point brings me to reflect on something that Nick Bostrom talks about: great filters, which are supposed points in the evolution and genesis of life throughout the cosmos that are very difficult for life to make it through, so that almost nothing makes it past the filter. This is hypothesized as a way of explaining the Fermi paradox: why, when there are hundreds of billions of galaxies, do we not see any alien superstructures, and why haven't we met anyone yet?

So, I’m curious to know if you have any thoughts or opinions on what the main great filters to reaching interstellar civilization might be?

George Church: Of all the questions you've asked, this is the one where I'm most uncertain. I study, among other things, how life originated, in particular how we make complex biopolymers: ribosomes making proteins, for example, and the genetic code. That strikes me as a pretty difficult thing to have arisen. That's one filter, maybe much earlier than many people would think.

Another one might be lack of interest: once you get to a certain level of sophistication, you're happy with your life and your civilization, and then typically you're overrun by someone or something that is more primitive from your perspective. Then they become complacent, and the cycle repeats itself.

Or the misunderstanding of resources. We've seen a number of island civilizations that have gone extinct because they didn't have a sustainable ecosystem, or because they turned inward. Like Easter Island: they got very interested in making statues and tearing down trees in order to do that, and so they ended up with an island that didn't have any trees. They didn't use those trees to build ships so they could populate the rest of the planet. They just miscalculated.

So all of those could be barriers. I don't know which of them it is. There are probably many planets and moons where, if we transplanted life, it would thrive. But it could be that just making life in the first place is hard, and then making intelligence and civilizations that care to grow outside of their planet. It might also be hard to detect them if they're growing in a subtle way.

Lucas Perry: I think the first thing you brought up might be earlier than some people expect, but I think for many people thinking about great filters, it's not only abiogenesis, if that's the right word, that seems really hard: getting the first self-replicating things going in the ancient oceans. There seem to be lots of potential filters from there to multicellular organisms and then to general intelligences like people and beyond.

George Church: But many empires have just become complacent, and they've been overtaken by perfectly obvious technology that they could have at least kept up with by spying, if not by invention. But they became complacent. They seem to plateau at roughly the same place. We're plateauing at more or less the same place the Easter Islanders and the Roman Empire plateaued. The slight difference today is that we are maybe a spacefaring civilization now.

Lucas Perry: Barely.

George Church: Yeah.

Lucas Perry: So, it seems climate change is something you've been thinking about a bunch. You have the Woolly Mammoth Project, which we don't necessarily need to get into here. But are you considering, or are you optimistic about, other methods of using genetic engineering to combat climate change?

George Church: Yeah, I think genetic engineering has potential. Most of the other things we talk about, putting in LEDs, slightly more efficient car engines, solar power and so forth, are slowing down the inevitable rather than reversing it. To reverse it we need to take carbon out of the air, and a really great way to do that is with photosynthesis, partly because it builds itself. So if we just allow the Arctic to do photosynthesis the way it used to, we could get a net loss of carbon dioxide from the atmosphere and put it into the ground rather than releasing a lot.

That’s part of the reason that I’m obsessed with Arctic solutions and the Arctic Ocean is also similar. It’s the place where you get upwelling of nutrients, and so you get a natural, very high rate of carbon fixation. It’s just you also have a high rate of carbon consumption back into carbon dioxide. So if you could change that cycle a little bit. So that I think both Arctic land and ocean is a very good place to reverse carbon and accumulation in the atmosphere, and I think that that is best done with synthetic biology.

Now, the barriers have historically been about releasing recombinant DNA into the wild. We now have salmon that are essentially in the wild, we have engineered humans that are in the wild, and golden rice is now finally, after more than a decade of tussle, being used in the Philippines.

So I think we’re going to see more and more of that. To some extent even the plants of agriculture are in the wild. This is one of the things that was controversial, was that the pollen was going all over the place. But I think there’s essentially zero examples of recombinant DNA causing human damage. And so we just need to be cautious about our environmental decision making.

Lucas Perry: All right. Now taking kind of a sharp pivot here. In the philosophy of consciousness there is a distinction between the hard problem of consciousness and the easy problems. The hard problem is: why is it that computational systems have something that it is like to be that system? Why is there a first-person phenomenal and experiential perspective, filled with what one might call qualia? Some people reject the hard problem as an actual problem and prefer to say that consciousness is an illusion or is not real. Other people are realists about consciousness; they believe phenomenal consciousness is substantially real and is on the same ontological or metaphysical footing as other fundamental forces of nature, or that perhaps consciousness discloses the intrinsic nature of the physical.

And then the easy problems are things like: how is it that we see, how is it that light enters the eyes and gets computed, how is it that certain processes are computationally related to consciousness?

David Chalmers identifies another problem here, the meta-problem of consciousness, which is: why do we make reports about consciousness? Why do we even talk about consciousness, particularly if it's an illusion? Maybe it's serving some kind of strange computational efficiency. And if it is real, there seems to be some tension between the standard model of physics feeling pretty complete and the question of how we would be making reports about something without real causal efficacy, if there's nothing to add to the standard model.

Now, you have the Human Connectome Project, which would seem to help a lot with the easy problems of consciousness and might have something to say about the meta-problem. So I'm curious whether you have particular views on consciousness, or on how the Human Connectome Project might relate to that interest.

George Church: Okay. So I think that consciousness is real and has selective advantage. Part of reality, to a biologist, is evolution, and I think consciousness is somewhat coupled to free will. Even though they are real and hard to think about, I think they may be easier than we often let on, and this is when you think of them from an evolutionary standpoint or also from a simulation standpoint.

I can really only evaluate consciousness and qualia by observation. I can only infer that you have something similar to what I feel from what you do. And from that standpoint it wouldn't be that hard to make a synthetic system that displayed consciousness in a way that would be nearly impossible to refute. And as that system replicated and took on a life of its own, let's say it's some hybrid biological, non-biological system that displays consciousness, to really convincingly display consciousness it would also have to have some general intelligence, or at least pass the Turing test.

But it would have an evolutionary advantage in that it could think, or could reason about itself; it recognizes the difference between itself and something else. And this has already been demonstrated in robots, admittedly in proof-of-concept demos. You have robots that can tell themselves apart from other people in a mirror reflection and operate on their own body by removing dirt from their face, which has only been demonstrated in a handful of animal species, and that can recognize their own voice.

So you can see how these would have evolutionary advantages, and they could be simulated to whatever level of fidelity is necessary to convince an objective observer that they are conscious, as far as you know, to the same extent that I know that you are.

So I think the hard problem is a worthy one. I think it is real, and it has evolutionary consequences. Free will is related, in that free will, I think, is a matter of game theory: if you behave in a completely deterministic, predictable way, all the organisms around you have an advantage over you. They know you are going to do a certain thing, so they can anticipate it; they can steal your food, they can bite you, they can do whatever they want. But if you're unpredictable, which is essentially free will, and in this case it can be a random number generator or dice, you now have a selective advantage. And to some extent you could have more free will than the average human, since the average human is constrained by all sorts of social mores and rules and laws that something with more free will might not be.
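This is essentially the game-theoretic case for mixed strategies: in a repeated zero-sum interaction, a deterministic player can be exploited by anyone who learns their pattern, while a randomizing player cannot. A minimal simulation sketch of that point; the game, strategies, and exploiter model here are invented for illustration and are not from the conversation:

```python
import random

def matching_pennies_win_rate(player, rounds=10000):
    """Repeated matching pennies against an exploiter that simply predicts
    the player will repeat their previous move. The player wins a round
    when the exploiter's guess is wrong."""
    wins = 0
    last = None
    for _ in range(rounds):
        move = player(last)
        guess = last if last is not None else random.choice(["H", "T"])
        if move != guess:
            wins += 1
        last = move
    return wins / rounds

deterministic = lambda last: "H"                     # always heads: fully predictable
randomized = lambda last: random.choice(["H", "T"])  # coin flip: unexploitable

print("deterministic win rate:", matching_pennies_win_rate(deterministic))  # ~0.0
print("randomized win rate:", matching_pennies_win_rate(randomized))        # ~0.5
```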

Lucas Perry: I guess I would just want to tease apart self-consciousness from consciousness in general. I think that one can have a first-person perspective without having a sense of self or being able to reflect on one's own existence as a subject in the world. I also feel a little bit confused about why consciousness, where consciousness is the ability to experience things, would provide an evolutionary advantage; I have some intuitions about it not having causal efficacy, because the standard model doesn't seem to be missing anything, essentially.

And then your point on free will makes sense. I think that people mean very different things here. Within common discourse there is a much spookier version of free will, which we can call libertarian free will, which says that you could have done otherwise; it's more closely related to religion and spirituality, which I reject and which I think most people listening to this would reject. I just wanted to point that out. Your take on free will makes sense and is the more scientific and rational version.

George Church: Well, actually, I could say they could have done otherwise. If you consider that religious, it is totally compatible with flipping a coin; that helps you do otherwise. In the same scenario, you could do something differently. And that ability to do otherwise is of selective advantage, as indeed religions can be of great selective advantage in certain circumstances.

So, back to consciousness versus self-consciousness: I think they're much more intertwined. I'd be cautious about trying to disentangle them too much. I think your ability to reason about your own existence as being separate from other beings is very helpful for, say, self-grooming, self-protection, and so forth. And I think that consciousness that is not about oneself may be a byproduct of that.

The greater your ability to reason about yourself versus others, your hand versus the piece of wood in your hand, the more successful you are. Even if you're not super intelligent, just the fact that you're aware that you're different from the entity you're competing with is an advantage. So I don't find it terribly useful to make a giant rift between consciousness and self-consciousness.

Lucas Perry: Okay. So I’m becoming increasingly mindful of your time. We have five minutes left here so I’ve just got one last question for you and I need just a little bit to set it up. You’re vegan as far as I understand.

George Church: Yes.

Lucas Perry: And the effective altruism movement is particularly concerned with animal suffering. We've talked a lot about genetic engineering and its possibilities. David Pearce has written something called The Hedonistic Imperative, which outlines a methodology and philosophy for using genetic engineering to voluntarily edit out suffering. That could be done both for wild animals and for the human species and our descendants.

So I’m curious to know what your view is on animal suffering generally in the world, and do you think about or have thoughts on genetic engineering for wild animal suffering in places outside of human civilization? And then finally, do you view a role for genetic engineering and phasing out human suffering, making it biologically impossible by re-engineering people to operate on gradients of intelligent bliss?

George Church: So for this kind of difficult problem, a technique that I employ is to imagine what this would be like on another planet and in the future, and whether, given that imagined future, we would be willing to come back to where we are now. Rather than asking whether we're willing to go forward, you ask whether you'd be willing to come back. Because there's a great deal of appropriate respect for inertia and the way things have been. Sometimes it's called natural, but I think natural includes the future and everything that's manmade as well; we're all part of nature. So it's really respect for the way things were. Going to the future and asking whether we'd be willing to come back is a different way of looking at it.

I think in going to another planet, we might want to take a limited set of organisms with us, and we might be tempted to make them, including humans, so that they don't suffer. There is a certain amount of, let's say, pain which could be like a little red light going off on your dashboard. But the point of pain is to get your attention. And you could reframe that. Some people are born with congenital insensitivity to pain, CIPA, genetically, and they tend to get into problems because they will chew their lips and other body parts and get infected, or they will jump from high places because it doesn't hurt and break things they shouldn't break.

So you need some kind of alarm system that gets your attention that cannot be ignored. But I think it could be something that people would complain about less. It might even be more effective because you could prioritize it.

I think there’s a lot of potential there. By studying people that have chronic insensitivity to pain, you could even make that something you could turn on and off. SCNA9 for example is a channel in human neuro system that doesn’t cause the dopey effects of opioids. You can be pain-free without being compromised intellectually. So I think that’s a very promising direction to think about this problem.

Lucas Perry: Just to sum that up: you do feel that it is technically feasible to replace pain with some other kind of informationally sensitive signal that could serve the same function of reducing and mitigating risk and signaling damage?

George Church: We can even do better. Right now we're unaware of certain physiological states that can be quite hazardous, and we're blind to, for example, all the pathogens in the air around us. These could be new forms of signaling. It wouldn't occur to me to make every one of those painful; it would be better just to see the pathogens and have little alarms that go off. It's much more intelligent.

Lucas Perry: That makes sense. So, wrapping up here: if people want to follow your work, or follow you on, say, Twitter or other social media, where are the best places to check out your work and follow what you do?

George Church: My Twitter is @geochurch, and my website is easy to find just by Google, but it's arep.med.harvard.edu. Those are the two best places.

Lucas Perry: All right. Thank you so much for this. I think that a lot of the information you provided about the skillfulness and advantages of biology and synthetic computation will challenge many of the intuitions of our usual listeners and people in general. I found this very interesting and valuable, and yeah, thanks so much for coming on.

George Church: Okay. Great. Thank you.

FLI Podcast: Distributing the Benefits of AI via the Windfall Clause with Cullen O’Keefe

As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its beneficiaries. Will this newfound wealth be used to provide for the common good, or will it become increasingly concentrated in the hands of the few who wield AI technologies? Cullen O’Keefe joins us on this episode of the FLI Podcast for a conversation about the Windfall Clause, a mechanism that attempts to ensure the abundance and wealth created by transformative AI benefits humanity globally.

Topics discussed in this episode include:

  • What the Windfall Clause is and how it might function
  • The need for such a mechanism given AGI generated economic windfall
  • Problems the Windfall Clause would help to remedy 
  • The mechanism for distributing windfall profit and the function for defining such profit
  • The legal permissibility of the Windfall Clause 
  • Objections and alternatives to the Windfall Clause

Timestamps: 

0:00 Intro

2:13 What is the Windfall Clause? 

4:51 Why do we need a Windfall Clause? 

06:01 When we might reach windfall profit and what that profit looks like

08:01 Motivations for the Windfall Clause and its ability to help with job loss

11:51 How the Windfall Clause improves allocation of economic windfall 

16:22 The Windfall Clause assisting in a smooth transition to advanced AI systems

18:45 The Windfall Clause as assisting with general norm setting

20:26 The Windfall Clause as serving AI firms by generating goodwill, improving employee relations, and reducing political risk

23:02 The mechanism for distributing windfall profit and desiderata for guiding its formation

25:03 The windfall function and desiderata for guiding its formation

26:56 How the Windfall Clause is different from being a new taxation scheme

30:20 Developing the mechanism for distributing the windfall 

32:56 The legal permissibility of the Windfall Clause in the United States

40:57 The legal permissibility of the Windfall Clause in China and the Cayman Islands

43:28 Historical precedents for the Windfall Clause

44:45 Objections to the Windfall Clause

57:54 Alternatives to the Windfall Clause

01:02:51 Final thoughts

 

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

All of our podcasts are also now on Spotify and iHeartRadio! Or find us on SoundCloud, iTunes, Google Play and Stitcher.

You can listen to the podcast above or read the transcript below. 

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s conversation is with Cullen O’Keefe about a recent report he was the lead author on called The Windfall Clause: Distributing the Benefits of AI for the Common Good. For some quick background, the agricultural and industrial revolutions unlocked new degrees and kinds of abundance, and so too should the intelligence revolution currently underway. Developing powerful forms of AI will likely unlock levels of abundance never before seen, and this comes with the opportunity of using such wealth in service of the common good of all humanity and life on Earth but also with the risks of increasingly concentrated power and resources in the hands of the few who wield AI technologies. This conversation is about one possible mechanism, the Windfall Clause, which attempts to ensure that the abundance and wealth likely to be created by transformative AI systems benefits humanity globally.

For those not familiar with Cullen, Cullen is a policy researcher interested in improving the governance of artificial intelligence using the principles of Effective Altruism.  He currently works as a Research Scientist in Policy at OpenAI and is also a Research Affiliate with the Centre for the Governance of AI at the Future of Humanity Institute.

The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate. You can also follow us on your preferred listening platform, like on Apple Podcasts or Spotify, by searching for us directly or following the links on the page for this podcast found in the description.

And with that, here is Cullen O’Keefe on the Windfall Clause.

We’re here today to discuss this recent paper, that you were the lead author on called the Windfall Clause: Distributing the Benefits of AI for the Common Good. Now, there’s a lot there in the title, so we can start of pretty simply here with, what is the Windfall Clause and how does it serve the mission of distributing the benefits of AI for the common good?

Cullen O’Keefe: So the Windfall Clause is a contractual commitment AI developers can make that basically stipulates that if they achieve windfall profits from AI, they will donate some percentage of those profits to causes that benefit everyone.

Lucas Perry: What does it mean to achieve windfall profits?

Cullen O’Keefe: The answer that we give is that when a firm’s profits grow in excess of 1% of gross world product, which is just the sum of all countries’ GDPs, that firm has hit windfall profits. We use this slightly weird measurement of profits as a percentage of gross world product just to convey that the thing that’s relevant here is not necessarily the absolute size of profits, but the relative size of profits compared to the global economy.
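For a rough sense of scale, here is a minimal calculation; the gross world product figure is an assumption on my part (roughly the recent nominal figure of about $85 trillion), not a number taken from the conversation or the report:

```python
# Illustrative scale only: the GWP figure is an assumption (~recent nominal
# gross world product), not a number from the report.
gross_world_product = 85e12                      # ~USD 85 trillion
windfall_threshold = 0.01 * gross_world_product  # the 1% trigger
print(f"Windfall trigger: ${windfall_threshold:,.0f} in annual profits")
# -> Windfall trigger: $850,000,000,000 in annual profits
```

That is far beyond any firm's annual profits to date, which is why the trigger is framed in relative rather than absolute terms.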

Lucas Perry: Right. And so an important background framing and assumption here seems to be the credence one may have in transformative AI, artificial general intelligence, or superintelligence creating previously unattainable levels of wealth, value, and prosperity. I believe that, in terms of Nick Bostrom’s Superintelligence, this work in particular is striving to serve the common good principle: that superintelligence or AGI should be created in the service and pursuit of the common good of all of humanity and life on Earth. Is there anything here that you could add about the background and inspiration for developing the Windfall Clause?

Cullen O’Keefe: Yeah. That’s exactly right. The phrase Windfall Clause actually comes from Bostrom’s book. The idea was something that people inside of FHI were excited about for a while, but really hadn’t done anything with because of some legal uncertainties, basically the fiduciary duty question that I examine in the third section of the report. When I was an intern there in the summer of 2018, I was asked to do some legal research on this, and I ran away with it from there. My legal research pretty convincingly showed that it should be legal as a matter of corporate law for a corporation to enter into such a contract. In fact, I don’t think it’s a particularly hard case; I think it looks like things that corporations do a lot already. And I think some of the bigger questions were around the implications and design of the Windfall Clause, which are also addressed in the report.

Lucas Perry: So, we have this common good principle, which serves as the moral and ethical foundation. And then the Windfall Clause, it seems, is an attempt at a particular policy mechanism for AGI and superintelligence serving the common good. With this background, could you expand a little bit more on why it is that we need a Windfall Clause?

Cullen O’Keefe: I guess I wouldn’t say that we need a Windfall Clause. The Windfall Clause might be one mechanism that would solve some of these problems. The primary way in which cutting-edge AI is currently being developed is in private companies. And the way that private companies are structured is perhaps not maximally conducive to the common good principle. This is not due to corporate greed or anything like that. It’s more just a function of the role of corporations in our society, which is that they’re primarily vehicles for generating returns to investors. One might think that the tools we currently have for taking some of the returns that are generated for investors and making sure they’re distributed in a more equitable and fair way are inadequate in the face of AGI. And so that’s the motivation for the Windfall Clause.

Lucas Perry: Maybe you could speak a little bit to the surveys of researchers’ credences and estimates about when we might get certain kinds of AI, and then to what windfall in the context of an AGI world actually means.

Cullen O’Keefe: On surveys of AGI timelines, I think this is an area with high uncertainty. We cite Katja Grace’s survey of AI experts, which is a few years old at this point. I believe the median timeline that AI experts gave in that survey for attaining AGI, as defined in a specific way by that paper, was somewhere around 2060. I don’t have opinions on whether that timeline is realistic or unrealistic. We just take it as a baseline, as the best specific timeline that has at least some evidence behind it. And what was the second question?

Lucas Perry: What degrees of wealth might be brought about via transformative AI.

Cullen O’Keefe: The short and unsatisfying answer to this is that we don’t really know. I think the amount of economic literature really focusing on AGI in particular is pretty minimal. Some more research on this would be really valuable. A company earning profits that are defined as windfall by the report would be pretty unprecedented in history, so it’s a very hard situation to imagine. Forecasts about the way that AI will contribute to growth are pretty variable. I think we don’t really have a good idea of what that might mean, especially because the interface between economists and people thinking about AGI has been pretty minimal; a lot of the thinking has been focused on more mainstream issues. If the strongest version of AGI were to come, the economic gains could be pretty huge. There’s a lot on the line in that circumstance.

Part of what motivated the Windfall Clause is trying to think of mechanisms that could withstand this uncertainty about what the actual economics of AGI will be like. And that’s what the contingent and progressively scaling commitment of the Windfall Clause is supposed to accomplish.

Lucas Perry: All right. So, now I’m going to explore here some of these other motivations that you’ve written up in your report. There is the need to address loss of job opportunities. There’s the need to improve the allocation of economic windfall: if we didn’t do anything right now, there would actually be no way of doing that other than whatever system of taxes we would have around that time. There’s also the need to smooth the transition to advanced AI. And then there is a general norm-setting strategy, which I guess is an attempt to imbue and instantiate a kind of benevolent ethics based on the common good principle. Let’s start off by hitting on addressing the loss of job opportunities. How might transformative AI lead to the loss of job opportunities, and how does the Windfall Clause help to remedy that?

Cullen O’Keefe: So I want to start off with a couple of caveats. Number one, I’m not an economist. Second, I’m very wary of promoting Luddite views. It’s definitely true that in the past, technological innovation has been pretty universally positive in the long run, notwithstanding short-term problems with transitions. So it’s by no means inevitable that advances in AI will lead to joblessness or decreased earnings. That said, I do find it pretty hard to imagine a scenario in which we achieve very general-purpose AI systems, like AGI, and there are still bountiful opportunities for human employment. I think there might be some jobs which have human-only employment or something like that. It’s kind of unclear, in an economy with AGI or something else resembling it, why there would be a demand for humans. There might be jobs, I guess, in which people are inherently uncomfortable having non-humans. Good examples of this would be priests or clergy; probably most religions will not want to automate their clergy.

I’m not a theologian, so I can’t speak to the proper theology of that, but that’s just my intuition. People have also mentioned things like psychiatrists, counselors, teachers, child care, stuff like that. That doesn’t look as automatable. And then there’s the human meaning aspect of this. John Danaher, a philosopher, recently released a book called Automation and Utopia, talking about how for most people work is the primary source of meaning. It’s certainly what they do with the great plurality of their waking hours. People like me and you are lucky enough to like our jobs a lot, but for many people work is mostly a source of drudgery, often unpleasant, unsafe, etcetera. But if we find ourselves in a world in which work is largely automated, not only will we have to deal with the economic issues relating to how people who can no longer offer skills for compensation will feed themselves and their families, but also how they’ll find meaning in life.

Lucas Perry: Right. If the category and meaning of jobs changes or is gone altogether, the Windfall Clause is also there to help meet fundamental universal basic human needs, and it can potentially have some impact on this question of value and meaning, if it allows you to have access to hobbies and nice vacations and other things that give human beings meaning.

Cullen O’Keefe: Yeah. I would hope so. It’s not a problem that we explicitly address in the paper. I think this is in the broader category of what to actually do with the windfall once it’s donated. You can think of this as the bottom of the funnel, whereas the Windfall Clause report is more focused at the top of the funnel: getting companies to actually commit to such a thing. And I think there’s a huge, rich area of work in thinking about what we actually do with the surplus from AGI once it manifests, assuming that we can get it into the coffers of a public-minded organization. It’s something that I’m lucky enough to think about in my current job at OpenAI. So yeah, making sure that both material needs and higher psychological needs are taken care of. That’s not something I have great answers for yet.

Lucas Perry: So, moving on here to the second point. We also need a Windfall Clause or function or mechanism, in order to improve the allocation of economic windfall. So, could you explain that one?

Cullen O’Keefe: You can imagine a world in which employment looks much the same as it does today. Most people have jobs, but a lot of the gains are going to a very small group of people, namely shareholders. I think this is still a pretty sub-optimal world. There are diminishing returns on money for happiness, so all else equal and ignoring incentive effects, progressively distributing money seems better than not. Primarily, the firms looking to develop this AI are based in a small set of countries. In fact, within those countries, the group of people who are heavily invested in those companies is even smaller. And so in a world even where employment opportunities for the masses are pretty normal, we could still expect to see pretty concentrated accrual of benefits, both within nations and, I think very importantly, across nations. This seems pretty important to address, and the Windfall Clause aims to do just that.

Lucas Perry: A bit of speculation here, but we could have had a kind of Windfall Clause for the industrial revolution, which probably would have made much of the world better off and there wouldn’t be such unequal concentrations of wealth in the present world.

Cullen O’Keefe: Yeah. I think that’s right. I think there’s sort of a Rawlsian or Harsanyian motivation there, that if we didn’t know whether we would be in an industrial country or a country that is later to develop, we would probably want to set up a system that has a more equal distribution of economic gains than the one that we have today.

Lucas Perry: Yeah. By Rawlsian, you meant Rawls’ veil of ignorance, and then what was the other one you said?

Cullen O’Keefe: Harsanyi is another philosopher associated with the veil of ignorance idea, and he argues, I think pretty forcefully, that the agreement you would actually come to behind the veil of ignorance is one that maximizes expected utility, just due to classic axioms of rationality. What you would actually want to do is maximize expected utility, whereas John Rawls has this idea that you would want to maximize the lot of the worst off, which Harsanyi argues doesn’t really follow from the veil of ignorance and decision-theoretic best practices.

Lucas Perry: The veil of ignorance, for listeners who don’t know what it is: imagine yourself not knowing how you were going to be born into the world, and make ethical, political, moral, and social systems with that view in mind. If you do that, you will pretty honestly and wholesomely come up with something, to the best of your ability, that is good for everyone. From behind that veil of ignorance, not knowing who you might be in the world, you can produce good ethical systems. Now, this is relevant to the Windfall Clause because, going through your paper, there’s the tension between arguing that this is something that is legally permissible and that institutions and companies would want to adopt, and the fact that it is in clear tension with maximizing profits for shareholders and the people with wealth and power in those companies. So there’s this fundamental tension behind the Windfall Clause, between the incentives of those with power to maintain and hold on to that power and wealth, and the very strong and important ethical and normative views and convictions that say this ought to be distributed for the welfare and wellbeing of all sentient beings across the planet.

Cullen O’Keefe: I think that’s exactly right. Part of why I and others at the Future of Humanity Institute were interested in this project is that we know a lot of people working in AI at all levels, and I think a lot of them do want to do the genuinely good thing, but feel the constraints of economics and also of fiduciary duties. We didn’t have any particular insights into that with this piece, but I think part of the motivation is just that we want to put resources out there for any socially conscious AI developers to say, “We want to make this commitment and we feel very legally safe doing so,” for the reasons that I lay out.

It’s a separate question whether it’s actually in their economic interest to do that or not. But at least we think they have the legal power to do so.

Lucas Perry: Okay. So maybe we can get into and explore the ethical aspect of this more. I think we’re very lucky to have people like you and your colleagues who have the ethical conviction to follow through and be committed to something like this. But for the people who don’t have that, I’m interested in discussing later what to do about them. So, in terms of more of the motivations here, the Windfall Clause is also motivated by this need for a smooth transition to transformative AI, AGI, superintelligence, or advanced AI. So what does that mean?

Cullen O’Keefe: As I mentioned, it looks like economic growth from AI will probably be a good thing if we manage to avoid existential and catastrophic risks. That’s almost tautological, I suppose. But just as in the industrial revolution, you had a huge spurt of economic growth but also a lot of turbulence. So part of the idea of the Windfall Clause is basically to funnel some of that growth into a sort of insurance scheme that can help make that transition smoother. An un-smooth transition would be something like: a lot of countries are worried they’re not going to see any appreciable benefit from AI and indeed might lose out a lot, because a lot of their industries would be off-shored or re-shored and a lot of their people would no longer be economically competitive for jobs. So that’s the kind of instability that I think we’re worried about. And the Windfall Clause is basically a way of saying: you’re all going to gain significantly from this advance; everyone has a stake in making this transition go well.

Lucas Perry: Right. So there’s a spectrum here, and on one end of the spectrum there is, say, a private AI lab or company or actor who is able to reach AGI or transformative AI first and who can muster or occupy some significant portion of world GDP. That could be anywhere from one to 99 percent. And there could or could not be mechanisms in place for distributing that to the citizens of the globe. So one can imagine, as power is increasingly concentrated in the hands of the few, that there could be quite a massive amount of civil unrest and problems. It could create very significant turbulence in the world, right?

Cullen O’Keefe: Yeah. Exactly. And it’s our hypothesis that having credible mechanisms ex ante to make sure that approximately everyone gains from this will make people and countries less likely to take destabilizing actions. It’s also a public good of sorts. You would expect that it would be in everyone’s interest for this to happen, but it’s never individually rational to commit that much to making it happen, which is why providing those sorts of public goods is a traditional role for governments and for philanthropy.

Lucas Perry: So that last point on the motivations for why we need a Windfall Clause would be general norm setting. What do you have to say about general norm setting?

Cullen O’Keefe: This one is definitely a little more vague than some of the others. But if you think about what type of organization you would like to see develop AGI, it seems like having some legal commitment to sharing those benefits broadly is probably correlated with good outcomes. And in that sense, it’s useful to be able to distinguish organizations that are credibly committed to that sort of benefit from ones that say they want that sort of broad benefit but are not necessarily committed to making it happen. So in the Windfall Clause report, we are basically trying to say that it’s very important to take norms about the development of AI seriously. One of the norms that we’re trying to develop is the common good principle. And even better is when you can develop those norms through high-cost or high-signal-value mechanisms. If we’re right that a Windfall Clause can be made binding, then the Windfall Clause is exactly one of them. It’s a pretty credible way for an AI developer to demonstrate their commitment to the common good principle and also show that they’re worthy of taking on this huge task of developing AGI.

The Windfall Clause makes performance of, or adherence to, the common good principle a testable hypothesis. It sets a kind of baseline against which commitments to the common good principle can be measured.

Lucas Perry: Now there are also firm motivations in your paper: incentives for adopting a Windfall Clause from the perspective of the AI labs, AI companies, or private institutions which may develop AGI or transformative AI. Your three points here for firm motivations are that it can generate general goodwill, it can improve employee relations, and it can reduce political risk. Could you hit on each of these and why firms might be willing to adopt the Windfall Clause?

Cullen O’Keefe: Yeah. So just as a general note, we do see private corporations giving money to charity and taking other pro-social actions that are beyond their legal obligations, so nothing here is particularly new. Instead, we’re just applying traditional explanations for why companies engage in what’s sometimes called corporate social responsibility, or CSR, and seeing whether those are plausible explanations for why they might be amenable to a Windfall Clause. The first one we mention in the report is just generating general goodwill, and I think it’s plausible that companies will want to sign a Windfall Clause because it brings some sort of reputational benefit with consumers or other intermediary businesses.

The second one we talk about is managing employee relationships. In general, we see that tech employees have had a lot of power to shape the behavior of their employers. Fellow FLI podcast guest Haydn Belfield just wrote a great paper on this in AI specifically. Tech talent is in very high demand, and therefore they have a lot of bargaining power over what their firms do, and I think it’s potentially very promising for tech employees to lobby for commitments like the Windfall Clause.

The third is what’s termed in a lot of legal and investment circles political risk, which is basically the risk of governments or activists doing things that hurt you, such as tighter regulation, expropriation, taxation, things like that. Corporate social responsibility, including philanthropy, is a very common way for firms to manage that, and could be for AI firms as well.

Lucas Perry: How strong do you think these motivations listed here are, and what do you think will be the main things that drive firms or institutions or organizations to adopt the Windfall Clause?

Cullen O’Keefe: I think it varies from firm to firm. A big one that’s not listed here is whether management likes the idea of a Windfall Clause; obviously, they’re the ones ultimately making the decisions, so that makes sense. I think employee buy-in and enthusiasm about the Windfall Clause or similar ideas will ultimately be a pretty big determinant of whether this actually gets implemented. That’s why I would love to hear and see engagement around this topic from people in the technology industry.

Lucas Perry: Something that we haven’t talked about yet is the distribution mechanism. In your paper, you come up with desiderata and important considerations for an effective and successful distribution mechanism: philanthropic effectiveness, security from improper influences, political legitimacy, and buy-in from AI labs. These are guiding principles for helping to develop the mechanism for distribution. Could you comment on what the mechanism for distribution is or could be, and how these desiderata will guide the formation of that mechanism?

Cullen O’Keefe: A lot of this thinking is guided by a few different things. One is just involvement in the effective altruism community. As a member of that community, I spend a lot of time thinking about how to make philanthropy work well. That said, I think the potential scale of the Windfall Clause requires thinking about factors other than effectiveness in the way that effective altruists think of it, just because the scale of potential resources that you’re dealing with here begins to look less and less like traditional philanthropy and more and more like a pseudo- or para-governmental institution. That’s why I think things like accountability and legitimacy become extra important in the Windfall Clause context. And then there’s firm buy-in, which I mentioned, just because part of the actual process of negotiating an eventual Windfall Clause would presumably be coming up with a distribution mechanism that advances some of the firm’s objectives of getting positive publicity or goodwill from agreeing to the Windfall Clause, both with their consumers and also with employees and governments.

And so they’re key stakeholders in coming up with that process as well. This all happens against the backdrop of a lot of popular discussion about the role of philanthropy in society, such as recent criticism of mega-philanthropy. I take those criticisms pretty seriously and want to come up with a Windfall Clause distribution mechanism that manages them better than current philanthropy does. That’s a big task in itself and one that needs to be taken pretty seriously.

Lucas Perry: Is the windfall function synonymous with the windfall distribution mechanism?

Cullen O’Keefe: No. The windfall function is the mathematical function that determines how much money signatories to the Windfall Clause are obligated to give.

Lucas Perry: So, the windfall function will be part of the windfall contract, and the windfall distribution mechanism is the vehicle or means or the institution by which that output of the function is distributed?

Cullen O’Keefe: Yeah. That’s exactly right. Again, I like to think of this as top of the funnel, bottom of the funnel. The windfall function is the top of the funnel: it defines how much money has to go into the Windfall Clause system. The bottom of the funnel is the output, what actually gets done with the windfall to advance the goals of the Windfall Clause.

Lucas Perry: Okay. And so here you have some desiderata for this function, in particular transparency, scale sensitivity, adequacy, pre-windfall commitment, incentive alignment, and competitiveness. Are there any here that you want to comment on with regard to the windfall function?

Cullen O’Keefe: Sure. If you look at the windfall function, it looks kind of like a progressive tax system. You fall into some bracket, and the bracket you’re in determines the marginal percentage of money that you owe. In a normal income tax scheme, the bracket is determined by your gross income. In the Windfall Clause scheme, the bracket is determined by a slightly modified thing, which is profits as a percent of gross world product, which we started off talking about.

We went back and forth on a few different ways this could look, but we ultimately decided upon a simpler windfall function that looks much like an income tax scheme, because we thought it was pretty transparent and easy to understand. And for a project as potentially important as the Windfall Clause, we thought it was pretty important that people be able to understand the contract being negotiated, not just the signatories.
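To make the bracketed, marginal-rate idea concrete, here is a minimal sketch of what such a windfall function could look like in code. The brackets and rates below are invented for illustration; they are not the figures proposed in the report.

```python
def windfall_obligation(profits, gross_world_product):
    """Hypothetical bracketed windfall function. Each bracket is
    (lower bound as a fraction of GWP, marginal rate), and the marginal
    rate applies only to profits above that bound, like income tax brackets.
    The brackets and rates here are illustrative assumptions."""
    brackets = [
        (0.001, 0.01),  # 1% marginal rate on profits above 0.1% of GWP
        (0.01, 0.20),   # 20% marginal rate above 1% of GWP
        (0.10, 0.50),   # 50% marginal rate above 10% of GWP
    ]
    obligation = 0.0
    for i, (lower_frac, rate) in enumerate(brackets):
        lower = lower_frac * gross_world_product
        upper = (brackets[i + 1][0] * gross_world_product
                 if i + 1 < len(brackets) else float("inf"))
        if profits > lower:
            obligation += rate * (min(profits, upper) - lower)
    return obligation

# Example: $2 trillion in annual profits against an assumed $85 trillion GWP.
print(windfall_obligation(2e12, 85e12))  # ~$238 billion under these made-up rates
```

Because the rates are marginal, a firm just over a threshold owes only a small amount on the excess, which is part of what keeps the ex-ante cost of signing low.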

Lucas Perry: Okay. And you’re bringing up this point about taxes. One thing someone might ask is, “Why do we need a whole Windfall Clause when we could just have some kind of tax on benefits accrued from AI?” But the very important feature to be mindful of here is that the Windfall Clause does something that taxing cannot do, which is redistribute funding from tech-heavy first-world countries to people around the world, rather than just to the government of the country able to tax them. So that also seems to be a very important consideration for why the Windfall Clause matters, rather than just some new tax scheme.

Cullen O’Keefe: Yeah. Absolutely. And in talking to people about the Windfall Clause, this is one of the top concerns that comes up, so you’re right to emphasize it. I agree that the potential for international distribution is one of the main reasons that I personally am more excited about the Windfall Clause than standard corporate taxation. Another reason is that it seems more tractable to negotiate this individually with firms; the number of firms potentially in a position to develop advanced AI is pretty small now and might continue to be small for the foreseeable future. So the number of entities that you have to persuade to agree to this might be pretty small.

There’s also the possibility, which we mention but don’t propose an exact mechanism for in the paper, of allowing taxation to supersede the Windfall Clause. So if a government came up with a better taxation scheme, you might either release the signatories from the Windfall Clause or just have the windfall function compensate for that by reducing or eliminating the total obligation. Of course, it gets tricky, because then you would have to decide which types of taxes you would do that for, if you want to maintain the international motivations of the Windfall Clause. You would also have to figure out what the optimal tax rate is, which is obviously no small task. So those are definitely complicated questions, but at least in theory there’s the possibility of accommodating those sorts of ex-post taxation efforts in a way that doesn’t burden firms too much.

Lucas Perry: Do you have any more insights, positives, or negatives to comment on here about the windfall function? It seems like, as you mention in the paper, it is open to a lot more research. Do you have directions for further investigation of the windfall function?

Cullen O’Keefe: Yeah. It’s one of the things we lead out with, and it’s actually as you’re saying: this is primarily supposed to be illustrative and not the right windfall function. I’d be very surprised if this were ultimately the right way to do this, just because the possibility space is so big and we’ve explored so little of it. One of the ideas that I am particularly excited about, and that I think more and more might ultimately be the right thing to do, is instead of having a profits-based trigger for the windfall function, having a market-cap-based trigger. There are basic accounting reasons why I’m more excited about this. Tracking profits is not as straightforward as it seems, because firms can do stuff with their money; they can spend more of it and reallocate it in certain ways. Whereas it’s much harder, and they have less incentive, to manipulate their stock price or market capitalization downward. So I’d be interested in potentially coming up with more value-based approaches to the windfall function, rather than our current one, which is based on profits.

That said, there are a ton of other variables you could tweak here, and I would be very excited to work with people or see other proposals of what this could look like.

Lucas Perry: All right. So it’s an open question how exactly the windfall function will look. Can you provide any more clarity on the mechanism for distribution, keeping in mind here the difficulty of creating an effective way of distributing the windfall, which you frame in terms of effectiveness, accountability, legitimacy, and firm buy-in?

Cullen O’Keefe: One concrete idea that I actually worked closely with FLI on, specifically with Anthony Aguirre and Jared Brown, was the windfall trust idea, which is basically to create a trust, or kind of pseudo-trust, that makes every person in the world, or as many people as we can reach, equal beneficiaries of a trust. This structure, which is on page 41 of the report if people are interested in seeing it, is pretty simple. The idea is that the successful developer would satisfy their obligations by paying money to a body called the Windfall Trust. For people who don’t know what a trust is, it’s a specific type of legal entity. All individuals would then be either actual or potential beneficiaries of the Windfall Trust, would receive equal funding flows from it, and could even receive equal input into how the trust is managed, depending on how the trust was set up.

Trusts are also exciting because they are very flexible mechanisms whose governance you can arrange in many different ways. And then, to make this more manageable, since a single trust with eight billion beneficiaries seems hard to manage, you could have a single trust for every 100,000 people, or whatever number you think is manageable. I’m kind of excited about that idea; I think it hits a lot of the desiderata pretty well and could be a way in which a lot of people could see benefit from the windfall.

Lucas Perry: Are there any ways of creating proto-windfall clauses or proto-windfall trusts to sort of test the idea before transformative AI comes on the scene?

Cullen O’Keefe: I would be very excited to do that. One thing I should say is that OpenAI, where I currently work, has a structure called a capped-profit structure, which is similar in many ways to the Windfall Clause. Our structure is such that profits above a certain cap on what can be returned to investors go to a non-profit, the OpenAI non-profit, which then has to use those funds for charitable purposes. But I would be very excited to see new companies, and potentially companies aligned with the mission of the FLI podcast, experiment with structures like this. In the fourth section of the report, we talk all about different precedents that already exist, and some of these have features that are close to the Windfall Clause. And I’d be interested in someone putting all those together for their start-up or their company and making a kind of pseudo-windfall clause.

Lucas Perry: Let’s get into the legal permissibility of the Windfall Clause. You said that one of the reasons you first got into this was because the idea had been tabled over worries about the fiduciary responsibilities that companies would have. Let’s start by reflecting on whether or not this is legally permissible in America, and then think about China, because these are the two biggest AI players today.

Cullen O’Keefe: Yeah. There’s actually a slight wrinkle there that we might also have to talk about, the Cayman Islands. But we’ll get to that. I guess one interesting fact about the Windfall Clause report is that it’s slightly weird that I’m the person who ended up writing it. You might think an economist should be the person writing this, since it deals so much with labor economics and inequality, etcetera. And I’m not an economist by any means. The reason I got swept up in this is the legal piece. So I’ll first give a quick crash course in corporate law, because I think it’s an area that not a lot of people understand, and it’s also important for this.

Corporations are legal entities. They are managed by a board of directors for the benefit of the shareholders, who are the owners of the firm. Accordingly, since the directors have the responsibility of managing a thing which is owned in part by other people, they owe certain duties to the shareholders. These are known as fiduciary duties. The two primary ones are the duty of loyalty and the duty of care. The duty of loyalty, which we don’t really talk about a ton in this piece, is just the duty to manage the corporation for the benefit of the corporation itself, and not for the personal gain of the directors.

The duty of care is kind of what it sounds like: the duty to take adequate care that the decisions made for the corporation by the board of directors will benefit the corporation. The reason this is important for the purposes of a Windfall Clause, and also for the endless speculation of corporate law professors and theorists, is that when you engage in corporate philanthropy, it kind of looks like you’re doing something that is not for the benefit of the corporation. By definition, giving money to charity is primarily a philanthropic act, or at least that’s the prima facie case for why it might be a problem from the standpoint of corporate law: this is largely other people’s money, and the corporation is giving it away, seemingly not for the benefit of the corporation itself.

There actually hasn’t been that much case law, that is, actual court decisions, on this issue. I found some cases from across the US. As a side note, we primarily talk about Delaware law, because Delaware is the state in which a plurality of American corporations are incorporated, for historical reasons; its corporate law is by far the most influential in the United States. So, even though you have this potential duty of care issue with making corporate donations, the standard by which directors are judged is the business judgment rule. Quoting a summary of the business judgment rule from the American Law Institute: “A director or officer who makes a business judgment in good faith fulfills the duty of care if the director or officer, one, is not interested,” meaning there is no conflict of interest, “in the subject of the business judgment; two, is informed with respect to the business judgment to the extent that the director or officer reasonably believes to be appropriate under the circumstances; and three, rationally believes that the business judgment is in the best interests of the corporation.” So this is actually a pretty forgiving standard. It’s basically just a use-your-best-judgment standard, which is why it’s very hard for shareholders to successfully make the case that a judgment was a violation of the business judgment rule. It’s very rare for such challenges to succeed.

So a number of cases have examined the relationship of the business judgment rule to corporate philanthropy. They have basically universally held that corporate philanthropy is a permissible exercise of the business judgment rule: because there are all these potential benefits that philanthropy could give to the corporation, corporate directors’ decisions to authorize corporate donations will generally be upheld under the business judgment rule, provided all those other conditions are met.

Lucas Perry: So the firm motivations that we touched on earlier were generating goodwill towards the company, improving employee relations, and reducing political risk, which I guess is also about keeping good faith with politicians who are, at the end of the day, hopefully being held accountable by their constituencies.

Cullen O’Keefe: Yeah, exactly. So these are all things that could plausibly, financially benefit the corporation in some form. In this sense, corporate philanthropy looks less like a donation and more like an investment in the firm’s long-term profitability, given all these soft factors like political support and employee relations. Another interesting wrinkle to this: if you read the case law of these corporate donation cases, they’re actually quite funny. The one case I’ll quote from is Sullivan v. Hammer. A corporate director wanted to make a corporate donation to an art museum that had his name on it and basically served as his personal art collection, more or less. And the court said this is still okay under the business judgment rule. So that was a pretty shocking example of how lenient this standard is.

Lucas Perry: So the synopsis version here is that the Windfall Clause is permissible in the United States because philanthropy has in the past been seen as still being in line with fiduciary duties, and the Windfall Clause would be treated the same.

Cullen O’Keefe: Yeah, exactly. The one interesting wrinkle about the Windfall Clause that might distinguish it from most corporate philanthropy, though definitely not all, is that it has this potentially very high ex-post cost, even though its ex-ante cost might be quite low. So in a situation in which a firm actually has to pay out the Windfall Clause, it’s very, very costly to the firm. But the business judgment rule is there in part to protect these exact types of decisions, because the thing that courts don’t want to do is second-guess every single corporate decision with the benefit of hindsight. So instead, they just instruct people to look at the ex-ante cost-benefit analysis and defer to that, even if ex-post it turns out to have been a bad decision.

There’s an analogy that we draw to stock option compensation, which is very popular, where you give an employee a block of stock options that at the time is not very valuable, because it’s probably just in line with the current value of the stock. But ex-post it might be hugely valuable, and this is how a lot of early employees of companies get wildly rich, well beyond what they would have earned at fair market and cash value ex-ante. That sort of ex-ante reasoning is really the important thing, not the fact that it could be worth a lot ex-post.

One of the interesting things about the Windfall Clause is that it is a contract through time, and potentially over a long time. A lot of contracts that we make are pretty short-term focused. But the Windfall Clause is an agreement now to do stuff if stuff happens in the future, potentially the distant future, which is part of the way the windfall function is designed. It’s designed to be relevant over a long period of time, especially given the uncertainty that we started off talking about with AI timelines. The important thing that we talked about was the ex-ante cost, which means the cost to the firm in expected value right now: basically the probability that this ever gets triggered, times how much it will be worth if it does get triggered, all discounted by the time value of money, etcetera.
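To make that ex-ante calculation concrete, here is a minimal sketch in Python. The structure (probability times payout, discounted back to today) follows the description above, but the function name and all of the numbers are hypothetical and are not from the report.

```python
def ex_ante_cost(p_trigger, ex_post_payout, years_until_trigger, annual_discount_rate):
    """Expected present value today of a commitment that only pays out later.

    p_trigger: probability the Windfall Clause is ever triggered
    ex_post_payout: the (large) amount owed if it is triggered
    years_until_trigger: how far in the future the trigger is expected
    annual_discount_rate: time value of money, per year
    """
    discount_factor = 1 / (1 + annual_discount_rate) ** years_until_trigger
    return p_trigger * ex_post_payout * discount_factor

# Hypothetical numbers: a 1% chance of owing $100 billion in 30 years,
# discounted at 5% per year, comes to roughly $230 million in expected present value today.
print(ex_ante_cost(0.01, 100e9, 30, 0.05))
```

The same arithmetic also illustrates a point that comes up later: as the expected trigger date gets closer, or as the probability of triggering rises, the expected cost of signing goes up.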

One thing that I didn’t talk about is that there’s some language in some court cases about limiting the amount of permissible corporate philanthropy to a reasonable amount, which is obviously not a very helpful guide. But there’s a court case saying that this should be determined by looking to the charitable giving deduction, which is I believe about 10% right now.

Lucas Perry: So sorry, just to get the language correct: the ex-post cost is very high because, after the fact, you have to pay out a huge percentage of your profit?

Cullen O’Keefe: Yeah.

Lucas Perry: But it still remains possible that a court might say that this violates fiduciary responsibilities, right?

Cullen O’Keefe: There’s always the possibility that a Delaware court would invent or apply new doctrine in application to something that looks kind of weird from its perspective. I mean, this is a general question of how binding precedent is, which is an endless topic of conversation for lawyers. But if they were doing what I think they should do and just straight up applying precedent, I don’t see a particular reason why this would be decided differently than any of the other corporate philanthropy cases.

Lucas Perry: Okay. So, let’s talk a little bit now about the Cayman Islands and China.

Cullen O’Keefe: Yeah. So a number of significant Chinese tech companies are actually incorporated in the Cayman Islands. It’s not exactly clear to me why this is the case, but it is.

Lucas Perry: Isn’t it for hiding money off-shore?

Cullen O’Keefe: So I’m not sure if that’s why. Even if taxation is part of it, I think it also has to do with capital restrictions in China, and with wanting to attract foreign investors, which is hard if they’re incorporated in China; investors might not trust Chinese corporate law very much. But this is just my speculation, I don’t actually know the answer to that.

Lucas Perry: I guess the question then just is, what is the US and China relationship with the Cayman Islands? What is it used for? And then is the Windfall Clause permissible in China?

Cullen O’Keefe: Right. So the Cayman Islands is where the big three Chinese tech firms, Alibaba, Baidu and Tencent, are incorporated. I’m not a Caymanian lawyer by any means, nor am I an expert in Chinese law, but from my outsider reading of the law, applying my general legal knowledge, it appears that similar principles of corporate law apply in the Cayman Islands, which is why it might be a popular spot for incorporation. They have a rule that looks like the business judgment rule. This is in footnote 120 if anyone wants to dig into it in the report. So, for the Caymanian corporations, it looks like the Windfall Clause should be okay for the same reasons. China, being a self-proclaimed socialist country, also has a pretty interesting corporate law that not only allows but appears to encourage firms to engage in corporate philanthropy. From the perspective of their law at least, it looks potentially even more friendly than Delaware law, so, kind of a fortiori, the Windfall Clause should be permissible there.

That said, obviously there’s political reality to be considered there, especially the influence of the Chinese government on state-owned enterprises, so I don’t want to be naïve and just assume that what the law says is what’s actually politically feasible there. But all that caveating aside, as far as the law goes, the People’s Republic of China looks potentially promising for a Windfall Clause.

Lucas Perry: And that again matters, because China is currently second to the US in AI and is thus also potentially able to reach windfall via transformative AI in the future.

Cullen O’Keefe: Yeah. I think that’s the general consensus, that after the United States, China seems to be the most likely place to develop AGI or transformative AI. You can listen to and read a lot of the work on this by my colleague Jeff Ding, who recently appeared on the 80,000 Hours podcast talking about China’s AI dream, and who has a report by the same name from FHI that I would highly encourage everyone to read.

Lucas Perry: All right. Is it useful here to talk about historical precedents?

Cullen O’Keefe: Sure. I think one that’s potentially interesting is that a lot of sovereign nations, typically natural-resource-based states, have actually dealt with this problem of windfall governance before. Norway is kind of the leading example of this. They had a ton of wealth from oil and had to come up with a way of distributing that wealth in a fair way. They have a sovereign wealth fund as a result, as do a lot of countries, and it provides for all sorts of socially beneficial applications.

Google actually, when it IPO’d, gave one percent of its equity to its non-profit arm, the Google Foundation. So that’s actually significantly like the Windfall Clause, in the sense that it was a commitment that would grow in value as the firm’s prospects improved, and therefore had a low ex-ante cost but a potentially higher ex-post cost. Obviously, in personal philanthropy, a lot of people will be familiar with pledges like the Founders Pledge or the Giving What We Can pledge, where people pledge a percentage of their personal income to charity. The Founders Pledge most resembles the Windfall Clause in this respect: people pledge a percentage of equity from their company upon exit or upon a liquidity event, and in that sense it looks a lot like a Windfall Clause.

Lucas Perry: All right. So let’s get into objections, alternatives and limitations here. The first objection to the Windfall Clause would be that the Windfall Clause will never be triggered.

Cullen O’Keefe: That certainly might be true, and there are a lot of reasons why it might be. One is that we could all just be very wrong about the promise of AI. Also, AI development could unfold in some other way: it could be a non-profit, an academic institution, or a government that develops windfall-generating AI and no one else does. Or it could just be that the windfall from AI is spread out over a sufficiently large number of firms, such that no one firm earns windfall but collectively the tech industry does, or something. So that’s all certainly true, and I think those are all scenarios worth investing in addressing. You could potentially modify the Windfall Clause to address some of them.

That said, I think there’s a significant, non-trivial possibility that such a windfall occurs in a way that would trigger a Windfall Clause, and if it does, it seems worth investing in solutions that could mitigate any potential downside to that or share the benefits equally. Part of the benefit of the Windfall Clause is that if nothing happens, it doesn’t impose any obligations. So it’s quite low cost in that sense. From a philanthropic perspective, there’s a cost in setting this up and promoting the idea, etcetera, and those are definitely non-trivial costs. But the actual cost of signing the clause only manifests upon actually triggering it.

Lucas Perry: This next one is that firms will find a way to circumvent their commitments under the clause. So it could never trigger because they could just keep moving money around in skillful ways such that the clause never ends up getting triggered. Some sub-points here are that firms will evade the clause by nominally assigning profits to subsidiary, parent or sibling corporations; that firms will evade the clause by paying out profits in dividends; and that firms will sell all windfall-generating AI assets to a firm that is not bound by the clause. Any thoughts on these?

Cullen O’Keefe: First of all, a lot of these were raised by early commentators on the idea, and so I’m very thankful to those people for raising them. I think we probably haven’t exhausted the list of potential ways in which firms could evade their commitments, so in general I would want to come up with solutions that are not just patchwork fixes, but more like general incentive-alignment solutions. That said, I think most of these problems are mitigable by careful contractual drafting, and then potentially also by looking to other forms of the Windfall Clause, like something based on firm share price. But still, I think there are probably a lot of ways to circumvent the clause in the kind of early form that we’ve proposed, and we would want to make sure that we’re pretty careful about drafting it and simulating potential ways that a signatory could try to wriggle out of its commitments.

Cullen O’Keefe: I think it’s also worth noting that a lot of those potential actions would be pretty clear violations of the general legal obligations that signatories to a contract have, or could be mitigated with pretty easy contractual clauses.

Lucas Perry: Right. The solution to these would be foreseeing them and beefing up the actual windfall contract to not allow for these methods of circumvention.

Cullen O’Keefe: Yeah.

Lucas Perry: So now this next one I think is quite interesting. No firm with a realistic chance of developing windfall generating AI would sign the clause. How would you respond to that?

Cullen O’Keefe: I mean, I think that’s certainly a possibility, and if that’s the case, then that’s the case. It seems like our ability to change that might be pretty limited. I would hope that most firms in a potential position to generate windfall would take that opportunity as also carrying with it a responsibility to follow the common good principle. And I think that a lot of people in those companies, both in leadership and in rank-and-file employee positions, do take that seriously. We do also think that the Windfall Clause could bring non-trivial benefits, as we spent a lot of time talking about.

Lucas Perry: All right. The next one here is, quote, “If the public benefits of the Windfall Clause are supposed to be large, that is inconsistent with stating that the cost to firms will be small enough that they would be willing to sign the clause.” This has a lot to do with the distinction between the ex-ante and ex-post costs, and also with how probabilities and time are involved here. So, your response to this objection?

Cullen O’Keefe: I think there are some asymmetries between the costs and benefits. Some of the costs are things that would happen in the future, so from a firm’s perspective, they should probably discount the costs of the Windfall Clause, because if they earn windfall, it would be in the future. From a public policy perspective, a lot of the benefits might not be as time sensitive. So you might not super-care when exactly those benefits happen, and therefore not really discount them from a present value standpoint.

Lucas Perry: You also probably wouldn’t want to live in the world in which there was no distribution mechanism or windfall function for allocating the windfall profits from one of your competitors.

Cullen O’Keefe: That’s an interesting question though, because a lot of corporate law principles suggest that firms should want to behave in a risk-neutral sense, and then allow investors to spread their bets according to their own risk tolerances. So I’m not sure that this risk-spreading-between-firms argument works that well.

Lucas Perry: I see. Okay. The next is that the Windfall Clause reduces incentives to innovate.

Cullen O’Keefe: So I think it’s definitely true that it will probably have some effect on the incentive to innovate. That almost seems necessary. That said, I think people in our community tend to be of the opinion that there are significant externalities to innovation, and not all innovation towards AGI is strictly beneficial in that sense. So making sure that those externalities are balanced seems important, and the Windfall Clause is one way to do that. In general, I think that the disincentive is probably just outweighed by the benefits of the Windfall Clause, but I would be open to reanalysis of that exact calculus.

Lucas Perry: Next objection is, the Windfall Clause will shift investment to competitive non-signatory firms.

Cullen O’Keefe: This was another particularly interesting comment, and it points to a potential perverse effect, actually. Suppose you have two types of firms: nice firms and less nice firms. All the nice firms sign the Windfall Clause, and therefore their future profit streams are taxed more heavily than the bad firms’. And this is bad, because now investors will probably want to go to the bad firms, because they offer a potentially more attractive return on investment. Like the previous objection, this is probably true to some extent. It kind of depends on the empirical case, about how many firms you think are good and bad, and also on the exact calculus of how much this disincentivizes investors from investing in good firms and how much it causes the good firms to act better.

We do talk a little bit about different ways in which you could potentially mitigate this with careful mechanism design. So you could have the Windfall Clause consist of subordinated obligations, so that the firm could raise equity or debt that is senior to the Windfall Clause, such that new investors would not be disadvantaged by investing in a firm that has signed it. Those are kind of complicated mechanisms, and again, this is another point where thinking this through from a careful microeconomic point of view and modeling this type of development dynamic would be very valuable.
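As a rough illustration of that subordination idea, here is a sketch of a payout waterfall in Python, where claims senior to the Windfall Clause are paid before the clause itself, and common shareholders take what remains. The structure and the numbers are purely hypothetical; the report does not specify a mechanism in this exact form.

```python
def payout_waterfall(available_profits, senior_claims, windfall_obligation):
    """Distribute profits by seniority: senior investors first, then the
    Windfall Clause obligation, then whatever remains to common shareholders."""
    remaining = available_profits

    paid_senior = min(remaining, senior_claims)
    remaining -= paid_senior

    paid_windfall = min(remaining, windfall_obligation)
    remaining -= paid_windfall

    return {
        "senior_investors": paid_senior,
        "windfall_clause": paid_windfall,
        "common_shareholders": remaining,
    }

# Hypothetical: $50B of profits, $10B owed to senior investors, $20B windfall obligation.
print(payout_waterfall(50e9, 10e9, 20e9))
# {'senior_investors': 10000000000.0, 'windfall_clause': 20000000000.0, 'common_shareholders': 20000000000.0}
```

The design point is simply that new senior investors are made whole before the clause is paid, so signing the clause does not dilute their claim.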

Lucas Perry: All right. So we’re starting to get to the end here of objections or at least objections in the paper. The next is, the Windfall Clause draws attention to signatories in an undesirable way.

Cullen O’Keefe: I think the motivation for this objection is something like: imagine that tomorrow Boeing came out and said, “If we build a Death Star, we’ll only use it for good.” What are you talking about, building a Death Star? Why do you even have to talk about this? That’s kind of the worry: talking about earning windfall is itself drawing attention to the firm in potentially undesirable ways. So that could potentially be the case. I guess the fact that we’re having this conversation suggests that this is not a super-taboo subject. I think a lot of people are generally aware of the promise of artificial intelligence, so the idea that the gains could be huge and concentrated in one firm doesn’t seem that worrying to me. Also, if a firm were super close to AGI or something, it would actually be much harder for them to sign on to the Windfall Clause, because the costs would be so great to them in expectation that they probably couldn’t justify it from a fiduciary duty standpoint.

So in that sense, signing on to the Windfall Clause, at least from a purely rational standpoint, is kind of negative evidence that a firm is close to AGI. That said, there are certainly psychological elements that complicate that. It’s very cheap for me to just make a commitment that says, oh sure, if I get a trillion dollars, I’ll give 75% of it to some charity. Sure, why not? I’ll make that commitment right now, in fact.

Lucas Perry: It’s kind of more efficacious if we get firms to adopt this sooner rather than later, because as time goes on, their credences in who will hit AI windfall will increase.

Cullen O’Keefe: Yeah. That’s exactly right. Assuming timelines are constant, the clock is ticking on stuff like this. Every year that goes by, committing to this gets more expensive to firms, and therefore rationally, less likely.

Lucas Perry: All right. I’m not sure that I understand this next one, but it is: the Windfall Clause will lead to moral licensing. What does that mean?

Cullen O’Keefe: So moral licensing is a psychological concept: if you do certain actions that either are good or appear to be good, you’re more likely to do bad things later. You have a license to act immorally because of the times that you acted morally. I think this is a common objection to corporate philanthropy; people call it ethics washing, or greenwashing in the context of environmental stuff specifically. I think you should, again, do a pretty careful cost-benefit analysis here to see whether the Windfall Clause is actually worth the potential licensing effect that it has. But of course, one could raise this objection to pretty much any pro-social act. Given that we think the Windfall Clause could actually have legally enforceable teeth, this seems less likely to be a problem, unless you think that the licensing effects would be so great that they would overcome the benefits of actually having an enforceable Windfall Clause, which seems intuitively implausible to me.

Lucas Perry: Here’s another interesting one: the rule of law might not hold if windfall profits are achieved. Human greed and power really kick in, and the power structures which are meant to enforce the rule of law are no longer able to do so against someone with AGI or superintelligence. How do you feel about this objection?

Cullen O’Keefe: I think it’s a very serious one. I think it’s something that perhaps the AI safety community should be investing more in. I’m also having an interesting asynchronous discussion on this with Rohin Shah on the EA Forum. I do think there’s a significant chance that an actor as powerful as a corporation with AGI, with all the benefits that come with that at its disposal, could be very hard to enforce the Windfall Clause against. That said, we do see Davids beating Goliaths in the law. People do win lawsuits against the United States government or very large corporations. So it’s certainly not the case that size is everything, though it would be naïve to suppose that it’s not correlated with the probability of winning.

Other things to worry about are that this corporation will have very powerful AI that could potentially influence the outcome of cases in some way, or perhaps hide ways in which it was evading the Windfall Clause. So I think that’s worth taking seriously. I guess just in general, I think this issue is worth a lot of investment from the AI safety and AI policy communities, for reasons well beyond the Windfall Clause. And it seems like a problem that we’ll have to figure out how to address.

Lucas Perry: Yeah. That makes sense. You brought up the rule of law not holding because of such a firm’s power to win court cases. But the kind of power that AGI would give would also potentially extend far beyond just winning court cases, right? Into your ability to not be bound by the law.

Cullen O’Keefe: Yeah. You could just act as a thug and be beyond the law, for sure.

Lucas Perry: It definitely seems like a neglected point, in terms of trying to have a good future with beneficial AI.

Cullen O’Keefe: I’m of the opinion that this is pretty important. It just seems like this is also a thing that, in general, you’re going to want in a post-AGI world. You want the actor with AGI to be accountable to something other than its own will.

Lucas Perry: Yeah.

Cullen O’Keefe: You want agreements you make before AGI to still have meaning post-AGI and not just depend on the beneficence of the person with AGI.

Lucas Perry: All right. So the last objection here is, the Windfall Clause undesirably leaves control of advanced AI in private hands.

Cullen O’Keefe: I’m somewhat sympathetic to the argument that AGI is just such an important technology that it ought to be governed in a pro-social way. Basically, this project doesn’t have a good solution to that, other than to the extent that you could use Windfall Clause funds to perhaps purchase shares of stock from the company, or have the commitment be in shares of stock rather than in money. On the other hand, private companies are doing a lot of very important work right now in developing AI technologies and are the current leading developers of advanced AI. It seems to me like they’re behaving pretty responsibly overall. I’m just not sure what the ultimate ideal arrangement of ownership of AI will look like, and I want to leave that open for other discussion.

Lucas Perry: All right. So we’ve hit on all of these objections. Surely there are more, but this gives a lot for listeners and others to consider and think about. In terms of alternatives to the Windfall Clause, you list four things here: windfall profits should just be taxed; we should rely on anti-trust enforcement instead; we should establish a sovereign wealth fund for AI; and we should implement a universal basic income instead. So could you just go through each of these sequentially and give us some thoughts and analysis on your end?

Cullen O’Keefe: Yeah. We talked about taxes already, so is it okay if I just skip that?

Lucas Perry: Yeah. I’m happy to skip taxes. The point there being that taxes will end up only serving the country in which the profits are taxed, unless that country has some other mechanism for distributing certain kinds of tax revenue to the world.

Cullen O’Keefe: Yeah. And it also just seems much more tractable right now to work on private commitments like the Windfall Clause rather than lobbying for a pretty robust tax code.

Lucas Perry: Sure. Okay, so number two.

Cullen O’Keefe: So number two is about anti-trust enforcement. This was largely spurred by a conversation with Haydn Belfield. The idea here is that in this world, the AI developer will probably be a monopoly, or at least extremely powerful in its market, and therefore we should consider anti-trust enforcement against it. I guess my points are two-fold. Number one is that, under American law, it is pretty clear that merely possessing monopoly power is not itself a reason to take anti-trust action; you have to have acquired that monopoly power in some illegal way. And if some of the stronger hypotheses about AI are right, AI could be a natural monopoly, and so it seems pretty plausible that an AI monopoly could develop without any illegal actions taken to gain that monopoly.

I guess second, the Windfall Clause addresses some of the harms from monopoly, though not all of them, by transferring some wealth from shareholders to everyone and therefore transferring some wealth from shareholders to consumers.

Lucas Perry: Okay. Could focusing on anti-trust enforcement alongside the Windfall Clause be beneficial?

Cullen O’Keefe: Yeah, it certainly could be. I don’t want to suggest that we ought not to consider anti-trust, especially if there’s a natural reason to break up firms or if there’s a violation of anti-trust law going on. I guess I’m pretty sympathetic to the anti-trust orthodoxy that monopoly is not in itself a reason to break up a firm. But I certainly think that we should continue to think about anti-trust as a potential response to these situations.

Lucas Perry: All right. And number three is we should establish a sovereign wealth fund for AI.

Cullen O’Keefe: So this is an idea that actually came out of FLI; Anthony Aguirre has been thinking about this. The idea is to set up something that looks like the sovereign wealth funds I alluded to earlier, which places like Norway and other resource-rich countries have, some better and some worse governed, I should say. And I think Anthony’s suggestion was to set this up as a fund that held shares of stock of the corporations and redistributed wealth in that way. I am sympathetic to this idea overall; as I mentioned, I think a stock-based Windfall Clause could potentially be an improvement over the cash-based one that we suggest. That said, I think there are significant legal problems here that kind of make this harder to imagine working. For one thing, it’s hard to imagine the government buying up all these shares of stock in companies; just to acquire a significant enough portion of them that you have a good probability of capturing a decent percentage of future windfall, you would have to spend a ton of money.

Secondly, the government couldn’t simply expropriate the shares of stock; that would require just compensation under the US Constitution. Third, there are ways that a corporation can prevent someone from accumulating a huge share of its stock if it doesn’t want it to; the poison pill is the classic example. So if the firms didn’t want a sovereign automation fund to buy up significant shares of their stock, which they might not, since it might not govern in the best interest of other shareholders, they could just prevent it from acquiring a controlling stake. So all of those seem like pretty powerful reasons why contractual mechanisms might be preferable to that kind of sovereign automation fund.

Lucas Perry: All right. And the last one here is, we should implement a universal basic income instead.

Cullen O’Keefe: Saving one of the most popular suggestions for last. This isn’t even really an alternative to the Windfall Clause; it’s just one way that the Windfall Clause could look. And ultimately, I think UBI is a really promising idea that’s been pretty well studied. It seems to be pretty effective, it’s obviously quite simple, and it has widespread appeal. I would probably be pretty sympathetic to a Windfall Clause that ultimately implements a UBI. That said, I think there are some reasons you might prefer other forms of windfall distribution. One is just that UBI doesn’t seem to target people particularly harmed by AI. For example, if we’re worried about a future with a lot of automation of jobs, UBI might not be the best way to compensate the people who are harmed.

Another is that it might not be the best vehicle for providing public goods, if you thought that that’s something the Windfall Clause should do. But I think UBI could be a very promising part of the Windfall Clause distribution mechanism.

Lucas Perry: All right. That makes sense. And so, wrapping up here, are there any last thoughts you’d like to share with anyone particularly interested in the Windfall Clause, or with people in policy or government who may be listening, or anyone who might find themselves at a leading technology company or AI lab?

Cullen O’Keefe: Yeah. I would encourage them to get in touch with me if they’d like; my email address is listed in the report. I think just in general, this is going to be a major challenge for society in the next century, or at least it could be. As I said, I think there’s substantial uncertainty about a lot of this, so there are a lot of potential opportunities to do research, not just in economics and law, but also in political science, thinking about how we can govern the windfall that artificial intelligence brings in a way that’s universally beneficial. So I hope that other people will be interested in exploring that question. I’ll be working with the Partnership on AI to help think through this as well, and if you’re interested in those efforts and have expertise to contribute, I would very much appreciate people getting in touch so they can get involved in that.

Lucas Perry: All right. Wonderful. Thank you, and everyone else who helped work on this paper. It’s very encouraging, and hopefully we’ll see widespread adoption and maybe even implementation of the Windfall Clause in our lifetime.

Cullen O’Keefe: I hope so too, thank you so much Lucas.