PODCAST

Answering the Fermi Question: Is AI our Great Filter?

Anders Sandberg on the TDS podcast

Jeremie Harris
Towards Data Science
48 min read · Feb 3, 2021


To select chapters, visit the YouTube video here.

Editor’s note: This episode is part of our podcast series on emerging problems in data science and machine learning, hosted by Jeremie Harris. Apart from hosting the podcast, Jeremie helps run a data science mentorship startup called SharpestMinds. You can listen to the podcast below:

APPLE | GOOGLE | SPOTIFY | OTHERS

The apparent absence of alien life in our universe has been a source of speculation and controversy in scientific circles for decades. If we assume that there’s even a tiny chance that intelligent life might evolve on a given planet, it seems almost impossible to imagine that the cosmos isn’t brimming with alien civilizations. So where are they?

That’s what Anders Sandberg calls the “Fermi Question”: given the unfathomable size of the universe, how come we have seen no signs of alien life? Anders is a researcher at the University of Oxford’s Future of Humanity Institute, where he tries to anticipate the ethical, philosophical and practical questions that human beings are going to have to face as we approach what could be a technologically unbounded future. That work focuses to a great extent on superintelligent AI and the existential risks it might create. As part of that work, he’s studied the Fermi Question in great detail, and what it implies for the scarcity of life and the value of the human species.

Our conversation covered a lot of ground; here are some of my favourite take-homes:

  • At the heart of the debate over the Fermi Question is the Drake equation: a formula used to estimate the number of alien civilizations that we should be able to observe in our universe, based on various parameters such as the probability that life will evolve on a given planet, the rate at which plausibly life-sustaining stars and planets form, and so on. Most of these parameters come with considerable uncertainties (some spanning tens of orders of magnitude), and Anders argues that most analyses of the Fermi Question fail to take that uncertainty into account, resulting in wildly incorrect conclusions about the likelihood of there being other forms of intelligent life in the universe. (A toy numerical sketch of this point follows this list.)
  • The Fermi Question has important implications for the future of humanity. If it turns out that detectable, intelligent life ought to be quite common in the universe, then there must be some reason that we don’t see it out there. And one plausible reason is that there’s some technological or other threshold that all civilizations reach, at which they reliably destroy themselves. If that’s the case, we’d better watch out, because that great filter may lie ahead. But if that’s not the case — if it turns out that humans are alone in the universe because life genuinely is just that uncommon — then life on Earth becomes far more precious and valuable, as the universe’s only spark of light.
  • The Fermi Question gives us good reason to mind the road ahead. And that makes it incredibly important for us to get good at forecasting technological developments that may have destructive potential. One of these candidates is general artificial intelligence, of the kind that some have predicted may bring about a technological singularity. With that in mind, Anders has spent a lot of time studying different AI development forecasting strategies.
  • A common argument used by skeptics of imminent general AI is based on the observation that current AI systems are far less energy efficient, on a per-operation basis, than the human brain. Anders points out that focusing on energy efficiency doesn’t really make sense, since that’s only one of many dimensions we might use to evaluate the potential of AI systems. In another sense, one could argue that a machine learning model that outperforms a typical five-year-old at image classification after only a few days of training is already capable of superhuman performance along one performance axis. Likewise, current AI systems can absorb far more data, and render predictions and inferences orders of magnitude faster than human beings, because their physical structures aren’t as clunky and awkward for certain types of cognition as the human brain is. It’s not always possible to compare apples to apples in this debate, and that ought to be cause for considerable humility when making any kind of prediction about the future of AI.
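To make the uncertainty point concrete, here is a minimal Monte Carlo sketch of the kind of analysis Anders advocates. All parameter ranges below are illustrative placeholders, not the values from his paper; the point is only that propagating wide uncertainties yields a distribution over outcomes rather than a single number:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000  # Monte Carlo samples

def log_uniform(lo, hi, size):
    """Sample uniformly in log10-space between lo and hi."""
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

# Drake-style factors, sampled over illustrative (made-up) ranges.
R_star = log_uniform(1, 100, N)    # star formation rate (stars/year)
f_p    = log_uniform(0.1, 1, N)    # fraction of stars with planets
n_e    = log_uniform(0.1, 1, N)    # habitable planets per such star
f_l    = log_uniform(1e-30, 1, N)  # P(life emerges): enormously uncertain
f_i    = log_uniform(1e-3, 1, N)   # P(life -> intelligence)
f_c    = log_uniform(1e-2, 1, N)   # P(intelligence -> detectable tech)
L      = log_uniform(1e2, 1e8, N)  # longevity of detectable phase (years)

n_civs = R_star * f_p * n_e * f_l * f_i * f_c * L  # civilizations per galaxy

print(f"mean:   {n_civs.mean():.3g}")
print(f"median: {np.median(n_civs):.3g}")
print(f"P(empty galaxy, N < 1): {(n_civs < 1).mean():.2%}")
```

Even when the mean number of civilizations comes out large, a substantial share of the probability mass can sit below one civilization per galaxy, which is exactly the "empty sky without a paradox" result Anders describes.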

You can follow Anders on Twitter here or follow me on Twitter here.

Chapters:

  • 0:00 Intro
  • 1:34 Anders’ background
  • 8:45 Problem solving processes
  • 16:06 The universe’s optimization
  • 17:07 Biological evolution
  • 24:55 The Fermi question
  • 33:47 Dwarf stars
  • 42:10 AI and machine learning design
  • 42:54 Philosophers and AI systems
  • 57:19 Human behaviour and free will
  • 1:05:36 Wrap-up

Please find the transcript below:

Jeremie (00:00:00):
Hey everyone. My name’s Jeremie. Welcome back to the Towards Data Science Podcast. And I’m really excited about today’s episode because I get to talk to Anders Sandberg. Now, Anders is someone I’ve been angling to talk to for a long time because his research is focused on some fascinating topics that he approaches with a really interesting multidisciplinary strategy. He’s a researcher, a science debater, a futurist, and a transhumanist. And he has a degree in Computational Neuroscience from Stockholm University. He’s also currently a senior research fellow at Oxford University’s Future of Humanity Institute. I will say Anders is genuinely one of the most interesting thinkers I’ve encountered on the topic of existential risk and the hard questions that advanced AI systems are going to force us to answer. And he has this way of seamlessly blending together knowledge from fields as diverse as machine learning, ethics, metaphysics, and cosmology.

Jeremie (00:00:47):
And that just makes him a joy to speak to. And it makes people realize as well how deeply related these different areas become when you zoom out enough to see how humanity fits into the grand cosmic picture in deep time. I really enjoyed this conversation. It covered some fascinating topics that all touch on the future of AI development in some unexpected and surprising way, including why we might actually be alone in the universe after all, whether the energy efficiency of the human brain suggests that general AI might be harder to put together than it seems, and whether AIs will ever be conscious. So this one was an absolute blast. I hope you enjoy the conversation as much as I did, and without further ado, I’m going to get out of the way and let the episode start. All right. Well, Anders, thanks so much for joining me for the podcast.

Anders (00:01:32):
Well, thank you for having me.

Jeremie (00:01:34):
I’m absolutely thrilled to have you. I’ve been stalking you on Twitter for longer than I’d care to admit. I mean, there’s a lot of really interesting stuff that you’re working on, so many topics that relate to artificial intelligence, the future of artificial intelligence, the future of humanity. One question I wanted to start with though was a bit biographical. I wanted to get a sense for how you came to this field. What was it that drove you here?

Anders (00:01:56):
So I grew up in a suburb north of Stockholm in the 1970s, and it was really boring. So I read all the science fiction novels I could find at the local branch library, and then one day I realized: actually, I want to make this real. How do I do that? I probably should read the science books. So I started over there, and then I went to the municipal library, and then the main library, and then the university library. That’s how I ended up here, but I always wanted to make the future real. If I can’t write fiction about it, maybe I can investigate it, write papers, invent things, or figure out what we should be focusing on or avoiding.

Jeremie (00:02:35):
Have you developed opinions in the process about what kind of science fiction is most plausible? And also, what kinds of mistakes do science-fiction writers usually make? Because that’s sort of the… I don’t know, I wonder if you’ve spotted trends there.

Anders (00:02:53):
So the problem is science-fiction writers often want to write good stories, and reality is usually a really bad story. The plotting of reality is hopeless. I mean, just look at this year, or any average year, and you would say, “Yeah, this is so uneven and it doesn’t make sense.” Stories, of course, try to make sense. They try to tell something that resonates with us. The problem is that many of the things out in the world are independent of our human emotions, especially when you get to areas of science and technology. And that means that many of the best stories actually don’t handle science and technology very carefully at all. They miss that in favor of human stories, which means that if you want to think about the future, in many cases you might want to go for science-fiction that is much less worthy as fiction but much better at thinking about ideas.

Anders (00:03:54):
But again, you have a trade-off. Many of the coolest ideas might actually not be tremendously plausible. I feel that science-fiction that really contains seeds for interesting things is, besides being fiction, full of little ideas. It tries to describe the interactions of things going on in an environment: not just the amazing technology, but also how it fails, how kids misuse the technology, and the counterpart of all the people yelling “get off my lawn”, or, these days, “get out of my augmented reality filter”. At that point, you start seeing the interesting, non-trivial effects. Isaac Asimov talked about the elevator principle: if you show a picture of the skyline of New York to somebody from a past century, they should be able to figure out that there must be something like elevators, because otherwise skyscrapers don’t make sense.

Anders (00:04:53):
It’s too far to walk up all those stairs. Maybe they’re going to be wrong and say, “Yeah, all the rich people of course live conveniently close to the ground floor and the poor people have to make do with the high-altitude flats.” But they would be forced to realize that there must be something like an elevator to make sense of that picture. And I think this is also where science-fiction can be the most useful. It makes you aware of some of these elevator principles. For me, for example, thinking about the maintainability of a lot of advanced technologies is an interesting question. How do you repair a space elevator? If you build [inaudible 00:05:33] how much effort does it take to protect it and keep it from breaking down? When you make an artificial intelligence, how much extra work is it to keep it sane and functional?

Jeremie (00:05:46):
Interesting. It’s funny, there’s a principle in startups that this reminds me of, which is that you should always aim to work on problems that seem boring, because those are the areas where people aren’t working. It’s where companies like Stripe succeed. A lot of people don’t know what Stripe is: it’s a payment processing company. They do the dirty work; they do the plumbing of the internet. This seems like a similar idea. Do you think there’s a similar effect that cuts the other way a little bit too, where people might encounter an idea that does observe the elevator principle, that is a rational, forward-looking prediction, but whose implications are so profoundly counter-intuitive that people just almost reflexively push back at it? Is that something you’ve seen as well?

Anders (00:06:32):
Oh yeah. All the time. Indeed, getting back to classic science-fiction, the author Arthur C. Clarke, in his book Profiles of the Future, talked about failures of imagination and failures of nerve. Quite a lot of people, especially academics, have a failure of imagination. They can’t imagine things being very different. I remember a professor of nanotechnology telling me that self-replicating machines were absolutely impossible, but when I pointed out, “What about bacteria?”, it was, “Yeah, but we can’t build them.” So from his perspective, self-replicating machines were absolutely impossible because he couldn’t fit them inside his project, and he didn’t want to look up from what he could do to what could actually exist. But then you have the failure of nerve. You can imagine some things, but you don’t want to follow through, because the consequences are so vast, so weird, that, okay, this just sounds crazy, I’m just not going to talk about it.

Anders (00:07:29):
Again, nanotechnology has that problem, because the original visions of Eric Drexler demonstrate that if you get atomically precise manufacturing that can be scaled up, the world gets really different. And that got people interested in the technology. But unfortunately the field was then taken over by people like that professor, who wanted to work on stuff that was normal. So we ended up with a lot of wonderful solid-state science, but the idea that you could actually do things that really transform the world? That’s not really what we do here in the lab, so that’s probably crazy talk. And the same thing goes, of course, for a lot of other domains. We have seen it in space; that’s what Clarke was writing about. Many people criticized the early space pioneers and gave all sorts of reasonable explanations of why you could never build a rocket that could get out into space.

Anders (00:08:24):
The problem was, of course, that they were thinking about the simplest way they could do it, and then they could demonstrate that that wouldn’t work. They didn’t think about what somebody who was really motivated, and actually spent some time on making a good design, could do. So that’s why people were poo-pooing rockets while Goddard was actually launching rockets.

Jeremie (00:08:45):
Do you think that this… One paradigm I often get stuck thinking in is this distinction between concrete problems and their solutions, and then processes that solve problems. It always strikes me that it’s easy, in our mundane day-to-day existence, to encounter a particular challenge, like space travel, and look at it and say, “Wow, yeah, that’s really hard. I can’t imagine personally solving that.” And in a certain sense, that’s the same thing that causes us to say, “Well, if I ran that country, it would run really well.” We imagine that we can imprint our will on the problem, as if that somehow reflects what would happen if humanity collectively worked on it, rather than saying: we are some sort of collective structure, a super-organism of some kind, and through a mix of weird market forces, interpersonal interactions, and all this stuff, we are collectively a kind of machine learning algorithm working on solving this problem. I mean, is that an accurate frame, do you think?

Anders (00:09:45):
I think so. And it also depends a bit on the kind of problem you’re trying to solve. If you have a top-down approach, you need a genius of management to pull it off, and we have seen two good examples of that: the Apollo Project and the Manhattan Project. In both cases, there was a fairly well-defined goal, and the underlying physics was mostly understood, though not completely by any means. You had people working on the hard science and engineering problems, but you also happened to have a few management geniuses running the whole project. General Leslie Groves was probably one of the best people in the entire 20th century at managing people and getting stuff done no matter what. But you also have vast projects like the internet that grew organically, full of internal contradictions and messiness. One of the little micro-hobbies I have is collecting documents that predicted the imminent demise of the internet.

Anders (00:10:46):
Many of them are like one from ’98 that pointed out that if this trend continues, by September this is not going to work; it was written in July. And of course a solution was implemented, and when September rolled around, no problem. There have been a lot of problems with the internet, and people have been patching them like crazy. Here we have a lot of bottom-up solutions. Not all of them are perfect. We could have avoided spam if we had implemented mail systems differently in the early ’70s, but nobody could envision that email would be used outside the computer department, and definitely not by millions of people, including people who are a little bit selfish. And nobody thought of inviting an economist, who would point out, “Look, if the marginal cost of sending an extra email is zero, you’re going to get infinite emails.”

Jeremie (00:11:34):
Right. Yeah. To some degree, this makes me think of the kinds of problems we have in predicting how an AI is going to solve a problem. If I look at a computer vision model, I’m not going to be able to guess ahead of time, in general, what kinds of features that algorithm is going to be looking for in images to classify airplanes apart from ducks and submarines and things like that. In much the same way, it seems like we’re constantly taken by surprise by the ways in which we, as a collective, end up coming up with these crazy solutions. One thing this makes me wonder, and maybe this ties into some of your work on deep time and the Fermi paradox, is: is the collection of atoms in the universe really just the parameters of some grand optimization algorithm that are being jumbled around over time? I’d love to interact with that idea, and then maybe we can dive into the Fermi stuff in a minute.

Anders (00:12:34):
In some sense, I think it’s totally true that, yeah, we’re doing a giant optimization. We’re minimizing free energy: Gibbs free energy or Helmholtz free energy, I can never keep them apart. So in some sense, oh yes, atoms and particles are trying to get to an energy minimum, constrained by entropy. And that’s already where things start getting really weird and interesting. Because the universe started out with a pretty flat space-time for some reason, but a very high temperature and a lot of jumbled atoms. And what happens when space-time expands and temperatures go down is that the atoms start binding together in various non-trivial patterns. And because they can clump because of gravity, you get a lot of very non-trivial patterns, including some patterns that start fusion and start generating energy.

Anders (00:13:26):
So now you get energy flows and things get even more complicated. But in the super large, you could argue that the history of the universe is basically that we’re moving a lot of entropy into the gravitational field of the universe by clumping matter, and that is powering a lot of non-trivial, non-equilibrium processes that have very low entropy. Many of them then turn out to be optimizing for other things. There’s this big field of non-equilibrium thermodynamics, which I don’t understand that well, but it seems like in many cases, if you have a flame that is continually fed by gas, it will tend to maximize or minimize entropy production, depending on the constraints.

Anders (00:14:10):
You get a lot of these weird optimizations starting to happen all around the place. So for beings of molecular matter like us, for example, crystals feel really weird, because they’re so different from most of the other rocks we find in nature: they’re trying to minimize lattice energy and surface energy, and turning into these very exact, precise shapes that are very different from normal rocks, which are of course also full of crystals of a different kind.

Anders (00:14:36):
And we are, of course, powered by what Schrödinger called aperiodic information crystals. He postulated them before we actually knew what the genetic code really was: there must be some kind of molecule that, when put together in a regular way, contains the information to build the organism. He didn’t know what kind of molecule it was; he speculated a bit. He was wrong about most of that, but he was right in describing DNA as a kind of aperiodic crystal. And the cool thing about evolution is, again, you have an optimization process. You try to maximize your fitness, or at least your genes do; the hope, so to speak, is to have a lot of offspring carrying those genes. Organisms that are very successful at that spread their genes around, and now you get something that’s optimized for its ecological niche. It’s a local optimization. Many of these niches turn out to be transitory, or just plain stupid, or just get wiped out by bad luck when an asteroid strikes.

Anders (00:15:31):
But the end result is that a lot of non-trivial information from the environment has been converted into genetics. Our bodies are full of adaptations for handling our environments. And that has happened over literally billions of generations, where cells and organisms have learned a lot of things, usually through hard lessons, and now it’s encoded in their genes. Similarly, of course, some of those genes encode brains that are doing essentially the same trick, but much faster. And now we even have cumulative culture, so we’re doing it on an even faster scale.

Jeremie (00:16:06):
And those levels of abstraction, I mean, they seem to keep piling up, and to a certain extent it makes me wonder, because evolutionary biology is always framed in this paradigm where we say: what’s being optimized for? Well, a species’ genetic fitness, its ability to propagate its genes, essentially something like propagating genetic information through time. But then sometimes it’s framed from the perspective that you’re trying to optimize the number of individuals, or the number of copies of these genes; it’s always somewhat unclear what’s actually supposed to be optimized for. And it seems that as we edge closer and closer to what a lot of people think is this technological singularity, that’s going to break a lot of these assumptions as well, because presumably intelligence is not genetic; it’s not contained in genes. So whatever the universe is optimizing for, it doesn’t seem to be just genetic fitness. It seems to be something, but I have no idea what it is. Do you have any thoughts on what that might be?

Anders (00:17:07):
So if you could talk to evolution, evolution would say, “Oh yes, you must be a really, really good and successful species. Just look at them: a large mammal of that size, that common, in all parts of the world. Yeah, really good.” Except, of course, a lot of humans are really bad at reproducing. I mean, why are we making contraceptives? Why aren’t all men sperm donors? In terms of inclusive fitness, that’s what you ought to do. Instead, you get people who get religious ideas and decide, I’m going to live a life of celibacy in this monastery and think sacred thoughts. We’ve come up with a lot of things that are more fun than rearing kids. Maybe one day we will really upload ourselves into software. That’s a really bad idea from the standpoint of biological evolution, but this is what happens all the time, because biological evolution creates various things in order to try to optimize fitness, but it doesn’t care about what those things do besides that.

Anders (00:18:10):
So sex, for example, is a good way of increasing fitness and evolvability in the long term, because you can share useful genes. And then of course you need a motivation system so animals actually start having sex. So suddenly you get a lot more pleasure and fun in the world, which from evolution’s standpoint is just instrumental, but from the perspective of value, this is kind of a great thing. Brains, well, they really evolved to coordinate motor action and avoid getting eaten, but you can use them to imagine things and do a lot more. It seems like deep down the universe might just be doing a free energy minimization, but that leads to super non-trivial effects. When you play around with artificial life simulations, cellular automata, quite often you get wonderful emergent phenomena that are very inspiring in many ways.

Anders (00:19:01):
Oh, I just put in some simple rules and get a lot of complexity out. But if you spend enough time with these simulations, you quite often get a bit bored, because you do get complexity, but it’s the same kind of complexity most of the time, [inaudible 00:19:14] life again and again. After a while, you’ve seen all those patterns. To get something truly weird, you usually have to design it yourself, kind of put it in from the outside. Many of the artificial life simulations they had in the ’90s found that you got small ecosystems, but they never became more complex, which is very different from our own ecosystems and our own societies. Those do seem to have a tendency to become more complex.

Anders (00:19:39):
It might be that we’re missing something very fundamental about reality or evolution, or it might just be that you need a big enough world for it to happen. A little bit like how the neural network revolution of the 2000s demonstrated that up until that point, we had been using too little data, too-small computers, and too little training. When you scale this up a few orders of magnitude, really amazing new things happen that we couldn’t even imagine in the 1990s.

Jeremie (00:20:07):
And it is amazing just how different the world is from what you might imagine it being prior to the takeover of some of these strange effects: sexual reproduction, biological evolution, all these things. It sort of highlights how strange these processes are, and raises, of course, the question of how common they must be at the universal level. I think this might tie into some existential questions about risk, right? Because people often ask: why do we look up at the night sky and not see any other alien civilizations out there? Does that mean something about a great filter, something that might still be ahead of us? You’ve done a lot of work on this topic. I would love to just prime you with the question: what do you think of the Fermi paradox? Can you introduce it and describe it a little bit, and then see where you take things from there?

Anders (00:21:02):
Yeah. So the Fermi paradox is not really a paradox, and some people would point out that it’s not even Fermi’s, but I like to call it the Fermi question. Back in the 1950s, people from the Manhattan Project were having lunch and talking about atomic rockets and how easy it would be to settle the universe now that the power of the atom had been unlocked, and Fermi apparently just asked, “So where is everybody?” And that was a very good question, because if it’s easy to go across the gulfs of space and settle the entire universe, we ought to be seeing a lot of examples of aliens, because the universe is really big and really old. And it doesn’t take that long, if you can spread between the stars, before you’re showing up everywhere. So that empty sky became a real problem, because if you’re somewhat optimistic about technology, this seems to create a tension.

Anders (00:21:58):
And that’s why people say it’s a paradox. We assume that there are a lot of sites and times where life can emerge, you multiply that by some reasonable probability of intelligence emerging, and then you should get a number. And if you’re somewhat optimistic, you get a large number, and that doesn’t seem to fit. Now, you could argue that maybe there are very few places in the universe where intelligence and life could evolve. So there are some people saying that the Earth is very unique, but it’s hard to make it super unique, so unique that you can safely assume that there is no life anywhere else. So there has to be something else. Somewhere in this equation where you multiply various factors together, there has to be one factor that’s small enough to make the universe pretty empty.

Anders (00:22:47):
So that’s the great filter factor. It could be that life is super rare; in that case, well, we’re lucky we exist and now we have a big future ahead of us. Or it could be that intelligence is rare, or it could be that intelligence is common but doesn’t survive very long. And that’s of course the kind of scary great filter that got us at the Future of Humanity Institute working on these questions, because it seems to be one of the few pieces of really independent information about our chances. Our other information about global risks comes from reading the newspaper and thinking about the latest news on biotechnology and pandemics, trying to understand those issues. But here we have something that seems to be an average across all possible civilizations. Now, the really interesting thing is, of course, that if we’re living in a universe that has a tendency towards complexity, this gets even worse.

Anders (00:23:41):
If you think that the universe is pretty neutral or inimical to life, okay, fine, it’s pretty empty. If you think the universe is really trying to get life and intelligence, you have a bigger problem. It’s also worth noticing that in the past, many people were absolutely convinced that every environment had its own inhabitants. The idea that there were people living on other planets was almost self-evident to a lot of people, both in antiquity and in the early modern era. [inaudible 00:24:09] actually said that it would be kind of crazy for God to create these planets and not put people on them.

Anders (00:24:16):
That incidentally also means that you don’t need to care too much about the end of humanity. My friend Thomas Moynihan has written an excellent book, X-Risk, about the history of thinking about the extinction of humanity, and he points out that up until fairly recently, people were not taking it terribly seriously, because if we go extinct, well, somebody else is going to show up; that’s the way the universe is. But if you think we’re almost alone, or entirely alone, then if our spark goes out, there’s just darkness, and that makes existential threats much worse. So this is another reason we really want to understand the Fermi question.

Jeremie (00:24:55):
Okay, yeah. I totally understand that fixation on the Fermi question, as you put it, and I agree. The Fermi paradox framing always seemed… it’s not that it seemed naive, but it seems to reflect a certain bias. When you look at the famous Drake equation, which lays out all the factors you multiply together to get the number of potential alien civilizations out there, and you’ve pointed this out in your work, there seems to be a consistent reflection of the author’s bias. If you’re analyzing this equation, you can come up with almost any answer you want depending on how you tune those factors. Would you mind explaining?

Anders (00:25:40):
You can, unconsciously. It’s very easy to just put in what you think is reasonable, and look, you get a reasonable answer. If I get what I think is an unreasonable result, quite often I will go back and more or less consciously fudge things so I get what I think is a reasonable answer. And this is of course quite dangerous if you’re thinking critically about it.

Jeremie (00:26:00):
And actually, to that point, that implies a huge amount of uncertainty in the prediction itself. Are we doing a good job, academically or research-wise, at studying that uncertainty and accounting for it?

Anders (00:26:12):
Well, I have a paper where I’m arguing that we have been doing a bad job at this. Typically, what I would say people do is they line up the factors of the Drake equation. They admit, “This one we don’t know. We actually have no clue how likely life is on a terrestrial planet, but let’s just say one chance in a million.” They admit they’re making up these numbers, and then they multiply everything together and get a number, and they admit, of course, that it’s super uncertain, because they made up some of those numbers, and then they leave it at that. Because to them, admitting that you’re uncertain about something is how to handle uncertainty. But this is of course not rational. If I’m uncertain about how many people live in San Francisco, I should be stating a range that I think I’m 90% or 95% certain it falls inside.

Anders (00:27:03):
I can’t just say, “Okay, I think it’s five million people,” and then not mention how wide the range around that could be. Now, the interesting part is that if you actually take the Drake equation and try to put in proper ranges, or even better, proper probability distributions, so you put in only your actual level of knowledge and uncertainty, you get a range of answers, and it turns out that you get a pretty big spread given the current state of knowledge. We know fairly well the number of stars in the Milky Way and the rate they form at. We have a decent idea that terrestrial planets are a dime a dozen; there are a lot of them. But we have more or less 100 orders of magnitude of uncertainty about how likely life is to emerge on a planet.

Anders (00:27:49):
It could be that it happens within 10 minutes of the right conditions showing up. It could be that it happens through a more or less thermodynamic miracle, once every 10 to the power of 100 planets, or something like that. We honestly don’t know. So when you put that in, you get a very broad uncertainty distribution. And even if you’re pretty optimistic on average, so you think that on average there should be maybe 10 civilizations in the Milky Way, you get a lot of probability going down into a tail where we’re actually really alone in the observable universe. If you know that, then the empty sky isn’t that problematic: I might still have some hope that there’s somebody out there, but the emptiness is not terribly surprising. If I’m just putting in point estimates, then I’m going to end up saying, “Oh, there should be 100 civilizations here. Why aren’t we seeing them?”
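Editor’s note: as a toy illustration of Anders’ point here, suppose every factor except the probability of life is collapsed into a single optimistic constant, and only that one factor carries a wide log-uniform uncertainty. All the numbers are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed: all non-biological Drake factors folded into one optimistic constant.
other_factors = 1e11  # illustrative placeholder, not a real estimate

# Point estimate: "let's just say one chance in a billion" for life emerging.
print("point-estimate N:", other_factors * 1e-9)  # 100 civilizations

# Distribution: log-uniform over 30 orders of magnitude of uncertainty.
f_life = 10 ** rng.uniform(-30, 0, 1_000_000)
N = other_factors * f_life
print("mean N:", f"{N.mean():.3g}")                 # still enormous on average
print("P(alone, N < 1):", f"{(N < 1).mean():.0%}")  # yet alone ~2/3 of the time
```

The mean stays optimistic while most of the probability mass sits in the “we are alone” tail, which is why the point estimate and the distribution tell such different stories.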

Jeremie (00:28:44):
Yeah. No, it’s interesting, especially because you’re multiplying all these probabilities together, right? So would it be a good nutshell version of this argument to say that you only have to fall on the very, very pessimistic end of the range for one of these parameters to destroy any hope, or at least significantly reduce the probability, that we have interplanetary neighbors?

Anders (00:29:11):
Yeah, exactly. And it’s important to realize that many professional astronomers will say, “Yeah, but people aren’t that stupid when they use the Drake equation,” and then, of course, I immediately start waving around a bunch of published, peer-reviewed papers that do more or less exactly that. And many people are equally over-confident about what is super unlikely. There are some people who think that life is super unlikely. We actually had to change our text a little bit so as not to give any help to creationists happily claiming that obviously only God can add life to a planet. But it’s not implausible that it might take a very unlikely set of events to lead to that complexity. One thing that I didn’t think about before writing that paper was that it might be possible that a lot of life ends up with a genetic coding system that is pretty crappy.

Anders (00:30:04):
It allows them to reproduce, but the evolution is really slow, so they never get the chance to turn into anything interesting before the star burns out. Now, our kind of life only took a few billion years to go from the kind of primordial goo to people writing papers about primordial goo. But it might be that most life in the universe actually stays primordial goo until it dries out and dies. So that was something I hadn’t thought of as the hard part, but I realized, ooh, that’s actually a possibility. It’s kind of a disturbing one.

Jeremie (00:30:40):
Yeah. And actually, it’s funny you mention the creationist angle there, because I come from the world of quantum mechanics back in the day; I did my work in a field called interpretations of quantum mechanics, and we talked a lot about multiverses in that context. One of the things that I always found to be a compelling argument for the multiverse is the idea that if it genuinely does appear that we are alone in the universe, what a fantastically suspicious situation that would be. If the observable universe is all that actually exists and there is exactly one exemplar of intelligent life, then that implies that the probability of life evolving on, let’s say, a given planet was tuned to almost exactly one in however many planets there are in the universe.

Jeremie (00:31:37):
That is incredibly suspicious. It’s not 10 times bigger than that, or we would have ten or so neighbors. It’s not a billion times smaller than that. It is exactly, or very roughly, that order of magnitude. And that’s, I guess, where you might point to some of these religious narratives as an alternative explanation for this sort of thing. Do you find that to be a compelling argument for there being a much vaster universe beyond the observable universe, maybe a multiverse, something like that?

Anders (00:32:06):
I think, with multiverse theories, many people recoil from them because it seems like, oh, wait a minute, isn’t science supposed to be dealing with testable stuff? This sounds very untestable. But they seem to be a fairly robust prediction of a lot of different theories. So it’s not just that you can claim that quantum mechanics leads naturally to a multiverse, which is also my own point of view, though one can spend all day and night arguing about the interpretations of quantum mechanics. There is also no good reason to think the observable universe is all there is to the universe. Indeed, space-time seems to be flat, which means that the simplest answer is: oh yes, it’s infinitely large. You could say, “Well, it’s got some kind of closed topology,” but we have seen no evidence for that.

Anders (00:32:53):
Then you need to add extra complications to get that to work. And then, of course, you have inflation theories saying that, well, actually, there might be other domains, and so on. So you get multiverse theories popping up almost all over the place. And this means that it’s kind of easy to explain almost everything, because somewhere this is bound to happen. The real question is, of course, why the world is not weirder. Why are we finding ourselves on a fairly normal-looking planet around a G star? Even that might be a bit weird, because after all, yellow dwarf stars are not super common compared to red dwarf stars, which are everywhere and are also going to be shining much longer. So it’s slightly odd that we’re relatively early in the reference class of the universe, rather than somewhere in the middle, orbiting a little dim red dwarf star.

Anders (00:33:47):
And in another paper, I argue that that might actually be a hint that it’s not that habitable around these dwarf stars. Normally, I’m kind of optimistic about the habitability of those planets, but maybe those flares really do erode the atmosphere, or continental drift stops early enough that they lose the carbon cycle and then the climate goes haywire. So actually, most life tends to show up early in this reference class.

Jeremie (00:34:15):
That’s in a way actually deeply counter-intuitive. And it also highlights, I suppose, the amount of uncertainty there is, too. I mean, if you’re otherwise fairly bullish about these… Was it white dwarf stars, sorry, or?

Anders (00:34:30):
Red dwarf stars.

Jeremie (00:34:31):
Sorry, red dwarf stars. If you’re otherwise really bullish about them for a variety of other reasons, and yet here we are orbiting some different star, I mean, what’s the difference in relative frequency between red dwarf stars, which ought to be the star that we’re orbiting if they indeed are more or equally habitable, and our own sun, or stars like-

Anders (00:34:53):
So basically, you have, I would say, about 30 to 50 red dwarf stars per yellow star. And when you go down to the smallest ones, the dimmest ones, the numbers skyrocket. They’re really ubiquitous, and they also last really long. The sun is only going to last for five more billion years; then it’s going to become a red giant, shed its outer layers, and become a rather boring white dwarf star. Very sad for people living in the solar system at the time, but meanwhile, many of the red dwarf stars around us are just happily going to keep going for literally a trillion years. Barnard’s Star, which is a few light years away, is still going to be shining when the sun has turned into a white dwarf and even cooled off to become a rather boring, almost black dwarf. It’s going to fuse practically all its hydrogen. If there is a planet around it, it’s probably going to have the same temperature in that future as it has right now.

Jeremie (00:35:59):
So I’m sure this is a bad term for various reasons, but if you add up the sum of all the possible life-sustaining years of red dwarf stars in the universe, it should be way bigger than the sum of life-sustaining years of our sun, or stars like our sun.

Anders (00:36:19):
It should totally dominate. I can’t remember what the exact number is; I calculated it a while ago, but there are a lot more biosphere years around red dwarf stars than around yellow dwarf stars.
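Editor’s note: the “biosphere years” comparison is easy to reproduce as a back-of-envelope calculation, using the round numbers from the conversation (30 to 50 red dwarfs per yellow dwarf; roughly ten billion habitable years for a sun-like star versus up to a trillion for a red dwarf). It is deliberately crude:

```python
# Back-of-envelope "biosphere years": star count times habitable lifetime.
# All figures are the rough round numbers quoted in the conversation.
red_dwarfs_per_yellow = 40    # midpoint of the 30-50 range Anders mentions
yellow_lifetime_years = 10e9  # sun-like star: ~10 billion habitable years
red_lifetime_years = 1e12     # red dwarf: ~a trillion years

ratio = red_dwarfs_per_yellow * red_lifetime_years / yellow_lifetime_years
print(f"red-dwarf biosphere-years per yellow-dwarf biosphere-year: ~{ratio:,.0f}x")
# ~4,000x: in sheer habitable star-time, red dwarfs should utterly dominate.
```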

Jeremie (00:36:30):
Biosphere years. Okay. Yeah. Good to know there’s a better term for it. That’s great. Fascinating. Okay. So there are all kinds of open questions about how we got here, and clearly the range of uncertainties is just incredible. It’s fascinating that we can even reason about this, and frankly, I’ve been surprised, reading your work, at how much information you’ve been able to extract from our present situation, just from being here. Looking in the forward direction, as the evolution of the universe continues, obviously one of the big things on the horizon, the biggest phase shift we have to look forward to, is potentially something like the emergence of advanced artificial intelligence, artificial general intelligence, superintelligence, and all that entails.

Jeremie (00:37:12):
You’ve done some work comparing artificial intelligence and the human brain in terms of energy consumption, and looking at arguments people have made suggesting that artificial intelligence will take longer to develop for energetic reasons. You’ve argued that that’s probably a bad argument. I’d love to hear you unpack that whole body of thinking.

Anders (00:37:36):
So right now, if you want to run a big machine learning model, you’re going to be spinning up your data center, and your electricity use is going to run into the kilowatt or maybe the megawatt range. It’s not cheap. Meanwhile, of course, even the most brilliant human brain runs on between 20 and 25 watts of power. That’s a fairly dim light bulb. Well, that’s a fairly dim incandescent light bulb; if you use LEDs, it’s actually fairly bright, but it’s still not that much energy. And that’s really weird, because neurons in the brain work by a kind of Rube Goldberg mechanism for transmitting information: basically, ion pumps separate the potassium and sodium ions on the different sides of the cell membrane, and when the ion channels open, the ions flow through, creating an electric potential that opens up more channels, and you get a little wave spreading electrochemically at about the speed of sound.

Anders (00:38:37):
It’s kind of silly, but it works pretty well. Now, the interesting part here is that you could probably be more energy efficient if you could do normal electronics on that scale, but still, brains are way more energy efficient than current computers. So there are some people trying to use this as an argument, saying: look, in order to get true AI, you need an enormous amount of energy, and obviously you can’t do that. Now, the problem here is: are we comparing apples and oranges? And I think that’s what’s going on, because when an infant starts learning language, it’s not exactly doing the same thing as when we train a big language model in a data center. The infant hears a fair bit of talking in the room it’s in. It’s hearing things from radio and television. It’s surrounded by language. But the total number of words an infant hears in one or two years, that’s not astronomical.

Anders (00:39:36):
I don’t know exactly how many million words they get, but it’s not enormous. Now, compare that to modern language models that do really well. You basically feed them all of Wikipedia and Reddit, a big chunk of the internet, Project Gutenberg, all the translated United Nations texts, essentially as much text as you can get your hands on. It’s an enormously vast amount, yet they learn it after about a week of training in the data center. And when you look at the processes going on, they seem to be very different too. The machine learning process uses stochastic gradient descent, while what’s going on in the infant’s brain seems to be more like Hebbian learning. So my argument is basically that this doesn’t work as a comparison. We can’t use the energy use of current computers to say anything about when they can reach human intelligence, in particular because improvements in algorithms can quite often mean that we kind of fake an extra decade of Moore’s law.

Anders (00:40:38):
If you look at the performance of chess computers, you have seen in the past that occasionally the ratings just jumped up by a significant amount when somebody came up with a better way of solving the problem. So this means that using energy as a way of estimating bounds on AI doesn’t actually work. Now, there is a flip side to it. I do think energy, intelligence, and information do matter, and we can use them to bound what civilizations can be up to, because civilization in some sense is information processing. I like to point out that even falling in love is at least in part an information processing operation. Maybe the key part of love is not the information; maybe there are ineffable qualia that are actually what really matters. But you definitely remember who you’re in love with; that’s information storage.

Anders (00:41:31):
And you’d better do something about it; that’s information that needs to go to muscles so you can say something. So the interesting part here is that we can use the physics of information and energy to say a little bit about advanced civilizations and what they can and cannot do. We can look at the energy sources and say, “Yeah, there is not enough energy in the universe to actually perform that kind of computation.” This gives us some ways of thinking about the extremely advanced civilizations that might exist in the far future, but it doesn’t tell us very much about whether we get AI in 10 years or 100 years.
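Editor’s note: to see how lopsided the apples-to-oranges comparison runs in both directions, here is a hypothetical back-of-envelope sketch. Every number below is an illustrative assumption chosen for round arithmetic, not a figure from the episode:

```python
# Crude comparison of learning "budgets"; all numbers are rough illustrative assumptions.

# Data budget
infant_words = 30e6  # assumed: tens of millions of words heard by age ~3
lm_tokens = 300e9    # assumed: hundreds of billions of tokens in a large training corpus
print(f"data ratio (model / infant): ~{lm_tokens / infant_words:,.0f}x")  # ~10,000x

# Energy budget
brain_watts = 20                     # the ~20-25 W figure quoted above
seconds_per_year = 3.15e7
infant_joules = brain_watts * 3 * seconds_per_year  # three years of brain power
lm_joules = 1e6 * 7 * 86_400         # assumed: a 1 MW data center running for a week
print(f"energy ratio (model / infant): ~{lm_joules / infant_joules:,.0f}x")  # ~300x

# The model consumes vastly more data and energy yet trains in days rather than years:
# which axis you pick decides who looks "superhuman", which is exactly Anders' point.
```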

Jeremie (00:42:10):
And on the forecasting side of things, obviously any forecasts that have to do with AI, and when we’re going to get transformative AI, come with huge error bars. Do you have a personal inclination? What would your personal error bars look like on, let’s say, an 80% or 90% confidence interval for the emergence of AI that can, let’s say, design machine learning systems better than human beings? Because I think that’s probably, for various reasons, a good objective benchmark.

Anders (00:42:43):
So for maybe an 80% confidence interval, I would probably start later than most of my colleagues. I think I would be rather surprised if we saw AI doing good machine learning design before 2050, or maybe even 2040. And the endpoint might even be around into the 2100s. I have a quite-

Jeremie (00:43:08):
Wow, fascinating.

Anders (00:43:09):
Yeah. So I have a very broad estimate, and I think if somebody quoted me on it, I would probably have to make it even broader. Now, of course, from a safety standpoint, I like to point out that we’d better work as if it’s going to happen very shortly, because we have to have done our homework on making safe and value-aligned AI before it arrives. Even if it arrives in 10, 15, 20 years, that’s actually a rather short time to solve some very fundamental problems. And if it takes a century, well, it might still turn out to be that hard a problem to solve. After all, philosophy has been trying to solve the value alignment problem for humans for 2,500 years, with modest results.

Jeremie (00:43:54):
Actually, I think that’s a really interesting area to dive into. What, if any, do you think the differences are between the value alignment problem that philosophers have been working away at for the last, as you say, 2,500 years or whatever it is, and the value alignment problem that we have to sort out with our AI systems, in order to make sure that they do what we want, and that we actually know what we want them to do, which are sort of the two parts of this?

Anders (00:44:28):
So the interesting thing about philosophy and thinking about ethics is that for a long time, it just assumed that the only minds we need to care about are human-like minds. There are a few interesting discourses in the Middle Ages about ethics for angels, but generally it’s not regarded as much of a problem, because we know that angels behave themselves anyway; the question is whether their behavior stems from free will, or just because they’re programmed to do it. But mostly the assumption was always: everybody thinks a bit alike. And that assumption of a human-like mind has been quite profound, and it’s a bit problematic, because human minds are rather special ones. There have been interesting discoveries in animal studies: it seems that chimps actually have a sense of fairness. They get very upset when they see another chimp being overpaid for doing some work in the lab.

Anders (00:45:25):
It’s not just envy. It’s also that they realize: wait a minute, why did I only get one banana when that chimp got two bananas? So you can see that there are some elements of what we might call moral feelings. I don’t think chimps are actually thinking about ethics or fairness or anything like that, but they certainly have these prerequisite feelings that allow them to work in a social group. And in us, a lot of that gets refined into general principles, because we got big brains and started thinking abstractly about it. But underlying that is a particular design of reward functions in brains. We have motivation systems of particular kinds, and up until fairly recently, philosophers just assumed that everything was like that. And I think the cool thing that is happening right now is that artificial intelligence is introducing another kind of mind into ethics, and that is rather challenging.

Anders (00:46:23):
Certainly, some philosophers have been thinking about animal rights and animal suffering and those issues, but animals were never moral agents that you needed to care about: if you try to teach ethics to your cat, it’s not going to work very well. But you could, in theory, try to do that with an AI. People are starting to realize that these systems can be very, very different from humans. Indeed, going from an anthropocentric to a non-anthropocentric ethics, I think, is the great challenge, not just for making safe, ethical AI, but also for philosophy. And I think it’s very healthy for both fields to talk to each other. So yeah, generally, going beyond the human is quite helpful.

Jeremie (00:47:08):
Yeah. It’s a really interesting question as to whether, and to what extent, we count AI systems as agents that need to be factored in, where we have to do the accounting in a way that minds their needs and wants and desires. And I guess you can’t separate that from questions around consciousness and subjective experience, because if these are just black boxes that don’t have any feelings, no matter how real they might seem, or how emotive they might seem, if they really are emotionless deep down inside, because machines don’t have feelings, then we might be wasting our time and energy optimizing for their preferences. Do you have any thoughts about how we might explore that? It’s obviously linked to the hard problem of consciousness, so it’s not an easy thing to answer, but what are your thoughts on that?

Anders (00:48:00):
Well, I’m a lousy philosopher of mind. I have no idea how to really resolve that, but I think sometimes consciousness is beside the point. When Nick Bostrom’s book Superintelligence came out, a great philosopher of mind wrote a somewhat sketchy review saying, look, machines are not conscious, so there is no problem here, which is kind of weird, because a car can run you over without being conscious. An unconscious machine can be quite dangerous. And Nick’s point was, of course, very much: we’d better make safe machines; whether they are conscious or not is beside the point. Now, we might want to design machines that are moral patients that we need to care about, and we might also have very good reasons not to want to do that.

Anders (00:48:46):
Joanna Bryson wrote a paper with the great title Robots Should Be Slaves. The title is, of course, pushing things a little bit further than the paper, but she’s got a great point: for many purposes, you don’t want something that you have to care about. And she thinks that it would be a rather stupid step to make a lot of machines that we need to care about. I also think that if you can make a machine that we would be forced to care about, somebody’s bound to do it, if only as an art project. So the real question is: can we tell apart the systems that we need to care about? And I think that’s going to turn out to be really tricky, because normally we tend to use intuitions like: okay, I talk to it and it gives sensible responses, so I believe that there’s somebody in there.

Anders (00:49:37):
But we know that people fall for chatbots all the time. It’s kind of embarrassing how easy it is to fool humans into thinking that there’s somebody there, because we’re fine-tuned to assume, better safe than sorry, that if something seems to have a mind, it’s probably a human-like mind. Which is also why we project human-like minds onto animals and natural phenomena, probably building up religions to explain who got so angry that that thunderbolt hit that tree: we needed suitable agents for that. The problem is, of course, that there might be systems that are not agents at all, where this metaphor really fails. When you think about something like the Google search engine: is that a being? No, not really. The borders of the search engine are very fuzzy, and indeed it’s not even functioning the way a human mind would.

Anders (00:50:33):
We might end up in a world with a lot of very important, powerful systems that do a lot of clever things but are so dissimilar from us that we need new moral categories. It might be that it’s a bad thing to delete a really good search agent, not because it’s bad for the search agent, but because it might be like a piece of art, or because there are other values or goals that matter. There are some ideas in animal ethics, for example, that animals have life projects that we shouldn’t be interfering with, and you can imagine robots having projects even though they may be entirely unconscious. Most ethicists would say, “Yeah, unconscious things don’t really have any moral rights.” But if you go into environmental ethics, you will find the biocentrists and the ecocentrists saying that actually, ecosystems might have value.

Anders (00:51:22):
If you go into the terraforming debate, you will find at least a few philosophers saying, “Oh, that lifeless planetary environment has a value, and so it would not be improved if we introduced life.” So some might be very willing to say, “Yeah, maybe some of these robots should have some form of rights and we should respect them, even though they’re still just juggling numbers and bits without having any internal experience.” This is of course very far outside how we normally think, and working out how we should arrange our affairs as a society of people relating to the things around us is going to be quite challenging.

Jeremie (00:51:58):
This makes me think of the part of the conversation earlier where we were talking about human super organisms, or let’s say the collection of all human beings on a planet as one coherent organism. Viewed through that lens, do we have a moral responsibility to whatever that super organism is? I mean, we basically act as its cells, and who’s to say it doesn’t have a legitimate conscious subjective experience? Maybe if you zoomed out like crazy, you could look at planet Earth, see it evolve over time, and go, oh, that’s a planet that’s gradually waking up, something is happening: this whole ecosystem deserves some sort of moral standing independent of the individual entities in that system. I guess the problem there is that you get into combinatorics, because you could just as easily say, well, is Canada its own entity with moral standing? Anyway, do you have any thinking on that sort of almost inverse-reductionist position?

Anders (00:52:58):
Yeah. So Eric Schwitzgebel is a very fun gadfly philosopher. He wrote a very fun paper about what it would take to claim that the United States is conscious, and he argues that under some fairly common-sense assumptions it’s not terribly hard to lead the reader to conclude that yeah, maybe the United States is conscious, and presumably Canada would be conscious too. But what about the United Nations, or the world economic system? That combinatorics is not necessarily that crazy. After all, we normally have two brain hemispheres with limited bandwidth between them, and different modules in our brain are actually not fully aware of the information in other modules. Sometimes you get these weird disjointed experiences. I remember coming home one evening and noticing that the coat hanging on my rack looked really like a sinister character lurking.

Anders (00:53:54):
And then I jumped, because another part of my brain had noticed a sinister character lurking in the darkness. Different parts of my brain had reached different conclusions at about the same time, and I was kind of conscious of both. Now, it could very well be that the same thing happens with super organisms. In some sense, Google can be a super organism that’s also part of the United States of America and the world economy. They’re separated to some extent by the limited information flows between them, but you can sometimes say that in a given situation it makes sense to treat one part as fairly separate. Just like we might say that right now we humans tend to be fairly separate from each other, but you can also talk about particular groups of humans. You can say that the science community has decided that the following statement seems tentatively true about the world, even though there are a lot of scientists inside that community who haven’t even heard the news.

Jeremie (00:54:54):
It’s fascinating how consciousness might turn out to be a fractal problem in that way. And it sort of lends itself, ultimately, to thinking about panpsychism. I mean, if every neuron in my brain is conscious, every cell in my body is independently conscious, where does that end? Is every organelle within every cell conscious? Can I keep going all the way down until every quark and lepton is conscious, every particle?

Anders (00:55:20):
And then we might have to worry about unhappy leptons and quarks. Maybe they are the true moral problem in this universe, and if many of them are suffering, then we really need to help the poor up quarks.

Jeremie (00:55:35):
Who knows, maybe they’ll have their own lobby. Now, actually, to that-

Anders (00:55:38):
There’s an important thing here: consciousness might be a simple thing, and it’s not implausible that you could stretch it out to encompass everything. Then we happen to be the kind of objects that also write papers and talk and think about being conscious, which probably isn’t true for the rocks. But there are other properties our minds have that are non-trivial and probably don’t carry over when you go arbitrarily far up or down. For example, we’re both speaking English, but if I were to examine your brain, I’m not going to find English-speaking neurons. It’s the system that has a certain property. And this is of course my response to Searle’s Chinese Room thought experiment. I think it’s totally true that some systems have properties that you don’t find in the parts; the wetness of water happens when you have enough water molecules together, it’s not a property of a single water molecule.

Anders (00:56:34):
So many morally relevant parts of our minds might exist on some levels and not others. And this might of course be interesting when you go upwards to the super organism. The United States certainly speaks English in that sense, and it might be that we can speak about the virtues of nations or civilizations. We can certainly talk about the virtues of people. I can’t speak about the virtues of a body part or a neuron, but it makes sense to say that this person is tenacious, that one is brave, that was a cowardly thing to do. And maybe it actually makes sense to say that a civilization is brave. There might even be virtues that exist on a civilizational scale that don’t exist on the individual human scale.

Jeremie (00:57:19):
One of the things that I find difficult to reconcile, intellectually I think it makes sense but intuitively I still struggle with it, is the idea that if the human super organism genuinely is conscious, its behavior is fully determined. It seems like anything the human super organism does is just a function of the behavior of all the individual humans that make it up, each of whom feels like they have free will. So it kind of feels like there’s no room for that super organism to make different decisions than the ones it does make, which seems to imply that it can’t really be a conscious, free-willed entity. But then again, I guess the same is true of us ourselves; it’s not like we have that degree of freedom either.

Anders (00:58:09):
I think the solution here is that people are assuming too much about free will. We want it to be a form of freedom that I don’t think is compatible with physics, or even logic really. But what’s really going on is that free will is a useful description of what most people do. If I commit a crime, in many cases, okay, I’ve made a decision, and it was based on what I knew and felt at the time, which might not have been a good idea. Maybe I can even demonstrate the neural firing that led to me committing the crime, but that doesn’t actually work as a good explanation of why I did it. Free will is a very useful thing on the human-to-human level. It might be less useful to ascribe to large groups.

Anders (00:58:57):
Can we say that the Democratic Party has free will? Well, to some extent, yes, decisions are being made on the party level. We might ascribe them to individual people, but you can also say that it’s actually a bit of an emergent phenomenon, because when people talk to each other, sometimes a group decision emerges that nobody individually supports, yet the group still made the decision. So I think it’s important to ask at what level the explanation works. Normally, when we talk about free will, it’s about predictability. When you say that somebody is being robotic, that means you have a fairly good model of how an input generates an output. It’s of course fairly trivial to make a simple program that’s extremely hard to predict, but most of the time we don’t think random behavior is really interesting. It’s appropriate behavior that is hard to predict that we ascribe free will to. And we get that from a lot of systems.
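
Editor’s note: Anders’ claim that a trivially simple program can be extremely hard to predict is easy to demonstrate. The sketch below is ours, not something discussed in the episode; it iterates the logistic map, x → r·x·(1−x) with r = 4, a textbook example of deterministic chaos. Two starting values that differ by one part in a trillion become completely uncorrelated within a few dozen steps, so even though every step is perfectly determined, long-run prediction is practically impossible.

```python
# Deterministic chaos in one line of arithmetic: the logistic map at r = 4.
# Every step is fully determined, yet tiny input differences explode.

def logistic_map(x, r=4.0, steps=50):
    """Iterate x -> r * x * (1 - x) and return the full trajectory."""
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

a = logistic_map(0.400000000000)
b = logistic_map(0.400000000001)  # perturbed by one part in a trillion

# Early on the trajectories agree; the initial difference roughly doubles
# every step, so by around step 45 they bear no resemblance to each other.
for step in (10, 25, 45):
    print(f"step {step}: {a[step]:.6f} vs {b[step]:.6f}")
```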

Anders (00:59:56):
It’s just that in most cases we don’t interact directly with the biosphere or civilization as a whole; we deal with people or organizations. But their free will is not at all implausible. We might say that Facebook, to some extent, has free will in setting its privacy policies. You might say, well, it depends on what Zuckerberg says, but actually there are shareholders, there are internal structures, so it’s a bit more complicated than that. There are other organizations that are maybe even more free in deciding what to do. And free will bears on how much we can then blame them for the actions they take or don’t take.

Jeremie (01:00:33):
Right. Which I guess is its own kind of separate moral category, and something we’ll have to figure out. Do you think these are questions we’ll need to answer in order to safely navigate the technological transition that’s coming, or can we get by with answering, say, a subset of them? What are your thoughts on that?

Anders (01:00:53):
I think we will probably have to make do with a subset. That might be scary, because it would be great if we could just solve all these deep questions, but some of them might be irreducibly complex; it might be that some things don’t have a general answer. But I think good heuristics can help you quite a long way. If you think about the division of powers in a government, for example, that balancing act is really useful for avoiding certain failure modes, and we learned that in various ways. If you think about setting up organizations so we don’t have too much infighting, there are various tools for doing that: setting up rules for responsibility that make people behave themselves. We have a palette of different options here. So when you create a new institution, you can actually think about, okay, in what ways can this fail, and can we build in safeguards?

Anders (01:01:51):
And we have invented these over time. It’s fascinating to go back and look at the history of various political institutions, because in many cases, much like cryptographic primitives in software, things had to be invented: people had to invent the committee; people had to invent various forms of ballots and election mechanisms at different points in time. Bad approaches were tried, and most of them have been forgotten; the good ones are now in the toolkit. We probably need to invent many more, because we have new problems and things are on a different scale. That’s one reason why the technological transition is so scary. It’s not just that we need to think about the artificial general intelligences that might be around; we already have corporations and other super organisms, and they have already demonstrated that they’re not trivial at all to control.

Anders (01:02:43):
We have also found cool mechanisms that work in a distributed manner, like market forces and reputations. And even when parents tell kids how to behave and not behave, and tell them various myths about what happens to kids who do the wrong thing, that’s a form of programming, and we can borrow some of those ideas for teaching robots how to grow up. Then, of course, we will have to test things out, and probably find good ways to automate the generation of better tools. So there’s a lot to work on here. And the cool part is that this is going to blow up the borders between philosophy, programming, economics, and a lot of other fields. There is so much interdisciplinary work that needs to be done. We can steal the coolest and best parts of different disciplines and build them into entirely new ones.

Jeremie (01:03:32):
Yeah, it’s incredible how much you’ve had to learn and understand about all these different fields just to be able to make estimates about timelines with massive uncertainty attached to them, just to get something sensible. It’s been an absolutely fascinating grand tour of the past and the future, of really deep time. I really appreciate your time, Anders. One thing I will say to anybody who’s listening: if you’re curious about Anders’ work, please do check out his Twitter, because it is really good stuff. Anders, are there any other resources that you recommend people take a look at?

Anders (01:04:04):
Well, in some sense, I would say Wikipedia. I think Wikipedia is an interesting thing, not just as a repository of human knowledge but also as a demonstration that sometimes we can get our act together. It’s interesting to look at successful examples where people actually collectively pull together information, resolve various disputes and problems, and create something that is worth a lot. If you had one of those typical old-series Star Trek episodes where aliens judge humanity, Wikipedia would be one of the things I would point to: look, we’re not that bad. And the interesting thing is, people have been trying to make Wikipedia-like resources, and most have failed. We can learn from that too. Eventually, I think we are going to have some kind of art and science of making these great shared resources that actually hold together and give a lot of benefit.

Anders (01:05:00):
Right now, we have only been trying to do this online for a few decades, so we still have no clue how to do it reliably, but I think we are going to get better. That’s why I really recommend looking at things like the Internet Archive, Wikipedia, and maybe some of the reputation systems that seem to work, like on the Stack Exchanges. These are amazing treasures in their own right, not just in the sense that there are cool questions being answered on Stack Exchange, or that [inaudible 01:05:29] helps our collective memory, but also because we can actually build entirely new tools for growing our collective system.

Jeremie (01:05:39):
Yeah. And hopefully they do keep growing. Hopefully, they help us keep the super organism aligned.

Anders (01:05:45):
I’m hoping that in a trillion years, there are going to be Wikipedia entries on all of this.

Jeremie (01:05:51):
Thanks so much Anders. Really appreciate it. And thanks for your time.

Anders (01:05:55):
Thank you. It has been so much fun.
