#168 – Whether deep history says we’re heading for an intelligence explosion (Ian Morris on the 80,000 Hours Podcast)
We just published an interview: Ian Morris on whether deep history says we’re heading for an intelligence explosion. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.
Episode summary
If we carry on looking at these industrialised economies, not thinking about what it is they’re actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn’t.
What we’re doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way.
- Ian Morris
In today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.
They cover:
Some crazy anomalies in the historical record of civilisational progress
Whether we should think about technology from an evolutionary perspective
Whether we ought to expect war to make a resurgence or continue dying out
Why we can’t end up living like The Jetsons
Whether stagnation or cyclical recurring futures seem very plausible
What it means that the rate of increase in the economy has been increasing
Whether violence is likely between humans and powerful AI systems
The most likely reasons for Rob and Ian to be really wrong about all of this
How professional historians react to this sort of talk
The future of Ian’s work
Plenty more
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore
Highlights
Why we won’t end up in *The Jetsons*
Ian Morris: Out of those three possibilities — we go extinct, we turn into superhumans, or things basically stay the same — I would say the one that we can bet the farm on, which is almost certain to happen, is the first: we go extinct. Almost every species of plant and animal that’s ever existed has gone extinct. So to think we’re not going to go extinct, I mean, man, that takes superhuman levels of delusion. So yeah, we are going to go extinct.
But of course, just putting it like that, it then becomes a truism. It’s not a very interesting or helpful observation to make. The interesting bit would be asking under what circumstances do we go extinct? And this is where I think the first prediction (the “go extinct” one) and the third prediction (turn into superhuman somethings), sort of start to merge together.
And definitely I think the one that is so unlikely we can just dismiss it out of hand is that everything stays more or less the same, and the future is like The Jetsons or something, where everybody is the same people they are now, but they’ve got their personal spaceships.
Or even what struck me, when I was quite a little kid watching Star Trek: Star Trek started off in the late ’60s, so it’s a really old show. I was a little boy in the late ’60s watching Star Trek, and it just dawned on me that this is exactly like the world that the producers of TV shows live in, except they’re now on a starship. And all of the assumptions of 1960s LA are baked into that show. You’ve got the middle-aged white guy in charge. You’ve got the Black woman, Lieutenant Uhura, who basically answers the phones for him; she’s the communications expert. And then the technology expert is the Asian guy. It’s all of the assumptions of 1960s LA TV studios baked into this thing. And surely, the one thing you’ve got to be certain of, if you’ve got intergalactic travel, is that everything else about humanity does not stay the same when you get to this point.
So I think if you just give it a minute’s thought, this “stay basically the same” scenario is just a staggeringly unlikely one, particularly when you start thinking more seriously about the kind of resource constraints that we face. And this is something people will often raise with any talk about superhuman futures: that we’re heating up the world; we’re poisoning the atmosphere and the oceans; there’s a finite amount of fossil fuels out there, even if we weren’t killing ourselves with them. All these things suggest that business as usual is simply not going to be an option. If the world is going to continue — and certainly continue on anything like the sort of growth trends we’ve been seeing in recent times — then we’re talking about a very, very profound transformation of everything.
So yeah, I came down in Why the West Rules on one option, which I think is unfortunately a perfectly plausible option: that the world continues to face all kinds of problems. When you look back over the long run of history, one of the things you repeatedly see is every time there’s been a major transformation, a major shift in the balance of wealth and power in the world, it’s always been accompanied by massive amounts of violence.
And living in a world that has nuclear weapons, I would say the number one threat to humanity — even more serious than climate change or anything else you might want to talk about — is nuclear war. We’ve had a 90% reduction in the number of nuclear warheads since the 1980s, but we’ve still probably got enough to fight World War II in a single day. And that’s without even thinking about the radiation poisoning that we didn’t get in World War II so much. This is shocking, appalling potential to destroy humanity if we continue squabbling over our resources. So I think abrupt, sudden, violent extinction is a perfectly real possibility.
I tend to be optimistic about this. I think judging from our previous record, we have been pretty good at solving problems, in the long run at least, so maybe we’ll be able to avoid this. If we avoid the abrupt short-term extinction though, I think the only vaguely plausible scenario is that we do transform humanity, or somehow humanity gets transformed, into something utterly unlike what it’s been in the last few thousand years.
Chiefs in Sungir
Ian Morris: The most extreme case [of anomalies in the historical record] is a place called Sungir, and Sungir is in a very unpromising-looking location: it’s 150 miles northeast of Moscow. This is a really, really cold and miserable place to live in now; you can imagine how terrible it was during the ice age. And what we find there is this group of burials where the dead have been laid out in these graves. Then people have spent hours and hours grinding up ochre — a naturally occurring iron oxide which, when you grind it up, produces a powder that allows you to stain things red. So they ground up tonnes and tonnes of ochre and put it in the graves.
Then they buried these people in these elaborate costumes, which we think were like animal skins. But sewn onto these animal skins are thousands of little beads that have been made by cutting up the bones and teeth of deer and snow leopards and other animals and grinding them into shape and drilling holes through them. And of course, you’re doing all this without power drills: you’re doing all this by getting a stick and putting a little bit of abrasive on it and rubbing the stick between your hands until it grinds its way through this little bead. And there are thousands of little beads like this on each of these bodies.
And along with them, they’ve taken mammoth tusks, and then hundreds and hundreds of hours of labour have been put into straightening the mammoth tusks, making it so they’re 20-foot-long straight rods that would have been so heavy, almost impossible to pick up. Then all these other smaller mammoth bone and tusk ornaments they’ve made. This is just astonishing what these people were doing.
And it’s the kind of thing where if instead of dating to 32,000 BC, it dated to 2000 BC, you would automatically say that this is the burial of a great, powerful chief and all his family — because little kids are in there as well, with these extraordinary offerings with them as well. Again, in later times, you’d say that this symbolises the fact that power and status are being passed down from the Great Chief: the proto-king being passed down to his children. And you’ve got a dynasty here. But it’s 32,000 years ago. This is something that sort of should not be happening.
——-
And of course, it’s a real challenge for evolutionary theory to say why we, once in a blue moon, get these bizarre cases of people who are basically hunter-gatherers producing stuff they should not be producing and living lives they should not be living.
And there is not complete agreement on this. It’s an understatement to say there’s not complete agreement: there’s wild disagreement. Most archaeologists say that, as hunter-gatherers, you sometimes get these superabundant niches of resources within a larger landscape where resources are much scarcer. And within these abundant niches, the resources are of a kind that a handful of people can begin to monopolise access to them. They are then able to turn this into control over the resource flows, channelling resources to their own ends and making themselves something like chiefs. So for centuries, or even millennia, you will get these chief-like people emerging. But because it’s not farming, they’re not able to keep scaling up and turning from chiefs into kings. But it does happen. This is probably the most popular theory.
The other theory says no: what places like Sungir and some of the Peruvian sites, like El Paraíso, actually show is that complex society has nothing to do with energy or the evolution of hierarchy. It was always possible for humans to live in complex societies if they’d wanted to do so. But they didn’t want to; they chose to live free lives instead. And it’s only in more recent times that some colossal mistake gets made and we start going down the path toward these complex societies, where the future looks like the Orwell line: somebody’s jackboot on your throat, forever and ever. On this view, all of the evolutionary theories are simply wrong, and it’s up to us to create the world we want to live in.
So you can imagine these arguments get quite political, and they get quite heated and nasty, yet there are these weird cases.
Machine intelligence won’t be held back by human constraints
Ian Morris: I guess one of the things that’s always struck me is people constantly talk about “artificial” intelligence: what exactly is artificial about it? I don’t think a fully conscious — whatever we might mean by that — machine intelligence is going to think of itself as artificial. It’s going to think of itself as itself. It’s not artificial intelligence; it’s different intelligence. And we are creating this different intelligence that may or may not want to share its being with us. I think it’s very difficult to know what the intentions and wishes of an intelligence that different from us are going to be. It would be like horses trying to understand human intentionality. My wife and I have a couple of horses. They understand certain things that we’re doing and thinking, and certain things that we want that they don’t necessarily want, but their grasp of our overall presence in the world is pretty damn limited.
And I think that is going to be the same for our grasp of nonbiological intelligence, and the nonbiological intelligence’s grasp of our intelligence: I think these are going to be radically different kinds of intelligences trying to communicate with each other. And I’m sure there will be some interest in merging them. But it seems highly likely to me that making Ray Kurzweil live forever is not going to be the nonbiological intelligence’s primary goal in life.
Rob Wiblin: Yeah, I think that this future is imaginable, but in order for it to happen, it would require a massive worldwide effort to suppress the alternative. Because machine intelligence by itself, trained in its own way, not merged with the human mind, I think is just going to be way better. It’s going to be much faster. It’s not going to be held back by the constraints that face the human brain, which is just at some point going to be a legacy piece of technology. So in order to make this hybrid intelligence the main species or the main kind of thinking that happens on Earth, you would have to basically prohibit this other technology that is going to race ahead and end up being really superior.
The analogy that comes to mind to me is when people in the 19th century were trying to figure out how to design flying machines, they could have projected that the way we would do it is some sort of merger of birds with machines that we would produce a plane that is like a combination — that somehow you stick the birds together and then merge them with a machine and they flap their wings and that produces a plane. But no, the way that you make a combined plane-bird in our world is that you have a plane that flies and the bird inside the plane, at best. Trying to incorporate the bird into the plane adds nothing and is just an extreme design constraint. And I think that is what it’s going to be, trying to merge human brains with these machine intelligences that can just improve so much faster as we improve the underlying technology and the algorithms and so on.
Ian Morris: Yeah, I think that the aircraft thing is a good example, because there’s a basic scaling problem here: you can’t just scale up a sparrow and turn it into a 747; it just does not work that way. And they solved that problem a little bit like the way people designing chess programs that could beat humans initially solved theirs: not by attempting to make the computer think like a human, but just by power, by being able to run through all of the different possible combinations of the consequences of the move you make, in a way a human can’t do. You’re thinking about the game in an entirely different way, just like when you strap wings onto an internal combustion engine, you’re thinking about flight in an entirely different way than a bird does. I think that’s a really good example.
But this talk about preventing artificial intelligence from developing: I think this is one place where thinking about the new forms of intelligence in an evolutionary way, rather than as a problem in technology, can be kind of helpful. Because we’re already at the point where you can’t just pull the plug out of the wall and switch the AI off. It doesn’t work like that anymore. And we’re going to get further and further down that path. This is going to become an unstoppable force. It’s a little bit like asking how you stop a biological evolutionary process: how do you stop one species from being replaced by a more intelligent competitor?
This is of course what happened over and over again in the evolution of humanity: more intelligent, bigger-brained species of apes, becoming more or less human, replaced one another. How would the less intelligent species have prevented that from happening? It’s just very difficult to imagine what exactly they could have done. I think this is the situation we’re getting into now.
And I think even more than just saying it’s difficult to imagine: it was impossible for the less intelligent species to imagine what it could conceivably do about this. Neanderthals probably couldn’t really conceive of the Homo sapiens threat, let alone come up with a coherent, coordinated response to it. What on Earth makes us think we can conceive of what the threat — if it is a threat — of machine intelligence is going to look like, and what would be an adequate response to it? I suspect that we’re kidding ourselves over this.
AI from an evolutionary point of view
Rob Wiblin: You’ve put your finger on something that over the last few weeks I’ve realised is just an absolutely key issue. I think the biggest difference between me and people who think that either improvements in machine intelligence are not such a big deal or that they’re going to be modestly useful and obviously beneficial and that there’s not many risks here, is whether we think of these neural networks that we’re building as a new being and a new species, or whether we think about them just as a new sort of consumer product. If you think it’s a new piece of consumer software, then people freaking out about where this might ultimately take us just seems sort of nuts. It seems way over the top.
But my instinct, like yours, is to view this from a biological and an evolutionary point of view. I get the impression that you basically feel the same way. Why do you think it is that that biological and evolutionary perspective is the more appropriate lens on what’s happening?
Ian Morris: I guess I would say the only thing that currently exists in the world from which you can make analogies — although very imperfect ones — to machine-based intelligence is biological intelligence: the brains that animals have evolved across billions of years. And initially, through most of the history of life on this planet, there’s nothing you’d really call a “brain” out there.
You start getting animals that have bodies that can move. Probably most biologists wouldn’t want to call what some of these earliest creatures have a brain; I’ve heard people call it a “ganglion.” In many kinds of animals, say ants in the modern world, all the nerve endings from the body flow together at the front end, in the head. There they form a kind of information exchange centre. But it’s going on at such a crude level that calling it a brain is stretching the meaning of the word to, frankly, maybe even beyond, the breaking point.
It takes a particular kind of evolution of bigger and bigger brains to get to the point where animals start to be conscious of themselves. And consciousness of any kind is, in evolutionary terms, a relatively recent development — certainly only a few hundred million years old. And you start to get these brains developing that are conscious of the limits of the animal. Because it’s a selective pressure: an animal that is aware of where its own body ends and the rest of the world starts has an advantage over an animal that is not aware in that way. It’s much more able to develop the power to move itself around, control where it’s going, conceptualise problems.
Consciousness is an evolutionary adaptation. And looked at in this way, it’s not something that God put his finger down to Earth and created consciousness and mind and free will and all these kinds of things: it’s something that evolved through an uncontrolled process of natural selection. And our human consciousness, in a way it’s no different from the consciousness of my dogs and cats, but it’s at such a vastly more sophisticated level that in many ways it doesn’t really bear comparison. But again, it’s emerged without anybody being in charge and consciously willing it into existence.
Now we’ve begun creating these machine-based neural networks that relatively quickly are moving toward creating more of themselves. And in a sense you can say we’re already at that point: they are creating the more sophisticated versions of themselves as much as we are controlling this process. Of course they’re going to develop some sort of consciousness. Although it may be nothing like the consciousness you get in biological brains, because it’s not going to be biologically based; it’s going to be silicon based, or whatever quantum kind of things they come up with. It’s going to be different from ours.
But it’s going to develop some form of consciousness that might be a form of consciousness that we can’t even understand anymore. A tree cannot understand your consciousness because it doesn’t have a brain. Our brain may be as far from the mind, or whatever you call it, of the machine-based intelligences as a tree is from us. And again, thinking we’re controlling this — this is wildly overoptimistic.
The cyclical view of history
Rob Wiblin: I guess hearing about these examples of hunter-gatherer civilisations flourishing in the past and then kind of collapsing for circumstantial reasons made me wonder if maybe the cyclical view of history is something that we haven’t talked enough about today relative to how plausible it is.
You could imagine the future going like this: we’re broadly right in the long term, but we go through another crash and resurgence first. For example, we could have a nuclear war, and then maybe all of this is delayed 100 years, because it takes a long time for us to recover. And maybe then we go through some transition into the next stage of civilisation through improving technology. That seems pretty plausible to me, and maybe that’s something this up-and-down pattern in history makes seem more likely.
Ian Morris: Yeah, I think this is something that’s unavoidable if you look at long-term history — the ups-and-downs stuff, the troughs and crests in development — so history is cyclical and yet not cyclical. Each new trough doesn’t go as low as the last trough. Each new crest kind of overtops all previous crests. So you’ve got a long-term trendline that is trending upward, just with a huge amount of variation around that trendline.
But I think other things are going on as well in this long-run exponential growth process combined with the shorter-term cyclical one. Another of the issues is that you start off with very localised processes — tens of thousands of very localised experiments, in a sense, being run around the planet. And as time has gone on, we’ve moved more and more toward having a single global experiment running. So you go to a place like Sungir, 32,000 BC, the place with the weird burials I talked about a minute ago. We have a few of them from Sungir, and then they stop. And then we’ve got them in other places, and then they stop. Each individual place seems to have had a brief period when all the conditions came together to produce these wild kinds of societies, and then it stops. Occasionally it’ll come back later, but usually it doesn’t, so that thing broke down there and it sort of never gets revived.
As you go forward in time, the societies are getting bigger and bigger. Like I was talking about with the stuff on war, we’re creating these bigger and bigger societies, and you still get these breakdowns. You have, say, a big breakdown in the eastern Mediterranean about 1200 BCE. And there, over most of the region from Greece out through to western Iran and down into Egypt, the states collapse and the population crashes as well. It takes centuries and centuries, but it then does rebound again.
And I think what we’ve seen as time has gone on is that as the scale of the whole thing increases, you get multiple effects of this that you wouldn’t predict if you’re just thinking about it in a linear way. One is that when the troughs come, the collapses are so much more abrupt than they used to be, and, in terms of points on my development scale, so much bigger. And yet we bounce back from them so much faster, because none of them has ever encompassed the whole planet: there have always been outside areas where you haven’t had a collapse.
So even something like the Second World War — the most destructive thing we’ve ever had in human history, at least — what does it do? It devastates large parts of Europe, East Asia. And yet within 50 years, that’s all been put behind us. We’ve moved on so much from there because a big part of the world, North America in particular, doesn’t get devastated by it.
Now we’re running up against these new thresholds. If we’ve only got the one experiment going, the global one, and we don’t get it right at the first attempt, we get a profound crash with nowhere in the world left outside it to step in and fix things for us again. And while I do remain optimistic, and I do think we’re going to see this revolutionary transformation, you’re a little bit stupid if you don’t worry about the downside.
The rate of increase in the economy has been increasing
Rob Wiblin: I think for me, the key fact that I remember just really striking me in the head back in 2008 or 2009, when I first encountered it, is not just that the economy or human influence has been growing over time, but rather that the rate of increase has been increasing.
So there was this long period when we were hunter-gatherers where the annual rate of growth in human population or human technology per year was negligible — 0.1% or something incredibly small. And then you get to the farming era, when you get a really significant increase — it’s glacial change from our point of view, but very much faster change than what was going on before during the hunter-gatherer era — I guess because people are in cities, there’s more ability to record knowledge, there’s more people who have the slack to do some research and come up with new ideas and figure out new ways of doing things. So the growth rate increases a lot once we have settled agriculture and cities and empires and so on.
And then in 1700, 1800, it steps up again by a really big factor — three or 10 or something like that — to the modern world, where we’re kind of used to the idea that technology is changing within people’s lifetimes: that by the time they die, things might look really quite different than they were when they were born, which definitely wasn’t the case before the industrial era.
So if you project forward, you don’t just have to think about growth continuing, but also the potential that we’ll get a third step change, where the rate of growth increases again compared to the industrial era. Earlier I said that if the economy grew 3.5% a year for 100 years, it would end up about 30-fold larger over that century. But if we go through a third phase shift like you’re describing, and average growth rates triple to 10.5% a year, then over the following 100 years we end up with a global economy that’s around 22,000 times larger than it was when it started — which is a totally wild impact that is clearly beyond our ability to visualise, except that the world would obviously be really unfamiliar, to say the least. Do you have any reaction to that?
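As a sanity check, the compounding arithmetic behind those figures can be run directly. A minimal sketch, using the 3.5% and 10.5% annual rates quoted above:

```python
# Compound growth: how many times larger an economy becomes after
# `years` of growth at a constant `annual_rate`.
def growth_factor(annual_rate: float, years: int) -> float:
    return (1 + annual_rate) ** years

# ~3.5%/year, the modern industrial-era trend mentioned above:
print(growth_factor(0.035, 100))   # ≈ 31, i.e. roughly the 30-fold figure
# ~10.5%/year, i.e. growth rates tripling after a third phase shift:
print(growth_factor(0.105, 100))   # ≈ 21,700, roughly the 22,000-fold figure
```

The striking part is that tripling the growth rate doesn't triple the century-end outcome; because the rate sits in the exponent, it multiplies it by a factor of several hundred.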
Ian Morris: Again, these sorts of numbers, it’s very difficult to imagine what this means for us, but I think the basic premise of what you’re saying does seem to be borne out by the historical record. And when I started writing my Why the West Rules book in the late 2000s, it dawned on me pretty quickly that one reason why historians often hadn’t seen just how long term you need to look — they hadn’t grasped that you really have got to look at thousands and thousands of years to see what’s going on — is that if you think linearly about long-term change, you can’t see it happening.
And so when I was drawing graphs of my social development scores for Eastern and Western societies, if I just plotted them on a linear scale, with years along the bottom of the graph and points on the index on the vertical axis, basically nothing happens on that graph until you get to about 1800. The lines look like they’re at zero the whole time until about 200 years ago, when they suddenly leap off the bottom and turn almost 90 degrees: they go straight up.
If you plotted it instead on a log-linear graph — again with dates along the bottom, presented in the usual way, but with the vertical axis now in 10-fold increments of the development scores — then you see that, going back thousands of years, development was actually rising exponentially: the exponent was just really small, so it took a very long time for anything to happen.
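That flat-then-vertical effect can be reproduced with a toy index. This sketch uses made-up, era-dependent growth rates (purely illustrative assumptions, not Morris's actual social development scores) to show why the same series hugs zero on a linear axis but rises steadily on a log one:

```python
import math

# Toy development index with piecewise-constant growth rates per era.
# The era boundaries and rates are illustrative assumptions only.
ERAS = [(-10000, 0.0001),  # foraging: near-negligible annual growth
        (-3000, 0.0005),   # farming: faster, but still glacial
        (1800, 0.02)]      # industrial: change visible within lifetimes

def log_index(year):
    """Natural log of the toy index at `year` CE (index = 1 in 10,000 BCE)."""
    total = 0.0
    for i, (start, rate) in enumerate(ERAS):
        if year <= start:
            break
        end = min(year, ERAS[i + 1][0]) if i + 1 < len(ERAS) else year
        total += rate * (end - start)
    return total

final = math.exp(log_index(2000))
for y in (-10000, -5000, 0, 1000, 1800, 2000):
    linear = math.exp(log_index(y)) / final  # height on a linear axis
    print(f"{y:>7}: linear={linear:.3f}  log={log_index(y):.2f}")
# In the linear column, everything before ~1800 sits within a couple of
# percent of zero, so the plotted line looks flat and then shoots up;
# the log column rises steadily across the whole 12,000-year span.
```

The same data, two plots: the linear axis compresses all pre-industrial growth into an invisible sliver, while the log axis spreads it out so the long exponential run is visible.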
And the book I’m working on now is going to focus much more on the early periods. I realised that if you want to go back millions of years and look at these phenomena, you’ve really got to draw it on a log–log graph, where both axes are in 10-fold increments. That makes it really obvious, like what you were just saying, that it’s not only that development has been growing exponentially — the exponent has been growing as well. So it’s not just that we’re accelerating; the rate at which we’re accelerating is itself accelerating.
So whether or not that is going to give us a world where the economy is 22,000 times bigger than now 100 years from now, this is the way we’ve got to think about it. I think all of our preconceptions about how the world works are going to be swept away just as abruptly as they were during the Agricultural Revolution, and just as abruptly as they were during the Industrial Revolution.