Consider just the magnitude of the hammer that is being applied to this situation: it’s going from millions of scientists and engineers and entrepreneurs to billions and trillions on the compute and AI software side. It’s just a very large change.
You should also be surprised if such a large change doesn’t affect other macroscopic variables in the way that, say, the introduction of hominids has radically changed the biosphere, and the Industrial Revolution greatly changed human society.
- Carl Shulman
The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they’re creating.
Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost hundreds of dollars, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.
It’s a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.
It’s a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.
It’s a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and prompting a rush to build billions of them and cash in.
It’s a world where, overnight, the number of human beings becomes irrelevant to rates of economic growth, which is now driven by how quickly the entire machine economy can copy all its components. Given how long it takes complex biological systems to replicate themselves (some can do so in days), a doubling every few months could be a conservative estimate.
It’s a world where any country that delays participating in this economic explosion risks being outpaced and ultimately disempowered by rivals whose economies grow to be 10-fold, 100-fold, and then 1,000-fold as large as their own.
As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine ‘people’ to help them with every aspect of their lives.
And with growth rates this high, it doesn’t take long to run up against Earth’s physical limits — in this case, the toughest to engineer your way out of is the Earth’s ability to release waste heat. If this machine economy and its insatiable demand for power generates more heat than the Earth radiates into space, then it will rapidly heat up and become uninhabitable for humans and other animals.
This eventually creates pressure to move economic activity off-planet. There’s little need for computer chips to be on Earth, and solar energy and minerals are more abundant in space. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.
These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop artificial general intelligence that could accomplish everything that the most productive humans can, using the same energy supply?
In today’s episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking:
- If we’re heading towards the above, why is economic growth slow now rather than already increasing?
- Why have computers and computer chips had so little effect on economic productivity so far?
- Are self-replicating biological systems a good comparison for self-replicating machine systems?
- Isn’t this just too crazy and weird to be plausible?
- What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
- Might there not be severely declining returns to bigger brains and more training?
- Wouldn’t humanity get scared and pull the brakes if such a transformation kicked off?
- If this is right, how come economists don’t agree and think all sorts of bottlenecks would hold back explosive growth?
Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
Highlights
Robot nannies
Carl Shulman: So I think maybe it was Tim Berners-Lee who gave an example, saying there will never be robot nannies: no one would ever want to have a robot take care of their kids. And I think if you actually work through the hypothetical of a mature robotic and AI technology, that winds up looking pretty questionable.
Think about what people want out of a nanny. One thing they might want is just availability: it’s better to have round-the-clock care and stimulation available for a child. And in education, one of the best-measured ways to improve educational performance is individual tutoring instead of large classrooms. So having continuous availability of individual attention is good for a child’s development.
And then we know there are differences in how well people perform as teachers and educators and in getting along with children. The very best teacher in the entire world, the very best nanny in the entire world today, is quite a bit preferable to the typical outcome — and the performance of the AI robotic system is going to be better still on that front. They’re wittier, they’re funnier, they understand the kid much better. Their thoughts and practices are informed by data from working with millions of other children. It’s super capable.
They’re never going to harm or abuse the child; they’re not going to get lazy when the parents are out of sight. The parents can set criteria about what they’re optimising: things like managing risks of danger, the child’s learning, the child’s satisfaction, how the nanny affects the relationship between child and parent. So you tweak a parameter to manage the degree to which the child winds up bonding with the nanny rather than the parent. And then the robot nanny optimises over all of these features very well, very determinedly, and just delivers everything superbly — while also providing fabulous medical care in the event of an emergency, and any physical labour as needed.
And just consider the amount you can buy. If you want 24/7 service for each child, that’s something you can’t provide in an economy of humans, because one human cannot work 24/7 taking care of someone else’s kids. At the least, you need a team of people who can sub off from each other, and that’s going to interfere with the relationship and the knowledge sharing and whatnot. You’re going to have confidentiality issues. The AI or robot can forget information that is confidential; a human can’t do that.
Anyway, we stack all these things with a mind that is super charismatic, super witty, and that can probably have a humanoid body. That’s something that technologically does not exist now, but in this world, with demand for it, I expect it would be met.
So basically, with most of the examples I see given of a task or job where human performance is just going to win because of human tastes and preferences, when I look at the stack of all of these advantages, and at the cost of a world dominated by nostalgic human labour, it looks doubtful. If incomes are relatively equal, then for every hour of these services you buy from someone else, you would have to work a similar amount to pay for it, and it just seems that isn’t true. Like, most people would not want to spend all day and all night working as a nanny for someone else’s child —
Rob Wiblin: — doing a terrible job —
Carl Shulman: — in order to get a comparatively terrible job done on their own kids by a human, instead of a being that is just wildly more suitable to it and available in exchange for almost nothing by comparison.
Key transformations after an AI capabilities explosion
Carl Shulman: Right now, human energy consumption is on the scale of 10^13 watts — that is, thousands of watts per human. Solar energy hitting the top of the atmosphere — not all of it gets down — is in the vicinity of 2 x 10^17 watts, so roughly 10,000 or 20,000 times our current world energy consumption reaches the Earth. If you are harvesting 5% or 10% of that successfully, with very high-efficiency solar panels, or otherwise coming close to the amount of energy use that can be sustained on the Earth, that’s enough for a million watts per person. And a human brain uses 20 watts, a human body uses 100 watts.
So if we consider robotics technology and computer technology that are at least as good as biology — where we have physical examples showing this is possible, because it’s been done — that budget means you could have, per person, an energy supply that can at any given time sustain 50,000 human brain equivalents of AI cognitive labour, or 10,000 human-scale robots. And if you consider smaller ones — say, insect-sized robots, or small AI models like current systems, including much smarter small models distilled from the gleanings of large models and with much more advanced algorithms — the numbers go higher still. On a per-person basis, that’s pretty extreme.
And then when you consider the cognitive labour being produced by those AIs, it gets more dramatic. So the capabilities of one human brain equivalent worth of compute are going to be set by what the best software in the world is. So you shouldn’t think of what average human productivity is today; think about, for a start, for a lower bound, the most skilful and productive humans. In the United States, there are millions of people who earn over $100 per hour in wages. Many of them are in management, others are in professions and STEM fields: software engineers, lawyers, doctors. And there’s even some who earn more than $1,000 an hour: new researchers at OpenAI, high-level executives, financiers.
An AI model running on brain-like efficiency computers is going to be working all the time. It does not sleep, it does not take time off, it does not spend most of its career in education or retirement or leisure. So if you do 8,760 hours of the year, 100% employment, at $100 per hour, you’re getting close to a million dollars of wages equivalent. If you were to buy that amount of skilled labour today that you would get from these 50,000 human brain equivalents at the high end of today’s human wages, you’re talking about, per human being, the energy budget on Earth could sustain more than $50 billion worth at today’s prices of skilled cognitive labour. If you consider the high end, the scarcer, more elite, higher compensated labour, then it’s even more.
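Carl's figures here can be checked with a quick back-of-envelope script. The inputs below are the rough round numbers quoted in the conversation (2 x 10^17 W of sunlight, a 5% harvest fraction, 8 billion people, 20 W per brain, $100/hour), not precise estimates:

```python
# Back-of-envelope check of the energy and wage arithmetic above.
# All inputs are the rough round numbers quoted in the conversation.

SOLAR_AT_EARTH_W = 2e17      # sunlight reaching Earth, watts
HARVEST_FRACTION = 0.05      # suppose 5% is captured
POPULATION = 8e9             # current human population
BRAIN_W = 20                 # human brain power draw, watts
BODY_W = 100                 # whole human body power draw, watts
WAGE_PER_HOUR = 100          # high-skill wage, dollars/hour
HOURS_PER_YEAR = 24 * 365    # 8,760 hours: no sleep, leisure, or retirement

watts_per_person = SOLAR_AT_EARTH_W * HARVEST_FRACTION / POPULATION
brain_equivalents = watts_per_person / BRAIN_W   # Carl rounds down to 50,000
robot_equivalents = watts_per_person / BODY_W    # Carl rounds down to 10,000
wage_value_per_person = brain_equivalents * WAGE_PER_HOUR * HOURS_PER_YEAR

print(f"{watts_per_person:,.0f} W per person")                   # 1,250,000
print(f"{brain_equivalents:,.0f} brain equivalents per person")  # 62,500
print(f"{robot_equivalents:,.0f} robot equivalents per person")  # 12,500
print(f"${wage_value_per_person / 1e9:.1f}B/yr per person")      # ~$54.8B
```

At a 5% harvest fraction this comes out to roughly $55 billion of skilled-labour value per person per year, consistent with the "more than $50 billion worth at today's prices" figure in the text.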
If we consider an even larger energy budget beyond Earth, there’s more solar energy and heat dissipation capacity in the rest of the solar system: about 2 billion times as much. If that winds up being used, because people keep building solar panels, machines, computers, until you can no longer do it at an affordable enough price and other resources to make it worthwhile, then multiply those numbers before by a millionfold, 100 millionfold, maybe a billionfold, and that’s a lot. If you have 50 trillion human brains’ worth of AI minds at very high productivity, each per human being, or perhaps a mass of robots, like unto trillions upon trillions of human bodies, and dispersed in a variety of sizes and systems. It is a society whose physical and cognitive, industrial and military capabilities are just very, very, very, very large, relative to today.
Objection: Shouldn’t we be seeing economic growth rates increasing today?
Rob Wiblin: You might expect an economic transformation like this to happen in a somewhat gradual or continuous way, where in the lead up to this happening, you would see economic growth rates increasing. So you might expect that if we’re going to see a massive transformation in the economy because of AGI in 2030 or 2040, shouldn’t we be seeing economic growth rates today increasing? And shouldn’t we maybe have been seeing them increase for decades as information technology has been advancing and as we’ve been gradually getting closer to this time?
But in reality, over the last 50 years, economic growth rates have been kind of flat or declining. Is that in tension with your story?
Carl Shulman: Yeah, you’re pointing to an important thing. When we double the population of humans in a place, ceteris paribus, we expect the economic output after there’s time for capital adjustments to double or more. So a place like Japan, not very much in the way of natural resources per person, but has a lot of people, economies of scale, advanced technology, high productivity, and can generate enormous wealth. And some places have population densities that are hundreds or thousands of times that of other countries, and a lot of those places are extremely wealthy per capita. By the example of humans, doubling the human labour force really can double or more economic output after capital adjustment.
For computers, that’s not the case. And a lot of this reflects the fact that thus far, computers have been able to do only a small portion of the tasks in the economy. Very early on in the history of computers, they got better than humans at serial, reliable arithmetic calculations — which you could do with an incredibly small amount of computation compared to the human brain, just because we’re really badly set up for multiplying and dividing lots of numbers. There used to be a job of being a human computer — I think there are films about them; it was a real occupation — but those jobs have gone away, because with the difference now in performance, you can get the work of millions upon millions of those human computers for basically peanuts.
But even though we now use billions of times as much in the way of that sort of calculation, it doesn’t mean that we get to produce a billion times the wages that were being paid to the human computers at that time, because there were diminishing returns in having more and more arithmetic calculations while other things didn’t keep up. And when we double the human population and capital adjusts, then you’re improving things on all of these fronts. So it’s not that you’re getting a tonne of enhancement of one kind of input, but it’s missing all of the other things that it needs to work with.
And so, as we see progress towards AI that can robustly replace humans, we should expect the share of tasks that computing can do to go up over time, and therefore the increase in revenue to the computer industry, or in economic value-add from computers per doubling of the amount of compute, to go way up. Historically, it’s been more like you double the amount of compute, and then you get maybe one-fifth of a doubling of the revenue of the computer industry. So if we think success at broad automation, human-substituting AI is possible, then we expect that to go up over time from one-fifth to one or beyond.
And then if you ask why would this be? One thing that can help make sense of that is to ask how much compute has the computing industry been providing historically? So I said that now, maybe an H100 that costs tens of thousands of dollars can give computation comparable to the human brain. But that’s after many, many years of Moore’s law, during which the amount of computation you could buy per dollar has gone up by billions of times and more.
So when you say, right now, if we add 10 million H100s to the world each year, then maybe we increase the computation in the world from 8 billion human brains’ worth to 8 billion and 10 million human brains, you’re starting to make a difference in total computation. But it’s pretty small. It’s pretty small, and so it’s only where you’re getting a lot more out of it per computation that you see any economic effect at all.
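To put a number on how small that difference is, a minimal sketch (taking, as Carl does above for illustration, one H100 as very roughly one human brain's worth of compute):

```python
# How much does a year of today's AI chip production add to the world's
# total 'brain-equivalent' compute, counting the 8 billion human brains?
# Assumes, loosely, one H100 ~= one human brain, as in the discussion.

human_brains = 8e9
h100s_per_year = 10e6  # "10 million H100s to the world each year"

added_share = h100s_per_year / (human_brains + h100s_per_year)
print(f"{added_share:.3%}")  # about 0.125% of the total
```

A ~0.1% addition to total brain-equivalent compute would be economically invisible even if the software were as capable per unit of compute as a human, which is the point being made.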
And going back further, you’re asking: why wasn’t having twice as many of these computers — with brains analogous to the brain of an ant or a flukeworm — doubling the economy? When you look at it like that, it doesn’t really seem surprising at all.
Objection: Declining returns to increases in intelligence?
Rob Wiblin: Another line of scepticism is this idea that, sure, we might see big increases in the size of these neural networks and big increases in the amount of effective lifespan or amount of training time that they’re getting — so effectively, they would be much more intelligent in terms of just the specifications of the brains that we’re training — but you’ll see massively declining returns to this increasing intelligence or this increasing brain size or this increasing level of training.
Maybe one way of thinking about that would be to imagine that we were designing AI systems to do forecasting into the future. Now, forecasting tens or hundreds of years into the future is notoriously very challenging, and human beings are not very good at it. You might expect that a brain that’s 100 times the size of the human brain and has much more compute and has been trained on all of the knowledge that humans have ever collected because it’s had millions of years of life expectancy, perhaps it could do a much better job of that.
But how much better a job could it really do, given just how chaotic events in the real world are? Maybe being really intelligent just doesn’t actually buy you the ability to do some of these amazing things, and you do just see substantially declining returns as brains become more capable than humans are.
Carl Shulman: Well, actually, from the arguments that we’ve discussed so far, I haven’t even really availed myself of much that would be impacted by that. So I’ll take weather forecasting. So you can expend exponentially more computing power to go incrementally a few more days into the future for local weather prediction, at the level of “Will there be a storm on this day rather than that day?” And yeah, if we scale up our economy by a trillionfold, maybe we can go add an extra week or so to that sort of short-term weather prediction, because it’s a chaotic system.
But that’s not impacting any of the dynamics that we talked about before. It’s not impacting the dynamic where, say, Japan, with a population many times larger than Singapore, can have a much larger GDP just duplicating and expanding. These same sorts of processes that we’re already seeing give you corresponding expansion of economic, industrial, military output.
And we have, again, the limits of just observing the upper peaks of human potential and then taking even quite narrow extrapolations of just looking at how things vary among humans, say, with differing amounts of education. And when you go from some high school education to a university degree, graduate degree, you can see like a doubling and then a quadrupling of wages. And if you go to a million years of education, surely you’re not going to see 10,000 or 100,000 times the wages from that. But getting 4x or 8x or 16x off of your typical graduate degree holder seems plausible enough.
And we see a lot of data in cases where we can do experiments and see, in things like go or chess, where we’ve looked out to sort of superhuman levels of performance and we can say, yeah, there’s room to gain some. And where you can substitute a bigger, smarter, better trained model evaluated fewer times for using a small model evaluated many times.
But by and large, this argument goes through just assuming you can get models to the upper bounds of human capacity that we know are possible. And the duplication argument really is unaffected by this: yes, weather prediction is something where you’ll not get a million times better, but you can make a million times as many physical machines, process correspondingly more energy, et cetera.
Objection: Could we really see rates of construction go up a hundredfold or a thousandfold?
Carl Shulman: So the very first thing to say is that that has already happened relative to our ancestors. So there was a time when there were about 10 million humans or relevant hominids hanging around on the Earth, and they had their stone hand axes and whatnot, but very little stuff. Today there’s 8 billion humans with a really enormous amount of stuff being produced. And so if you just say that 1,000 sounds like a lot, well, every numerical measure of the physical production of stuff in our society is like that compared to the past.
And on a per capita basis, does it sound crazy that when you have power plants that each supply the energy for 10,000 people, you build one of them per 10,000 people over some period of time? No, because the efforts to create them are also scaling up.
So as for how you can have a larger amount of construction when you have a larger population of robot workers and machines doing the constructing — I think that’s not something we should be super suspicious of.
There’s a different kind of objection which draws from how, in developed countries, there has been a tendency to restrict the building of homes, of factories, of power plants. This is a significant cost. You see, in some very restrictive cities like New York City or San Francisco, the price of housing rises to several times the cost of constructing it because of what are basically legal bans on local building. And for people immersed in the YIMBY-versus-NIMBY debates who think about all the economic losses from this, that’s very front of mind.
I don’t think this is reason for me not to expect explosive construction of physical stuff in this scenario though, and I’ll explain why. Even today we see, in places like China and Dubai, cities thrown up at incredible rates. There are places where intense construction is allowed, and there’s more of it when the payoffs are much higher. And when permitting building can result in additional revenue that is huge compared to the local government’s budget, they may go really out of their way to provide the regulatory situation that will attract the investment of international companies. And in the scenarios that we’re talking about, yes, enormous industrial output can be created relatively quickly in a location that chooses to become a regulatory haven.
So the United Arab Emirates built up Dubai and Abu Dhabi, and has been trying to expand this non-oil economy by just creating a place for it to happen and providing a favourable environment. And in a situation where, say, the United States is holding itself back from million-dollar-per-capita or $10-million-per-capita incomes by not allowing this construction, and the UAE can allow that construction locally and 100x their income, then I think they go ahead and do it. Seeing that sort of thing would also, I expect, encourage change in the more restrictive regulatory regimes.
And then AI and such can help on the governance front. Unlimited cheap lawyers make it easier to navigate horrible paperwork, and unlimited sophisticated AIs serving as bureaucrats, advisors to politicians, and advisors to voters make it easier to adjust these things.
But I think the central argument is that the places providing the regulatory space for this can make absolutely enormous profits, and potentially gain military dominance — and those are strong pressures to make way for some of this construction. And even within the scope of existing places that will allow you to build things, that goes very far.
Objection: “This sounds completely whack”
Rob Wiblin: OK, a different reason that some listeners might have for doubting that this is how things are going to play out is maybe not an objection to any kind of specific argument, or a specific objection to some technological question, but just the idea that this is a very cool story, but it sounds completely whack. And you might reasonably expect the future to be more boring and less surprising and less weird than this.
You’ve mentioned already one response that someone could have to this, which is that the present would look completely whack and insane to someone who was brought forward from 500 years ago. So we’ve already seen a crazy transformation through the Industrial Revolution that would have been extremely surprising to many people who existed before the Industrial Revolution. And I guess plausibly to hunter-gatherers, the states of ancient Egypt would look pretty remarkable in terms of the scale of the agriculture, the scale of the government, the sheer number of people and the density and so on. We can imagine that the agricultural revolution shifted things in a way that was quite remarkable and very different than what came before.
Is there any other kind of overall response that someone could give to a listener who’s sceptical on this on grounds that this is just too weird to be likely?
Carl Shulman: So building on some of the things you mentioned. So not only that our post-industrial society is incredibly rich, incredibly populous, incredibly dense, long-lived, and different in many other ways from the days of millions of hunter-gatherers on the Earth, but also, the rate of change is much higher. Things that might previously have been on a thousand-year timescale now happen on the scale of a couple of decades — for, say, a doubling of global economic output. And so there’s a history both of things becoming very different, but also of the rate of change getting a lot faster.
And I know you’ve had Tom Davidson, David Roodman, and Ian Morris on, along with some people with critical views, discussing this. Among physicists, the cosmologists, who take the big-picture view, actually tend to think more about these kinds of cases. And the historians who study big history — global history over very long stretches of time — tend to notice this.
So yeah, when you zoom out to the macro scale of history, in some ways it’s quite precedented to have these kinds of changes. And actually it would be surprising to say, “This is the end of the line. No further.” Even when we have the example of biological systems that show the ceilings of performance are much higher than where we’re at, both for replication times, for computing capabilities, and other object-level abilities.
And then you have these very strong arguments from all our models and accounts of growth that can really explain some of why you had the past patterns and past accelerations. They tend to indicate the same thing. Consider just the magnitude of the hammer that is being applied to this situation: it’s going from millions of scientists and engineers and entrepreneurs to billions and trillions on the compute and AI software side. It’s just a very large change. You should also be surprised if such a large change doesn’t affect other macroscopic variables in the way that, say, the introduction of hominids has radically changed the biosphere, and the Industrial Revolution greatly changed human society, and so on and so forth.
Income and wealth distribution
Rob Wiblin: One thing we haven’t talked about almost at all is income distribution and wealth distribution in this new world. We’ve been thinking about how, on average, we could support x number of AI employees for every person, given the amount of energy available and the number of people around now.
Do you want to say anything about how income would end up being distributed in this world? And should I worry that in this post-AI world, humans can’t do useful work, there’s nothing that they can do for any reasonable price that an AI couldn’t do better and more reliably and cheaper, so they wouldn’t be able to earn an income by working? Should I worry that we’ll end up with an underclass of people who haven’t saved any income and are kind of shut out of opportunities to have a prosperous life in this scenario?
Carl Shulman: I’m not worried about that issue of unemployment, meaning people can’t earn wages to support themselves, and indeed have a very high standard of living. Just as a very simple argument: right now governments redistribute a significant percentage of all of the output in their territories, and we’re talking about an expansion of economic output of orders of magnitude. So if total wealth rises a hundredfold, a thousandfold, and you just keep existing levels of redistribution and government spending, which in some places are already 50% of GDP, almost invariably a noticeable percentage of GDP, then just having that level of redistribution continue means people being hundreds of times richer than they are today, on average, on Earth.
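A toy version of that argument, with hypothetical round numbers rather than figures from the conversation:

```python
# If the redistributed share of GDP stays fixed while output grows a
# hundredfold, absolute transfers per person grow a hundredfold too.
# (Hypothetical round numbers for illustration only.)

gdp_per_person = 50_000   # hypothetical current output per person, $/yr
redistributed_pct = 30    # hypothetical share of GDP redistributed
growth_factor = 100       # "total wealth rises a hundredfold"

transfers_now = gdp_per_person * redistributed_pct // 100
transfers_after = transfers_now * growth_factor

print(transfers_now, transfers_after)  # 15000 1500000
```

The mechanism doesn't depend on the exact share: any fixed nonzero fraction of a hundredfold-larger economy dwarfs today's total income per person.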
And then if you include off-Earth resources going up another millionfold or billionfold, it’s a situation where the equivalent of social security, universal pension plans, or universal distributions of that sort — tax refunds, say — can give people what would now be billionaire levels of consumption. At the same time, a lot of old capital goods and old investments could have their value fall relative to natural resources, or to the entitlement to those resources, once you go through this transition.
So if it’s the case that a human being is a citizen of a state where they have any political influence, or where the people in charge are willing to continue spending even some portion, some modest portion of wealth on distribution to their citizens, then being poor does not seem like the kind of problem that people are facing.
You might challenge this on the point that natural resource wealth is unevenly distributed, and that’s true. At one extreme you have a place like Singapore, at something like 8,000 people per square kilometre. At the other end — you’re Australian and I’m Canadian, and I think those countries are at two and three people per square kilometre, something like that — a difference of more than a thousandfold relative to Singapore in terms of land per person. So you might think you have inequality there.
But as we discussed, most of the natural resources on Earth are actually not even in the current territory of any sovereign state. They’re in international waters. If heat emission is the limit on energy and materials harvesting on Earth, then that’s a global issue in the way that climate change is a global issue. And so if you wind up with heat emission quotas or credits being distributed to states on the basis of their human population, or relatively evenly, or based on prior economic contribution, or some mix of those things, those would be factors that could lead to a more even distribution on Earth.
And again, if you go off Earth, the magnitude of resources are so large that if space wealth is distributed such that each existing nation-state gets some share of that, or some proportion of it is allocated to individuals, then again, it’s a level of wealth where poverty or hunger or access to medicine is not the kind of issue that seems important.
#191 (Part 1) – The economy and national security after AGI (Carl Shulman on the 80,000 Hours Podcast)
We just published an interview: Carl Shulman on the economy and national security after AGI. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.
Episode summary
The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
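The arithmetic behind “a fraction of a cent worth of electricity per hour” is easy to check. A minimal sketch, assuming a retail electricity price of roughly $0.15 per kilowatt-hour (an assumption for illustration, not a figure from the episode):

```python
# Hourly electricity cost of running a 20-watt "brain".
BRAIN_POWER_W = 20       # watts, roughly the human brain's power draw
PRICE_PER_KWH = 0.15     # USD per kilowatt-hour; assumed retail rate

# 20 W for one hour is 0.02 kWh
energy_kwh_per_hour = BRAIN_POWER_W / 1000
cost_per_hour = energy_kwh_per_hour * PRICE_PER_KWH

print(f"${cost_per_hour:.4f} per hour")  # $0.0030 per hour
```

At these assumptions, an hour of “brain-equivalent” compute costs about a third of a cent, which is the contrast with hundreds of dollars of professional labour that drives the rest of the argument.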
Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they’re creating.
Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.
It’s a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.
It’s a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.
It’s a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and prompting a rush to build billions of them and cash in.
It’s a world where, overnight, the number of human beings becomes irrelevant to rates of economic growth, which is instead driven by how quickly the entire machine economy can copy all its components. Looking at how long it takes complex biological systems to replicate themselves (some can do so in days), doubling every few months could be a conservative estimate.
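To see what copying all its components every few months would imply, here is a small sketch of the annual growth factor for a few hypothetical doubling times (the specific doubling times are illustrative assumptions, not figures from the episode):

```python
# Annual growth factor implied by a fixed doubling time, assuming the
# machine economy simply doubles at that interval.
def annual_growth(doubling_time_months: float) -> float:
    return 2 ** (12 / doubling_time_months)

for months in (12, 6, 4, 2):
    print(f"doubling every {months:>2} months -> {annual_growth(months):,.0f}x per year")
# doubling every 12 months ->  2x per year
# doubling every  6 months ->  4x per year
# doubling every  4 months ->  8x per year
# doubling every  2 months -> 64x per year
```

Even the slower of these rates compounds quickly: at 8x per year, a rival economy becomes 10-fold, then 100-fold, then 1,000-fold larger within a few years.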
It’s a world where any country that delays participating in this economic explosion risks being outpaced and ultimately disempowered by rivals whose economies grow to be 10-fold, 100-fold, and then 1,000-fold as large as their own.
As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine ‘people’ to help them with every aspect of their lives.
And with growth rates this high, it doesn’t take long to run up against Earth’s physical limits. In this case, the toughest to engineer around is the planet’s capacity to radiate waste heat: if the machine economy and its insatiable demand for power generate more heat than the Earth can radiate into space, the planet will rapidly heat up and become uninhabitable for humans and other animals.
This eventually creates pressure to move economic activity off-planet. There’s little need for computer chips to be on Earth, and solar energy and minerals are more abundant in space. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.
These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop artificial general intelligence that could accomplish everything that the most productive humans can, using the same energy supply?
In today’s episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking:
If we’re heading towards the above, how come economic growth remains slow now and isn’t really increasing?
Why have computers and computer chips had so little effect on economic productivity so far?
Are self-replicating biological systems a good comparison for self-replicating machine systems?
Isn’t this just too crazy and weird to be plausible?
What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
Might there not be severely declining returns to bigger brains and more training?
Wouldn’t humanity get scared and pull the brakes if such a transformation kicked off?
If this is right, how come economists don’t agree and think all sorts of bottlenecks would hold back explosive growth?
Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious, or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
Highlights
Robot nannies
Key transformations after an AI capabilities explosion
Objection: Shouldn’t we be seeing economic growth rates increasing today?
Objection: Declining returns to increases in intelligence?
Objection: Could we really see rates of construction go up a hundredfold or a thousandfold?
Objection: “This sounds completely whack”
Income and wealth distribution