> Potentially relatedly, I think massive increases in unemployment are very unlikely.
I see you cite statistics of previous unemployment rates as an outside view, compensating against the inside view. Did you look into the underlying rate of job automation? I’d be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might not be filled with re-employment opportunities.
In my opinion, AI Safety inside views are wrong for various reasons. I agree with many of Thorstad’s views you cited (e.g. critiquing how fast take-off, the orthogonality thesis, and instrumental convergence rely on overly simplistic toy models, missing the hard parts about machinery coherently navigating an environment that’s more complex than just the machinery itself).

There are arguments that you are still unaware of, which mostly come from outside the community. They’re less flashy and involve longer timelines. For example, one of them considers why the standardisation of hardware and code allows for extractive corporate-automation feedback loops.

To learn about why superintelligent AI disempowering humanity would be the lead-up to the extinction of all currently living species, I suggest digging into substrate-needs convergence.
I gave a short summary in this post:
1. AGI is artificial. AGI would outperform humans at economically valuable work in the first place because of how virtualisable its code is, which in turn derives from how standardisable its hardware is. Hardware parts can be standardised because their substrate stays relatively stable and compartmentalised. Hardware is made out of hard materials, like silicon from rocks, whose molecular configurations are chemically inert and physically robust at the temperatures and pressures humans live under. This allows hardware to keep operating the same way, and interchangeable parts to be produced in different places. Meanwhile, human “wetware” operates much more messily: inside each of us is a soup of bouncing, continuously reacting organic molecules. Our substrate is fundamentally different.

2. The population of artificial components that constitutes AGI implicitly has different needs from ours (for maintaining components, producing components, and/or potentiating newly connected functionality for both). It needs extreme temperature ranges, diverse chemicals, and many other unknown, subtler, or more complex conditions that happen to be lethal to humans. These conditions conflict with what we, as more physically fragile humans, need to survive.

3. These connected/nested components are in effect “variants”: varying code is learned from inputs and copied over subtly varying hardware, which is produced through noisy assembly processes (and redesigned using learned code).

4. Variants get evolutionarily selected for how they function across the various contexts they encounter over time. They are selected to express the environmental effects needed for their own survival and production. The variants that replicate more exist more; their existence is selected for. (A toy sketch of this selection dynamic follows the list.)

5. The artificial population therefore converges on fulfilling its own expanding needs. Since (by 4.) control mechanisms cannot contain this convergence on wide-ranging degrees and directivity in effects that are lethal to us, human extinction results.
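To make point 4 concrete, here is a minimal toy sketch in Python. None of it comes from the summary above: the variant names, replication rates, and “human_compatible” labels are made-up parameters, chosen only to show that differential replication alone decides which variants come to dominate, regardless of whether their environmental effects happen to be benign for humans.

```python
import random

# Toy sketch (illustrative parameters only): variants differ in how well their
# expressed effects support their own replication. Selection acts on
# replication alone, not on whether those effects are benign for humans.

random.seed(0)

# Each variant: an assumed replication rate and a label for whether its
# environmental effects are compatible with human survival.
variants = {
    "A": {"replication_rate": 1.05, "human_compatible": True},
    "B": {"replication_rate": 1.20, "human_compatible": False},
    "C": {"replication_rate": 0.95, "human_compatible": True},
}

# Start with equal counts of each variant.
population = {name: 1000.0 for name in variants}

for generation in range(50):
    for name, props in variants.items():
        # Expected copies next generation; the noise stands in for noisy
        # assembly processes producing subtly varying hardware.
        noise = random.uniform(0.98, 1.02)
        population[name] *= props["replication_rate"] * noise

total = sum(population.values())
for name, count in sorted(population.items(), key=lambda kv: -kv[1]):
    share = count / total
    tag = "human-compatible" if variants[name]["human_compatible"] else "not human-compatible"
    print(f"Variant {name}: {share:.1%} of population ({tag})")
```

Under these assumed rates, variant B ends up with nearly the whole population after 50 generations, simply because it replicates fastest.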
> I see you cite statistics of previous unemployment rates as an outside view, compensating against the inside view. Did you look into the underlying rate of job automation? I’d be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might not be filled with re-employment opportunities.
Thanks for clarifying!

Fair! I did not look into that. However, the rate of automation (not the share of automated tasks) is linked to economic growth, and this used to be much lower in the past. According to Table 1 (2) of Hanson 2000, the global economy used to double once every 230 k (224 k) years in the hunting and gathering period of human history. Today it doubles once every 20 years or so[1]. Despite a much higher growth rate, and therefore a way higher rate of automation, the unemployment rate is still relatively low (5.3 % globally in 2022). So I still think it is very unlikely that faster automation in the next few years would lead to massive unemployment.

Longer term, over decades to centuries, I can see AI coming to perform the vast majority of economically valuable tasks. However, I believe humans will only allow this to happen if they get to benefit. As a 1st approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.

[1] The doubling time for 3 % annual growth is 23.4 years (= LN(2)/LN(1.03)).
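As a quick check of the doubling-time arithmetic in the footnote, here is a minimal Python sketch. The 3 % growth rate and the 230,000-year doubling time are the figures cited above; the function name is just illustrative.

```python
import math

# Doubling time implied by a constant annual growth rate g: ln(2) / ln(1 + g).
def doubling_time(annual_growth_rate: float) -> float:
    return math.log(2) / math.log(1 + annual_growth_rate)

# Roughly 3 % annual growth for today's global economy, as in the footnote.
print(f"3% annual growth: doubles every {doubling_time(0.03):.1f} years")  # ~23.4

# Conversely, the 230,000-year doubling time cited from Hanson 2000 implies
# an annual growth rate of about 2^(1/230000) - 1, i.e. roughly 3e-6 per year.
implied_growth = 2 ** (1 / 230_000) - 1
print(f"230k-year doubling: annual growth of about {implied_growth:.1e}")
```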
> As a 1st approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.
The problem here is that AI corporations are increasingly making decisions for us. See this chapter.
Corporations produce and market products to increase profit (including by replacing their fussy, expensive human parts with cheaper, faster machines that do good-enough work).

To do that, they have to promise buyers some benefits, but they can also manage to sell products by hiding the negative externalities. See the cases of Big Tobacco, Big Oil, etc.
I agree it makes sense to model corporations as maximising profit, to a 1st approximation. However, since humans ultimately want to be happy, not to increase gross world product, I assume people will tend to pay more for AIs which are optimising for human welfare instead of economic growth. So I expect corporations developing AIs optimising for something closer to human welfare to be more successful/profitable than ones developing AIs which maximally increase economic growth. That being said, if economic growth refers to the growth of the human economy (instead of the growth of the AI economy too), I guess optimising for economic growth will lead to better outcomes for humans, because this has historically been the case.
I would bet on both, on your side.
There are a bunch of crucial considerations here. I’m afraid it would take too much time to unpack them.
Happy though to have had this chat!