I see you cite statistics of previous unemployment rates as an outside view, compensating against the inside view. Did you look into the underlying rate of job automation? I’d be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might not be filled with re-employment opportunities.
Fair! I did not look into that. However, the rate of automation (as opposed to the share of automated tasks) is linked to economic growth, which was much lower in the past. According to Tables 1 and 2 of Hanson 2000, the global economy used to double once every 230 k and 224 k years, respectively, during the hunting and gathering period of human history. Today it doubles once every 20 years or so[1]. Despite a much higher growth rate, and therefore a much higher rate of automation, the unemployment rate is still relatively low (5.3 % globally in 2022). So I still think it is very unlikely that faster automation in the next few years would lead to massive unemployment.
Longer term, over decades to centuries, I can see AI coming to perform the vast majority of economically valuable tasks. However, I believe humans will only allow this to happen if they get to benefit. As a first approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.
[1] The doubling time for 3 % annual growth is 23.4 years (= ln(2)/ln(1.03)).
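For concreteness, here is a quick check of the doubling-time arithmetic in the footnote and of the comparison with the hunting-and-gathering era (a minimal sketch in Python; the 3 % growth rate and the roughly 230 k-year doubling time are simply the figures quoted above, not recomputed from source data):

```python
import math

# Doubling time in years for an economy growing at a constant annual rate g:
# (1 + g)^T = 2  =>  T = ln(2) / ln(1 + g)
def doubling_time(annual_growth_rate: float) -> float:
    return math.log(2) / math.log(1 + annual_growth_rate)

print(round(doubling_time(0.03), 1))          # 23.4 years, matching the footnote
# Ratio to the ~230 k-year doubling time quoted for the hunting and gathering era:
print(round(230_000 / doubling_time(0.03)))   # ~9808, i.e. roughly 10,000x faster today
```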
As a first approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.
The problem here is that AI corporations are increasingly making decisions for us. See this chapter.
Corporations produce and market products to increase profit (including by replacing their fussy, expensive human parts with cheaper, faster machines that do good-enough work).
To do that, they have to promise buyers some benefits, but they can also manage to sell products by hiding the negative externalities. See the cases of Big Tobacco, Big Oil, etc.
I agree it makes sense to model corporations as maximising profit, to a first approximation. However, since humans ultimately want to be happy, not to increase gross world product, I assume people will tend to pay more for AIs which are optimising for human welfare instead of economic growth. So I expect corporations developing AIs which optimise for something closer to human welfare to be more successful/profitable than ones developing AIs which maximally increase economic growth. That being said, if economic growth refers to the growth of the human economy (instead of the growth of the AI economy too), I guess optimising for economic growth will lead to better outcomes for humans, because this has historically been the case.
Thanks for clarifying!
There are a bunch of crucial considerations here. I'm afraid it would take too much time to unpack those.
Happy though to have had this chat!