Thanks for the comment, Johan!
How much weight do you think one should allocate to the inside and outside views, respectively, in order to develop a comprehensive estimate of the potential future unemployment rate?
It is hard for me to answer this. It depends on the methodology used to produce the inside view estimate. If it is just a guess from someone working on AI safety, I would put very little weight on it. If it is the output of a detailed quantitative empirical model like Epoch AI's, I could, as a first approximation, ignore the estimates from my post (although I would have to check the model to know).
Especially because I think this ignores the apparent fact that the development of intelligent systems that are more capable than humans has never occurred in history. This fundamentally changes the game.
Task automation has been happening for a long time (with the unemployment rate still being low), and one can think about advanced AI as a continuation of that trend. In addition, the definition of unemployment I used requires both not having a job and actively looking for one. Sometime in the next few decades to centuries, I predict negligible human unemployment and roughly total AI automation (i.e. almost no human workers). I guess humans will just be happy letting the AIs do everything, and whoever wants to have a job (which will be a bit of a fake job, as AIs would be able to perform the tasks more efficiently) will also have the chance to do so. In other words, there will be basically no humans actively looking for a job and failing to find one, i.e. essentially no unemployment. More pessimistically, it is also possible to have almost total homelessness with negligible unemployment in a dystopian scenario where humans have given up looking for jobs because AIs are so much better, but the AIs are still kind enough to give humans a subsistence income.
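To make the definition concrete, here is a minimal Python sketch of the standard unemployment-rate calculation (an illustration with made-up numbers, not something from the post): jobless people who are not actively searching fall outside the labour force, so they do not show up in the rate.

```python
def unemployment_rate(employed: int, jobless_and_searching: int) -> float:
    """Unemployed = no job AND actively looking for one; jobless people who
    are not searching are outside the labour force and do not count."""
    labour_force = employed + jobless_and_searching
    return jobless_and_searching / labour_force if labour_force else 0.0

# Hypothetical near-total automation: out of 10,000 adults, 50 keep (possibly
# token) jobs, 1 is still searching, and the rest neither work nor search.
print(unemployment_rate(employed=50, jobless_and_searching=1))  # ~0.02, i.e. about 2 %
```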
I know you are not saying that the inside view doesn't matter, but I am concerned that a post like this anchors people toward a base rate that is a lot lower than what things will actually be like. It reinforces status quo bias.
According to Table 1 (2) of Hanson 2000, the global economy used to double once every 230 k (224 k) years in the hunting and gathering period of human history. Today it doubles once every 20 years or so[1]. Despite a much higher growth rate, the unemployment rate is still relatively low. So I do not think one can predict massive unemployment solely on the basis of AI boosting economic growth. Note I am discussing what could happen in the real world, not what could happen in the absence of any mitigation actions.
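As a quick check of the arithmetic in footnote [1], here is a minimal sketch assuming a constant annual growth rate compounded yearly:

```python
import math

def doubling_time(annual_growth_rate: float) -> float:
    """Years for an economy growing at a constant annual rate to double."""
    return math.log(2) / math.log(1 + annual_growth_rate)

print(doubling_time(0.03))  # ~23.4 years, matching footnote [1]
```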
I think it makes a lot of sense to reason bottom-up when thinking about topics like these, and I actually disagree with you a lot.
What is your median annual unemployment rate in the US in 2025, 2026 or 2027? If it is much higher than now, I am happy to set up a bet with you where:
I give you 10 k€ if the rate is higher than your median.
You give me 10 k€ if the rate is lower than your median, which I would donate to animal welfare interventions.
My medians are not far from the ones suggested by the historical data below, although I would want to think more about them if I were to bet 10 k€.
Thanks for engaging too!
[1] The doubling time for 3 % annual growth is 23.4 years (= LN(2)/LN(1.03)).
Thank you for your reply!
Summary: My main intention in my previous comment was to share my perspective on why relying too much on the outside view is problematic (and, to be fair, that wasn't clear because I addressed multiple points). While I think your calculations and explanation are solid, the general intuition I want to share is that people should place less weight on the outside view than this article seems to suggest.
I wrote this fairly quickly, so I apologize if my response is not entirely coherent.
Emphasizing the definition of unemployment you use is helpful, and I mostly agree with your model of total AI automation, where no one is necessarily looking for a job.
Regarding your question about my estimate of the median annual unemployment rate: I haven't thought deeply enough about unemployment to place a bet or form a strong opinion on the exact percentage points. Thanks for the offer, though.
To illustrate the main point in my summary, I want to share a basic reasoning process I'm using.
Assumptions:
Most people are underestimating the speed of AI development.
The new paradigm of scaling inference-time compute (instead of training compute) will lead to rapid increases in AI capabilities.
We have not solved the alignment problem and don't seem to be making progress quickly enough (among other unsolved issues).
An intelligence explosion is possible.
Worldview implications of my assumptions:
People should take this development much more seriously.
We need more effective regulations to govern AI.
Humanity needs to act now and ambitiously.
To articulate my intuition as clearly as possible: the lack of action we're currently seeing from various stakeholders in addressing the advancement of frontier AI systems seems to be, in part, because they rely too heavily on the outside view for decision-making. While this doesn't address the crux of your post (it is what prompted me to write my comment initially), I believe it's dangerous to place significant weight on an approach that attempts to make sense of developments we have no clear reference classes for. AGI hasn't happened yet, so I don't understand why we should lean heavily on historical data to assess such a novel development.
What's currently happening is that people are essentially throwing their hands up and saying, "Uh, the probabilities are so low for X or Y impact of AGI, so let's just trust the process." If people placed more weight on assumptions like those above, or reasoned more from first principles, the situation might look very different. Do you see? My issue is with putting too much weight on the outside view, not with your object-level claims.
I am open to changing my mind on this.