Summary: My main intention in my previous comment was to share my perspective on why relying too much on the outside view is problematic (and, to be fair, that wasn’t clear because I addressed multiple points). While I think your calculations and explanation are solid, the general intuition I want to share is that people should place less weight on the outside view than this article seems to suggest.
I wrote this fairly quickly, so I apologize if my response is not entirely coherent.
Emphasizing the definition of unemployment you use is helpful, and I mostly agree with your model of total AI automation, where no one is necessarily looking for a job.
Regarding your question about my estimate of the median annual unemployment rate: I haven’t thought deeply enough about unemployment to place a bet or form a strong opinion on the exact percentage points. Thanks for the offer, though.
To illustrate the main point in my summary, I want to share a basic reasoning process I’m using.
Assumptions:
Most people are underestimating the speed of AI development.
The new paradigm of scaling inference-time compute (instead of training compute) will lead to rapid increases in AI capabilities.
We have not solved the alignment problem and don’t seem to be making progress quickly enough (among other unsolved issues).
An intelligence explosion is possible.
Worldview implications of my assumptions:
People should take this development much more seriously.
We need more effective regulations to govern AI.
Humanity needs to act now and ambitiously.
To articulate my intuition as clearly as possible: the lack of action we’re currently seeing from various stakeholders in addressing the advancement of frontier AI systems seems to be, in part, because they rely too heavily on the outside view for decision-making. While this doesn’t directly address the crux of your post (though it’s what prompted my comment in the first place), I believe it’s dangerous to place significant weight on an approach that attempts to make sense of developments for which we have no clear reference classes. AGI hasn’t happened yet, so I don’t understand why we should lean heavily on historical data to assess such a novel development.
What’s currently happening is that people are essentially throwing up their hands and saying, “Well, the probabilities are so low for this or that impact of AGI, so let’s just trust the process.” If people placed more weight on assumptions like those above, or reasoned more from first principles, the situation might look very different. To be clear: my issue is with putting too much weight on the outside view, not with your object-level claims.
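To make the “weight on the outside view” framing concrete, here is a minimal toy sketch (my own illustration, not something from this discussion): treat the final estimate as a linear pool of an inside-view (first-principles) probability and an outside-view base rate. All numbers below are hypothetical and only meant to show how much the chosen weight drives the conclusion.

```python
def blend(inside_view: float, outside_view: float, w_outside: float) -> float:
    """Linear pool: weighted average of an inside-view probability
    and an outside-view (reference-class) base rate."""
    return w_outside * outside_view + (1 - w_outside) * inside_view

# Hypothetical numbers for illustration only:
inside = 0.40   # a first-principles estimate of some disruptive AGI outcome
outside = 0.02  # a base rate drawn from historical reference classes

for w in (0.9, 0.5, 0.1):
    print(f"weight on outside view = {w}: blended estimate = {blend(inside, outside, w):.3f}")
```

With 90% weight on the outside view the blended estimate stays near the historical base rate (~0.06); with only 10% weight it lands near the inside-view estimate (~0.36). The disagreement here is not about either input number but about how much weight the outside view deserves when no good reference class exists.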
Thank you for your reply!
I am open to changing my mind on this.