One important point in the post — illustrated with the example of the dot com boom and bust — is that it’s madness to just look at a trend and extrapolate it indefinitely. You need an explanatory theory of why the trend is happening and why it might continue or why it might stop. In the absence of an explanatory understanding of what is happening, you are just making a wild, blind guess about the future.
(David Deutsch makes this point in his awesome book The Beginning of Infinity and in one of his TED Talks.)
A pointed question which Ege Erdil does not ask in the post, but should: is there any hard evidence of AI systems invented within the last 5 years or even the last 10 years doing any labour automation or any measurable productivity augmentation of human workers?
I have looked and I have found very little evidence of this.
One study I found had mixed results: Brynjolfsson, Li, and Raymond's "Generative AI at Work". It looked at the use of an LLM-based tool to aid people working in customer support, which seems to me like it should be one of the easiest kinds of jobs to automate using LLMs. The study found that the tool increased productivity for new, inexperienced employees but decreased productivity for experienced employees who already knew the ins and outs of the job:
These results are consistent with the idea that generative AI tools may function by exposing lower-skill workers to the best practices of higher-skill workers. Lower-skill workers benefit because AI assistance provides new solutions, whereas the best performers may see little benefit from being exposed to their own best practices. Indeed, the negative effects along measures of chat quality—RR [resolution rate] and customer satisfaction—suggest that AI recommendations may distract top performers or lead them to choose the faster or less cognitively taxing option (following suggestions) rather than taking the time to come up with their own responses.
If the amount of labour automation or productivity improvement from LLMs is zero or negative, then naively extrapolating this trend forward would mean full labour automation by AI is an infinite amount of time away. But of course I’ve just argued why these kinds of extrapolations are a mistake.
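To make the naive-extrapolation point concrete, here is a toy sketch (the function name and all numbers are hypothetical, purely for illustration): under a straight-line extrapolation, if the measured annual gain in the automated share of labour is zero or negative, full automation is never reached.

```python
def years_until_full_automation(current_share: float, annual_gain: float) -> float:
    """Naive linear extrapolation of the share of labour automated (0.0 to 1.0)."""
    if annual_gain <= 0:
        # A flat or declining trend never reaches 1.0 under linear extrapolation.
        return float("inf")
    return (1.0 - current_share) / annual_gain

# Hypothetical numbers, for illustration only:
print(years_until_full_automation(0.01, 0.005))  # ~198 years at a small positive annual gain
print(years_until_full_automation(0.01, 0.0))    # inf: zero measured gain
```

This is exactly the kind of mechanical extrapolation the dot com example warns against; the point of the sketch is only that the naive method's answer swings from centuries to infinity depending on an input we cannot currently measure well.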
It continually strikes me as odd that people write 3,000-word, 5,000-word, and 10,000-word essays on AGI and don’t ask fundamental questions like this. You’d think if the trend you are discussing is labour automation by AI, you’d want to see if AI is automating any labour in a way we can rigorously measure. Why are people ignoring that obvious question?
Nvidia revenue is a really bad proxy for AI-based labour automation or for the productivity impact of AI. It’s a bad proxy for the same reason capital investment into AI would be a bad proxy. It measures resources going into AI (inputs), not resources generated by AI (outputs).
The basic reason for the trend continuing so far is that Nvidia et al. have diverted normal compute expenditures into the AI boom.
I agree that the trend will stop, and that it will stop around 2027–2033 (this is where my uncertainty is widest). Once that happens, the probability of getting AGI soon will go down quite a bit (if AGI hasn't arrived by then).
I don’t understand what you’re trying to say here. By “the trend”, do you mean Nvidia’s revenue growth? And what do you mean by “have diverted normal compute expenditures into the AI boom”?
By "the trend" I mean the very fast increases in compute dedicated to AI, and by "diverted" I mean that fabs and chip manufacturers have shifted their sales from other customers to AI companies.
Here’s the link to the original post: https://epochai.substack.com/p/the-case-for-multi-decade-ai-timelines
I still don’t follow. What point are you trying to make about my comment or about Ege Erdil’s post?
I’m trying to identify why the trend has lasted, so that we can predict when the trend will break down.
That was the purpose of my comment.