Why do you think that? Personally, I’ve lost several bets. For example, I bet NO on “Will an AI win a gold medal on the IOI (competitive programming contest) before 2027?” and I’ve already lost that bet, 20 months before the start of 2027.
As a former IOI participant, that achievement feels amazing. As a software engineer, I absolutely find AI tools useful, practical, and economically valuable.
If AI is having an economic impact by automating software engineers’ labour or augmenting their productivity, I’d like to see economic data, firm-level financial data, or a scientific study that shows this.
Your anecdotal experience is interesting, for sure, but the other people who write code for a living whom I’ve heard from have said, more or less, that AI tools save them the time it would take to copy and paste code from Stack Exchange, and that’s about it.
I think AI’s achievements on narrow tests are amazing. I think AlphaStar’s success in competitive StarCraft II was amazing. But six years after AlphaStar and ten years after AlphaGo, have we seen any big real-world applications of deep reinforcement learning or imitation learning that produce economic value, or that do something else practically useful in a way we can measure? Not that I’m aware of.
Instead, we’ve seen companies working on real-world applications of AI, such as Cruise, shut down. The current hype about AGI reminds me a lot of the hype about self-driving cars that I heard over the last ten years, from around 2015 to 2025. From 2017 to 2022, the rhetoric about solving Level 4/5 autonomy was extremely aggressive and optimistic. In the last few years, there have been signs that some people in the industry are giving up, Cruise’s closure being one example.
Similarly, several companies, including Tesla, Vicarious, and Rethink Robotics, have tried to automate factory work and failed.
Other companies, like Covariant, have had modest success on relatively narrow robotics problems, like sorting objects into boxes in a warehouse, but nothing revolutionary.
The situation is complicated and the truth is not obvious, but it’s too simple to say that predictions about AI progress have overall been too pessimistic or too conservative. (I’m only thinking about recent predictions, but one of the first predictions about AI progress, made in 1956, was wildly overoptimistic.[1])
I wrote a post here and a quick take here where I give my other reasons for skepticism about near-term AGI. That might help fill in more information about where I’m coming from, if you’re curious.
[1] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
The economic data seems to depend on one’s point of view. I’m no economist and I certainly can’t prove to you that AI is having an economic impact. Its use is growing quickly, though: Statistics on AI market size
It’s also important, I think, to distinguish between AI capabilities and AI use. The AI-2027 text argues that a select few AI capabilities matter most, namely those related to software and AI engineering. These will drive the recursive improvements; changes to other parts of the industry are downstream of that. Both our viewpoints seem to be consistent with this model: I see rapidly increasing capabilities in software, and you see that other fields have not been so affected yet.
I’ll finish with yet another anecdote, because it happened just yesterday. I was on a mountain hike with my nephew (11 years old). He proudly told me that they had been given a difficult math task in school, and that “I was one of the few that could solve it without ChatGPT.”
It’s an anecdote, of course. At the same time, the effects of AI on education seem to be large, and changes in education probably lead to changes in industry.
Quote:
The economic data seems to depend on one’s point of view. I’m no economist and I certainly can’t prove to you that AI is having an economic impact. Its use grows quickly though: Statistics on AI market size
This is confusing two different concepts. Revenue generated by AI companies or by AI products and services is a different concept from AI’s ability to automate human labour or augment the productivity of human workers. By analogy, video games (another category of software) generate a lot of revenue, but they automate no human labour and don’t augment the productivity of human workers.
LLMs haven’t automated any human jobs and the only scientific study I’ve seen on the topic found that LLMs slightly reduced worker productivity. (Mentioned in a footnote to the post I linked above.)
If you’re interested in studies that evaluate the impact of LLMs on productivity, I can recommend the blog of Ethan Mollick. For example, this post from September 2023: https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged
It found that consultants with AI access outperformed consultants without AI access, on most dimensions that were measured. Ethan has since participated in several other studies on the industry adoption of AI.