Executive summary: The author argues it is very likely the AI industry is in a bubble, citing rising “bubble” sentiment, circular financing, weak realized productivity and profitability gains, and technical limits that make it hard for capabilities or business models to catch up with valuations.
Key points:
The author proposes operational tests for an AI bubble, including sustained stock declines for firms like Nvidia/Microsoft/Google or consensus judgments from outlets (WSJ, FT, Bloomberg, Economist, NYT) or expert surveys.
Sentiment appears to be tipping toward “bubble,” with BofA’s fund-manager survey moving from 41% (Sep 2025) to 54% (Oct 2025) saying AI stocks are in a bubble, alongside public claims from Jared Bernstein, Jim Covello, Jeremy Grantham, and Michael Burry.
The author cites “circular financing” between AI labs and cloud providers (e.g., OpenAI receiving billions from Microsoft and spending most of it back on Microsoft cloud), as reported by the New York Times and Bloomberg.
Reported business impact is small: McKinsey finds ~80% of companies see no significant top- or bottom-line gains from genAI; MIT Media Lab and BCG each report about 5% of firms achieving real returns; S&P Global survey data show 42% of companies abandoned most AI pilots by end-2024.
Productivity results are mixed: a call-center RCT shows large gains for less-skilled agents (~30% more issues/hour) but little or negative effects for the most skilled; coding studies show a pooled 26.08% increase in completed tasks across three company RCTs, while a METR RCT finds AI tools increased completion time by 19% for experienced open-source developers.
The author argues capability scaling is running into limits: data exhaustion around 2028 (Epoch AI), Sutskever’s claim that pre-training gains have plateaued, Amodei’s shift toward RL “chain-of-thought” scaling, Toby Ord’s analysis that RL would require ~1,000,000× more compute for a GPT-level boost and is infeasible, inference scaling raises ongoing costs, and fundamental LLM limits (lack of continual learning, data inefficiency, poor generalization, scarce agentic data, unsolved video learning) make current valuations hard to justify without major breakthroughs.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Accurate summary, SummaryBot!