People in effective altruism or adjacent to it should make some public predictions or forecasts about whether AI is in a bubble.
Since the timeline of any bubble is extremely hard to predict and isn't the core issue, the time horizon for the bubble prediction could be quite long, say, 5 years. The point would not be to worry about the exact timeline but to get at the question of whether there is a bubble that will pop (say, before January 1, 2031).
If you know more about forecasting than I do, and especially if you can think of good ways to financially operationalize such a prediction, I'd encourage you to make a post about this.
[Edited on Nov. 17, 2025 at 3:35 PM Eastern to add: I wrote a full-fledged post about the AI bubble that can prompt a richer discussion. It doesn't attempt to operationalize the bubble question, but gets into the expert opinions and evidence. I also do my own analysis.]
For now, an informal poll:

What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble (as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist) that will pop before January 1, 2031?
My leading view is that there will be some sort of bubble pop, but with people still using genAI tools to some degree afterwards (like how people kept using the internet after the dot-com bust).
There's still major uncertainty on my part, because I don't know much about financial markets and am still highly uncertain about the level at which AI progress fully stalls.
I just realized the way this poll is set up is really confusing. You're currently at "50% 100% probability", which, when you look at it on the number line, looks like 75%. Not the best tool to use for such a poll, I guess!
Oh, sure. People will keep using LLMs.
I don't know exactly how you'd operationalize an AI bubble. If OpenAI were a public company, you could say its stock price falls by a certain amount. But private companies can control their own valuation (or the public perception of it) to a certain extent, e.g. by not raising more money, so that their last known valuation is still the one from their most recent funding round.
Many public companies like Microsoft, Google, and Nvidia are involved in the AI investment boom, so their stocks can be taken into consideration. You can also look at the level of investment and data centre construction.
I don't think it would be that hard to come up with reasonable resolution criteria; it's just that this is always a nitpicky part of forecasting, and I haven't spent any time on it yet.
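To give a sense of what I mean by a financial operationalization, here's a minimal sketch of one possible resolution criterion: a large average peak-to-trough drawdown across a basket of AI-exposed public stocks. The tickers, the 40% threshold, and the prices are all placeholder assumptions for illustration, not a considered proposal.

```python
# Sketch of one possible resolution criterion for "the AI bubble popped":
# did a basket of AI-exposed public stocks suffer a large average
# peak-to-trough drawdown before the resolution date? The tickers, the
# 40% threshold, and the prices below are placeholder assumptions.

def max_drawdown(prices: list[float]) -> float:
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak = prices[0]
    worst = 0.0
    for price in prices:
        peak = max(peak, price)
        worst = max(worst, (peak - price) / peak)
    return worst

def bubble_popped(price_series: dict[str, list[float]], threshold: float = 0.40) -> bool:
    """Resolve YES if the average drawdown across the basket exceeds the threshold."""
    drawdowns = [max_drawdown(series) for series in price_series.values()]
    return sum(drawdowns) / len(drawdowns) >= threshold

# Made-up daily closes, purely to show how the criterion would resolve:
example = {
    "MSFT": [400, 430, 410, 250, 260],
    "NVDA": [900, 1000, 950, 500, 550],
}
print(bubble_popped(example))  # True for these made-up numbers
```

A real criterion would also need to settle which companies count as "AI-exposed" and whether a broad market crash should resolve YES, which is exactly the nitpicky part.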
I'm not exactly sure about the operationalization of this question, but it seems like there's a bubble among small AI startups at the very least. The big players might be unaffected, however? My evidence for this is some mix of not seeing a revenue pathway for a lot of these companies that wouldn't require a major pivot, the low barriers to entry for larger players if a startup's product becomes successful, and having met a few people who work in AI startups who claim to be optimistic about earnings but can't really back that up.
I don't know much about small AI startups. The bigger AI companies have a problem because their valuations have increased so much and the investment they're making (e.g. into building datacentres) is reaching levels that feel unsustainable.
It's gotten to the point where AI investment, driven primarily by the large AI companies, has significant macroeconomic effects on the United States economy. The popping of an AI bubble could be followed by a U.S. recession.
However, in that case it would be a bit complicated to say whether the popping of the bubble had "caused" the recession, since there are a lot of other factors, such as tariffs. Macroeconomics and financial markets are complicated, and I know very little; I'm not nearly an expert.
I don't think small AI startups creating successful products, and then large AI companies copying them and outcompeting them, would count as a bubble. In that scenario the total amount of revenue in the industry would be about the same as if the startups had succeeded; it would just flow to the bigger companies instead.
The bubble question is about the industry as a whole.
To be fair, I do think there's also a significant chance of a larger bubble affecting the big AI companies. But my instinct is that a sudden fall in investment into small startups, with many of them going bankrupt, would get called a bubble in the media, and that that investment wouldn't necessarily just go into the big companies instead.
I put 30% on this possibility, maybe 35%. I don't have much more to say than "time horizons!", "look how useful they're becoming in my day job and personal life!", "look at the qualitative improvement over the last six years", and "we only need to automate machine learning research, which isn't the hardest thing to automate".
Worlds in which we get a bubble pop are worlds in which we don't get a software intelligence explosion, and in which either useful products come too late for the investment to sustain itself or there aren't really many more useful products beyond what we already have. (This is tied in with "are we getting TAI through the things LLMs make us able to do, or are able to do themselves, without fundamental insights?".)
I haven't done the sums myself, but do we know for sure that they can't make money without being all that useful, so long as a lot of people interact with them every day?
Is Facebook "useful"? Not THAT much. Do people pay for it? No, it's free. Instagram is even less useful than Facebook, which at least used to be genuinely good for organizing parties and pub nights. Does Meta make money? Yes. Does the equally useless TikTok make money? I presume so, yes. I think tech companies are pretty expert at monetizing things that have no user fee and aren't that helpful at work. There's already a massive user base for ChatGPT etc. Maybe they can monetize it even without it being THAT useful. Or maybe the sums just don't work out for that; I'm not sure. But clearly the market thinks they will make money in expectation. That's a boring reason for rejecting "it's a bubble" claims, and bubbles do happen, but beating the market at pricing shares genuinely is quite difficult, I suspect.
Of course, there could also be a bubble even if SOME AI companies make a lot of money. That's what happened with the dot-com bubble.
This is an important point to consider. OpenAI is indeed exploring how to put ads on ChatGPT.
My main source of skepticism about this is that the marginal revenue from an online ad is extremely low. That's normally fine, because the marginal cost of serving a webpage or loading a photo in an app or whatever is also extremely low. I don't have a good sense of the actual numbers here, but since a GPT-5 query is considerably more expensive to serve than a webpage, this could be a problem. (Also, that's just the marginal cost. OpenAI, like other companies, also has to amortize all its fixed costs over all its sales, whether they're ad sales or sales directly to consumers.)
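To make the concern concrete, here's a toy back-of-the-envelope comparison. Every figure is a placeholder assumption I'm making up for illustration, not an actual OpenAI or web-industry number:

```python
# Toy comparison of ad-supported web pages vs. ad-supported LLM queries.
# All numbers are made-up placeholder assumptions, not real data.

ad_revenue_per_impression = 0.005   # assumed: half a cent of ad revenue per page view or response
cost_per_webpage_served = 0.0001    # assumed: a hundredth of a cent to serve a static page
cost_per_llm_query = 0.01           # assumed: one cent of inference cost per LLM response

webpage_margin = ad_revenue_per_impression - cost_per_webpage_served
llm_margin = ad_revenue_per_impression - cost_per_llm_query

print(f"Webpage marginal profit per view:   ${webpage_margin:+.4f}")
print(f"LLM query marginal profit per view: ${llm_margin:+.4f}")
# Under these assumptions the webpage is profitable on the margin and the
# LLM query is not, and that's before amortizing fixed costs like training
# runs and data centre buildout.
```

The real question is whether inference costs fall (or ad revenue per response rises) enough to flip the sign of that second number.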
It's been rumoured/reported (not sure which) that OpenAI is planning to get ChatGPT to sell things to you directly. So, if you ask, "Hey, ChatGPT, what is the healthiest type of soda?", it will respond, "Why, a nice refreshing Coca-Cola® Zero Sugar of course!" This seems horrible. That would probably drive some people off the platform, but, who knows, it might be a net financial gain.
There are other "useless" ways companies like OpenAI could try to drive usage and monetize it, either via ads or paid subscriptions. Maybe if OpenAI leaned heavily into the whole AI "boyfriends/girlfriends" thing, that would somehow pay off. I'm skeptical, but we've got to consider all the possibilities here.
What do you make of the fact that METR's time horizon graph and METR's study on AI coding assistants point in opposite directions? The graph says: exponential progress! Superhuman coders! AGI soon! Singularity! The study says: overhyped product category, useless tool, tricks people into thinking it helps them when it actually hurts them.
Pretty interesting, no?
Yep, I wouldn't have predicted that. I guess the standard retort is: Worst case! Existing large codebase! Experienced developers!
I know there are software tools I use more than once a week that wouldn't have existed without AI models. They're not very complicated, but they'd have been annoying to code up myself, and I wouldn't have done it. I wonder if there's a slowdown in less harsh scenarios too, but running such a study probably isn't worth it for the value of information.
I dunno. I've done a bunch of calibration practice[1]; this feels like a 30%, so I'm calling 30%. My probability went up recently, mostly because some subjectively judged capabilities that I was expecting didn't start showing up.
My Metaculus calibration around 30% isn't great (I'm overconfident there), and I'm trying to keep that in mind. My Fatebook record is slightly overconfident in that range, and who can tell with Manifold.
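For anyone wondering what "calibration around 30%" means in practice, here's a minimal sketch of the kind of check these sites run: bucket your past forecasts by stated probability and compare the average stated probability in each bucket with the fraction that actually resolved YES. The sample forecasts below are invented for illustration.

```python
# Minimal calibration check: group past forecasts into coarse probability
# buckets and compare the mean stated probability with the observed
# frequency of YES resolutions. The sample forecasts are invented.

forecasts = [  # (stated probability, resolved YES?)
    (0.30, False), (0.25, False), (0.35, True), (0.30, False),
    (0.70, True), (0.75, True), (0.65, False), (0.70, True),
]

def bucket_label(p: float) -> str:
    return "20-40%" if p < 0.5 else "60-80%"

buckets: dict[str, list[tuple[float, bool]]] = {}
for prob, outcome in forecasts:
    buckets.setdefault(bucket_label(prob), []).append((prob, outcome))

for label, entries in sorted(buckets.items()):
    mean_prob = sum(p for p, _ in entries) / len(entries)
    hit_rate = sum(outcome for _, outcome in entries) / len(entries)
    print(f"{label}: stated {mean_prob:.0%} on average, resolved YES {hit_rate:.0%}")
# Overconfidence in the ~30% range shows up as a resolved-YES rate that
# differs noticeably from the ~30% you stated.
```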
There's a longer discussion to be had about that oft-discussed METR time horizons graph, one that warrants a post of its own.
My problem with how people interpret the graph is that they slip quickly and wordlessly from step to step in a chain of inferences that I don't think can be justified. The chain is something like:
AI model performance on a set of very limited benchmark tasks → AI model performance on software engineering in general → AI model performance on everything humans do
I don't think these inferences are justifiable.
I haven't thought about my exact probability too hard yet, but for now I'll just say 90% because that feels about right.