I’m not an economist, but the general consensus among the economists I have spoken to is that different kinds of bubbles (such as the dot-com bubble) are commonplace and natural, and even large companies make stupid mistakes that affect their stock hugely.
Anecdotally, there are a lot of small companies that are clearly overvalued, such as the Swedish startup Lovable, which recently reached a valuation of $6.6 billion. That is insane for a startup whose only product is a wrapper around another company’s LLM, in a space where every AI lab has its own coding tool. If people are willing to invest money in that, I’d assume they’d invest even more in larger companies that actually do have a product, even if it is overhyped. Overvaluation leads to a cycle in which the company must keep overpromising to avoid a market correction, which in turn leads to even more overvaluation.
Again, I’m not an economist, so all of this comes with the caveat that I might misunderstand some market mechanisms. But what I am is an AI researcher. I would be more than willing to believe that the investors are right, if they provided evidence. But my experience talking with people who are investing seriously in AI is that they don’t understand the tech at all. At my day job, I see management that wants AI to do things it cannot do and allocates resources to hopeless projects. At investor-sponsored events, I see people basically handing money to any company that seems reasonable enough (e.g., employing qualified people, having experience in doing business), regardless of the AI project it is proposing. I know a person whose company was bought for millions on the strength of a good-sounding idea, even though they have no product, no clients, and no research backing up the idea. Some people are just happy to get rid of their money.
There do exist reasonable investors too, but there is not much they can do about the valuations of private companies. And even though markets in theory allow instruments such as shorting that can correct overvaluations, these instruments are very risky, and I think such investors are more likely to stay away from AI companies altogether than to attempt to short a bubble when they don’t know when it will crash.
“despite those benchmarks not really being related to AGI in any way.” This is your judgment, but clearly it is not the judgment of some of the world’s leading scientific experts in the area. (Though there may well be other experts who agree with you.)
Are you sure? Even in that report, they carefully avoid using the term “AGI” and instead refer to “general-purpose AI systems”, a legal term used in the EU AI Act and generally thought to refer to current LLMs. Although both terms contain the word “general”, they mean very different things, which is something that the authors mention as well[1].
They also say quite plainly that they do not have any strong evidence. According to them, “[t]he pace and unpredictability of advancements in general-purpose AI pose an ‘evidence dilemma’ for policymakers.” (This PDF, page 14). They continue that due to the “rapid and unexpected” advancements, policymakers will have to make decisions “without having a large body of scientific evidence available”. They admit that “pre-emptive risk mitigation measures based on limited evidence might turn out to be ineffective or unnecessary.” Still, they claim that “waiting for stronger evidence of impending risk could leave society unprepared or even make mitigation impossible”.
This kind of text, with wordings such as “impending risk” even though they do not have strong evidence, adds an artificial layer of urgency that is not based on facts.
Even their “weak evidence” is not very good. They list several risks, but for us the most important is the one they call “loss of control”, under which they put X-risks. They reference several highly speculative papers on this issue that, even on a cursory reading, make several mistakes and are generally of quite low quality. Going through them would be worth a whole article, but as an example, the paper by Dung (2024) argues that “given that no one has identified an important capacity which does not improve with scaling, the currently best supported hypothesis is arguably that further scaling will bring about AGI.” That is just wrong: based on a recent survey, 76% of AI experts believe that scaling is not enough. The most important missing capability is continual learning, a feature current algorithms simply lack and that cannot be obtained by scaling. Dung does mention some objections to his argument, but omits the most obvious ones, which he should have been aware of. This is not the kind of argument we should take even as weak evidence.
Going back to the International AI Safety Report, it seems that Bengio et al. know that their evidence is not enough, and they have carefully written their report so that it doesn’t claim anything that is technically incorrect, but overall gives a much more alarmist tone than justified by their findings. If they really had better evidence, they would say so more clearly.
“General-purpose AI is not to be confused with ‘artificial general intelligence’ (AGI). The term AGI lacks a universal definition but is typically used to refer to a potential future AI that equals or surpasses human performance on all or almost all cognitive tasks. By contrast, several of today’s AI models and systems already meet the criteria for counting as general-purpose AI as defined in this report.” (This PDF, page 27).
I guess I’m just slightly confused about what economists actually think here since I’d always thought they took the idea that markets and investors were mostly quite efficient most of the time fairly seriously.
I don’t know much about this topic myself, but my understanding is that market efficiency is less about having the objectively correct view (or making the objectively right decision) and more about the difficulty of any individual investor making investments that systematically outperform the market. (An explainer page here helps clarify the concept.) So the concept, I think, is not that the market is always right, but that when the market is wrong (e.g. that generative AI is a great investment), you’re probably wrong too. Or, more precisely, that you’re unlikely to be systematically right more often than the market is right, and systematically wrong less often than the market is wrong.
As I understand it, there are differing views among economists on how efficient the market really is. And there is the somewhat paradoxical fact that people disagreeing with the market is part of what makes it as efficient as it is in the first place. For instance, some people worry that the rise of passive investing (e.g. via Vanguard ETFs) will make the market less efficient, since more people are just deferring to the market to make all the calls, and not trying to make calls themselves. If nobody ever tried to beat the market, then the market would become completely inefficient.
There is an analogy here to forecasting, with regard to epistemic deference to other forecasters versus herding that throws out outlier data and makes the aggregate forecast less accurate. If all forecasters just circularly updated until all their individual views were the aggregate view, surely that would be a big mistake. Right?
Do you have a specific forecast for AGI, e.g. a median year or a certain probability within a certain timeframe?
If so, I’d be curious to know how important AI investment is to that forecast. How much would your forecast change if it turned out the AI industry is in a bubble and the bubble popped, and the valuations of AI-related companies dropped significantly? (Rather than trying to specifically operationalize “bubble”, we could just defer the definition of bubble to credible journalists.)
There are a few different reasons you’ve cited for credence in near-term AGI — investment in AI companies, the beliefs of certain AI industry leaders (e.g. Sam Altman), the beliefs of certain AI researchers (e.g. Geoffrey Hinton), etc. — and I wonder how significant each of them is. I think each of these different considerations could be spun out into its own lengthy discussion.
I wrote a draft of a comment that addresses several different topics you raised, topic-by-topic, but it’s far too long (2000 words) and I’ll have to put in a lot of work if I want to revise it down to a normal comment length. There are multiple different rabbit holes to go down, like Sam Altman’s history of lying (which is why the OpenAI Board fired him) or Geoffrey Hinton’s belief that LLMs have near-human-level consciousness.
I feel like going deeper into each individual reason for credence in near-term AGI and figuring out how significant each one is for your overall forecast could be a really interesting discussion. The EA Forum has a little-used feature called Dialogues that could be well-suited for this.
I guess markets are efficient most of the time, but stock market bubbles do exist and are even common, which goes against the efficient market hypothesis. I believe this is a debated topic in economics, and I don’t know what the current consensus on it is.
My own experience points in the direction of an AI bubble, as cases like Lovable indicate that investors are overvaluing companies. I cannot explain such valuations other than by investors betting on things they do not understand. As I mentioned, anecdotally this seems to often be the case.
The report has many authors, some of whom may be much less concerned or think the whole thing is silly. I never claimed that Bengio and Hinton’s views were a consensus, and in any case, I was citing their views as evidence for taking the idea that AGI may arrive soon seriously, not their views on how risky AI is. I’m pretty sure I’ve seen them give relatively short timelines when speaking individually, but I guess I could be misremembering. For what it’s worth, Yann LeCun seems to think 10 years is about right, and Gary Marcus seems to think a guess of 10–20 years is reasonable: https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have