I don't know if/how much EA money should go to AI safety either. EAs are trying to find the single best thing, and it's very hard to know what that is, and many worthwhile things will fail that bar. Maybe David Thorstad is right, and small X-risk reductions have relatively low value because another X-risk will get us in the next few centuries anyway*. What I do think is that society as a whole spending some resources caring about the risk of AGI arriving in the next ten years is likely optimal, and that it's not more silly to do so than to do many other obviously good things. I don't actually give to AI safety myself, and I only work on AI-related stuff (forecasting etc.; I'm not a techy person) because it's what people are prepared to pay more for, and people being prepared to pay me to work on near-termist causes is less common, though it does happen. I myself give to animal welfare, not AI safety.
If you really believe that everyone putting money into OpenAI etc. will only see returns if they achieve AGI, that seems to me to be a point in favour of "there is a non-negligible risk of AGI in the next 10 years". I don't believe that, but if I did, that alone would significantly raise the chance I give to AGI within the next 10 years. But yes, they have some incentive to lie here, or to lie to themselves, obviously. Nonetheless, I don't think that means their opinion should get zero weight. For it to actually have been some amazing strategy for them to talk up the chances of AGI *because it attracted cash*, you'd have to believe they can fool outsiders with serious money on the line, and that this will be profitable for them in the long term, rather than crashing and burning when AGI does not arrive. I don't think that is wildly unlikely or anything; indeed, I think it is somewhat plausible, though my guess is Anthropic in particular believe their own hype. But it does require a fairly high amount of foolishness on behalf of other quite serious actors. I'm much more sure of "raising large amounts of money for stuff that obviously won't work is relatively hard" than I am of any argument about how far we are from AGI that looks at the direct evidence, since the latter sort of arguments are very hard to evaluate. I'd feel very differently here if we were arguing about a 50% chance of AGI in ten years, or even a 10% chance. It's common for people to invest in things that probably won't work but have a high pay-off if they do. But what you're saying is that Richard is wrong for thinking there is a non-negligible risk, because the chance is significantly under 1%. I doubt there are many takers for a "1 in 1000" chance of a big pay-off.
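To make the pay-off point concrete, here is a toy expected-value sketch (all numbers are made up purely for illustration; they are not anyone's actual estimates of stakes or returns):

```python
# Toy illustration: expected value scales linearly with the probability of success,
# so a ~10% chance of a huge pay-off and a ~1-in-1000 chance are very different
# propositions for an investor. All figures below are invented for illustration.
stake = 1_000_000_000        # hypothetical investment of $1B
payoff_multiple = 20         # hypothetical return multiple if the bet pays off

for p_success in (0.10, 0.001):
    expected_profit = p_success * stake * payoff_multiple - stake
    print(f"P(success) = {p_success:.1%}: expected profit = {expected_profit:+,.0f} USD")

# P(success) = 10.0%: expected profit = +1,000,000,000 USD
# P(success) = 0.1%: expected profit = -980,000,000 USD
```

The specific numbers do no work here; the point is just that "probably won't work but big pay-off" investing only makes sense when the probability is not too many orders of magnitude below one.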
It is of course not THAT unlikely that they are fooling the serious money: serious investors make mistakes, and even the stock market does. Nonetheless, being able to attract serious investors who are genuinely only investing because they think you'll achieve X, whilst simultaneously being under huge media attention and scrutiny, is a credible signal that you'll eventually achieve X.
I don't think the argument I've just given is all that definitive, because they have other incentives to hype, like attracting top researchers (who I think are probably easier to fool, because even if they are fooled about AGI, working at a big lab was probably good for them anyway; quite different from the people funding the labs, who, if fooled, just lose money). So it's possible that the people pouring serious money in don't take any of the AGI stuff seriously. Nonetheless, I trust "serious organisations with technical prowess seem to be trying to do this" as a signal to take something minimally seriously, even if they have some incentive to lie.
Similarly, if you really think Microsoft and Google have taken decisions that will crash their stock if AGI doesn't arrive, I think a similar argument applies: are you really sure you're better at evaluating whether there is a non-negligible chance that a tech will be achieved by the tech industry than Microsoft and Google are? Eventually, if AGI is not arriving from the huge training runs being planned for the near future, people will notice, and Microsoft and Google don't want to lose money 5 years from now either. Again, it's not THAT implausible that they are mistaken; mistakes happen. But you aren't arguing that there probably won't be AGI in ten years (a claim I actually strongly agree with!) but rather that Richard was way off in saying that it's a tail risk we should take seriously given how important it would be.
Slower progress on one thing than another does not mean no progress on the slower thing.
"despite those benchmarks not really being related to AGI in any way." This is your judgment, but clearly it is not the judgment of some of the world's leading scientific experts in the area. (Though there may well be other experts who agree with you.)
*Actually Thorstad's opinion is more complicated than that: he says that this is true conditional on X-risk currently being non-negligible, but he doesn't himself endorse the view that it is currently non-negligible, as far as I can tell.
I'm not an economist, but the general consensus among the economists I have spoken to is that different kinds of bubbles (such as the dot-com bubble) are commonplace and natural, and even large companies make stupid mistakes that affect their stock hugely.
Anecdotally, there are a lot of small companies that are clearly overvalued, such as the Swedish startup Lovable, which recently reached a valuation of $6.6 billion. That is an insane valuation for a startup whose only product is a wrapper around another company's LLM, in a space where every AI lab has its own coding tool. If people are willing to put money into that, I'd assume they'd invest even more in larger companies that actually do have a product, even if it is overhyped. Overvaluation leads to a cycle where the company must keep overpromising to avoid a market correction, which in turn leads to even more overvaluation.
Again, I'm not an economist, so all of this comes with the caveat that I might misunderstand some market mechanisms. But what I am is an AI researcher. I would be more than willing to believe that the investors are right, if they provided evidence. But my experience talking with people who are investing seriously in AI is that they don't understand the tech at all. At my day job, I talk to management that wants AI to do things it cannot do and allocates resources to hopeless projects. At investor-sponsored events, I see people basically handing money to any company that seems reasonable enough (e.g., employing qualified people, having experience in doing business), regardless of the AI project they are proposing. I know a person who got bought out for millions due to having a good-sounding idea, even though they have no product, no clients, and no research backing up their idea. Some people are just happy to get rid of their money.
There do exist reasonable investors too, but there is not much they can do about the valuations of private companies. And even though in theory markets allow things such as shorting that can correct overvaluations, these instruments are very risky, and I think these people are more likely to just stay away from AI companies as a whole than to attempt to short a bubble when they don't know when it will crash.
"despite those benchmarks not really being related to AGI in any way." This is your judgment, but clearly it is not the judgment of some of the world's leading scientific experts in the area. (Though there may well be other experts who agree with you.)
Are you sure? Even in that report, they carefully avoid using the term "AGI" and instead refer to "general-purpose AI systems", a legal term used in the EU AI Act and generally thought to refer to current LLMs. Although both terms contain the word "general", they mean very different things, which is something that the authors mention as well[1].
They also say quite plainly that they do not have any strong evidence. According to them, "[t]he pace and unpredictability of advancements in general-purpose AI pose an 'evidence dilemma' for policymakers." (This PDF, page 14). They continue that due to the "rapid and unexpected" advancements, policymakers will have to make decisions "without having a large body of scientific evidence available". They admit that "pre-emptive risk mitigation measures based on limited evidence might turn out to be ineffective or unnecessary." Still, they claim that "waiting for stronger evidence of impending risk could leave society unprepared or even make mitigation impossible".
This kind of wording, talking about "impending risk" even though they do not have strong evidence, adds an artificial layer of urgency to the issue that is not based on facts.
Even their "weak evidence" is not very good. They list several risks, but for us the most important is the one they call "loss of control", under which they put X-risks. They reference several highly speculative papers on this issue that, on a cursory reading, make several mistakes and are generally of quite low quality. Going through them would be worth a whole article, but as an example, the paper by Dung (2024) argues that "given that no one has identified an important capacity which does not improve with scaling, the currently best supported hypothesis is arguably that further scaling will bring about AGI." That is just wrong: based on a recent survey, 76% of AI experts believe that scaling is not enough. The most important missing capability is continual learning, a feature that current algorithms simply lack and that cannot be obtained by scaling. Dung does mention some objections to his argument, but omits the most obvious ones, which he should have been aware of. This is not the kind of argument we should take even as weak evidence.
Going back to the International AI Safety Report, it seems that Bengio et al. know that their evidence is not enough, and they have carefully written their report so that it doesn't claim anything that is technically incorrect, but overall it strikes a much more alarmist tone than their findings justify. If they really had better evidence, they would say so more clearly.
"General-purpose AI is not to be confused with 'artificial general intelligence' (AGI). The term AGI lacks a universal definition but is typically used to refer to a potential future AI that equals or surpasses human performance on all or almost all cognitive tasks. By contrast, several of today's AI models and systems already meet the criteria for counting as general-purpose AI as defined in this report." (This PDF, page 27).
I guess I'm just slightly confused about what economists actually think here, since I'd always thought they took the idea that markets and investors were mostly quite efficient most of the time fairly seriously.
I don't know much about this topic myself, but my understanding is that market efficiency is less about having the objectively correct view (or making the objectively right decision) and more about the difficulty of any individual investor making investments that systematically outperform the market. (An explainer page here helps clarify the concept.) So the concept, I think, is not that the market is always right, but that when the market is wrong (e.g. that generative AI is a great investment), you're probably wrong too. Or, more precisely, that you're unlikely to be systematically right more often than the market is right, and systematically wrong less often than the market is wrong.
As I understand it, there are differing views among economists on how efficient the market really is. And there is the somewhat paradoxical fact that people disagreeing with the market is part of what makes it as efficient as it is in the first place. For instance, some people worry that the rise of passive investing (e.g. via Vanguard ETFs) will make the market less efficient, since more people are just deferring to the market to make all the calls, and not trying to make calls themselves. If nobody ever tried to beat the market, then the market would become completely inefficient.
There is an analogy here to forecasting, with regard to epistemic deference to other forecasters versus herding that throws out outlier data and makes the aggregate forecast less accurate. If all forecasters just circularly updated until all their individual views were the aggregate view, surely that would be a big mistake. Right?
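As a toy sketch of that worry (made-up numbers, not a model of any real forecasting platform): assume each forecaster receives an independent noisy signal of the true value, but before reporting, everyone shrinks their forecast halfway toward a shared, somewhat stale consensus. The herded aggregate ends up noticeably worse than the simple average of independent forecasts.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 10.0        # the quantity being forecast (hypothetical)
STALE_CONSENSUS = 6.0    # a shared prior / last round's aggregate, somewhat off
N_FORECASTERS = 100
N_TRIALS = 2000

err_independent, err_herded = [], []
for _ in range(N_TRIALS):
    # each forecaster sees the truth plus independent noise
    signals = [TRUE_VALUE + random.gauss(0, 3) for _ in range(N_FORECASTERS)]
    independent_aggregate = statistics.mean(signals)
    # herding: everyone pulls their private signal halfway toward the shared consensus
    herded = [0.5 * s + 0.5 * STALE_CONSENSUS for s in signals]
    herded_aggregate = statistics.mean(herded)
    err_independent.append(abs(independent_aggregate - TRUE_VALUE))
    err_herded.append(abs(herded_aggregate - TRUE_VALUE))

print("mean error, independent forecasts:", round(statistics.mean(err_independent), 2))
print("mean error, herded forecasts:     ", round(statistics.mean(err_herded), 2))
# With these made-up numbers the herded aggregate sits roughly halfway between
# the truth and the stale consensus, so its error is much larger.
```

The aggregate only loses accuracy here because the shared consensus is off: if everyone herded toward the current round's mean, the simple average would be unchanged, which is why the damage from herding shows up when the consensus is stale, biased, or throws out outlier information.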
Do you have a specific forecast for AGI, e.g. a median year or a certain probability within a certain timeframe?
If so, I'd be curious to know how important AI investment is to that forecast. How much would your forecast change if it turned out the AI industry is in a bubble and the bubble popped, and the valuations of AI-related companies dropped significantly? (Rather than trying to specifically operationalize "bubble", we could just defer the definition of bubble to credible journalists.)
There are a few different reasons you've cited for credence in near-term AGI (investment in AI companies, the beliefs of certain AI industry leaders such as Sam Altman, the beliefs of certain AI researchers such as Geoffrey Hinton, etc.), and I wonder how significant each of them is. I think each of these different considerations could be spun out into its own lengthy discussion.
I wrote a draft of a comment that addresses several different topics you raised, topic by topic, but it's far too long (2000 words) and I'll have to put in a lot of work if I want to revise it down to a normal comment length. There are multiple different rabbit holes to go down, like Sam Altman's history of lying (which is why the OpenAI Board fired him) or Geoffrey Hinton's belief that LLMs have near-human-level consciousness.
I feel like going deeper into each individual reason for credence in near-term AGI and figuring out how significant each one is for your overall forecast could be a really interesting discussion. The EA Forum has a little-used feature called Dialogues that could be well-suited for this.
I guess markets are efficient most of the time, but stock market bubbles do exist and are even common, which goes against the efficient market hypothesis. I believe this is a debated topic in economics, and I don't know what the current consensus on it is.
My own experience points in the direction of there being an AI bubble, as cases like Lovable indicate that investors are overvaluing companies. I cannot explain their valuation other than that investors bet on things they do not understand. As I mentioned, anecdotally this seems to often be the case.
The report has many authors, some of whom may be much less concerned or may think the whole thing is silly. I never claimed that Bengio and Hinton's views were a consensus, and in any case, I was citing their views as evidence for taking seriously the idea that AGI may arrive soon, not their views on how risky AI is. I'm pretty sure I've seen them give relatively short timelines when speaking individually, but I guess I could be misremembering. For what it's worth, Yann LeCun seems to think 10 years is about right, and Gary Marcus seems to think a guess of 10-20 years is reasonable: https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have