I believe that you are underestimating just how strong the incentives OpenAI etc. have to lie about AGI. For them, it is an existential question: there is a real chance of going bankrupt if they do not deliver. This means we should expect them to always say that AGI is close, regardless of their true beliefs, because no CEO is ever going to make public claims that could risk the whole company.
Even in the case of companies such as Microsoft and Google, which would not fail if there is no AGI, saying out loud that there won’t be AGI could well crash their stocks. They will likely maintain the illusion as long as they can.
I will also push back a little on relying too much on the views of individual researchers such as Hinton or Bengio, whose claims would be much more credible if they presented any evidence for them. See, for instance, this report from October 2025 by Bengio, Hinton, and others. It fails to provide any good evidence of progress in the capabilities required for general intelligence, mainly focusing on how AI systems are better at some benchmarks, despite those benchmarks not really being related to AGI in any way. Instead, the report admits that while “AI systems continue to improve on most standardised evaluations,” they “show lower success rates on more realistic workplace tasks”, hinting that even the benchmark progress is fluff to at least some degree.
If even their own report doesn’t find any progress towards AGI, what is the basis for their short timelines? I think we are right to require more evidence before using their opinion as a basis for EA interventions or funding.
I don’t know if/how much EA money should go to AI safety either. EAs are trying to find the single best thing, and it’s very hard to know what that is, and many worthwhile things will fail that bar. Maybe David Thorstad is right, and small X-risk reductions have relatively low value because another X-risk will get us in the next few centuries anyway*. What I do think is that society as a whole spending some resources on the risk of AGI arriving in the next ten years is likely optimal, and that it’s not more silly to do so than to do many other obviously good things. I don’t actually give to AI safety myself, and I only work on AI-related stuff (forecasting etc.; I’m not a techy person) because it’s what people are prepared to pay more for; people being prepared to pay me to work on near-termist causes is less common, though it does happen. I myself give to animal welfare, not AI safety.
If you really believe that everyone putting money into OpenAI etc. will only see returns if they achieve AGI, that seems to me to be a point in favour of “there is a non-negligible risk of AGI in the next 10 years”. I don’t believe that, but if I did, that alone would significantly raise the chance I give to AGI within the next 10 years. But yes, they have some incentive to lie here, or to lie to themselves, obviously. Nonetheless, I don’t think that means their opinion should get zero weight. For it to actually have been some amazing strategy for them to talk up the chances of AGI *because it attracted cash*, you’d have to believe they can fool outsiders with serious money on the line, and that this will be profitable for them in the long term, rather than crashing and burning when AGI does not arrive. I don’t think that is wildly unlikely or anything; indeed, I think it is somewhat plausible (though my guess is Anthropic in particular believe their own hype). But it does require a fairly high amount of foolishness on the part of other quite serious actors. I’m much more sure of “raising large amounts of money for stuff that obviously won’t work is relatively hard” than I am of any argument about how far we are from AGI that looks at the direct evidence, since the latter sort of arguments are very hard to evaluate. I’d feel very differently here if we were arguing about a 50% chance of AGI in ten years, or even a 10% chance. It’s common for people to invest in things that probably won’t work but have a high pay-off if they do. But what you’re saying is that Richard is wrong for thinking there is a non-negligible risk, because the chance is significantly under 1%. I doubt there are many takers for a “1 in 1000” chance of a big pay-off.
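To make that last point concrete, here is a quick back-of-the-envelope sketch (my own illustration with made-up numbers, not anything from the comments above): for an investor to knowingly back a long shot, the payoff on success has to scale with the inverse of the probability just to break even.

```python
# Back-of-the-envelope break-even check for a long-shot investment.
# Illustrative numbers only: a "win" returns payoff_multiple times the stake, a "loss" returns 0.
def breakeven_multiple(p_success: float) -> float:
    """Payoff multiple needed for the expected value to equal the stake."""
    return 1.0 / p_success

for p in (0.10, 0.01, 0.001):  # 10%, 1%, and the "1 in 1000" case
    print(f"p = {p:.3f}: payoff must exceed {breakeven_multiple(p):,.0f}x the stake to break even")
```

At 10% odds a roughly 10x return justifies the bet, which is the familiar venture-capital regime; at 1-in-1000 the required multiple is so large that it is hard to see many serious investors playing that game.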
It is of course not THAT unlikely that they are fooling the serious money: serious investors make mistakes, and even the stock market does. Nonetheless, being able to attract serious investors who are genuinely only investing because they think you’ll achieve X, whilst simultaneously being under huge media attention and scrutiny, is a credible signal that you’ll eventually achieve X.
I don’t think the argument I’ve just given is all that definitive, because they have other incentives to hype, like attracting top researchers (who I think are probably easier to fool, because even if they are fooled about AGI, working at a big lab was probably good for them anyway; quite different from the people funding the labs, who just lose money if they are fooled). So it’s possible that the people pouring serious money in don’t take any of the AGI stuff seriously. Nonetheless, I trust “serious organisations with technical prowess seem to be trying to do this” as a signal to take something minimally seriously, even if they have some incentive to lie.
Similarly, if you really think Microsoft and Google have taken decisions that will crash their stock if AGI doesn’t arrive, I think a similar argument applies: are you really sure you’re better at evaluating whether there is a non-negligible chance that a tech will be achieved by the tech industry than Microsoft and Google? Eventually, if AGI does not arrive from the huge training runs being planned in the near future, people will notice, and Microsoft and Google don’t want to lose money 5 years from now either. Again, it’s not THAT implausible that they are mistaken; mistakes happen. But you aren’t arguing that there probably won’t be AGI in ten years (a claim I actually strongly agree with!) but rather that Richard was way out in saying it’s a tail risk we should take seriously given how important it would be.
Slower progress on one thing than another does not mean no progress on the slower thing.
“despite those benchmarks not really being related to AGI in any way.” This is your judgment, but clearly it is not the judgment of some of the world’s leading scientific experts in the area. (Though there may well be other experts who agree with you.)
*Actually Thorstad’s opinion is more complicated than that: he says this is true conditional on X-risk currently being non-negligible, but as far as I can tell he doesn’t himself endorse the view that it currently is non-negligible.
I’m not an economist, but the general consensus among the economists I have spoken to is that different kinds of bubbles (such as the dot-com bubble) are commonplace and natural, and even large companies make stupid mistakes that affect their stock hugely.
Anecdotally, there are a lot of small companies that are clearly overvalued, such as the Swedish startup Lovable, which recently reached a valuation of $6.6 billion. That is insane for a startup whose only product is a wrapper around another company’s LLM, in a space where every AI lab has its own coding tool. If people are willing to invest money in that, I’d assume they’d invest even more in larger companies that actually do have a product, even if it is overhyped. Overvaluation leads to a cycle where the company must keep overpromising to avoid a market correction, which in turn leads to even more overvaluation.
Again, I’m not an economist, so all of this comes with the caveat that I might misunderstand some market mechanisms. But what I am is an AI researcher. I would be more than willing to believe that the investors are right, if they provided evidence. But my experience talking with people who are investing seriously in AI is that they don’t understand the tech at all. At my day job, I talk to management that wants AI to do things it cannot do and allocates resources to hopeless projects. At investor-sponsored events, people are basically handing money to any company that seems reasonable enough (e.g., employing qualified people, having experience in doing business) regardless of the AI project it is proposing. I know a person who got bought out for millions on the strength of a good-sounding idea, even though they have no product, no clients and no research backing up the idea. Some people are just happy to get rid of their money.
There do exist reasonable investors too, but there is not much they can do about the valuations of private companies. And even though markets in theory allow things such as shorting that can correct overvaluations, these instruments are very risky, and I think such people are more likely to just stay away from AI companies as a whole than attempt to short a bubble when they don’t know when it will crash.
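To illustrate why with some made-up numbers of my own (a toy sketch, not a claim about any particular stock): a short position can show losses larger than the original stake while the bubble is still inflating, which is exactly when brokers tend to force it closed.

```python
# Toy illustration of shorting risk, with entirely made-up numbers.
entry_price = 100.0   # price at which the stock is sold short
shares = 100          # size of the short position
peak_price = 250.0    # the bubble keeps inflating before it pops
crash_price = 40.0    # price after the eventual crash

initial_exposure = entry_price * shares
paper_loss_at_peak = (peak_price - entry_price) * shares
gain_if_held_to_crash = (entry_price - crash_price) * shares

print(f"Initial short exposure:          ${initial_exposure:,.0f}")
print(f"Paper loss at the bubble's peak: ${paper_loss_at_peak:,.0f}  (likely a margin call)")
print(f"Gain if held all the way down:   ${gain_if_held_to_crash:,.0f}")
```

Being right about the crash only pays if you can survive the interim losses, so staying away entirely is often the rational choice.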
“despite those benchmarks not really being related to AGI in any way.” This is your judgment, but clearly it is not the judgment of some of the world’s leading scientific experts in the area. (Though there may well be other experts who agree with you.)
Are you sure? Even in that report, they carefully avoid using the term “AGI” and instead refer to “general-purpose AI systems”, a legal term used in the EU AI Act and generally thought to refer to current LLMs. Although both terms contain the word “general”, they mean very different things, which is something that the authors mention as well[1].
They also say quite directly that they do not have strong evidence. According to them, “[t]he pace and unpredictability of advancements in general-purpose AI pose an ‘evidence dilemma’ for policymakers.” (This PDF, page 14). They continue that, due to the “rapid and unexpected” advancements, policymakers will have to make decisions “without having a large body of scientific evidence available”. They admit that “pre-emptive risk mitigation measures based on limited evidence might turn out to be ineffective or unnecessary.” Still, they claim that “waiting for stronger evidence of impending risk could leave society unprepared or even make mitigation impossible”.
Using wording such as “impending risk” while admitting they do not have strong evidence adds an artificial layer of urgency that is not grounded in facts.
Even their “weak evidence” is not very good. They list several risks, but the most important of these is the one they call “loss of control”, under which they put X-risks. They reference several highly speculative papers on this issue that, on a cursory reading, contain several mistakes and are generally of quite low quality. Going through them would be worth a whole article, but as an example, the paper by Dung (2024) argues that “given that no one has identified an important capacity which does not improve with scaling, the currently best supported hypothesis is arguably that further scaling will bring about AGI.” That is just wrong: based on a recent survey, 76% of AI experts believe that scaling is not enough. The most important missing capability is continual learning, a feature that current algorithms simply lack and that cannot be obtained by scaling. Dung does mention some objections to his argument, but omits the most obvious ones, which he should have been aware of. This is not the kind of argument we should take even as weak evidence.
Going back to the International AI Safety Report, it seems that Bengio et al. know that their evidence is not enough, and they have carefully written the report so that it doesn’t claim anything technically incorrect, but its overall tone is much more alarmist than their findings justify. If they really had better evidence, they would say so more clearly.
“General-purpose AI is not to be confused with ‘artificial general intelligence’ (AGI). The term AGI lacks a universal definition but is typically used to refer to a potential future AI that equals or surpasses human performance on all or almost all cognitive tasks. By contrast, several of today’s AI models and systems already meet the criteria for counting as general-purpose AI as defined in this report.” (This PDF, page 27).
I guess I’m just slightly confused about what economists actually think here, since I’d always thought they took fairly seriously the idea that markets and investors are mostly quite efficient most of the time.
I don’t know much about this topic myself, but my understanding is that market efficiency is less about having the objectively correct view (or making the objectively right decision) and more about the difficulty of any individual investor making investments that systematically outperform the market. (An explainer page here helps clarify the concept.) So the concept, I think, is not that the market is always right, but that when the market is wrong (e.g. that generative AI is a great investment), you’re probably wrong too. Or, more precisely, that you’re unlikely to be systematically right more often than the market is right, and systematically wrong less often than the market is wrong.
As I understand it, there are differing views among economists on how efficient the market really is. And there is the somewhat paradoxical fact that people disagreeing with the market is part of what makes it as efficient as it is in the first place. For instance, some people worry that the rise of passive investing (e.g. via Vanguard ETFs) will make the market less efficient, since more people are just deferring to the market to make all the calls, and not trying to make calls themselves. If nobody ever tried to beat the market, then the market would become completely inefficient.
There is an analogy here to forecasting, with regard to epistemic deference to other forecasters versus herding that throws out outlier data and makes the aggregate forecast less accurate. If all forecasters just circularly updated until all their individual views were the aggregate view, surely that would be a big mistake. Right?
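As a toy illustration of that worry (my own sketch with made-up numbers, not anything from the forecasting literature): if each forecaster partially copies the running consensus of earlier reports, the early forecasters’ noise gets over-weighted and the final average ends up less accurate than a simple average of independent forecasts.

```python
import random

# Toy simulation: averaging independent forecasts vs. averaging forecasts
# where each forecaster herds toward the running consensus of earlier reports.
# All parameters are made up for illustration.
random.seed(0)

TRUTH = 0.0          # the quantity being forecast
NOISE = 1.0          # standard deviation of each forecaster's private error
N_FORECASTERS = 20
HERDING = 0.7        # how strongly each forecaster pulls toward the running consensus
N_TRIALS = 20_000

def one_trial() -> tuple[float, float]:
    private = [random.gauss(TRUTH, NOISE) for _ in range(N_FORECASTERS)]
    independent_avg = sum(private) / N_FORECASTERS

    reports = []
    for x in private:
        if reports:
            consensus = sum(reports) / len(reports)
            reports.append((1 - HERDING) * x + HERDING * consensus)
        else:
            reports.append(x)  # the first forecaster has no consensus to copy
    herded_avg = sum(reports) / N_FORECASTERS
    return independent_avg, herded_avg

mse_independent = 0.0
mse_herded = 0.0
for _ in range(N_TRIALS):
    a, b = one_trial()
    mse_independent += (a - TRUTH) ** 2
    mse_herded += (b - TRUTH) ** 2

print("Mean squared error, independent average:", mse_independent / N_TRIALS)
print("Mean squared error, herded average:     ", mse_herded / N_TRIALS)
```

In this setup the herded average is noticeably worse because later private signals are largely thrown away. It is only a cartoon of herding, but it captures why deference that replaces independent judgment can degrade the aggregate.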
Do you have a specific forecast for AGI, e.g. a median year or a certain probability within a certain timeframe?
If so, I’d be curious to know how important AI investment is to that forecast. How much would your forecast change if it turned out the AI industry is in a bubble and the bubble popped, and the valuations of AI-related companies dropped significantly? (Rather than trying to specifically operationalize “bubble”, we could just defer the definition of bubble to credible journalists.)
There are a few different reasons you’ve cited for credence in near-term AGI — investment in AI companies, the beliefs of certain AI industry leaders (e.g. Sam Altman), the beliefs of certain AI researchers (e.g. Geoffrey Hinton), etc. — and I wonder how significant each of them is. I think each of these different considerations could be spun out into its own lengthy discussion.
I wrote a draft of a comment that addresses several different topics you raised, topic-by-topic, but it’s far too long (2000 words) and I’ll have to put in a lot of work if I want to revise it down to a normal comment length. There are multiple different rabbit holes to go down, like Sam Altman’s history of lying (which is why the OpenAI Board fired him) or Geoffrey Hinton’s belief that LLMs have near-human-level consciousness.
I feel like going deeper into each individual reason for credence in near-term AGI and figuring out how significant each one is for your overall forecast could be a really interesting discussion. The EA Forum has a little-used feature called Dialogues that could be well-suited for this.
I guess markets are efficient most of the time, but stock market bubbles do exist, and are even common, which goes against the efficient market hypothesis. I believe this is a debated topic in economics and I don’t know what the current consensus on it is.
My own experience points in the direction of there being an AI bubble, as cases like Lovable indicate that investors are overvaluing companies. I cannot explain Lovable’s valuation other than that investors bet on things they do not understand. As I mentioned, anecdotally this seems to be the case quite often.
The report has many authors, some of whom may be much less concerned or think the whole thing is silly. I never claimed that Bengio and Hinton’s views were a consensus, and in any case, I was citing their views as evidence for taking seriously the idea that AGI may arrive soon, not their views on how risky AI is. I’m pretty sure I’ve seen them give relatively short timelines when speaking individually, but I guess I could be misremembering. For what it’s worth, Yann LeCun seems to think 10 years is about right, and Gary Marcus seems to think a guess of 10-20 years is reasonable: https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have