I think the probability of anyone creating an artificial general intelligence before the end of 2035 is much less than 1 in 10,000 (or 0.01%). The chance of it happening in 2026 is virtually zero. A large majority of AI experts appears to disagree with how most of the effective altruism community thinks about AGI. Even some of the experts the community is fond of citing to create the impression of support for its views, such as Demis Hassabis and Ilya Sutskever, disagree in crucial ways, for example by emphasizing fundamental AI research more than scaling.
The effective altruism community has low standards for the quality of evidence about AGI timelines it is willing to accept. For instance, consider the AI 2027 report that 80,000 Hours spent $160,000 promoting in a YouTube video. Many crucial inputs to the model are simply the subjective, intuitive guesses of the authors, and so the outputs of the model are largely those same guesses restated. Beyond the inputs, the model itself was dubiously constructed; the authors’ modelling decisions appear to have largely baked in the headline result from the outset. Flaws of a similar or greater magnitude can be found in other works frequently cited by the EA community.
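To make the garbage-in, garbage-out point concrete, here is a minimal toy sketch (the parameters and functional form are entirely hypothetical, not the actual AI 2027 model) of how a timelines model’s headline output can be little more than one subjective input restated in different units:

```python
# Minimal illustrative sketch (all parameters hypothetical; this is NOT
# the actual AI 2027 model): a toy timelines model whose headline output
# is essentially one subjective input expressed in different units.

def toy_agi_year(doubling_time_months: float, gap_doublings: float = 10.0) -> float:
    """Years until the toy model declares 'AGI', given a guessed capability
    doubling time and a guessed number of doublings needed to close the gap."""
    return doubling_time_months * gap_doublings / 12.0

# Three equally defensible subjective guesses for the doubling time:
for guess_months in (4.0, 12.0, 36.0):
    print(f"doubling time {guess_months:>4.0f} mo -> 'AGI' in {toy_agi_year(guess_months):5.1f} years")

# The headline swings from ~3 to ~30 years purely on the input guess,
# so the output carries no more evidential weight than the guess itself.
```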
This post uses aliens arriving on Earth as an analogy. However, Robin Hanson, whose views are influential in the EA community, seems to believe there’s a realistic chance aliens have already arrived on Earth. I’ve seen a few posts and comments indicating that a few people in the EA community apparently share his view on this, or a stronger version of it. To me, this is suggestive of the EA community employing an epistemic process that is too ready to accept fringe views on thin evidence.
My discussions with people in the EA community on AGI have startled me in this regard. Here’s an analogy. Let’s say someone is arguing for a controversial, minority view such as the Covid-19 lab leak hypothesis. You engage them in conversation, expecting them to have ready answers to any question or objection you can think of. You start off by asking, “Why do you think the novel coronavirus didn’t evolve naturally?” You then find out this person hadn’t previously considered that the novel coronavirus might have been able to evolve naturally, and wasn’t aware that this was a possibility. Rather than being able to fire back counterarguments chapter and verse, you discover that this person simply hasn’t thought about the elementary terms of the debate before.
Lest you think this is a ridiculously unfair and harsh analogy, let me quote Steven Byrnes describing his experience talking to “a couple dozen AI safety/alignment researchers” at an EA Global conference in the Bay Area in 2025:
There were a number of people, all quite new to the fields of AI and AI safety/alignment, for whom it seems to have never crossed their mind until they talked to me that maybe foundation models won’t scale to AGI, and likewise who didn’t seem to realize that the field of AI is broader than just foundation models.
To me, this is equivalent to advocating the lab leak hypothesis and not realizing that viruses can evolve naturally. It’s such a tremendous oversight that a reasonable person who is somewhat knowledgeable about AI could decide, at that point, that they’ve seen everything they need to see from the EA community on this topic: the community simply doesn’t know what it’s talking about and hasn’t done its homework. My personal experience in trying to engage in discussions on AGI within the EA community has been much the same as what Steven Byrnes describes: some people mix up definitions and concepts, for instance, or misinterpret studies, or dismiss inconvenient expert opinions out of hand.
All this to say, I don’t find this post, or the view it represents, any more credible than the view that possibly hostile aliens have a 10% chance of arriving on Earth this year. Proponents of fringe views on UFOs generally misunderstand phenomena such as optical illusions, perspective tricks, and camera artifacts. Or they simply look at a blinking light in the sky (or a video of one) and conclude, non-credibly, “aliens”, without considering conventional aircraft or all the other things a blinking light could be. Analogously, the EA community generally disregards majority expert opinion and expert knowledge on AI in favour of fringe views primarily promoted by people with no expertise, who at least on occasion have made elementary mistakes. And just as with people who believe UFOs are alien spacecraft, attempts to explain the expert knowledge that casts doubt on the fringe view are met with considerable resistance. (Most people are, very reasonably and understandably, not willing to engage in discussions or debates with the EA community on this topic. It feels futile and exasperating, and at least a vocal minority in the community seems determined to discourage such engagement by making it as unpleasant as possible.)
I can only express empathy for people who have been misled into thinking there’s a very high chance of human extinction from AI in the very near future. I have to think holding such a belief is incredibly distressing. I don’t know if anything I can say will be reassuring. By the time you strongly hold such a belief, changing your mind on it might require turning your whole life and worldview upside-down. It might mean things like new friends, a new community, a new sense of identity or self-image, and so on. Just talking about the belief directly might not change anything, since the root causes of why one holds such a belief might be deeper than ordinary intellectual discussion can reach. I suspect, in fact, that the same is true of a lot of intellectual discussion on a lot of topics: there is a subterranean world of emotions, psychology, narratives, personal history, and personal identity entangled in the surface layer of ideas. However, fringe views of an eschatological, apocalyptic, or millennialist nature are an extreme case. In such cases, ordinary intellectual discussion seems especially unlikely to gain traction.
I have to think holding such a belief is incredibly distressing.
Have you considered that you might be engaging in motivated reasoning because you don’t want to be distressed about this? Also, you get used to it. Humans are very adaptable.
The 10% comes from the linked aggregate of forecasts, drawn from thousands of people’s estimates/bets on Metaculus, Manifold, and Kalshi, not from the EA community.
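For context, cross-platform figures like this are often pooled with a plain mean or a log-odds average of the per-platform probabilities. A minimal sketch, using placeholder numbers rather than the actual linked figures:

```python
import math

# Minimal sketch of pooling per-platform forecast probabilities.
# The numbers below are placeholders, NOT the actual Metaculus /
# Manifold / Kalshi figures behind the linked aggregate.

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def sigmoid(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

platform_probs = {"Metaculus": 0.08, "Manifold": 0.12, "Kalshi": 0.10}

# Two common pooling rules:
mean_pool = sum(platform_probs.values()) / len(platform_probs)
log_odds_pool = sigmoid(
    sum(logit(p) for p in platform_probs.values()) / len(platform_probs)
)

print(f"simple mean:      {mean_pool:.3f}")     # 0.100
print(f"log-odds average: {log_odds_pool:.3f}") # ~0.099
```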