The social norms of EA, or at least of the EA Forum, are different today than they were ten years ago. Ten years ago, if you said you only care about people who are either alive today or who will be born in the next 100 years, and you don't think much about AGI because global poverty seems a lot more important, then you would be fully qualified to be the president of a university EA group, get a job at a meta-EA organization, or represent the views of the EA movement to a public audience.
This isn't just a social thing; it's also a response to a lot of changes in AI timelines over the past ten years. Back then a lot of us had views like "most experts think powerful AI is far off, I'm not going to sink a bunch of time into how it might affect my various options for doing good", but as expert views have shifted that makes less sense. While "don't think much about AGI because global poverty seems a lot more important" is still a reasonable position to hold (ex: people who think we can't productively influence how AI goes and so should focus on doing as much good as we can in areas we can affect), I think it requires a good bit more reasoning and thought than it did ten years ago.
(On the other hand, I see "only care about people who are either alive today or who will be born in the next 100 years" as still within the range of common EA views (ex).)
I see it primarily as a social phenomenon because I think the evidence we have today that AGI will arrive by 2030 is less compelling than the evidence we had in 2015 that AGI would arrive by 2030. In 2015, it was a little more plausible that AGI could arrive by 2030 because that was 15 years away and who knows what can happen in 15 years.
Now that 2030 is a little less than 5 years away, AGI by 2030 is a less plausible prediction than it was in 2015, because there's less time left and it's clearer that it won't happen.
I don't think the reasons people believe AGI will arrive by 2030 are primarily based on evidence; I think the belief is primarily a sociological phenomenon. People were ready to believe this regardless of the evidence, going back to Ray Kurzweil's The Age of Spiritual Machines in 1999 and Eliezer Yudkowsky's "End-of-the-World Bet" in 2017. People don't really pay attention to whether the evidence is good or bad, they ignore obvious evidence and arguments against near-term AGI, and they mostly choose to ignore or attack people who express disagreement and instead tune into the relentless drumbeat of people agreeing with them. This is sociology, not epistemology.
Don't believe me? Talk to me again in 5 years and send me a fruit basket. (Or just kick the can down the road and say AGI is coming in 2035...)
Expert opinion has changed? First, expert opinion is not itself evidence; it's people's opinions about evidence. What evidence are the experts basing their beliefs on? That seems way more important than someone just saying a number based on an intuition.
Second, expert opinion does not clearly support the idea of near-term AGI.
As of 2023, the expert opinion on AGI was... well, first of all, really confusing. The AI Impacts survey found that the experts believed there is a 50% chance by 2047 that "unaided machines can accomplish every task better and more cheaply than human workers." It also found a 50% chance that by 2116 "machines could be built to carry out the task better and more cheaply than human workers." I don't know why these predictions are 69 years apart.
Regardless, 2047 is sufficiently far away that it might as well be 2057 or 2067 or 2117. This is just people generating a number using a gut feeling. We don't know how to build AGI, and we have no idea how long it will take to figure out how to. No amount of thinking of numbers or saying numbers can escape this fundamental truth.
We actually won't have to wait long to see that some of the most attention-catching near-term AI predictions are false. Dario Amodei, the CEO of Anthropic (a company that is said to be "literally creating God"), has predicted that by some point between June 2025 and September 2025, 90% of all code will be written by AI rather than humans. In late 2025 and early 2026, when it's clear Dario was wrong about this (when, not if), maybe some people will start to be more skeptical of attention-grabbing expert predictions. But maybe not.
There are already strong signs of AGI discourse being irrational and absurd. On April 16, 2025, Tyler Cowen claimed that OpenAI's o3 model is AGI and asked, "is April 16th AGI day?". In a follow-up post on April 17, seemingly in response to criticism, he said, "I don't mind if you don't want to call it AGI", but seemed to affirm that he still thinks o3 is AGI.
On the one hand, I hope that in 5 years the people who promoted the idea of AGI by 2030 will lose a lot of credibility and maybe will do some soul-searching to figure out how they could be so wrong. On the other hand, there is nothing preventing people from staying irrational indefinitely, for example by:
Defining whatever exists in 2030 as AGI (Tyler Cowen already did it in 2025, and Ray Kurzweil pioneered the technique years ago).
Kicking the can down the road a few years, and repeating as necessary (similar to how Elon Musk has predicted, every year from 2015 to 2025, that the Tesla fleet will achieve Level 4–5 autonomy within a year, and has not given up the game despite his losing streak).
Telling a story in which AGI didn't happen only because effective altruists or other good actors successfully delayed AGI development.
I think part of the sociological problem is that people are just way too polite about how crazy this all is and how awful the intellectual practices of effective altruists have been on this topic. (Sorry!) So, I'm being blunt about this to try to change that a little.
I see it primarily as a social phenomenon because I think the evidence we have today that AGI will arrive by 2030 is less compelling than the evidence we had in 2015 that AGI would arrive by 2030.
The evidence we have today that there will be AGI by 2030 is clearly dramatically stronger than the evidence we had in 2015 that there would be AGI by 2020, and that is surely the relevant comparison. This is not EA specific: we have been ahead of the curve in thinking AI would be a big deal, but the whole world has updated in this direction, and it would be strange if we hadn't as well.
My personal take is that there are pretty reasonable arguments that what we have seen in AI/ML since 2015 suggests AI will be a big deal. I like the way I have seen Yoshua Bengio talk about it: "over the next few years, or a few decades". I share the view that either of those possibilities is reasonable. People who are highly confident that something like AGI is going to arrive over the next few years are more confident in this than I am, but I think that view is within the bounds of a reasonable interpretation of the evidence. I think it is also within those bounds to hold the opposite view, that something like AGI is most likely further than a few years away.
Don't believe me? Talk to me again in 5 years and send me a fruit basket. (Or just kick the can down the road and say AGI is coming in 2035...)
I think this is a healthy attitude, and one that is worth appreciating. We may get answers to these questions over the next few years. That seems pretty positive to me. We will be able to resolve some of these disagreements productively by observing what happens. I hope people who have different views now keep this in mind, and that the environment is still in a good place for people who disagree now to work together in the future if some of these disagreements get resolved.
I will offer the EA Forum internet-points equivalent of a fruit basket to anyone who would like one if we disagree now and they are later proven right and I am proven wrong.
I think part of the sociological problem is that people are just way too polite about how crazy this all is and how awful the intellectual practices of effective altruists have been on this topic.
Can you say what view it is you think is crazy? It seems quite reasonable to me to think that AI is going to be a massive deal and therefore that it would be highly useful to influence how it goes. On the other hand, I think people often overestimate the robustness of the arguments for any given strategy for how to actually do that influencing. In other words, it's reasonable to prioritize AI, but people's AI takes are often very overconfident.