Thanks for sharing the post, Zed :) Like titotal says, I hope you consider staying around. I think AI-risk (AIXR) sceptic posts should be welcomed on the Forum. I’m someone who’d probably count as AIXR sceptic within the EA community (but not the wider world/public). It’s clearly an area where you think EA as a whole is making a mistake, so I’ve read the post and recent comments and have some thoughts that I hope you might find useful:
I think there are some good points you made:
I really appreciate posts that push against the ‘EA Orthodoxy’ on the Forum that start off useful discussions. I think ‘red-teaming’ ideas is a great example of necessary error-correction, so regardless of how much I agree or not, I want to give you plaudits for that.
On humility in long-term forecasts—I completely agree here. I’m sure you’ve come across it, but Tetlock’s recent forecasting tournament deals with this question and does indeed find that forecasters place lower probability on AIXR than subject-matter experts do.[1] But I’d still say that a risk of extinction of roughly ~1% is important enough to warrant serious consideration and further investigation, wouldn’t you?
I think your scepticism on very short timelines is directionally very valid. I hope that those who have made very, very short timeline predictions on Metaculus are willing to update if those dates[2] come and go without AGI. I think one way out of the poor state of the AGI debate is for more people to make concrete falsifiable predictions.
While I disagree with your reasoning about what the EA position on AIXR is (see below), I think it’s clear that many people think that is the position, so I’d really like to hear how you’ve come to this impression and what EA or the AIXR community could do to present a more accurate picture of itself. I think reducing this gap would be useful for all sides.
Some parts that I didn’t find convincing:
You view Hanson’s response as a knock-down argument. But he only addresses the ‘foom’ cases and only does so heuristically, not from any technical arguments. I think more credible counterarguments are being presented by experts such as Belrose & Pope, who you might find convincing (though I think they have non-trivial subjective estimates of AIXR too fwiw).
I really don’t like the move to psychoanalyse people in terms of bias. Is bias at play? Of course—it’s at play for all humans, but it is therefore just as likely for those who are super optimistic as for those who are pessimistic a priori. I think once something breaks through enough to be deemed ‘worthy of consideration’, we ought to do most of our evaluation on the merits of the arguments given. You even say this at the end of the ‘fooling oneself’ section! I guess I think the questions of “are AIXR concerns valid?” and “if not, why are they so prominent?” are probably worth two separate posts imo. Similarly, I think you sometimes conflate the questions of “are AIXR concerns valid?” and “if they are, what would an appropriate policy response look like?” I think your latest comment to Hayven is where your strongest objections lie (which makes sense to me, given your background and expertise), but that is again different from the pure question of whether AIXR concern is valid.
Framing those concerned with AIXR as ‘alarmists’—I think you’re perhaps overindexing on MIRI here as representative of AI Safety as a whole? From my vague sense, MIRI doesn’t hold the dominant position in the AI Safety space that it perhaps did 10–20 years ago. I don’t think that ~90%+ belief in doom is an accurate depiction of EA, and similarly I don’t think that an indefinite global pause is the default EA view of the policies that ought to be adopted. You mention Anthropic and CHAI as two good institutions, and they’re both highly EA-coded and sincerely concerned about AIXR. I think a useful disambiguation here is between ‘concern about AIXR’ and ‘certainty of doom from AIXR’?
But also some bad ones:
Saying that EA’s focus on x-risk lacks “common sense”—I actually think x-risk is something the general public would think makes a lot of sense, though they’d think that EA gets the source of that risk wrong (through empirical data). I think a lot of people would say that trying to reduce the risk of human extinction from Nuclear War or Climate Change is an unambiguously good cause and potentially a good use of marginal resources.
Viewing EA, let alone AIXR, as motivated by ‘nonsense utilitarianism’ about ‘trillions of theoretical future people’. Most EA spending goes to Global Health causes in the present. Many AIXR advocates don’t identify as longtermists at all. They’re often, if not mostly, concerned about risk to humans alive today—themselves and those they care about. Concern about AIXR could also be motivated through non-utilitarian frameworks, though I’d concede that this probably isn’t the standard EA position.
I know this is a super long comment, so feel free to only respond to the bits you find useful or even not at all. Alternatively we could try out the new dialogue feature to talk through this a bit more? In any case, thanks again for the post, it got me thinking about where and why I disagree both with AI ‘doomers’ as well as your position in this post.
roughly 0.4% for superforecasters vs 2.1% for AI experts by 2100
Currently March 14th, 2026, at the time of writing
Love this thoughtful response!
Good feedback—I see the logic of your points, and don’t find faults with any of them.
On AIXR as valid and what the response would be, you’re right; I emphasize the practical nature of the policy recommendation because otherwise, the argument can veer into the metaphysical. To use an analogy, if I claim there’s a 10% chance another planet could collide with Earth and destroy the planet in the next decade, you might begrudgingly accept the premise to move the conversation on to the practical aspect of my forecast. Even if that were true, what would my policy intervention look like? Build interstellar lifeboats? Is that feasible in the absence of concrete evidence?
Agree—armchair psychoanalysis isn’t really useful. What is useful is understanding how heuristics and biases work on a population level. If we know that, in general, projects run over budget and take longer than expected, we can adjust our estimates. If we know experts mis-forecast x-risk, we can adjust for that too. That’s far from psychoanalysis.
I don’t really know what the median view on AIXR within EA communities truly is. One thing’s for certain: the public narrative around the issue tilts heavily towards the “pause AI” camp and the Yudkowskys out there.
On the common sense of x-risk—one of the neat offices that few people know of at the State Department is the Nuclear Risk Reduction Center, or NRRC. It’s staffed 24/7 and has foreign-language-designated positions, meaning at least someone in the room speaks Russian, etc. The office is tasked with staying in touch with other nations to reduce the odds of a miscalculation and nuclear war. That makes tons of sense. Thinking about big problems that could end the world makes sense in general—disease, asteroids, etc.
What I find troubling is the propensity to assign odds to distant right-tail events, and then to take the second step of recommending costly and questionable policies. I don’t think these are EA consensus positions, but they certainly receive outsized attention.
I’m glad you found my comment useful. I think then, with respect, you should consider retracting some of your previous comments, or at least reframing them to be more circumspect and make clear you’re taking issue with a particular framing/subset of the AIXR community as opposed to EA as a whole.
As for the points in your comment, there’s a lot of good stuff here. I think a post about the NRRC, or even an insider’s view into how the US administration thinks about and handles Nuclear Risk, would be really useful content on the Forum, and also incredibly interesting! Similarly, I think how a community handles making ‘right-tail recommendations’ when those recommendations may erode its collective and institutional legitimacy[1] would be really valuable. (Not saying that you should write these posts, they’re just examples off the top of my head. In general I think you have a professional perspective a lot of EAs could benefit from)
I think one thing where we agree is that there’s a need to ask and answer a lot more questions, some of which you mention here (beyond ‘is AIXR valid’):
What policy options do we have to counteract AIXR if true?
How does the effectiveness of these policy options change as we change our estimate of the risk?
What is the median view in the AIXR/broader EA/broader AI communities on risk?
And so on.
Some people in EA might write this off as ‘optics’, but I think that’s wrong.
These are all great suggestions! As for my objections to EA as a whole versus a subset, it reminds me a bit of a defense that folks employ whenever a larger organization is criticised. Defenses that one hears from Republicans in the US for example. “It’s not all of us, just a vocal subset!” That might be true, but I think it misses the point. It’s hard to soul-search and introspect as an organization or a movement if we collectively say, “not all-EA” when someone points to the enthusiasm around SBF and ideas like buying up coal mines.