Thanks for sharing the post, Zed :) Like titotal says, I hope you consider staying around. I think AI-risk (AIXR) sceptic posts should be welcomed on the Forum. I'm someone who'd probably count as an AIXR sceptic within the EA community (but not in the wider world/public). It's clearly an area where you think EA as a whole is making a mistake, so I've read the post and recent comments and have some thoughts that I hope you might find useful:
I think there are some good points you made:
I really appreciate posts that push against the 'EA Orthodoxy' on the Forum and start off useful discussions. I think 'red-teaming' ideas is a great example of necessary error-correction, so regardless of how much I agree or not, I want to give you plaudits for that.
On humility in long-term forecasts - I completely agree here. I'm sure you've come across it, but Tetlock's recent forecasting tournament deals with this question and does indeed find that forecasters place lower odds on AIXR than subject-matter experts do.[1] But I'd still say that a roughly ~1% risk of extinction is an important risk worth taking seriously and investigating further, wouldn't you?
I think your scepticism on very short timelines is directionally very valid. I hope that those who have made very, very short timeline predictions on Metaculus are willing to update if those dates[2] come and go without AGI. I think one way out of the poor state of the AGI debate is for more people to make concrete falsifiable predictions.
While I disagree with your reasoning about what the EA position on AIXR is (see below), I think it's clear that many people do think that is the position, so I'd really like to hear how you've come to this impression and what EA or the AIXR community could do to present a more accurate picture of itself. I think reducing this gap would be useful for all sides.
Some parts that I didn't find convincing:
You view Hanson's response as a knock-down argument. But he only addresses the 'foom' cases, and only does so heuristically, not from any technical arguments. I think more credible counterarguments are being presented by experts such as Belrose & Pope, who you might find convincing (though I think they have non-trivial subjective estimates of AIXR too, fwiw).
I really don't like the move to psychoanalyse people in terms of bias. Is bias at play? Of course - it's at play for all humans, but it's therefore just as likely for those who are super optimistic a priori as for those who are pessimistic. I think once something breaks through enough to be deemed 'worthy of consideration', we ought to do most of our evaluation on the merits of the arguments given. You even say this at the end of the 'fooling oneself' section! I guess I think the questions of 'are AIXR concerns valid?' and 'if not, why are they so prominent?' are probably worth two separate posts imo. Relatedly, I think you sometimes conflate the questions 'are AIXR concerns valid?' and 'if they are, what would an appropriate policy response look like?' I think your latest comment to Hayven is where your strongest objections lie (which makes sense to me, given your background and expertise), but that is again different from the pure question of whether AIXR concern is valid.
Framing those concerned with AIXR as 'alarmists' - I think you're perhaps overindexing on MIRI here as representative of AI Safety as a whole? My vague sense is that MIRI doesn't hold the dominant position in the AI Safety space that it perhaps did 10-20 years ago. I don't think that ~90%+ belief in doom is an accurate depiction of EA, and similarly I don't think that an indefinite global pause is the default EA view of the policies that ought to be adopted. You yourself mention Anthropic and CHAI as two good institutions, and they're both highly EA-coded and sincerely concerned about AIXR. I think a potential disambiguation here is between 'concern about AIXR' and 'certainty of doom from AIXR'?
But also some bad ones:
Saying that EA's focus on x-risk lacks 'common sense' - I actually think x-risk is something the general public would think makes a lot of sense, though they'd think that EA gets the source of that risk wrong (which is ultimately an empirical question). I think a lot of people would say that trying to reduce the risk of human extinction from nuclear war or climate change is an unambiguously good cause and potentially a good use of marginal resources.
Viewing EA, let alone AIXR, as motivated by 'nonsense utilitarianism' about 'trillions of theoretical future people'. Most EA spending goes to global health causes in the present. Many AIXR advocates don't identify as longtermists at all; they're often, if not mostly, concerned about the risk to humans alive today: themselves and those they care about. Concern about AIXR could also be motivated by non-utilitarian frameworks, though I'd concede that this probably isn't the standard EA position.
I know this is a super long comment, so feel free to respond only to the bits you find useful, or not at all. Alternatively, we could try out the new dialogue feature to talk through this a bit more? In any case, thanks again for the post; it got me thinking about where and why I disagree both with AI 'doomers' and with your position in this post.
[1] Roughly 0.4% for superforecasters vs 2.1% for AI experts by 2100.
[2] Currently March 14th 2026, at time of writing.
Love this thoughtful response!
Good feedback - I see the logic of your points and don't find fault with any of them.
On whether AIXR is valid versus what the response would be, you're right; I emphasize the practical nature of the policy recommendation because otherwise the argument can veer into the metaphysical. To use an analogy: if I claim there's a 10% chance another planet could collide with Earth and destroy it in the next decade, you might begrudgingly accept the premise just to move the conversation on to the practical aspect of my forecast. Even if that were true, what would my policy intervention look like? Build interstellar lifeboats? Is that feasible in the absence of concrete evidence?
Agree - armchair psychoanalysis isn't really useful. What is useful is understanding how heuristics and biases work on a population level. If we know that, in general, projects run over budget and take longer than expected, we can adjust our estimates. If we know experts mis-forecast x-risk, we can adjust for that too. That's far from psychoanalysis.
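(To make the "adjust for that" idea concrete, here is a minimal toy sketch - entirely my own illustration, not anything proposed in this thread - of shrinking an expert forecast toward a reference-class rate in log-odds space. The `trust` parameter and the 50/50 weighting are arbitrary assumptions, and the numbers are just the XPT-style figures from the footnote above.)

```python
import math

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def adjusted_forecast(expert_p: float, reference_p: float, trust: float) -> float:
    """Blend an expert forecast with a reference-class rate in log-odds space.

    trust = 1.0 takes the expert at face value; trust = 0.0 falls back
    entirely on the reference class. Choosing `trust` is a judgement call,
    not something this sketch can settle.
    """
    blended = trust * logit(expert_p) + (1 - trust) * logit(reference_p)
    return sigmoid(blended)

# Illustrative only: 2.1% (AI experts) vs 0.4% (superforecasters), with an
# arbitrary 50/50 weighting, gives roughly a 0.9% adjusted estimate.
print(adjusted_forecast(0.021, 0.004, trust=0.5))
```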
I don't really know what the median view on AIXR within EA communities truly is. One thing's for certain: the public narrative around the issue tilts heavily towards the 'pause AI' camp and the Yudkowskys out there.
On the common sense of x-risk - one of the neat offices that few people know of at the State Department is the Nuclear Risk Reduction Center, or NRRC. It's staffed 24/7 and has foreign-language-designated positions, meaning at least someone in the room speaks Russian, and so on. The office is tasked with staying in touch with other nations to reduce the odds of a miscalculation and nuclear war. That makes tons of sense. Thinking about big problems that could end the world makes sense in general: disease, asteroids, etc.
What I find troubling is the propensity to assign odds to distant right-tail events, and then to take the second step of recommending costly and questionable policies. I don't think these are EA consensus positions, but they certainly receive outsized attention.
I'm glad you found my comment useful. I think then, with respect, you should consider retracting some of your previous comments, or at least reframing them to be more circumspect and make clear that you're taking issue with a particular framing/subset of the AIXR community as opposed to EA as a whole.
As for the points in your comment, there's a lot of good stuff here. I think a post about the NRRC, or even an insider's view into how the US administration thinks about and handles nuclear risk, would be really useful content on the Forum, and also incredibly interesting! Similarly, I think a post on how a community handles making 'right-tail recommendations' when those recommendations may erode its collective and institutional legitimacy[1] would be really valuable. (I'm not saying that you should write these posts; they're just examples off the top of my head. In general I think you have a professional perspective a lot of EAs could benefit from.)
I think one thing we agree on is that there's a need to ask and answer a lot more questions, some of which you mention here (beyond 'is AIXR valid?'):
What policy options do we have to counteract AIXR if true?
How does the effectiveness of these policy options change as we change our estimate of the risk?
What is the median view on risk in the AIXR/broader EA/broader AI communities?
And so on.
[1] Some people in EA might write this off as 'optics', but I think that's wrong.
These are all great suggestions! As for my objections to EA as a whole versus a subset: it reminds me a bit of a defense that folks employ whenever a larger organization is criticised - the kind of defense one hears from Republicans in the US, for example. 'It's not all of us, just a vocal subset!' That might be true, but I think it misses the point. It's hard to soul-search and introspect as an organization or a movement if we collectively say 'not all EA' when someone points to the enthusiasm around SBF and ideas like buying up coal mines.