Thank you for writing this well-argued post—I think it's important to keep discussing exactly how big P(doom) is. However, and I say this as someone who believes that P(doom) is on the lower end, it would also be good to be clear about what the implications would be for EAs if P(doom) were low. It seems likely that many of the same recommendations—reduce spending on risky AI technologies and increase spending on AI safety—would still hold, at least until we get a clearer idea of the exact nature of AI risks.