I wasn’t around when the XPT questions were being set, but I’d guess that you’re right that extinction/catastrophe were chosen because they are easier to operationalise.
On your question about what forecasts on existential risk would have been: I think this is a great question.
FRI actually ran a follow-up project after the XPT to dig into the AI results. One of the things we did in this follow-up project was elicit forecasts on a broader range of outcomes, including some approximations of existential risk. I don’t think I can share the results yet, but we’re aiming to publish them in August!
Please do! And if possible, one small request from me: it would be great to know whether any insight on extinction vs. existential risk for AI can be transferred to bio and nuclear. For example, is there some general level of population decline (say, 70%) that seems able to trigger long-term or permanent civilizational collapse?
The follow-up project was on AI specifically, so we don’t currently have any data that would allow us to transfer directly to bio and nuclear, alas.