Thanks for providing the arguments commonly given for and against various cruxes, that’s super interesting.
These two arguments for why extinction would be unlikely
The logistics would be extremely challenging.
Millions of people live very remotely, and AI would have little incentive to pay the high costs of killing them.
make me wonder what the forecasters would’ve estimated for existential risk rather than extinction risk (i.e. we lose control over our future / are permanently disempowered, even if not literally everyone dies this century). (Estimates would presumably be somewhere between the ones for catastrophe and extinction.)
I’m also curious about why the tournament chose to focus on extinction/catastrophe rather than existential risks (especially given that it’s called the Existential Risk Persuasion Tournament). Maybe those two were easier to operationalize?
I wasn’t around when the XPT questions were being set, but I’d guess that you’re right that extinction/catastrophe were chosen because they are easier to operationalise.
On your question about what forecasts on existential risk would have been: I think this is a great question.
FRI actually ran a follow-up project after the XPT to dig into the AI results. One of the things we did in this follow-up project was elicit forecasts on a broader range of outcomes, including some approximations of existential risk. I don’t think I can share the results yet, but we’re aiming to publish them in August!
Please do! And if possible, one small request from me: it would be great to know whether any insights on extinction vs existential risk for AI carry over to bio and nuclear, e.g. whether there is some general level of population decline (say, 70%) that seems able to trigger long-term/permanent civilizational collapse.
The follow-up project was on AI specifically, so we don’t currently have any data that would allow us to transfer directly to bio and nuclear, alas.