I agree that it makes much more sense to estimate x-risk on a timescale of 100 years (as I said in the sidenote of my answer), but I think you should specify that in the question, because “How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?” together with your definition of x-risk, implies taking the whole future of humanity into account. I think it may make sense to explicitly only talk about the risk of existential catastrophe in this or in the next couple of centuries.
People disagree in various ways about how this question should be worded. I feel like I should pass on editing the question any further, especially since I don’t think the wording is likely to change people’s answers much.