[Question] How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?

I’m interested in how many 2021 $s you think it would be rational for EA to be willing to trade (or perhaps the equivalent in human capital) against 0.01% (i.e. 1 basis point) of existential risk.

This question is potentially extremely decision-relevant for EA orgs doing prioritization, like Rethink Priorities. For example, suppose we assign $X to preventing 0.01% of existential risk, and we take Toby Ord’s existential risk estimates (p. 167, The Precipice) at face value. Then we should not prioritize asteroid risk (~1/​1,000,000 risk this century, i.e. ~1% of a basis point) if all realistic interventions we can think of cost >>1% of $X, nor prioritize climate change (~1/​1,000 risk this century, i.e. ~10 basis points) if realistic interventions cost >>$10X, at least on direct longtermist grounds (though there may still be neartermist or instrumental reasons for doing this work).
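For concreteness, here is a minimal back-of-the-envelope sketch of that arithmetic. It assumes $X is the amount worth paying per basis point of existential risk averted, and takes Ord’s century-level estimates at face value; the function name and the choice of Python are purely illustrative.

```python
# Back-of-the-envelope check of the asteroid/climate thresholds above.
# Assumption: $X is the willingness to pay to remove 1 basis point
# (0.01%) of existential risk.

X = 1.0  # willingness to pay per basis point, expressed in units of $X

def fair_price(total_risk, x_per_basis_point=X):
    """Value of fully eliminating a risk, in units of $X."""
    basis_points = total_risk / 0.0001  # 1 basis point = 0.01% = 0.0001
    return basis_points * x_per_basis_point

print(fair_price(1 / 1_000_000))  # asteroid risk  -> 0.01, i.e. ~1% of $X
print(fair_price(1 / 1_000))      # climate change -> 10,   i.e. ~$10X
```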

To a lesser extent, it may be relevant for individuals considering whether it’s better to earn to give or to contribute directly to existential risk reduction, whether through research or real-world work.

Assume the money comes from a very EA-aligned (and not too liquidity-constrained) org like Open Phil.

Note: I hereby define existential risk in the following way (see the discussion in the comments for why I use a non-standard definition):

Existential risk – A risk of catastrophe where an adverse outcome would permanently reduce Earth-originating intelligent life’s astronomical value to <50% of what it would otherwise be capable of.

Note that extinction (0%) and maximally bad[1] s-risks (-100%) are special cases of <50%.

[1] assuming symmetry between utility and disutility