I’m not much good at maths, so I found this hard to follow.
Is the basic thrust that reducing the chance of extinction this year isn’t so valuable if there remains a risk of extinction (or catastrophe) in future, because in that case we’ll probably just go extinct (or die young) later anyway?
Yep—nailed it!
Ah great, glad I got it!
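To spell that basic point out with a toy model (the geometric set-up and the numbers here are just my own illustration, not anything from the paper): if the same extinction risk recurs every period, the expected number of future periods is modest even for fairly low per-period risk, which caps how much surviving any one period is worth.

```python
# Toy model (illustrative only): a constant per-period extinction risk r
# recurs forever, so survival follows a geometric distribution and the
# expected number of periods survived beyond this one is (1 - r) / r.

def expected_future_periods(per_period_risk: float) -> float:
    """Expected periods survived after this one under a constant recurring risk."""
    r = per_period_risk
    return (1 - r) / r

for r in (0.2, 0.1, 0.01):
    print(f"per-period risk {r:.0%}: ~{expected_future_periods(r):.0f} expected future periods")
# per-period risk 20%: ~4 expected future periods
# per-period risk 10%: ~9 expected future periods
# per-period risk 1%: ~99 expected future periods
```

So however large the long-run future could be, surviving this particular period only buys that modest expected stretch unless the risk eventually falls.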
I think I had always assumed that the argument for prioritising x-risk reduction relied on the possibility that the annual risk of extinction would eventually either hit zero or asymptote towards it. If you think of life spreading out across the galaxy and then other galaxies, and then being separated by cosmic expansion, then that makes some sense.
To analyse it in the most simplistic way possible: if you think extinction risk has a 10% chance of permanently going to 0% if we make it through the current period, and a 90% chance of remaining very high even if we do, then extinction reduction takes a 10x hit to its cost-effectiveness from this effect. (At least that’s what I had been imagining.)
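Spelling that arithmetic out (using only the hypothetical 10% / 90% split above, not anyone’s actual estimates):

```python
# Hypothetical numbers from the comment above, purely for illustration.
P_RISK_VANISHES = 0.10    # chance risk permanently drops to ~0% if we survive this period
P_RISK_STAYS_HIGH = 0.90  # chance risk stays very high even if we survive this period

LONG_FUTURE_VALUE = 1.0   # value of the long future, normalised to 1
SHORT_FUTURE_VALUE = 0.0  # rough value if risk stays high (we probably die young anyway)

value_of_surviving_now = (P_RISK_VANISHES * LONG_FUTURE_VALUE
                          + P_RISK_STAYS_HIGH * SHORT_FUTURE_VALUE)
print(value_of_surviving_now)  # 0.1, i.e. roughly a 10x discount vs. a guaranteed long future
```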
I recall there’s an appendix to The Precipice where Ord talks about this sort of thing. At least I remember that he covers the issue that it’s ambiguous whether a high or a low level of risk today makes the strongest case for extinction-reduction work being cost-effective. That’s because, as I think you’re pointing out above, while a low risk today makes it harder to reduce the probability of extinction by a given absolute amount, it simultaneously implies we’re more likely to make it through future periods if we don’t go extinct in this one, raising the value of survival now.
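One way to see that trade-off in a toy model (my own framing, not Ord’s actual appendix calculation): an absolute cut of delta in this period’s risk is worth delta times the expected value of the future conditional on surviving, and if the same per-period risk r recurs thereafter, that conditional value scales roughly like 1/r, so lower background risk makes each point of reduction worth more even if it is harder to obtain.

```python
# Toy trade-off (my own illustrative framing): value of cutting this period's
# extinction risk by an absolute amount `delta`, assuming the same per-period
# risk r recurs in every later period and each surviving period is worth 1.

def value_of_risk_reduction(delta: float, per_period_risk: float) -> float:
    expected_future_periods = (1 - per_period_risk) / per_period_risk
    return delta * expected_future_periods

for r in (0.20, 0.01):
    print(f"background risk {r:.0%}: cutting this period's risk by 1 point "
          f"is worth ~{value_of_risk_reduction(0.01, r):.2f} period-equivalents")
# background risk 20%: cutting this period's risk by 1 point is worth ~0.04 period-equivalents
# background risk 1%: cutting this period's risk by 1 point is worth ~0.99 period-equivalents
```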
David addresses a lot of the arguments for a ‘Time of Perils’ in his ‘Existential Risk, Pessimism and the Time of Perils’ paper, which this moral mathematics paper is a follow-up to.
Seems like David agrees that once we’re spread across many star systems, this could reduce existential risk a great deal.
The other line of argument would be that at some point AI advances will either cause extinction or produce a massive drop in extinction risk.
The literature on a ‘singleton’ is in part addressing this issue.
Because there’s so much uncertainty about all this, it seems overconfident to claim that it’s extremely unlikely for extinction risk to drop to near zero within the next 100 or 200 years.