Suppose p(doom) is 90%. Then preventing extinction multiplies the expected value of the world by 10: it takes our chance of survival from 10% to 100%.
But suppose that the best attainable futures are 1000 times better than the default non-extinction scenario. Then ensuring we are on track to get the best possible future multiplies the expected value of the world by 100, even after factoring in the 90% chance of extinction: a 1000-fold improvement that only pays off in the 10% of cases where we survive.
In this toy model, you should allocate your resources to reducing extinction only if, at the current margin, it is at least 10 times more tractable than ensuring we are on track to get the best possible future.
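To make the arithmetic explicit, here is a minimal sketch of the toy model in Python. The framing is mine: I assume the default non-extinction future is worth 1, extinction is worth 0, and that "multiplies the value of the world by 100 in expectation" means the 1000-fold conditional improvement weighted by the 10% chance it materialises.

```python
from fractions import Fraction

# Toy numbers from the dialogue, with my assumptions made explicit:
# the default non-extinction future is worth 1, extinction is worth 0,
# and the best attainable future is worth 1000.
p_doom = Fraction(9, 10)       # 90% chance of extinction
p_survive = 1 - p_doom         # 10% chance of survival
default_value = Fraction(1)    # default non-extinction future
best_value = Fraction(1000)    # best attainable future

# Status quo: survive with 10% probability and get the default future.
ev_status_quo = p_survive * default_value                   # 1/10

# Preventing extinction: survival becomes certain, future stays default.
ev_no_extinction = default_value                             # 1
extinction_multiplier = ev_no_extinction / ev_status_quo     # 10

# Getting on track for the best future: a 1000x improvement that only
# materialises in the 10% of cases where we survive, i.e. the
# "multiplies the value of the world by 100 in expectation" figure.
trajectory_multiplier = p_survive * (best_value / default_value)   # 100

# Break-even point: reducing extinction risk must be this many times more
# tractable, at the margin, to be the better use of resources.
breakeven_tractability = trajectory_multiplier / extinction_multiplier  # 10

print(extinction_multiplier, trajectory_multiplier, breakeven_tractability)
# -> 10 100 10
```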
You might think that we can just defer this to the future. But I’ve assumed in the set-up that the default future is 1/1000th as good as the best future. So apparently our descendants are not going to be very good at optimising the future, and we can’t trust them with this decision.
Where do you think this goes wrong?
Yes.
So you doubt fanaticism, the view that a tiny chance of an astronomically good outcome can be more valuable than the certainty of a decent outcome. What about in cases of certainty? Do you doubt the utilitarian’s objection to Common-sense Eutopia? This kind of aggregation seems important to the case for longtermism. (See the pages of paper dolls in What We Owe the Future.)