I agree with your numbered points, especially that if your discount rate is very high, then a catastrophe that kills almost everyone is similar in badness to a catastrophe that kills everyone.
But one of the key differences between EA/LT and these fields is that we’re almost the only ones who think future people are (almost) as important as present people, and that the discount rate shouldn’t be very high. Under that assumption, the work done is indeed very different in what it accomplishes.
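To spell that out with a toy calculation (my own sketch, assuming a catastrophe's badness is just the discounted sum of lives lost or foreclosed):

$$B = L_0 + \sum_{t=1}^{\infty} \frac{L_t}{(1+\delta)^t}$$

where $L_0$ is lives lost now, $L_t$ is lives foreclosed in generation $t$, and $\delta$ is the discount rate. If $\delta$ is large, the tail term vanishes and $B \approx L_0$, so killing 99% of people is about 99% as bad as killing 100%. If $\delta \approx 0$, extinction additionally forecloses every future generation, and the two catastrophes come apart by orders of magnitude.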
I don’t know what you mean by fields only looking into regional disasters—how are you differentiating those investigations from the fields you mention, which the general public has heard of in large part because a ton of academic and governmental effort has gone into them?
I’m skeptical that the insurance industry isn’t bothering to protect against asteroids and nuclear winter just because they think the government is already handling those scenarios. For one, any event that kills all humans is uninsurable, so a profit-motivated mitigation plan will be underincentivized and ineffective. Furthermore, I don’t agree that the government has any good plan to deal with x-risks. (Perhaps they have a secret, very effective, classified plan that I’m not aware of, but I doubt it.)
I happen to strongly agree that the moral discount rate should be 0, but a) it’s still worth acknowledging that as an assumption, and b) I think it’s easy for both sides to conflate it with risk-based discounting. It seems like you’re de facto doing this when you say ‘Under that assumption, the work done is indeed very different in what it accomplishes’ - this is only true if risk-based discounting is also very low. See e.g. Thorstad’s ‘Existential Risk Pessimism and the Time of Perils’ and ‘Mistakes in the Moral Mathematics of Existential Risk’ for formalisms of why it might not be—I don’t agree with his dismissal of a time of perils, but I do agree that the presumption that explicitly longtermist work is actually better for the long term than short-to-medium-term-focused work is based on little more than Pascalian handwaving.
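For concreteness, here’s a stripped-down constant-risk model in the spirit of Thorstad’s papers (my simplification, not his exact setup): if each century delivers value $v$ conditional on survival and carries a per-century extinction risk $r$, the expected value of the future is

$$\mathbb{E}[V] = \sum_{t=1}^{\infty} v\,(1-r)^t = v \cdot \frac{1-r}{r}$$

At $r = 0.2$, that’s just $4v$: even with a zero moral discount rate, the future is worth only a few centuries in expectation, so risk reduction in any one century buys little unless $r$ later falls, which is precisely the ‘time of perils’ assumption doing the work.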
I’m confused by your paragraph about insurance. To clarify:
I don’t expect insurance companies to protect against either extinction catastrophes or collapse-of-civilisation catastrophes, since, as you say, such catastrophes are uninsurable.
I suspect they also don’t protect against medium-damage-to-civilisation catastrophes for much the same reason—I don’t think insurance has the capacity to handle more than very mild civilisational shocks.
I do think government organisations, NGOs and academics have done very important work in the context of reducing risks of civilisation-harming events.
I think that if you assign a high risk of a post-catastrophe civilisation struggling to flourish (as I do), these events look about as bad from a long-term perspective as extinction once you also account for their greater likelihood. I suggested a framework for this analysis here, and built some tools to implement it, described here.
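As a toy illustration of that comparison (placeholder numbers only, not output from the framework or tools linked above):

```python
# Toy expected-loss comparison: extinction vs. civilisational collapse.
# Every probability here is a hypothetical placeholder, not an estimate.

P_EXTINCTION = 0.01    # chance of an extinction-level catastrophe this century
P_COLLAPSE = 0.10      # chance of a collapse-level catastrophe (assumed likelier)
P_NO_RECOVERY = 0.15   # chance a collapsed civilisation never regains its potential

# Normalise the value of a flourishing long-term future to 1.
loss_from_extinction = P_EXTINCTION * 1.0
loss_from_collapse = P_COLLAPSE * P_NO_RECOVERY * 1.0

print(f"Expected long-term loss, extinction risk: {loss_from_extinction:.4f}")
print(f"Expected long-term loss, collapse risk:   {loss_from_collapse:.4f}")
# With these numbers the collapse term (0.0150) exceeds the extinction
# term (0.0100): a catastrophe needn't kill everyone to dominate the
# long-term calculus, if it's likelier and recovery isn't assured.
```

On these made-up numbers the non-extinction catastrophe dominates, which is the sense in which ‘comparably bad once you account for likelihood’ cashes out.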
Of course you can disagree about the high risk to flourishing from non-existential catastrophes, but that’s going to be a speculative argument about which people might reasonably differ. To my knowledge, no-one’s made the positive case in depth, and the few people who’ve looked seriously into our post-catastrophe prospects seem to be substantially more pessimistic than those who haven’t. See e.g.:
Dartnell - ‘we might have scuppered the chances of any society to follow in our wake’
Rodriguez - ‘humanity might be stagnant for millennia’
Jebari - ‘the development of industrialisation depends on more factors, and is more of a “lucky shot”, than we might otherwise think’