I agree with your numbered points, especially that if your discount rate is very high, then a catastrophe that kills almost everyone is similar in badness to a catastrophe that kills everyone.
But one of the key differences between EA/LT and these fields is that we're almost the only ones who think future people are (almost) as important as present people, and that the discount rate shouldn't be very high. Under that assumption, the work done is indeed very different in what it accomplishes.
I don't know what you mean by fields only looking into regional disasters - how are you differentiating those investigations from the fields you mention, which the general public has heard of in large part because a ton of academic and governmental effort has gone into them?
I'm skeptical that the insurance industry isn't bothering to protect against asteroids and nuclear winter just because they think the government is already handling those scenarios. For one, any event that kills all humans is uninsurable, so a profit-motivated mitigation plan will be underincentivized and ineffective. Furthermore, I don't agree that the government has any good plan to deal with x-risks. (Perhaps they have a secret, very effective, classified plan that I'm not aware of, but I doubt it.)
I happen to strongly agree that the moral discount rate should be 0, but a) it's still worth acknowledging that as an assumption, and b) I think it's easy for both sides to equivocate it with risk-based discounting. It seems like you're de facto doing this when you say "Under that assumption, the work done is indeed very different in what it accomplishes" - this is only true if risk-based discounting is also very low. See e.g. Thorstad's Existential Risk Pessimism and the Time of Perils and Mistakes in the Moral Mathematics of Existential Risk for formalisms of why it might not be - I don't agree with his dismissal of a time of perils, but I do agree that the presumption that explicitly longtermist work is actually better for the long term than short-to-medium-term-focused work is based on little more than Pascalian handwaving.
I'm confused by your paragraph about insurance. To clarify:
I don't expect insurance companies to protect against either extinction catastrophes or collapse-of-civilisation catastrophes, since as you say such catastrophes are uninsurable.
I suspect they also don't protect against medium-damage-to-civilisation catastrophes for much the same reason - I don't think insurance has the capacity to handle more than very mild civilisational shocks.
I do think government organisations, NGOs and academics have done very important work in the context of reducing risks of civilisation-harming events.
I think that if you assign a high risk of a post-catastrophe civilisation struggling to flourish (as I do), these events look comparably bad to extinction from a long-term perspective once you also account for their greater likelihood. I suggested a framework for this analysis here and built some tools to implement it, described here.
Of course you can disagree about the high risk to flourishing from non-existential catastrophes, but that's going to be a speculative argument about which people might reasonably differ. To my knowledge, no-one's made the positive case in depth, and the few people who've looked seriously into our post-catastrophe prospects seem to be substantially more pessimistic than those who haven't. See e.g.:
Dartnell - "we might have scuppered the chances of any society to follow in our wake"
Rodriguez - "humanity might be stagnant for millennia"
Jebari - "the development of industrialisation depends on more factors, and is more of a 'lucky shot', than we might otherwise think"