The extent to which you think they’re the same is going to depend heavily on:
1. your long-term moral discounting rate (if it’s high, then you’re going to be roughly equally concerned about highly destructive events that very likely won’t kill everyone and comparably destructive events that might),
2. your priors on specific events leading to human extinction (which, given the lack of data, will have a strong impact on your conclusion), and
3. your change in credence that civilisation flourishes post-catastrophe.
Given the high uncertainty behind each of those considerations (arguably excluding the first), I think it’s too strong to say they’re ‘not the same at all’. I also don’t know what you mean by fields only looking into regional disasters—how are you differentiating those investigations from the fields you mention, which the general public has heard of in large part because a ton of academic and governmental effort has gone into them?
I agree with your numbered points, especially that if your discount rate is very high, then a catastrophe that kills almost everyone is similar in badness to a catastrophe that kills everyone.
But one of the key differences between EA/LT and these fields is that we’re almost the only ones who think future people are (almost) as important as present people, and that the discount rate shouldn’t be very high. Under that assumption, the work done is indeed very different in what it accomplishes.
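To make the discount-rate point concrete, here is a minimal sketch (illustrative numbers only, not drawn from the discussion above): under standard exponential discounting at annual rate r, value t years in the future is weighted by (1 + r)^(-t), so even a modest positive rate makes the far future count for almost nothing—which is exactly why a high rate collapses the difference between extinction and near-extinction.

```python
# Illustrative only: exponential moral discounting.
# weight(t) = (1 + r) ** -t is the present weight of one unit of value
# located t years in the future, discounted at annual rate r.

def discount_weight(r: float, t: float) -> float:
    """Present weight of one unit of value t years away at annual rate r."""
    return (1.0 + r) ** -t

# At a 5% annual rate, value 500 years out is weighted by roughly 2.5e-11:
# effectively zero, so losing the entire future adds almost nothing to the
# badness of a catastrophe that already kills nearly everyone alive today.
print(f"r=5%, t=500y: {discount_weight(0.05, 500):.2e}")

# At a 0% rate (roughly the longtermist position), future value counts in
# full, and extinction is vastly worse than near-extinction.
print(f"r=0%, t=500y: {discount_weight(0.00, 500):.2f}")
```

This is the arithmetic behind the disagreement: the two views differ not on the facts of any catastrophe, but on the weighting function applied to everything after it.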
I’m skeptical that the insurance industry isn’t bothering to protect against asteroids and nuclear winter just because they think the government is already handling those scenarios. For one, any event that kills all humans is uninsurable, so a profit-motivated mitigation plan will be underincentivized and ineffective. Furthermore, I don’t agree that the government has any good plan to deal with x-risks. (Perhaps they have a secret, very effective, classified plan that I’m not aware of, but I doubt it.)