I think for moderate to high levels of x-risk, another potential divergence is that while both longtermist and non-longtermist axiologies will lead you to believe that large-scale risk prevention and mitigation is important, the specific actions people take may differ. For example:
- Non-longtermist axiologies will, all else equal, be much more likely to prioritize non-existential global catastrophic risks (GCRs) over existential ones.
- Mitigation of existential risks (especially worst-case mitigation) is comparatively more important for longtermists than for non-longtermists.
Some of these divergences were covered at least as early as Parfit (1982). (Note: I did not reread it before making this comment.)
I agree that these divergences aren’t very strong for traditional AGI x-risk scenarios; in those cases I think whether and how much you prioritize AGI x-risk depends almost entirely on empirical beliefs.
Agreed, that’s another angle. Neartermists (NTs) will see only a small difference between non-extinction-level catastrophes (NECs) and extinction-level catastrophes (ECs) (e.g. a nuclear war where 1,000 people survive vs. one that kills everyone), whereas longtermists (LTs) will see a huge difference between NECs and ECs.
But again, whether for a non-extinction catastrophe or an extinction catastrophe, if the probabilities are high enough, then both NTs and LTs will be maxing out their budgets and will agree on policy. It’s only when the probabilities are tiny that you get differences in optimal policy.
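To make the NEC/EC asymmetry concrete, here is a toy back-of-the-envelope sketch. The population figures, the 1e15 stand-in for potential future people, and the crude total-welfare tally are all illustrative assumptions of mine, not anyone's actual estimates:

```python
# Toy illustration (made-up numbers): how a neartermist (NT) vs a longtermist (LT)
# tally would value a non-extinction catastrophe (NEC) vs an extinction catastrophe (EC).
# Assumes a crude "count the deaths" welfare measure; all figures are illustrative.

CURRENT_POPULATION = 8e9          # people alive today (rough figure)
EXPECTED_FUTURE_PEOPLE = 1e15     # hypothetical stand-in for all potential future people

def nt_loss(survivors: float) -> float:
    """Neartermist loss: deaths among people alive today."""
    return CURRENT_POPULATION - survivors

def lt_loss(survivors: float) -> float:
    """Longtermist loss: deaths today, plus all future people if no one survives."""
    loss = CURRENT_POPULATION - survivors
    if survivors == 0:
        loss += EXPECTED_FUTURE_PEOPLE
    return loss

nec_survivors = 1000   # e.g. a nuclear war where 1,000 people survive
ec_survivors = 0       # a war that kills everyone

print("NT: NEC loss =", nt_loss(nec_survivors), " EC loss =", nt_loss(ec_survivors))
print("LT: NEC loss =", lt_loss(nec_survivors), " EC loss =", lt_loss(ec_survivors))
```

Under these toy numbers, the NT's two losses differ by only 1,000 lives (a rounding error relative to 8 billion), while the LT's losses differ by roughly fifteen orders of magnitude, which is the asymmetry driving the divergence described above.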