Attempts to reject fanaticism necessarily lead to major theoretical problems, as described for instance here and here.
However, questions about fanaticism are not that relevant to most questions about x-risk. The x-risks of greatest concern to most long-termists (AI risk, bioweapons, nuclear weapons, climate change) all have reasonable odds of occurring within the next century or so, and even if we cared only about the humans living over that period, we would still find these risks worth preventing. This is mostly a consequence of the huge number of people alive today.
I think timidity, as described in your first link, e.g. with a bounded social welfare function, is basically okay, but it’s a matter of intuition (similarly, discomfort with Pascalian problems is a matter of intuition). However, it does mean giving up separability in probabilistic cases, and depending on the details, it may still support x-risk reduction.
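To make the “timidity” idea concrete, here is a minimal numerical sketch of what a bounded social welfare function does to a Pascalian gamble. The exponential form and the specific bound are my own illustrative assumptions, not anything taken from the linked papers:

```python
import math

# Illustrative bounded ("timid") social welfare function: total welfare w is
# mapped to a value that can never exceed BOUND. Both the exponential form and
# the bound itself are arbitrary choices for this sketch.
BOUND = 1e11

def bounded_welfare(w):
    """Approaches BOUND as w grows; roughly linear for w much smaller than BOUND."""
    return BOUND * (1 - math.exp(-w / BOUND))

def expected_value(prob, w, value_fn):
    """Expected value of a gamble yielding total welfare w with probability prob."""
    return prob * value_fn(w)

# A Pascalian gamble: probability 1e-30 of securing 1e52 happy lives.
pascal_unbounded = expected_value(1e-30, 1e52, lambda w: w)       # 1e22
pascal_bounded   = expected_value(1e-30, 1e52, bounded_welfare)   # ~1e-19

# A sure thing: 1e7 happy lives with certainty.
sure_unbounded = expected_value(1.0, 1e7, lambda w: w)            # 1e7
sure_bounded   = expected_value(1.0, 1e7, bounded_welfare)        # ~1e7

print(pascal_unbounded > sure_unbounded)  # True: unbounded EV favours the gamble
print(pascal_bounded > sure_bounded)      # False: the bound tames the gamble
```

With an unbounded (linear) welfare function the gamble dominates any sure thing; once welfare is bounded, no payoff, however astronomical, can compensate for a sufficiently tiny probability, which is the sense in which timidity avoids fanaticism.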
I would also recommend:
https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/
https://globalprioritiesinstitute.org/christian-tarsney-exceeding-expectations-stochastic-dominance-as-a-general-decision-theory/
Also, questions of fanaticism may be relevant for these x-risks, since it’s not the probability of the risks that matters, but the difference you can make. There’s also ambiguity, since it’s possible to do more harm than good by increasing the risk instead, or by increasing other risks (e.g. reducing extinction risks may increase s-risks, and you may be morally uncertain about how to weigh these).
Thanks for your answer. I don’t think I understand what you’re saying, though. As I understand it, it makes a huge difference to the resource distribution that longtermism recommends, because if you allow for e.g. Bostrom’s 10^52 happy lives to be the baseline utility, avoiding x-risk becomes vastly more important than if you just consider the 10^10 people alive today. Right?
In principle I agree, although in practice there are other mitigating factors, which mean it doesn’t seem to be that relevant.
This is partly because the 10^52 number is not very robust. In particular, once you start postulating such large numbers of future people I think you have to take the simulation hypothesis much more seriously, so that the large size of the far future may in fact be illusory. But even on a more mundane level we should probably worry that achieving 10^52 happy lives might be much harder than it looks.
It is also partly because, at a practical level, the interventions long-termists consider don’t rely on the possibility of 10^52 future lives, but are good even over just the next few hundred years. I am not aware of many interventions with smaller near-term impacts that nonetheless remain robustly positive, such that we would pursue them only because of the 10^52 future lives. This is essentially for the reasons that asolomonr gives in their comment.
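For what it’s worth, here is the rough arithmetic behind the 10^52 vs 10^10 comparison in the question above; the one-in-a-billion risk reduction is a made-up placeholder, not an estimate from this thread:

```python
# Rough expected-value arithmetic behind the 10^52 vs 10^10 comparison.
# The risk-reduction figure is a made-up placeholder for illustration only.

lives_future_bostrom = 1e52   # Bostrom-style estimate of potential future lives
lives_present        = 1e10   # roughly the number of people alive today

delta_p = 1e-9  # suppose an intervention cuts extinction risk by one in a billion

ev_future_baseline  = delta_p * lives_future_bostrom   # 1e43 expected lives saved
ev_present_baseline = delta_p * lives_present          # 10 expected lives saved

print(f"EV counting 10^52 future lives:      {ev_future_baseline:.0e}")
print(f"EV counting only people alive today: {ev_present_baseline:.0e}")
```

On the 10^52 baseline, even an absurdly small risk reduction swamps everything else, which is exactly where worries about fanaticism bite; the point above is that the interventions long-termists actually pursue look worthwhile even on the present-people baseline, so the practical conclusions don’t hinge on the 10^52 figure.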