Three cheers for this. Two ways in which the post might understate the person-affecting case for focusing on ex risk:
Most actions to reduce ex risk would also reduce catastrophic non-ex risks. e.g. efforts to reduce the risk of an existential attack with an engineered pathogen would also reduce the risk of, say, >100m people dying in such an attack. I would expect the benefits from reducing GCRs as a side-effect of reducing ex risk to be significantly larger than the benefits from preventing ex risks themselves, because the probability of GCRs is much, much greater. I wouldn’t be that surprised if that increased the EV of ex risk reduction by an order of magnitude, thereby propelling it further into AMF territory.
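As a toy back-of-the-envelope illustration of that order-of-magnitude claim (all numbers here are invented for the example, not estimates from the post): on a person-affecting view, the value of an intervention is roughly

\[ \text{EV} \approx \Delta p_{\text{ex}} \cdot N_{\text{ex}} + \Delta p_{\text{GCR}} \cdot N_{\text{GCR}} \]

where \( \Delta p \) is the reduction in probability and \( N \) the number of presently existing people affected. If an intervention cut ex risk by \( 10^{-5} \) (affecting ~8bn people) and, as a side-effect, cut the risk of a >100m-death catastrophe by \( 10^{-2} \), the GCR term (\( 10^{-2} \times 10^{8} = 10^{6} \) expected lives) would exceed the ex risk term (\( 10^{-5} \times 8 \times 10^{9} = 8 \times 10^{4} \)) by roughly an order of magnitude.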
As I have noted before on this forum, most people advancing person-affecting views tend to opt for asymmetric versions on which future bad lives matter but future good lives don’t. If you’re temporally neutral and aggregative, you end up with a moral theory which is practically exactly the same as negative utilitarianism (priorities one, two, three, four, etc. are all preventing future suffering).
It is in general good to reassert that there are numerous reasons to focus on ex risk aside from the total view, including neglectedness, political short-termism, the global public goods aspect, the context of the technologies we are developing, the tendency to neglect rare events, etc.
If someone did take an asymmetric view and really committed to it, I would think they should probably be in favour of increasing existential risk, as that removes the possibility of future suffering, rather than trying to reduce it. I suppose you might have some (not obviously plausible) story about how humanity’s survival decreases future suffering: you could think that humans will remove misery among surviving non-humans if humanity dodges existential risk, whereas this misery wouldn’t be averted if humans went extinct but other life kept living.
I think the argument is as you describe in the last sentence, though I haven’t engaged much with the NUs on this.