Yeah, those are fair. I guess it's slightly less clear to me that adopting a person-affecting view would change intra-longtermist questions (though I suspect it would), but it seems clearer that person-affecting views affect prioritization between longtermist approaches and other approaches.
Some quick things I imagine this could impact on the intra-longtermist side:
- Prioritization between x-risks that cause only human extinction vs. x-risks that cause the extinction of all/most life on earth (e.g. wild animals).
- EV calculations become very different in general, and probably global priorities research / movement building become higher priority than x-risk reduction? But it depends on the x-risk. (See the toy sketch below.)
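To make the EV point a bit more concrete, here's a toy sketch. All symbols and magnitudes are illustrative assumptions on my part, not anyone's actual estimates: on a total view, the value of reducing extinction risk scales with everyone who could ever exist, while on a strict person-affecting view it scales only with people who already exist, so the same intervention can look orders of magnitude less valuable.

```latex
% Toy comparison of expected value under two views (illustrative only).
% \Delta p : reduction in extinction probability from some intervention
% v        : average value of one life saved/continued
% N_now    : people alive today;  N_fut : potential future people (N_fut >> N_now)
\[
\mathrm{EV}_{\text{total view}} \;\approx\; \Delta p \cdot v \cdot (N_{\text{now}} + N_{\text{fut}})
\qquad\text{vs.}\qquad
\mathrm{EV}_{\text{person-affecting}} \;\approx\; \Delta p \cdot v \cdot N_{\text{now}}
\]
% Since N_fut dominates, the person-affecting EV of x-risk reduction shrinks
% roughly by the factor (N_now + N_fut) / N_now, which is why other cause areas
% (or global priorities research / movement building) may climb in relative priority.
```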
Yeah, I’m not actually sure that a really convincing person-affecting view can be articulated. But I’d be excited to see someone with a strong understanding of the literature really try.
I also would be interested in seeing someone compare the tradeoffs of non-person-affecting views vs. person-affecting views. E.g. person-affecting views might entail X weirdness, but maybe X weirdness is better to accept than the repugnant conclusion, etc.
Agreed. While I expect people's intuitions on which is "better" to differ, a comprehensive accounting of which bullets different views have to bite would be a really handy resource. By "comprehensive" I don't mean literally every possible thought experiment, of course, but something that gives a sense of the significant considerations people have thought of. Ideally these would be organized so that it's easy to keep track of which of the bullets bitten by different views are relevantly similar, and so there's no double-counting.
There are also person-neutral reasons for caring more about the extinction of all terrestrial life than about human extinction alone. (Though it would be very surprising if this did much to reconcile person-affecting and person-neutral cause prioritization, since the reasons for caring in each case are so different: direct harms to sentient life, versus a decreased probability that intelligent life will eventually re-evolve.)