I also wrote this post defending the asymmetry, and when I tried to generalize the approach to choices among more than two options, with multiple individuals involved*, I ended up with a soft asymmetry: considering only the interests of possible future people, it would never be worse if they aren’t born, but it wouldn’t be better either, unless their aggregate welfare were negative.
*using something like the beatpath method discussed in Thomas’s paper to get a transitive but incomplete order on the option set
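To make the footnote concrete, here’s a minimal sketch of a beatpath-style computation (Schulze-style widest paths over pairwise margins); the option names and margins are invented for illustration, not taken from Thomas’s paper:

```python
from itertools import product

def beatpath_order(margins):
    """margins[a][b]: strength by which option a beats option b in a
    direct pairwise comparison (absent or 0 if it doesn't).  Returns the
    set of ordered pairs (a, b) with a ranked strictly above b.  The
    result is transitive, but may be incomplete: tied pairs stay unranked."""
    opts = list(margins)
    # Beatpath strength = widest path between options, Floyd-Warshall style.
    strength = {(a, b): margins[a].get(b, 0) for a, b in product(opts, opts)}
    for k, a, b in product(opts, opts, opts):
        strength[a, b] = max(strength[a, b],
                             min(strength[a, k], strength[k, b]))
    return {(a, b) for a in opts for b in opts
            if a != b and strength[a, b] > strength[b, a]}

# Toy margins: A beats B by 3, B beats C by 2, C beats A by 1.
# The cycle is broken at its weakest link, leaving A > B > C.
order = beatpath_order({"A": {"B": 3}, "B": {"C": 2}, "C": {"A": 1}})
```

Because a strict ranking requires the beatpath strength one way to exceed the strength the other way, ties simply leave a pair unranked, which is how you end up with a transitive but incomplete order on the option set.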
And I looked into something like modelling ethics as a graph traversal problem: you move from option A to option B if the individuals who would exist in A have more interest in B than in A (or if the moral reasons, from the point of view of A, in favour of B outweigh those in favour of A). Then you either pick the option you visit the most asymptotically, or accumulate scores on the options as you traverse, based on the differences in interest between them, and pick the option that dominates asymptotically (also checking multiple starting points).
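A rough sketch of the “pick the option you visit the most asymptotically” variant; the interest numbers are made up purely for illustration, and this is my toy rendering of the idea rather than any published view:

```python
import random

def most_visited_option(interest, start, steps=10_000, seed=0):
    """Random walk over options.  interest[a][b] is how much the people
    who would exist in option a have at stake in option b.  From a, move
    to b with probability proportional to max(interest[a][b] -
    interest[a][a], 0); stay put if no option attracts.  Return the
    option visited most often, a proxy for the asymptotic pick."""
    rng = random.Random(seed)
    opts = list(interest)
    visits = dict.fromkeys(opts, 0)
    a = start
    for _ in range(steps):
        visits[a] += 1
        weights = [max(interest[a][b] - interest[a][a], 0) for b in opts]
        if sum(weights) > 0:
            a = rng.choices(opts, weights=weights)[0]
    return max(visits, key=visits.get)

# Toy numbers: A's people mildly prefer B, B's people prefer C,
# and C's people are content where they are, so the walk settles on C.
interest = {
    "A": {"A": 1, "B": 2, "C": 0},
    "B": {"A": 0, "B": 2, "C": 3},
    "C": {"A": 0, "B": 0, "C": 3},
}
```

To “check multiple starting points”, you would call this once with each option as `start` and see whether the answers agree.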
I’m pretty suspicious about approaches which rely on personal identity across counterfactual worlds; it seems pretty clear that either there’s no fact of the matter here, or else almost everything you can do leads to different people being born (e.g. by changing which sperm leads to their conception).
And secondly, this leads us to the conclusion that unless we quickly reach a utopia where everyone has positive lives forever, then the best thing to do is end the world as soon as possible. Which I don’t see a good reason to accept.
> I’m pretty suspicious about approaches which rely on personal identity across counterfactual worlds; it seems pretty clear that either there’s no fact of the matter here, or else almost everything you can do leads to different people being born (e.g. by changing which sperm leads to their conception).
These approaches don’t need to rely on personal identity across worlds: either they already “work” without it (i.e. they solve the nonidentity problem), or (I think) you can modify them into wide person-affecting views, using partial injections like the counterpart relations in this paper/EA Forum summary (but dropping the personal-identity-preservation condition, and using pairwise mappings between each pair of options instead of across all available options at once).
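As a toy illustration of the pairwise-mapping idea (my construction for this comment, not the paper’s): suppose the counterpart relation between two options is already encoded by position in two welfare lists, so the partial injection simply pairs equal indices and leaves the tail of the longer list unmatched:

```python
def net_reasons(w1, w2):
    """Compare two options given welfare lists for the people who would
    exist in each.  Assumes a counterpart relation is encoded by
    position: index i in w1 and w2 picks out counterpart individuals (a
    partial injection, so the shorter list leaves some people in the
    longer one unmatched).  Matched pairs are compared head-to-head; an
    unmatched person counts against the option in which they would exist
    only if their welfare is negative (the soft asymmetry).  A positive
    return value means the reasons on net favour option 1."""
    n = min(len(w1), len(w2))
    net = sum(x - y for x, y in zip(w1, w2))   # matched counterparts
    net += sum(-x for x in w2[n:] if x < 0)    # unmatched people in option 2
    net -= sum(-x for x in w1[n:] if x < 0)    # unmatched people in option 1
    return net

net_reasons([5, 1], [5, 1, -3])  # extra miserable person counts against option 2
net_reasons([5, 1], [5, 1, 4])   # extra happy person counts neither way
```

The point of the construction is that only matched counterparts generate head-to-head reasons, while an unmatched person’s mere existence tips the balance only when their welfare is negative.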
> And secondly, this leads us to the conclusion that unless we quickly reach a utopia where everyone has positive lives forever, then the best thing to do is end the world as soon as possible.
I don’t see how this follows for the particular views I’ve mentioned, and I think it contradicts what I said about the soft asymmetry, which does not rely on personal identity and which is satisfied by some of the views described in Thomas’s paper and by my attempt to generalize the view in my post (I’m not sure about Dasgupta’s approach). These views don’t satisfy the independence of irrelevant alternatives (most person-affecting views don’t), and the option of ensuring everyone has positive lives forever is not practically available to us (except as an unlikely fluke, which an approach that deals with uncertainty appropriately should handle, as in Thomas’s paper), so we can’t use it to rule out other options.
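A self-contained toy showing the failure of the independence of irrelevant alternatives (all welfare numbers are invented; for simplicity it tracks named individuals across options, i.e. it’s a narrow rule, whereas a wide view would swap the shared-identity check for counterpart mappings):

```python
def soft_margin(opt1, opt2):
    """Margin by which opt1 beats opt2 under a toy soft-asymmetry rule.
    Each option is a dict {person: welfare}.  People who exist in both
    options are compared head-to-head; a person who exists in only one
    option counts against that option only if their welfare is negative."""
    shared = opt1.keys() & opt2.keys()
    m = sum(opt1[p] - opt2[p] for p in shared)
    m += sum(-w for p, w in opt2.items() if p not in shared and w < 0)
    m -= sum(-w for p, w in opt1.items() if p not in shared and w < 0)
    return m

def permissible(options):
    """An option is permissible iff no available option beats it."""
    return {name for name in options
            if not any(soft_margin(options[o], options[name]) > 0
                       for o in options if o != name)}

# Ann exists in every option; Bob only exists in A+ and B.
A     = {"ann": 10}
Aplus = {"ann": 10, "bob": 5}
B     = {"ann": 8, "bob": 8}

# Pairwise, A and A+ tie: Bob's happy existence counts neither way.
# But B beats A+ (both people exist in both), while A beats B (Ann is
# better off), so making B available breaks the tie between A and A+.
```

Here `permissible({A, A+})` leaves both options open, but adding B, which neither beats nor is beaten into relevance for the A-vs-A+ pairwise comparison, shrinks the permissible set to A alone: exactly the kind of option-set dependence that IIA forbids.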
> Which I don’t see a good reason to accept.
Even if they did imply this (I don’t think they do), the plausibility of the views would be at least a reason to accept the conclusion, right? Even if you have stronger reasons to reject it.
On more modest person-affecting views you might not be familiar with, I’d point you to
The Asymmetry, Uncertainty, and the Long Term by Teruji Thomas, and
Dasgupta’s approach discussed here:
http://users.ox.ac.uk/~sfop0060/pdf/Welfare%20economics%20of%20population.pdf
https://philpapers.org/rec/DASSAF-2