Thanks for this perspective!

Am I understanding correctly that the distinction you outlined exists in preference utilitarianism, but not in hedonic utilitarianism? For example, if I were poofed away right now, from a hedonic utilitarian perspective, the only downside seems to be the prevention of the happy experiences I would otherwise have had.
Also, does your argument work symmetrically? For example, if I could choose between ending the torture of an existing person, and preventing the creation of a person who would have been tortured, would your argument give strong reason to choose the former?
Within the preference-oriented perspective of your comment, has there been any exploration of how strong the trade-off should be between the preferences of existing moral patients and the future preferences of future moral patients?
For a simplistic example, when choosing between saving an existing happy person’s life and creating 10 happy people, many consequentialists would prefer the latter. (The latter creates 10x as many happy life-years, which plausibly dominates the existing person’s preferences.) But even a 10-to-1 tradeoff would mean that preventing a happy person’s existence is 10% as bad as killing them, which is pretty bad!
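Spelled out as a quick back-of-the-envelope calculation (with v_c and v_s as my own illustrative placeholders for the value of creating one happy person and of saving one existing happy person):

```latex
% Illustrative placeholders (mine, for this example only):
%   v_c = value of creating one happy person
%   v_s = value of saving one existing happy person
% If the point of indifference is 10 created people per 1 person saved:
\[
  10\, v_c = v_s
  \quad\Longrightarrow\quad
  v_c = 0.1\, v_s
\]
% So preventing one happy person's existence forgoes 0.1 v_s,
% i.e. it is about 10% as bad as killing an existing happy person.
```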
Am I understanding correctly that the distinction you outlined exists in preference utilitarianism, but not in hedonic utilitarianism?
I drew two distinctions. Future-oriented preferences don’t seem to count in themselves under hedonic utilitarianism (the first distinction), but the fact that a being already exists with the circuitry and tendencies to experience pleasure (and/or suffering) could matter on some hedonic utilitarian views (the second distinction).
Also, does your argument work symmetrically? For example, if I could choose between ending the torture of an existing person, and preventing the creation of a person who would have been tortured, would your argument give strong reason to choose the former?
I think the arguments I gave don’t really say much either way about this. Views where future people don’t (really) matter in themselves, like presentism or necessitarianism, are compatible with those arguments (although I don’t give much weight to such views). On the other hand, you could defend similarly strong reasons and the procreation asymmetry based on actualism, Frick’s conditional reasons, or harm-minimization views; see this discussion. I think at least actualism is asymmetric in a non-question-begging way, and maybe Frick’s views are too, as I argue in the linked discussion.
Within the preference-oriented perspective of your comment, has there been any exploration of how strong the trade-off should be between the preferences of existing moral patients and the future preferences of future moral patients?
Within a single view, I can’t really see how you would ground any particular tradeoff ratio other than basically subjectively, i.e. by appeal to your own preferences about these tradeoffs. Across moral views, you could get something like this by maximizing expected choiceworthiness under moral uncertainty, with the right kind of intertheoretic comparisons between person-affecting views and total views (Greaves and Ord, 2017; another version), but it’s not clear what would ground such intertheoretic comparisons, because the views disagree about what makes something valuable. Something similar might work under other approaches to moral uncertainty, but there the ratio could depend on the choices available to you.
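As a rough sketch of that structure (the credences c_PA, c_T and choiceworthiness functions CW_PA, CW_T below are my illustrative notation, not anything the cited papers commit to):

```latex
% Maximizing expected choiceworthiness over two theories:
%   c_PA, c_T   = credences in a person-affecting view and a total view
%   CW_PA, CW_T = those views' choiceworthiness functions
\[
  EC(a) = c_{\mathrm{PA}}\, CW_{\mathrm{PA}}(a) + c_{\mathrm{T}}\, CW_{\mathrm{T}}(a)
\]
% The implied tradeoff between saving an existing happy person (option s)
% and creating a new happy person (option c) is then the ratio
\[
  \frac{EC(c)}{EC(s)}
  = \frac{c_{\mathrm{PA}}\, CW_{\mathrm{PA}}(c) + c_{\mathrm{T}}\, CW_{\mathrm{T}}(c)}
         {c_{\mathrm{PA}}\, CW_{\mathrm{PA}}(s) + c_{\mathrm{T}}\, CW_{\mathrm{T}}(s)},
\]
% which only has a determinate value once CW_PA and CW_T are put on a
% common scale, i.e. once the intertheoretic comparisons are fixed.
```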