You could also just replace everyone with beings with (much) more satisfied preferences on aggregate. Replacement or otherwise killing everyone against their preferences can be an issue for basically any utilitarian or consequentialist welfarist view that isn’t person-affecting, including symmetric total preference utilitarianism.
It’s also not a good model of human preferences concerning other people/beings/entities/things. On that I totally agree.
How about sneakily placing them into experience machines or injecting them with happiness drugs? Also, this “can be” is a pretty, uhhh, weird formulation.
Maybe not that on its own, but if you also change their preferences in the right ways, yes, on many preference views. See my post here.
This is one main reason why I’m inclined towards “preference-affecting” views. On such views, it’s good to satisfy preferences, but not good to create new satisfied preferences. If it were good to create new satisfied preferences, that could outweigh violating important preferences or totally changing people’s preferences.
What do you mean?
Well, it seems pretty central to such proposals, like “oh yeah, the only thing that matters is happiness minus suffering!” and then just serial bullet biting and/or strategic concessions. It’s just a remark, nothing important really.
Hmm, how about messing with which new agents will exist? Like, let’s say farms will create 100 chickens of breed #1 and mistreat them at level 10. But you can intervene and make it so that they will create 100 chickens of breed #2 and mistreat them at level 6. Does this possible action get some opinion from such systems?
Some such views would say it’s good, including a narrow asymmetric view like Pummer’s (2024), applied to preferences, negative utilitarianism, and wide preference-affecting views. On wide preference-affecting views, it’s better to have a more satisfied preference than a different, less satisfied one, and so a better-off individual than a different, worse-off one (like the wide person-affecting views in Meacham (2012) and Thomas (2019)).
I’m most attracted to narrow asymmetric views like Pummer’s, because I think they handle replacement and preference change best. I’m working on some similar views myself.
Strict narrow preference-affecting views, like presentism and necessitarianism (wrt preferences), would be indifferent. Presentism would only care about the preferences that already exist, but none of these chickens exist yet, so they wouldn’t count. Necessitarianism would only care about the preferences that will exist either way, but none of the chickens’ preferences would exist either way, because different chickens would exist between the two outcomes.
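To make that indifference concrete, here’s a minimal sketch (my own toy formalization with hypothetical chicken IDs, not anything from the literature): a necessitarian-style score sums welfare only over individuals who would exist in every outcome, and since no breed #1 chicken is identical to any breed #2 chicken, both options score zero:

```python
# Toy necessitarianism (wrt preferences/welfare), with hypothetical chicken IDs.
# Only individuals who would exist in *every* outcome count.

outcome_1 = {f"breed1_chick_{i}": -10 for i in range(100)}  # 100 chickens, mistreatment level 10
outcome_2 = {f"breed2_chick_{i}": -6 for i in range(100)}   # 100 *different* chickens, level 6

necessary = outcome_1.keys() & outcome_2.keys()  # individuals existing either way: empty set here

def necessitarian_score(outcome):
    # Sum (dis)welfare only over the necessary individuals.
    return sum(outcome[i] for i in necessary)

print(necessitarian_score(outcome_1), necessitarian_score(outcome_2))  # 0 0 -> indifferent
```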
I have some related discussion here.
Also, how do you balance actions that make for less suffering vs. fewer sufferers? Like, you also have another possible action to make it such that farms will create only 70 chickens of breed #3 and mistreat them at level 10. How do you think about it comparatively? Like, how does it cash out, for chickens? Because it’s a pretty practical problem.
Btw, thanks for the links, I’ll check them out.
This will depend on the specific view, and person-affecting views and preference-affecting views can be pretty tricky/technical, in part because they usually violate the independence of irrelevant alternatives. I’d direct you to Pummer’s paper.
I also think existing person-affecting and preference-affecting views usually do badly when choosing between more than two options, and I’m working on what I hope is a better approach.
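To show how little machinery the simple case needs, here’s a toy comparison using only the numbers from your example, under a suffering-minimizing total view (an illustration, not Pummer’s view; as I said, narrow and wide preference-affecting views are menu-dependent and don’t reduce to a single per-option score like this):

```python
# Toy totals: option -> count * per-chicken mistreatment level.
options = {
    "breed #1": 100 * 10,  # 1000
    "breed #2": 100 * 6,   # 600
    "breed #3": 70 * 10,   # 700
}

# A suffering-minimizing total view ranks by ascending total suffering:
for name, total in sorted(options.items(), key=lambda kv: kv[1]):
    print(name, total)
# breed #2 600   <- best: less suffering per chicken beats fewer sufferers here
# breed #3 700
# breed #1 1000  <- worst
```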
It’s kind of disappointing that it’s not concrete enough to cash out even for such a simple and isolated decision.
I also checked out the Pummer lecture, and it’s a kind of weird feeling, but I think he doesn’t disambiguate between “let’s make my/our preferences more coherent” and “let’s figure out how to make social contracts/coordination mechanisms/institutions more efficient and good”. It’s disappointing.
I’d guess that there are concrete enough answers (although you may need to provide more info), but there are different views with different approaches, and there’s some tricky math involved in many of them.
Pummer is aiming at coherent preferences (moral views), not social contract/coordination mechanisms/institutions. It’s a piece of foundational moral philosophy, not an applied piece.
Do you think his view isn’t concrete enough, specifically? What would you expect?
>I’d guess that there are concrete enough answers (although you may need to provide more info), but there are different views with different approaches, and there’s some tricky math involved in many of them.
Yeah, I’m tempted to write a post here with the chicken setup and collect the answers of different people, maybe with some control questions like “would you press a button that instantaneously and painlessly kills all life on Earth?”, so I’d have a reason to disregard them without reading. But, eh.
>Pummer is aiming at coherent preferences (moral views), not social contract/coordination mechanisms/institutions.
And my opinion is that he’s confused about what are values and what are coordination problems, so he tries to bake solutions to coordination problems into values. I’m fine with the level of concreteness he operates under; it’s not like I had high expectations of academic philosophy.