I think I’m ~100% on no non-instrumental benefit from creating moral patients. Also pretty high on no non-instrumental benefit from creating new desires, preferences, values, etc. within existing moral patients. (I try to develop and explain my views in this sequence.)
I haven’t thought a lot about tradeoffs between suffering and other things, including pleasure, within moral patients that would exist anyway. I could see these tradeoffs going like they would for a classical utilitarian, if we hold an individual’s dispositions fixed.
To be clear, I’m a moral anti-realist (subjectivist), so I don’t think there’s any stance-independent fact about how asymmetric things should be.
Also, I’m curious if we can explain why you react like this:
Maximising pleasure intuitively feels meh to me, but maximising suffering sounds pretty awful
Some ideas: complexity of value but not of disvalue? Or that the urgency of suffering is explained by the intensity of desire rather than by unpleasantness? Do you have any ideas?
(I have not read all of your sequence.) I’m confused about how being even close to 100% on something like this is appropriate. My sense is generally just that population ethics is hard, humans have somewhat weak minds in the space of possible minds, and our later post-human views on ethics might be far more subtle or quite different.
I’m a moral anti-realist (subjectivist), so I don’t think there’s an objective (stance-independent) fact of the matter. I’m just describing what I would expect to continue to endorse under (idealized) reflection, which depends on my own moral intuitions. The asymmetry is one of my strongest moral intuitions, so I expect not to give it up, and if it conflicts with other intuitions of mine, I’d sooner give those up instead.