How asymmetric do you think things are? I tend to deprioritise s-risks (both accidental and intentional) because it seems like accidental and intentional suffering will be a very small portion of what our descendants choose to do with energy. In everyday cases I don’t feel much pull towards putting a lot of weight on suffering. But I feel more confused when we get to tail cases. Maximising pleasure intuitively feels meh to me, but maximising suffering sounds pretty awful. So I worry that (1) all of the value is in the tails, as per Power Laws of Value, and (2) on my intuitive moral tastes the good tails are not that great and the bad tails are really bad.
I think I’m ~100% on there being no non-instrumental benefit from creating moral patients, and pretty high on there being no non-instrumental benefit from creating new desires, preferences, values, etc. within existing moral patients. (I try to develop and explain my views in this sequence.)
I haven’t thought a lot about tradeoffs between suffering and other things, including pleasure, within moral patients that would exist anyway. I could see these tradeoffs going like they would for a classical utilitarian, if we hold an individual’s dispositions fixed.
To be clear, I’m a moral anti-realist (subjectivist), so I don’t think there’s any stance-independent fact about how asymmetric things should be.
Also, I’m curious if we can explain why you react like this:
Maximising pleasure intuitively feels meh to me, but maximising suffering sounds pretty awful
Some ideas: complexity of value but not disvalue? Or that the urgency of suffering is explained by the intensity of desire rather than by unpleasantness itself? Do you have any ideas?
(I have not read all of your sequence.) I’m confused how being even close to 100% on something like this is appropriate. My sense is generally just that population ethics is hard, that humans have somewhat weak minds in the space of possible minds, and that our later post-human views on ethics might be far more subtle or quite different.
I’m a moral anti-realist (subjectivist), so I don’t think there’s an objective (stance-independent) fact of the matter. I’m just describing what I would expect to continue to endorse under (idealized) reflection, which depends on my own moral intuitions. The asymmetry is one of my strongest moral intuitions, so I expect not to give it up, and if it conflicts with other intuitions of mine, I’d sooner give those up instead.