Hmm, this may be a case of divergent intuitions, but to me it seems very obvious that if we could make it so that at the end of people’s lives they have an experience of unfathomable bliss right before death, containing more well-being than the sum total of all positive experiences that humans have experienced so far, at the cost of one pinprick, it would be extremely good to do so. This case avoids the objection that well-being is only desirable instrumentally, because it involves a form of well-being that would otherwise not even have been considered. That seems far more obvious to me than any more specific claim about the amount of well-being needed to offset a unit of suffering, particularly because of the trickiness of intuitions dealing with very large numbers.
You say: “But I think this framing really favors views according to which pleasure can outweigh suffering, because most ethicists feel that pleasure can outweigh suffering within a given life, but many of them do not think it’s right to harm one person for the greater benefit of another person.” I agree that this framing favors positive utilitarianism; however, I spend quite a while justifying it in my second most recent EA Forum post.
Finally, you say: “One possible resolution of our conflicting intuitions on these matters could be a quite suffering-focused version of weak NU.” I certainly update on the intuitions of negative utilitarians to place more weight on suffering avoidance than I otherwise would; however, even after updating on that, I still conclude that transhuman bliss could be good enough to offset torture. The badness of torture seems to be a fact about how extreme evolution has made the experience. However, it seems possible to create even more extreme positive experiences in a transhumanist world, where we could design experiences to be as good as physics allows. Additionally, I’d probably be more sympathetic to suffering reduction than most positive utilitarians are. I do think that the expected amount of torture in the future is smaller than the expected amount of transhuman bliss, largely for reasons laid out here.
Thanks for the replies. :)

“if we could make it so that at the end of people’s lives they have an experience of unfathomable bliss right before death, containing more well-being than the sum total of all positive experiences that humans have experienced so far, at the cost of one pinprick, it would be extremely good to do so.”
If people knew in advance that this would happen, it would relieve a great deal of suffering during people’s lives. People could be much less afraid of death because the very end of their lives would be so nice. I imagine that anxiety about death, and pain near the end of life without hope of things getting better, are some of the biggest sources of suffering in most people’s entire lives, so the suffering reduction here could be quite nontrivial.
So I think we’d have to specify that no one would know about this other than the person to whom it suddenly happened. In that case it still seems like something most people would probably strongly prefer. That said, the intuition in favor of it gets weaker if we specify that someone else would have to endure a pinprick with no compensation in order to provide this joy to a different person. And my intuition in favor of doing that is weaker than my intuition against torturing one person to create happiness for other people. (This brings up the open vs. empty individualism issue again, though.)
When astronomical quantities of happiness are involved, like one minute of torture to create a googol years of transhuman bliss, I begin to have some doubts about the anti-torture stance, in part because I don’t want to give in to scope neglect. That’s why I give some moral credence to strongly suffering-focused weak NU. That said, if I were personally facing this choice, I would still say: “No way. The bliss isn’t worth a minute of torture.” (If I were already in the throes of temptation after a taste of transhuman-level bliss, maybe I’d have a different opinion. Conversely, after the first few seconds of torture, I imagine many people might switch their opinions to saying they want the torture to stop no matter what.)
“I do think that the expected amount of torture in the future is smaller than the expected amount of transhuman bliss”
I agree, assuming we count their magnitudes the way that a typical classical utilitarian would. It’s plausible that the expected happiness of the future as judged by a typical classical utilitarian could be a few times higher than expected suffering, maybe even an order of magnitude higher. (Relative to my moral values, it’s obvious that the expected badness of the future will far outweigh the expected goodness—except in cases where a posthuman future would prevent lots of suffering elsewhere in the multiverse, etc.)
Hmm, this may be just a completely different intuition about suffering versus well-being. To me it seems obvious that an end-of-life pinprick for ungodly amounts of transhuman bliss would be worth it. Even updating on the intuitions of negative utilitarians, I still conclude that the amount of future transhuman bliss would outweigh the suffering of the future.
Sidenote, I really enjoy your blog and have cited you a bunch in high school debate.
“To me it seems obvious that an end-of-life pinprick for ungodly amounts of transhuman bliss would be worth it.”
I also have that intuition, probably even if someone else has to endure the pinprick without compensation. But my intuitions about the wrongness of “torture for bliss” are stronger, and if there’s a conflict between the intuitions, I’ll stick with the wrongness of “torture for bliss”.
Thanks for the kind words. :) I hope debate is fun.
When I reflect on the nature of torture, it seems obvious that it’s very bad. But I’m not sure how, from reflection on the experience alone, we can conclude that there’s no amount of positive bliss that could ever outweigh it. We literally can’t conceive of how good transhuman bliss might be, and any attempt to add up trillions of minor positive experiences seems very sensitive to scope neglect.
Your point that I simply can’t conceive of how good transhuman bliss might be is fair. :) I might indeed change my intuitions if I were to experience it (if that were possible; it’d require a lot of changes to my brain first). I guess we might change our intuitions about many things if we had more insight—e.g., maybe we’d decide that hedonic experience itself isn’t as important as some other things. There’s a question of to what extent we would regard these changes of opinion as moral improvements versus corruption of our original values.
I guess I don’t feel very motivated by the abstract thought that if I were better able to comprehend transhuman-level bliss I might better see how awesome it is and would therefore be more willing to accept the existence of some additional torture in order for more transhuman bliss to exist. I can see how some people might find that line of reasoning motivating, but to me, my reaction is: “No! Stop the extra torture! That’s so obviously the right thing to do.”
That’s true of your current intuitions, but I care about what we would care about if we were fully rational and informed. If there were bliss so good that ten minutes of it would be worth ten minutes of horrific torture, then creating this bliss for ungodly numbers of sentient beings seems like quite an important ethical priority.
Yeah, that’s a fair position to hold. :) The main reason I reject it is that my motivation to prevent torture is stronger than my motivation to care about how my values might change if I were to experience that bliss. Right now I feel the bliss isn’t that important, while torture is. I’d rather continue caring about the torture than allow my loyalty to those enduring horrible experiences to be compromised by starting to care about some new thing that I don’t currently find very compelling.
There’s always a bit of a tricky issue regarding when moral reflection counts as progress and when it counts as just changing your values in ways that your current values would not endorse. At one extreme, it seems that merely learning new factual information (e.g., better data about the number of organisms that exist) is something we should generally endorse. At the other extreme, undergoing neurosurgery or taking drugs to convince you of some different set of values (like the moral urgency of creating paperclips) is generally something we’d oppose. I think having new experiences (especially new experiences that would require rewiring my brain in order to have them) falls somewhere in the middle between these extremes. It’s unclear to me how much I should merely count it as new information versus how much I should see it as hijacking my current suffering-focused values. A new hedonic experience is not just new data but also changes one’s motivations to some degree.
The other problem with the idea of caring about what we would care about upon further reflection is that the outcome could be a lot of things, depending on exactly how the reflection process occurs. That’s not necessarily a reason against moral reflection at all, and I still like to do moral reflection, but it does at least reduce my feeling that moral reflection is definitely progress rather than just value drift.
Here’s an intuition pump: Is there any number of elegant scientific discoveries made in a Matrix, where no sentient beings at all would benefit from technologies derived from those discoveries, that would justify murdering someone? Scientific discoveries do seem valuable, and many people have the intuition that they’re valuable independent of their applications. But is it scope neglect to say that whatever their value, that value just couldn’t be commensurable with hedonic well-being? If not, what is the problem in principle with saying the same for happiness and suffering?
I don’t have the intuition that scientific discoveries are valuable independent of their use for sentient beings.

Fair enough; I don’t either. But there are some non-hedonic things that I have some intuition are valuable independent of hedonics—it’s just that I reject this intuition upon reflection (just as I reject, upon reflection, the intuition that happiness is valuable independent of relief of suffering). Is there anything other than hedonic well-being that you have an intuition is independently good or bad, even if you don’t endorse that intuition?

Yeah, to some degree I have egalitarian intuitions pre-reflection, and some other small non-utilitarian intuitions.
Regarding the example about bliss before death, there’s another complication if we give weight to preference satisfaction even when a person doesn’t know whether those preferences have been satisfied. I give a bit of weight to the value of satisfying preferences even if someone doesn’t know about it, based on analogies to my case. (For example, I prefer for the world to contain less suffering even if I don’t know that it does.)
Many people would prefer for the end of their lives to be wonderful, to experience something akin to heaven, etc, and adding the bliss at the end of their lives—even unbeknownst to them until it happened—would still satisfy those preferences. People might also have preferences like “I want to have a net happy life, even though I usually feel depressed” or “I want to have lots of meaningful experiences”, and those preferences would also be satisfied by adding the end-of-life bliss.
I get why that would appeal to a positive utilitarian, but I’m not sure why that would be relevant to a negative utilitarian’s view. Also, we could make it so that this only applies to babies who die before turning two, so that they don’t have sophisticated preferences about a net positive quality of life.
“but I’m not sure why that would be relevant to a negative utilitarian’s view”
People have preferences to have wonderful ends to their lives, to have net positive lives, etc. Those preferences may be frustrated by default (especially the first one; most people don’t have wonderful ends to their lives) but would no longer be frustrated once the bliss was added. People’s preferences regarding those things are typically much stronger than their preferences not to experience a single pinprick.
Good point about the babies. One might feel that babies and non-human animals still have implicit preferences for experiencing bliss in the future, but I agree that’s a more tenuous claim.