Your point that I simply can’t conceive of how good transhuman bliss might be is fair. :) I might indeed change my intuitions if I were to experience it (if that were possible; it’d require a lot of changes to my brain first). I guess we might change our intuitions about many things if we had more insight—e.g., maybe we’d decide that hedonic experience itself isn’t as important as some other things. There’s a question of to what extent we would regard these changes of opinion as moral improvements versus corruption of our original values.
I guess I don’t feel very motivated by the abstract thought that if I were better able to comprehend transhuman-level bliss, I might better see how awesome it is and would therefore be more willing to accept the existence of some additional torture in order for more transhuman bliss to exist. I can see how some people might find that line of reasoning motivating, but my own reaction is: “No! Stop the extra torture! That’s so obviously the right thing to do.”
That’s true of your current intuitions, but I care about what we would care about if we were fully rational and informed. If there were bliss so good that it would be worth enduring ten minutes of horrific torture in exchange for ten minutes of this bliss, it seems that creating this bliss for ungodly numbers of sentient beings is quite an important ethical priority.
Yeah, that’s a fair position to hold. :) The main reason I reject it is that my motivation to prevent torture is stronger than my motivation to care about how my values might change if I were to experience that bliss. Right now I feel the bliss isn’t that important, while torture is. I’d rather continue caring about the torture than allow my loyalty to those enduring horrible experiences to be compromised by starting to care about some new thing that I don’t currently find very compelling.
There’s always a tricky issue regarding when moral reflection counts as progress and when it counts as just changing your values in ways your current values would not endorse. At one extreme, merely learning new factual information (e.g., better data about the number of organisms that exist) is something we should generally endorse. At the other extreme, undergoing neurosurgery or taking drugs that convince you of some different set of values (like the moral urgency of creating paperclips) is generally something we’d oppose. I think having new experiences (especially ones that would require rewiring my brain in order to have them) falls somewhere between these extremes. It’s unclear to me how much I should count such an experience as new information versus how much I should see it as hijacking my current suffering-focused values. A new hedonic experience is not just new data but also changes one’s motivations to some degree.
The other problem with the idea of caring about what we would care about upon further reflection is that the outcome could be many different things depending on exactly how the reflection process occurs. That’s not necessarily an argument against moral reflection, and I still like to do it, but it does reduce my confidence that moral reflection is definitely progress rather than just value drift.