Okay. One question would be whether you share my intuitions in the case I posed to Brian Tomasik. For reference, here it is. “Hmm, this may be a case of divergent intuitions, but to me it seems very obvious that if we could make it so that at the end of people’s lives they have an experience of unfathomable bliss right before death, containing more well-being than the sum total of all positive experiences that humans have experienced so far, at the cost of one pinprick, it would be extremely good to do so. This case avoids the objection that well-being is only desirable instrumentally, because this is a form of well-being that would otherwise not even have been considered. That seems far more obvious than any more specific claims about the amount of well-being needed to offset a unit of suffering, particularly because of the trickiness of intuitions dealing with very large numbers.”
Before reflection, sure, that seems like a worthy trade.
But the trichotomy posed in “Three Types of NU,” which I noted in the second paragraph of my last comment, seems inescapable. Suppose I accept that it is morally good to inflict small pain along with lots of superhappiness, and reject lexicality (though I don’t think lexicality is off the table, despite the continuity arguments). Then I’d have to conclude that any degree of horrible experience has its price. That doesn’t just seem absurd; it flies in the face of what ethics just is to me. Sufficiently intense suffering seems morally serious in a way that nothing else is. If that doesn’t resonate with you, I’m stumped.
Well, I think I grasp the force of the initial intuition. I just abandon it upon reflection. I have a strong intuition that extreme suffering is very, very bad. I don’t have the intuition that its badness can’t be outweighed by anything else, regardless of what the other thing is.