I should note that the main thing I take issue with in that quote of MacAskill’s is the general (and AFAICT unargued) statement that “any argument for the first claim would also be a good argument for the second”. I think there are many arguments about which that statement is not true (some of which are reviewed in Gloor, 2016; Vinding, 2020, ch. 3; Animal Ethics, 2021).
As for the particular argument of mine that you quote, I admit that a lot of work was deferred to the associated links and references. I think there are various ways to unpack and support that line of argument.
One of them rests on the intuition that ethics is about solving problems (an intuition that one may or may not share, of course).[1] If one shares that moral intuition, or premise, then it seems plausible to say that the presence of suffering or miserable lives amounts to a problem, or a problematic state, whereas the absence of pleasure or pleasurable lives does not (other things equal) amount to a problem for anyone, or to a problematic state. That line of argument (whose premises may be challenged, to be sure) does not appear “flippable” such that it becomes a similarly plausible argument in favor of any supposed goodness of creating a happy life.
Alternatively, or additionally, one can support this line of argument by appealing to specific cases and thought experiments, such as the following (sec. 1.4):
we would rightly rush to send an ambulance to help someone who is enduring extreme suffering, yet not to boost the happiness of someone who is already doing well, no matter how much we may be able to boost it. … Similarly, if we were in the possession of pills that could raise the happiness of those who are already happy to the greatest heights possible, there would be no urgency in distributing these pills, whereas if a single person fell to the ground in unbearable agony right before us, there would indeed be an urgency to help.
… if a person is in a state of dreamless sleep rather than a state of ecstatic happiness, this cannot reasonably be characterized as a disaster or a catastrophe. The difference between these two states does not carry great moral weight. By contrast, the difference between sleeping and being tortured does carry immense moral weight, and the realization of torture rather than sleep would indeed amount to a catastrophe. Being forced to endure torture rather than dreamless sleep, or an otherwise neutral state, would be a tragedy of a fundamentally different kind than being forced to “endure” a neutral state instead of a state of maximal bliss.
These cases also don’t seem “flippable” with similar plausibility. And the same applies to Epicurean/Buddhist/minimalist views of wellbeing and value.
An alternative is to speak in terms of urgency vs. non-urgency, as Karl Popper, Thomas Metzinger, and Jonathan Leighton have done, cf. Vinding, 2020, sec. 1.4.
I’m not sure how I feel about relying on intuitions in thought experiments such as those. I don’t necessarily trust my intuitions.
If you’d asked me 5-10 years ago whose life is more valuable, an average pig’s life or a severely mentally-challenged human’s life, I would have said the latter without a thought. Now I happen to think it is likely to be the former. Before, I was going off pure intuition. Now I am going off developed philosophical arguments, such as the one Singer outlines in his book Animal Liberation, as well as some empirical facts.
My point is that when I’m deciding whether or not the absence of pleasure is problematic, I would prefer for there to be some philosophical argument for why or why not, rather than examples that show that my intuition goes against this. You could argue that such arguments don’t really exist, and that all ethical judgement relies on intuition to some extent, but I’m a bit more hopeful. For example, Michael St Jules’ comment is along these lines and is interesting.
On a really basic level my philosophical argument would be that suffering is bad, and pleasure is good (the most basic of ethical axioms that we have to accept to get consequentialist ethics off the ground). Therefore creating pleasure is good (and one way of doing so is to create new happy lives), and reducing suffering is also good. Adding caveats to this such as ‘pleasure is only good if it accrues to an already existing being’ just seems to be somewhat ad hoc / going against Occam’s Razor / trying to justify an intuition one already holds which may or may not be correct.
On a really basic level my philosophical argument would be that suffering is bad, and pleasure is good (the most basic of ethical axioms that we have to accept to get consequentialist ethics off the ground).
It seems like you’re just relying on your intuition that pleasure is intrinsically good, and calling that an axiom we have to accept. I don’t think we have to accept that at all — rejecting it does have some counterintuitive consequences, I won’t deny that, but so does accepting it. It’s not at all obvious (and Magnus’s post points to some reasons we might favor rejecting this “axiom”).
Would you say that saying suffering is bad is a similar intuition?

No, I know of no thought experiments or any arguments generally that make me doubt that suffering is bad. Do you?

Well if you think suffering is bad and pleasure is not good then the counterintuitive (to the vast majority of people) conclusion is that we should (painlessly if possible, but probably painfully if necessary) ensure everyone gets killed off so that we never have any suffering again.
It may well be true that we should ensure everyone gets killed off, but this is certainly an argument that many find compelling against the dual claim that suffering is bad and pleasure is not good.
That case does run counter to “suffering is intrinsically bad but happiness isn’t,” but it doesn’t run counter to “suffering is bad,” which is what your last comment asked about. I don’t see any compelling reasons to doubt that suffering is bad, but I do see some compelling reasons to doubt that happiness is good.
That’s just an intuition, no? (i.e. that everyone painlessly dying would be bad.) I don’t really understand why you want to call it an “axiom” that happiness is intrinsically good, as if this is stronger than an intuition, which seemed to be the point of your original comment.
See this post for why I don’t think the case you presented is decisive against the view I’m defending.
What is your compelling reason to doubt happiness is good? Is it thought experiments such as the ones Magnus has put forward? I think these argue that alleviating suffering is more pressing than creating happiness, but I don’t think these argue that creating happiness isn’t good.
I do happen to think suffering is bad, but here is a potentially reasonable counterargument: some people think that suffering is what makes life meaningful. For example, some think the idea of drugs being widespread, relieving everyone of all pain all the time, is monstrous. People’s children would get killed and the parents just wouldn’t feel any negative emotion; this seems a bit wrong...
You could try to use your Pareto improvement argument here, i.e. that it’s better if parents still have a preference for their child not to have been killed, but also not to feel any sort of pain related to it. Firstly, I do think many people would want there to be some pain in this situation and that they would think of a lack of pain as disrespectful and grotesque. Otherwise I’m slightly confused about one having a preference that the child wasn’t killed, but also not feeling any sort of hedonic pain about it...is this contradictory?
As I said I do think suffering is bad, but I’m yet to be convinced this is less of a leap of faith than saying happiness is good.

Say there is a perfectly content monk who isn’t suffering at all. Do you have a moral obligation to make them feel pleasure?

It would certainly be a good thing to do. And if I could do it costlessly I think I would see it as an obligation, although I’m slightly fuzzy on the concept of moral obligations in the first place.
In reality however there would be an opportunity cost. We’re generally more effective at alleviating suffering than creating pleasure, so we should generally focus on doing the former.
To modify the monk case, what if we could (costlessly; all else equal) make the solitary monk feel a notional 11 units of pleasure followed by 10 units of suffering?
Or, extreme pleasure of “+1001” followed by extreme suffering of “-1000”?
Cases like these make me doubt the assumption of happiness as an independent good. I know meditators who claim to have learned to generate pleasure at will in jhana states, who don’t buy the hedonic arithmetic, and who prefer states of unexcited contentment over states of intense pleasure.
So I don’t want to impose, from the outside, assumptions about the hedonic arithmetic onto mind-moments who may not buy them from the inside.
Additionally, I feel no personal need for the concept of intrinsic positive value anymore, because all my perceptions of positive value seem perfectly explicable in terms of their indirect connections to subjective problems. (I used to use the concept, and it took me many years to translate it into relational terms in all the contexts where it pops up, but I seem to have now uprooted it so that it no longer pops to mind, or at least it stopped doing so over the past four years. In programming terms, one could say that uprooting the concept entailed refactoring a lot of dependencies on other concepts, but eventually the tab explosion started shrinking back down again, and it appeared perfectly possible to think without the concept. It would be interesting to hear whether this has simply “clicked” for anyone when reading analytical thought experiments, because for me it felt more like how I imagine a crisis of faith would feel for a person who loses their faith in a <core concept>, including the possibly arduous cognitive task of learning to fill the void and seeing what roles the concept played.)
I’m not sure if “pleasure” is the right word. I certainly think that improving one’s mental state is always good, even if this starts at a point in which there is no negative experience at all.
This might not involve increasing “pleasure”. Instead it could be increasing the amount of “meaning” felt or “love” felt. If monks say they prefer contentment over intense pleasure then fine—I would say the contentment state is hedonically better in some way.
This is probably me defining “hedonically better” differently to you but it doesn’t really matter. The point is I think you can improve the wellbeing of someone who is experiencing no suffering and that this is objectively a desirable thing to do.
I think one crux here is that Teo and I would say, calling an increase in the intensity of a happy experience “improving one’s mental state” is a substantive philosophical claim. The kind of view we’re defending does not say something like, “Improvements of one’s mental state are only good if they relieve suffering.” I would agree that that sounds kind of arbitrary.
The more defensible alternative is that replacing contentment (or absence of any experience) with increasingly intense happiness / meaning / love is not itself an improvement in mental state. And this follows from intuitions like “If a mind doesn’t experience a need for change (and won’t do so in the future), what is there to improve?”
Can you elaborate a bit on why the seemingly arbitrary view you quoted in your first paragraph wouldn’t follow from the view that you and Teo are defending? Are you saying that from your and Teo’s POVs, there’s a way to ‘improve a mental state’ that doesn’t amount to decreasing suffering (/preventing it)? The statement itself seems a bit odd, since ‘improvement’ seems to imply ‘goodness’, and the statement hypothetically considers situations where improvements may not be good... so I thought I would see if you could clarify.
In regards to the ‘defensible alternative’, it seems that one could defend a plausible view that a state of contentment, moved to a state of increased bliss, is indeed an improvement, even though there wasn’t a need for change. Such an understanding seems plausible in a self-intimating way when one valence state transitions to the next, insofar as we concede that there are states of more or less pleasure outside of negatively valenced states. It seems that one could do this all the while maintaining that such improvements are never capable of outweighing the mitigation of problematic, suffering states. (Note: using the term ‘improvement’ can easily lead to accidental equivocation between scenarios of mitigating suffering versus increasing pleasure, but the ethical discernment between each seems manageable.)
Are you saying that from your and Teo’s POVs, there’s a way to ‘improve a mental state’ that doesn’t amount to decreasing suffering (/preventing it)?
No, that’s precisely what I’m denying. So, the reason I mentioned that “arbitrary” view was that I thought Jack might be conflating my/Teo’s view with one that (1) agrees that happiness intrinsically improves a mental state, but (2) denies that improving a mental state in this particular way is good (while improving a mental state via suffering-reduction is good).
Such an understanding seems plausible in a self-intimating way when one valence state transitions to the next, insofar as we concede that there are states of more or less pleasure outside of negatively valenced states.
It’s prima facie plausible that there’s an improvement, sure, but upon reflection I don’t think my experience that happiness has varying intensities implies that moving from contentment to more intense happiness is an improvement. Analogously, you can increase the complexity and artistic sophistication of some painting, say, but if no one ever observes it (which I’m comparing to no one suffering from the lack of more intense happiness), there’s no “improvement” to the painting.
It seems that one could do this all the while maintaining that such improvements are never capable of outweighing the mitigation of problematic, suffering states.
You could, yeah, but I think “improvement” has such a strong connotation to most people that something of intrinsic value has been added. So I’d worry that using that language would be confusing, especially to welfarist consequentialists who think (as seems really plausible to me) that you should do an act to the extent that it improves the state of the world.
Okay, thanks for clarifying for me! I think I was confused by that opening line, where you clarified that your view does not say that only a relief of suffering improves a mental state. In reality, you do think that is the case, just not in conjunction with the claim that happiness also intrinsically improves a mental state, correct?
>Analogously, you can increase the complexity and artistic sophistication of some painting, say, but if no one ever observes it (which I’m comparing to no one suffering from the lack of more intense happiness), there’s no “improvement” to the painting.
With respect to this, I should have clarified that the state of contentment that becomes a more intense positive state was one of an existing and experiencing being, not a content state of non-existence into which pleasure is then brought into existence. Given the former, would the painting analogy hold, since in this thought experiment there is an experiencer who has some sort of improvement in their mental state, albeit not a categorical sort of improvement on par with the sort that relieves suffering? I.e., it wasn’t a problem per se (no suffering) that they were being deprived of the more intense pleasure, but the move from lower pleasure to higher pleasure is still an improvement in some way (albeit perhaps a better word would be needed to distinguish the lexical importance between these sorts of *improvements*).
Is it thought experiments such as the ones Magnus has put forward? I think these argue that alleviating suffering is more pressing than creating happiness, but I don’t think these argue that creating happiness isn’t good.
I think they do argue that creating happiness isn’t intrinsically good, because you can always construct a version of the Very Repugnant Conclusion that applies to a view that says suffering is weighed some finite X times more than happiness, and I find those versions almost as repugnant. E.g. suppose that on classical utilitarianism we prefer to create 100 purely miserable lives plus some large N micro-pleasure lives over creating 10 purely blissful lives. On this new view, we’d prefer to create 100 purely miserable lives plus X*N micro-pleasure lives over the 10 purely blissful lives. Another variant you could try is a symmetric lexical view where only sufficiently blissful experiences are allowed to outweigh misery. But while some people find that dissolves the repugnance of the VRC, I can’t say the same.
Increasing the X, or introducing lexicalities, to try to escape the VRC just misses the point, I think. The problem is that (even super-awesome/profound) happiness is treated as intrinsically commensurable with miserable experiences, as if giving someone else happiness in itself solves the miserable person’s urgent problem. That’s just fundamentally opposed to what I find morally compelling.
(I like the monk example given in the other response to your question, anywho. I’ve written about why I find strong SFE compelling elsewhere, like here and here.)
You could try to use your Pareto improvement argument here, i.e. that it’s better if parents still have a preference for their child not to have been killed, but also not to feel any sort of pain related to it.
Yeah, that is indeed my response; I have basically no sympathy to the perspective that considers the pain intrinsically necessary in this scenario, or any scenario. This view seems to clearly conflate intrinsic with instrumental value. “Disrespect” and “grotesqueness” are just not things that seem intrinsically important to me, at all.
having a preference that the child wasn’t killed, but also not feeling any sort of hedonic pain about it...is this contradictory?
Depends how you define a preference, I guess, but the point of the thought experiment is to suspend your disbelief about the flow-through effects here. Just imagine that literally nothing changes about the world other than that the suffering is relieved. This seems so obviously better than the default that I’m at a loss for a further response.
“I have basically no sympathy to the perspective that considers the pain intrinsically necessary in this scenario, or any scenario.”
I wasn’t expecting you to. I don’t have any sympathy for it either! I was just giving you an argument that I suspect many others would find compelling. Certainly if my sister died and I didn’t feel anything, my parents wouldn’t like that!
Maybe it’s not particularly relevant to you if an argument is considered compelling by others, but I wanted to raise it just in case. I certainly don’t expect to change your mind on this—nor do I want to as I also think suffering is bad! I’m just not sure suffering being bad is a smaller leap than saying happiness is good.
Here’s another way of saying my objection to your original comment: What makes “happiness is intrinsically good” more of an axiom than “sufficiently intense suffering is morally serious in a sense that happiness (of the sort that doesn’t relieve any suffering) isn’t, so the latter can’t compensate for the former”? I don’t see what answer you can give that doesn’t appeal to intuitions about cases.
For all practical purposes suffering is dispreferred by beings who experience it, as you know, so I don’t find this to be a counterexample. When you say you don’t want someone to make you less sad about the problems in the world, it seems like a Pareto improvement would be to relieve your sadness without changing your motivation to solve those problems—if you agree, it seems you should agree the sadness itself is intrinsically bad.
This response is a bit weird to me because the linked post has two counter-examples and you only answered one, but I feel like the other still applies.
The other thought experiment mentioned in the piece is that of a cow separated from her calf and the two bovines being distressed by this. Michael says (and I’m sympathetic) that the moral action here is to fulfill the bovines preferences to be together, not remove their pain at separation without fulfilling that preference (e.g. through drugging the cows into bliss).
Your response about Pareto Improvements doesn’t seem to work here, or seems less intuitive to me at least. Removing their sadness at separation while leaving their desire to be together intact isn’t a clear Pareto improvement unless one already accepts that pain is what is bad. And it is precisely the imagining of a separated cow/calf duo drugged into happiness but wanting one another that makes me think maybe it isn’t the pain that matters.
I didn’t directly respond to the other one because the principle is exactly the same. I’m puzzled that you think otherwise.
Removing their sadness at separation while leaving their desire to be together intact isn’t a clear Pareto improvement unless one already accepts that pain is what is bad.
I mean, in thought experiments like this all one can hope for is to probe intuitions that you either do or don’t have. It’s not question-begging on my part because my point is: Imagine that you can remove the cow’s suffering but leave everything else practically the same. (This, by definition, assesses the intrinsic value of relieving suffering.) How could that not be better? It’s a Pareto improvement because, contra the “drugged into happiness” image, the idea is not that you’ve relieved the suffering but thwarted the cow’s goal to be reunited with its child; the goals are exactly the same, but the suffering is gone, and it just seems pretty obvious to me that that’s a much better state of the world.
Sticking with the cow example, I agree with you that if we removed their pain at being separated while leaving the desire to be together intact, this seems like a Pareto improvement over not removing their pain.
A preferentist would insist here that the removal of pain is not what makes that situation better, but rather that pain is (probably) dispreferred by the cows, so removing it gives them something they want.
But the negative hedonist (pain is bad, pleasure is neutral) is stuck with saying that the “drugged into happiness” image is as good as the “cows happily reunited” image. A preferentist by contrast can (I think intuitively) assert that reuniting the cows is better than just removing their pain, because reunification fulfills (1) the cows’ desire to be free of pain and (2) their desire to be together.
I don’t have settled views on whether or not suffering is necessarily bad in itself.
That someone (or almost everyone) disprefers suffering doesn’t mean suffering is bad in itself. Even if people always disprefer less pleasure, it wouldn’t follow that the absence of pleasure is bad in itself. Even those with symmetric views wouldn’t say so; they’d say its absence is neutral and its presence is good and better. We wouldn’t say dispreferring suffering makes the absence of suffering an intrinsic good.
I’m sympathetic to a more general “relative-only” view according to which suffering is an evaluative impression against the state someone is in relative to an “empty” state or nonexistence, so a kind of self-undermining evaluation. Maybe this is close enough to intrinsic badness and can be treated like intrinsic badness, but it doesn’t seem to actually be intrinsic badness. I think Frick’s approach, Bader’s approach and Actualism, each applied to preferences that are “relative only” rather than whole lives, could still imply that worlds with less suffering are better and some lives with suffering are better not started, all else equal, while no lives are better started, all else equal.
This is compatible with the reason we suffer sometimes being because of mere relative evaluations between states of the world without being “against” the current state or things being worse than nothing.
It seems that a hedonist would need to say that removing my motivation is no harm to me personally, either (except for instrumental reasons), but that violates an interest of mine so seems wrong to me. This doesn’t necessarily count against suffering being bad in itself or respond to your proposed Pareto improvement, it could just count against only suffering mattering.
With respect to your last paragraph, someone who holds a person-affecting view might respond that you have things backwards (indeed, this is what Frick claims): welfare matters because moral patients matter, rather than the other way around, so you need to put the person first, and something something therefore person-affecting view! Then we could discuss what welfare means, and that could be more pleasure and less suffering, or something else.
That being said, this seems kind of confusing to me, too. Welfare matters because moral patients matter, but moral patients are, in my view, just those beings capable of welfare. So, welfare had to come first anyway, and we just added extra steps.
I suspect this can be fixed by dealing directly with interests themselves as the atoms that matter, rather than entire moral patients. E.g. preference satisfaction matters because preferences matter, and something something therefore preference-affecting view! I think such an account would deny that giving even existing people more pleasure is good in itself: they’d need to have an interest in more pleasure for it to make them better off. Maybe we always do have such an interest by our nature, though, and that’s something someone could claim, although I find that unintuitive.
Another response may just be that value is complex, and we shouldn’t give too much extra weight to simpler views just because they’re simpler. That can definitely go even further, e.g. welfare is not cardinally measurable or nothing matters. Also, I think only suffering (or only pleasure) mattering is actually in some sense a simpler view than both suffering and pleasure mattering, since with both, you need to explain why each matters and tradeoffs between them. Some claim that symmetric hedonism is not value monistic at all.
Thanks for your question, Michael :)
I should note that the main thing I take issue with in that quote of MacAskill’s is the general (and AFAICT unargued) statement that “any argument for the first claim would also be a good argument for the second”. I think there are many arguments about which that statement is not true (some of which are reviewed in Gloor, 2016; Vinding, 2020, ch. 3; Animal Ethics, 2021).
As for the particular argument of mine that you quote, I admit that a lot of work was deferred to the associated links and references. I think there are various ways to unpack and support that line of argument.
One of them rests on the intuition that ethics is about solving problems (an intuition that one may or may not share, of course).[1] If one shares that moral intuition, or premise, then it seems plausible to say that the presence of suffering or miserable lives amounts to a problem, or a problematic state, whereas the absence of pleasure or pleasurable lives does not (other things equal) amount to a problem for anyone, or to a problematic state. That line of argument (whose premises may be challenged, to be sure) does not appear “flippable” such that it becomes a similarly plausible argument in favor of any supposed goodness of creating a happy life.
Alternatively, or additionally, one can support this line of argument by appealing to specific cases and thought experiments, such as the following (sec. 1.4):
These cases also don’t seem “flippable” with similar plausibility. And the same applies to Epicurean/Buddhist/minimalist views of wellbeing and value.
An alternative is to speak in terms of urgency vs. non-urgency, as Karl Popper, Thomas Metzinger, and Jonathan Leighton have done, cf. Vinding, 2020, sec. 1.4.
I’m not sure how I feel about relying on intuitions in thought experiments such as those. I don’t necessarily trust my intuitions.
If you’d asked me 5-10 years ago whose life is more valuable: an average pig’s life or a severely mentally-challenged human’s life I would have said the latter without a thought. Now I happen to think it is likely to be the former. Before I was going off pure intuition. Now I am going off developed philosophical arguments such as the one Singer outlines in his book Animal Liberation, as well as some empirical facts.
My point is when I’m deciding if the absence of pleasure is problematic or not I would prefer for there to be some philosophical argument why or why not, rather than examples that show that my intuition goes against this. You could argue that such arguments don’t really exist, and that all ethical judgement relies on intuition to some extent, but I’m a bit more hopeful. For example Michael St Jules’ comment is along these lines and is interesting.
On a really basic level my philosophical argument would be that suffering is bad, and pleasure is good (the most basic of ethical axioms that we have to accept to get consequentialist ethics off the ground). Therefore creating pleasure is good (and one way of doing so is to create new happy lives), and reducing suffering is also good. Adding caveats to this such as ‘pleasure is only good if it accrues to an already existing being’ just seems to be somewhat ad hoc / going against Occam’s Razor / trying to justify an intuition one already holds which may or may not be correct.
It seems like you’re just relying on your intuition that pleasure is intrinsically good, and calling that an axiom we have to accept. I don’t think we have to accept that at all — rejecting it does have some counterintuitive consequences, I won’t deny that, but so does accepting it. It’s not at all obvious (and Magnus’s post points to some reasons we might favor rejecting this “axiom”).
Would you say that saying suffering is bad is a similar intuition?
No, I know of no thought experiments or any arguments generally that make me doubt that suffering is bad. Do you?
Well if you think suffering is bad and pleasure is not good then the counterintuitive (to the vast majority of people) conclusion is that we should (painlessly if possible, but probably painfully if necessary) ensure everyone gets killed off so that we never have any suffering again.
It may well be true that we should ensure everyone gets killed off, but this is certainly an argument that many find compelling against the dual claim that suffering is bad and pleasure is not good.
That case does run counter to “suffering is intrinsically bad but happiness isn’t,” but it doesn’t run counter to “suffering is bad,” which is what your last comment asked about. I don’t see any compelling reasons to doubt that suffering is bad, but I do see some compelling reasons to doubt that happiness is good.
That’s just an intuition, no? (i.e. that everyone painlessly dying would be bad.) I don’t really understand why you want to call it an “axiom” that happiness is intrinsically good, as if this is stronger than an intuition, which seemed to be the point of your original comment.
See this post for why I don’t think the case you presented is decisive against the view I’m defending.
What is your compelling reason to doubt happiness is good? Is it thought experiments such as the ones Magnus has put forward? I think these argue that alleviating suffering is more pressing than creating happiness, but I don’t think these argue that creating happiness isn’t good.
I do happen to think suffering is bad, but here is a potentially reasonable counterargument—some people think that suffering is what makes life meaningful. For example some think of the idea of drugs being widespread, alleviating everyone of all pain all the time, is monstrous. People’s children would get killed and the parents just wouldn’t feel any negative emotion—this seems a bit wrong...
You could try to use your Pareto improvement argument here, i.e. that it’s better if parents still have a preference for their child not to have been killed, but also don’t feel any sort of pain related to it. First, I do think many people would want there to be some pain in this situation, and that they would think of a lack of pain as disrespectful and grotesque. Second, I’m slightly confused about one having a preference that the child not have been killed while also not feeling any sort of hedonic pain about it... is this contradictory?
As I said I do think suffering is bad, but I’m yet to be convinced this is less of a leap of faith than saying happiness is good.
Say there is a perfectly content monk who isn’t suffering at all. Do you have a moral obligation to make them feel pleasure?
It would certainly be a good thing to do. And if I could do it costlessly I think I would see it as an obligation, although I’m slightly fuzzy on the concept of moral obligations in the first place.
In reality however there would be an opportunity cost. We’re generally more effective at alleviating suffering than creating pleasure, so we should generally focus on doing the former.
To modify the monk case, what if we could (costlessly; all else equal) make the solitary monk feel a notional 11 units of pleasure followed by 10 units of suffering?
Or, extreme pleasure of “+1001” followed by extreme suffering of “-1000”?
Cases like these make me doubt the assumption of happiness as an independent good. I know meditators who claim to have learned to generate pleasure at will in jhana states, who don’t buy the hedonic arithmetic, and who prefer states of unexcited contentment over states of intense pleasure.
So I don’t want to impose, from the outside, assumptions about the hedonic arithmetic onto mind-moments who may not buy them from the inside.
Additionally, I feel no personal need for the concept of intrinsic positive value anymore, because all my perceptions of positive value seem perfectly explicable in terms of their indirect connections to subjective problems. (I used to use the concept, and it took me many years to translate it into relational terms in all the contexts where it pops up, but I seem to have now uprooted it so that it no longer pops to mind, or at least it stopped doing so over the past four years. In programming terms, one could say that uprooting the concept entailed refactoring a lot of dependencies on other concepts, but eventually the tab explosion started shrinking back down again, and it appeared perfectly possible to think without the concept. It would be interesting to hear whether this has simply “clicked” for anyone upon reading analytical thought experiments, because for me it felt more like how I would imagine a crisis of faith feels for a person who loses their faith in a <core concept>, including the possibly arduous cognitive task of learning to fill the void and seeing what roles the concept played.)
I’m not sure if “pleasure” is the right word. I certainly think that improving one’s mental state is always good, even if this starts at a point in which there is no negative experience at all.
This might not involve increasing “pleasure”. Instead it could be increasing the amount of “meaning” felt or “love” felt. If monks say they prefer contentment over intense pleasure then fine—I would say the contentment state is hedonically better in some way.
This is probably me defining “hedonically better” differently to you but it doesn’t really matter. The point is I think you can improve the wellbeing of someone who is experiencing no suffering and that this is objectively a desirable thing to do.
Relevant recent posts:
https://www.simonknutsson.com/undisturbedness-as-the-hedonic-ceiling/
https://centerforreducingsuffering.org/phenomenological-argument/
(I think these unpack a view I share, better than I have.)
Edit: For tranquilist and Epicurean takes, I also like Gloor (2017, sec. 2.1) and Sherman (2017, pp. 103–107), respectively.
I think one crux here is that Teo and I would say that calling an increase in the intensity of a happy experience “improving one’s mental state” is a substantive philosophical claim. The kind of view we’re defending does not say something like, “Improvements of one’s mental state are only good if they relieve suffering.” I would agree that that sounds kind of arbitrary.
The more defensible alternative is that replacing contentment (or absence of any experience) with increasingly intense happiness / meaning / love is not itself an improvement in mental state. And this follows from intuitions like “If a mind doesn’t experience a need for change (and won’t do so in the future), what is there to improve?”
Can you elaborate a bit on why the seemingly arbitrary view you quoted in your first paragraph wouldn’t follow from the view that you and Teo are defending? Are you saying that from your and Teo’s POVs, there’s a way to ‘improve a mental state’ that doesn’t amount to decreasing suffering (/preventing it)? The statement itself seems a bit odd, since ‘improvements’ seems to imply ‘goodness’, and the statement hypothetically considers situations where improvements may not be good... so I thought I would see if you could clarify.
In regards to the ‘defensible alternative’, it seems that one could defend a plausible view that a state of contentment, moved to a state of increased bliss, is indeed an improvement, even though there wasn’t a need for change. Such an understanding seems plausible in a self-intimating way when one valence state transitions to the next, insofar as we concede that there are states of more or less pleasure outside of negatively valenced states. It seems that one could do this all the while maintaining that such improvements are never capable of outweighing the mitigation of problematic, suffering states. (Note: using the term ‘improvement’ can easily lead to accidental equivocation between scenarios of mitigating suffering versus increasing pleasure, but the ethical discernment between the two seems manageable.)
No, that’s precisely what I’m denying. So, the reason I mentioned that “arbitrary” view was that I thought Jack might be conflating my/Teo’s view with one that (1) agrees that happiness intrinsically improves a mental state, but (2) denies that improving a mental state in this particular way is good (while improving a mental state via suffering-reduction is good).
It’s prima facie plausible that there’s an improvement, sure, but upon reflection I don’t think my experience that happiness has varying intensities implies that moving from contentment to more intense happiness is an improvement. Analogously, you can increase the complexity and artistic sophistication of some painting, say, but if no one ever observes it (which I’m comparing to no one suffering from the lack of more intense happiness), there’s no “improvement” to the painting.
You could, yeah, but I think “improvement” has such a strong connotation to most people that something of intrinsic value has been added. So I’d worry that using that language would be confusing, especially to welfarist consequentialists who think (as seems really plausible to me) that you should do an act to the extent that it improves the state of the world.
Okay, thanks for clarifying for me! I think I was confused by that opening line: you clarified that your view does not say that only a relief of suffering improves a mental state, but in reality you do think such is the case, just not in conjunction with the claim that happiness also intrinsically improves a mental state, correct?
>Analogously, you can increase the complexity and artistic sophistication of some painting, say, but if no one ever observes it (which I’m comparing to no one suffering from the lack of more intense happiness), there’s no “improvement” to the painting.
With respect to this, I should have clarified that the state of contentment that becomes a more intense positive state was that of an existing, experiencing being, not a state of non-existence into which pleasure is then brought. Given the former, would the painting analogy hold, since in this thought experiment there is an experiencer who has some sort of improvement in their mental state, albeit not a categorical sort of improvement on par with the sort that relieves suffering? I.e., it wasn’t a problem per se (no suffering) that they were being deprived of the more intense pleasure, but the move from lower pleasure to higher pleasure is still an improvement in some way (albeit perhaps a better word would be needed to distinguish the lexical importance between these sorts of *improvements*).
I think they do argue that creating happiness isn’t intrinsically good, because you can always construct a version of the Very Repugnant Conclusion that applies to a view that says suffering is weighed some finite X times more than happiness, and I find those versions almost as repugnant. E.g. suppose that on classical utilitarianism we prefer to create 100 purely miserable lives plus some large N micro-pleasure lives over creating 10 purely blissful lives. On this new view, we’d prefer to create 100 purely miserable lives plus X*N micro-pleasure lives over the 10 purely blissful lives. Another variant you could try is a symmetric lexical view where only sufficiently blissful experiences are allowed to outweigh misery. But while some people find that dissolves the repugnance of the VRC, I can’t say the same.
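To make the arithmetic concrete, here is a minimal sketch with made-up welfare numbers (the comment above leaves the values abstract) showing that weighting suffering by a finite factor X only rescales how many micro-pleasure lives are needed before the same tradeoff goes through:

```python
# Hypothetical welfare values, chosen only for illustration:
# each miserable life counts as -100, each blissful life as +100,
# each "micro-pleasure" life as +0.001.
MISERY, BLISS, MICRO = -100.0, 100.0, 0.001

def total_welfare(n_micro, x=1.0):
    """Total value of 100 miserable lives plus n_micro micro-pleasure
    lives, with suffering weighted x times more than happiness."""
    return 100 * MISERY * x + n_micro * MICRO

def blissful_total():
    """Total value of the alternative: 10 purely blissful lives."""
    return 10 * BLISS

# Classical utilitarianism (x = 1): a large enough N of
# micro-pleasure lives outweighs the misery and beats bliss.
n = 12_000_000
assert total_welfare(n, x=1.0) > blissful_total()

# Weighted view (x = 1000): the same N no longer suffices...
assert total_welfare(n, x=1000.0) < blissful_total()

# ...but scaling N up by roughly the factor x restores the
# repugnant verdict, so the weighting doesn't block it.
assert total_welfare(n * 1000, x=1000.0) > blissful_total()
```

The point of the sketch is only that a finite X shifts the threshold rather than removing it; the philosophical objection in the next paragraph is that any such commensurability is what’s at issue.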
Increasing the X, or introducing lexicalities, to try to escape the VRC just misses the point, I think. The problem is that (even super-awesome/profound) happiness is treated as intrinsically commensurable with miserable experiences, as if giving someone else happiness in itself solves the miserable person’s urgent problem. That’s just fundamentally opposed to what I find morally compelling.
(I like the monk example given in the other response to your question, anywho. I’ve written about why I find strong SFE compelling elsewhere, like here and here.)
Yeah, that is indeed my response; I have basically no sympathy to the perspective that considers the pain intrinsically necessary in this scenario, or any scenario. This view seems to clearly conflate intrinsic with instrumental value. “Disrespect” and “grotesqueness” are just not things that seem intrinsically important to me, at all.
Depends how you define a preference, I guess, but the point of the thought experiment is to suspend your disbelief about the flow-through effects here. Just imagine that literally nothing changes about the world other than that the suffering is relieved. This seems so obviously better than the default that I’m at a loss for a further response.
“I have basically no sympathy to the perspective that considers the pain intrinsically necessary in this scenario, or any scenario.”
I wasn’t expecting you to. I don’t have any sympathy for it either! I was just giving you an argument that I suspect many others would find compelling. Certainly if my sister died and I didn’t feel anything, my parents wouldn’t like that!
Maybe it’s not particularly relevant to you if an argument is considered compelling by others, but I wanted to raise it just in case. I certainly don’t expect to change your mind on this—nor do I want to as I also think suffering is bad! I’m just not sure suffering being bad is a smaller leap than saying happiness is good.
Here’s another way of saying my objection to your original comment: What makes “happiness is intrinsically good” more of an axiom than “sufficiently intense suffering is morally serious in a sense that happiness (of the sort that doesn’t relieve any suffering) isn’t, so the latter can’t compensate for the former”? I don’t see what answer you can give that doesn’t appeal to intuitions about cases.
https://forum.effectivealtruism.org/posts/GK7Qq4kww5D8ndckR/michaelstjules-s-shortform?commentId=LZNATg5BoBT3w5AYz
For all practical purposes suffering is dispreferred by beings who experience it, as you know, so I don’t find this to be a counterexample. When you say you don’t want someone to make you less sad about the problems in the world, it seems like a Pareto improvement would be to relieve your sadness without changing your motivation to solve those problems—if you agree, it seems you should agree the sadness itself is intrinsically bad.
This response is a bit weird to me because the linked post has two counter-examples and you only answered one, but I feel like the other still applies.
The other thought experiment mentioned in the piece is that of a cow separated from her calf, with the two bovines distressed by this. Michael says (and I’m sympathetic) that the moral action here is to fulfill the bovines’ preference to be together, not remove their pain at separation without fulfilling that preference (e.g. through drugging the cows into bliss).
Your response about Pareto Improvements doesn’t seem to work here, or seems less intuitive to me at least. Removing their sadness at separation while leaving their desire to be together intact isn’t a clear Pareto improvement unless one already accepts that pain is what is bad. And it is precisely the imagining of a separated cow/calf duo drugged into happiness but wanting one another that makes me think maybe it isn’t the pain that matters.
I didn’t directly respond to the other one because the principle is exactly the same. I’m puzzled that you think otherwise.
I mean, in thought experiments like this all one can hope for is to probe intuitions that you either do or don’t have. It’s not question-begging on my part because my point is: Imagine that you can remove the cow’s suffering but leave everything else practically the same. (This, by definition, assesses the intrinsic value of relieving suffering.) How could that not be better? It’s a Pareto improvement because, contra the “drugged into happiness” image, the idea is not that you’ve relieved the suffering but thwarted the cow’s goal to be reunited with its child; the goals are exactly the same, but the suffering is gone, and it just seems pretty obvious to me that that’s a much better state of the world.
I think my above reply missed the mark here.
Sticking with the cow example, I agree with you that if we removed their pain at being separated while leaving the desire to be together intact, this seems like a Pareto improvement over not removing their pain.
A preferentist would insist here that the removal of pain is not what makes that situation better, but rather that pain is (probably) dispreferred by the cows, so removing it gives them something they want.
But the negative hedonist (pain is bad, pleasure is neutral) is stuck with saying that the “drugged into happiness” image is as good as the “cows happily reunited” image. A preferentist, by contrast, can (I think intuitively) assert that reuniting the cows is better than just removing their pain, because reunification fulfills (1) the cows’ desire to be free of pain and (2) their desire to be together.
I don’t have settled views on whether or not suffering is necessarily bad in itself.
That someone (or almost everyone) disprefers suffering doesn’t mean suffering is bad in itself. Even if people always disprefer less pleasure, it wouldn’t follow that the absence of pleasure is bad in itself. Even those with symmetric views wouldn’t say so; they’d say its absence is neutral and its presence is good and better. We wouldn’t say dispreferring suffering makes the absence of suffering an intrinsic good.
I’m sympathetic to a more general “relative-only” view according to which suffering is an evaluative impression against the state someone is in relative to an “empty” state or nonexistence, so a kind of self-undermining evaluation. Maybe this is close enough to intrinsic badness and can be treated like intrinsic badness, but it doesn’t seem to actually be intrinsic badness. I think Frick’s approach, Bader’s approach and Actualism, each applied to preferences that are “relative only” rather than whole lives, could still imply that worlds with less suffering are better and some lives with suffering are better not started, all else equal, while no lives are better started, all else equal.
This is compatible with the reason we suffer sometimes being because of mere relative evaluations between states of the world without being “against” the current state or things being worse than nothing.
It seems that a hedonist would need to say that removing my motivation is no harm to me personally, either (except for instrumental reasons), but that violates an interest of mine so seems wrong to me. This doesn’t necessarily count against suffering being bad in itself or respond to your proposed Pareto improvement, it could just count against only suffering mattering.
With respect to your last paragraph, someone who holds a person-affecting view might respond that you have things backwards (indeed, this is what Frick claims): welfare matters because moral patients matter, rather than the other way around, so you need to put the person first, and something something therefore person-affecting view! Then we could discuss what welfare means, and that could be more pleasure and less suffering, or something else.
That being said, this seems kind of confusing to me, too. Welfare matters because moral patients matter, but moral patients are, in my view, just those beings capable of welfare. So, welfare had to come first anyway, and we just added extra steps.
I suspect this can be fixed by dealing directly with interests themselves as the atoms that matter, rather than entire moral patients. E.g. preference satisfaction matters because preferences matter, and something something therefore preference-affecting view! I think such an account would deny that giving even existing people more pleasure is good in itself: they’d need to have an interest in more pleasure for it to make them better off. Maybe we always do have such an interest by our nature, though, and that’s something someone could claim, although I find that unintuitive.
Another response may just be that value is complex, and we shouldn’t give too much extra weight to simpler views just because they’re simpler. That can definitely go even further, e.g. welfare is not cardinally measurable or nothing matters. Also, I think only suffering (or only pleasure) mattering is actually in some sense a simpler view than both suffering and pleasure mattering, since with both, you need to explain why each matters and tradeoffs between them. Some claim that symmetric hedonism is not value monistic at all.