I haven’t been convinced by anything I’ve read, but I also haven’t read much.
I’m concerned that unless you appeal to preferences, you can’t justify any tradeoff rate between (and hence the commensurability of) suffering and happiness/pleasure, because the two are fundamentally different. But by adopting an exclusively hedonistic view of value, haven’t you already rejected the moral relevance of preferences? And if so, how can you justify appealing to them to defend hedonism? Even if you could set a tradeoff rate based on preferences, how would you justify applying that rate to everyone, given wide differences in preferences?
If not preferences, what else is there to refer to?
There are also, of course, thought experiments like wireheading and Nozick’s experience machine. Why would I be wrong not to want to subject myself to these, rather than, say, do anything else I prefer, assuming no effects on others in any of these cases?
(Note: I shared this post on Facebook, and some discussion is happening there: https://www.facebook.com/groups/1421571464750714/permalink/2428929580681559/)
(Crossposted from FB)
Some initial thoughts: hedonistic utilitarians ultimately wish to maximise pleasure and, in the ideal case, to eliminate suffering. In the real world, things are a lot fuzzier, and we do have to consider pleasure/suffering tradeoffs. Because it’s difficult to measure pleasure and suffering directly, preferences are used as a proxy.
But I aver that we’re not very good at considering these tradeoffs. Most are framed as thought experiments in which we are asked to imagine two ‘real-world’ situations. Some people may be willing to take five minutes of having a dust-speck in the eye in exchange for ten minutes of eating delicious food, whereas others may only be willing to take 30 seconds of the dust-speck. It’s likely that, when asked to do this, we aren’t considering the pleasure and suffering on their own, but are taking other things into consideration too (perhaps our memories of similar situations in the past). The variance may also arise because a speck of dust in the eye *will* cause some people to suffer more than others.
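To make the exchange rate implied by such answers concrete, here’s a minimal formalization (the symbols and numbers are purely illustrative assumptions): suppose the dust-speck causes suffering at a constant per-minute intensity $s$ and the food gives pleasure at a constant per-minute intensity $p$, both on one cardinal scale. Someone indifferent between five minutes of the speck and ten minutes of the food is thereby committed to

$$5s = 10p \;\Rightarrow\; s = 2p,$$

while someone who would only accept 30 seconds is committed to $0.5s = 10p$, i.e. $s = 20p$. The question for the hedonistic utilitarian is then whether such people differ in their true intensities $s$ and $p$, or only in the other considerations contaminating their reports.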
Ideally, we’d be able to consider just the pleasure and the suffering on their own. That’s very difficult to do, though. I think there are right answers to these tradeoff questions, but our brains aren’t able to answer them precisely enough. In extreme cases, however, the hedonistic utilitarian can argue that anyone who would rather forgo a blissful life entirely than be pricked by a pin is simply wrong. It is the pleasure and the suffering that matter, no matter what people *say* they prefer. (See the ‘Future Tuesday Indifference’ case introduced by Parfit and taken up by Singer.)
Sidgwick’s definition of pleasure is after all “a feeling which the sentient individual at the time of feeling it implicitly or explicitly apprehends to be desirable – desirable, that is, when considered merely as feeling.” The feeling, as it were, cannot be unfelt, even if an individual makes certain claims about the desirability (or lack thereof) of the feeling later on.
On that note, have you read Derek Parfit’s ‘On What Matters’ (particularly Parts 1 and 6, in Volumes One and Two respectively)? In my view, he makes some convincing arguments against preference-based theories. Singer and de Lazari-Radek, in ‘The Point of View of the Universe’, build on his arguments to mount a defence of hedonistic utilitarianism against other normative theories, including preference utilitarianism.
Moral realists who endorse hedonistic utilitarianism, such as Singer, posit that the very nature of what Sidgwick describes as pleasure gives us reason to increase it, and that nothing else in the universe gives us similar reasons.
The experience machine is another case where hedonistic utilitarians argue that people’s preferences are plagued by bias. Joshua Greene and Peter Singer, for instance, have both argued that people’s objections to entering the experience machine result from status quo bias.
See: https://www.tandfonline.com/doi/abs/10.1080/09515089.2012.757889?journalCode=cphp20 and https://en.wikipedia.org/wiki/Experience_machine#Counterarguments
So is the idea to ground these tradeoffs in preferences, but to consider only conscious preferences about the conscious experiences themselves? The degree of pleasantness or suffering would then be determined by the strengths of these preferences (which we might hypothesize to fall on a cardinal scale).
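To spell out that hypothesis, a rough sketch (the notation is my own, purely illustrative): assign each momentary experience $e$ a signed intensity $v(e)$ on a single cardinal scale, positive for pleasure and negative for suffering, calibrated by the strength of the subject’s conscious preference for or against $e$ considered merely as feeling. Total hedonic value over a period would then be

$$W = \int_{t_0}^{t_1} v(e_t)\,dt,$$

and the tradeoff rate between any two experiences would just be the ratio of their intensities.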
If I had just gotten out of an experience machine, I’d be extremely upset. I don’t think I would actually get back into the machine, but even if I did, I think it would only be to relieve my suffering. This framing seems to introduce a different kind of bias: if my experiences in the outside world were really horrible, I’d be motivated to leave it and plug back in; if the outside world weren’t so horrible as to drive me to chronic depression, or if I could accomplish more good outside than inside, I’d stay out.
Here are some overviews:
https://plato.stanford.edu/entries/hedonism/#EthHed
https://plato.stanford.edu/entries/well-being/#Hed
My guess is that ultimately you’ll just find yourself in an irresolvable standoff of differing intuitions with people who favor a different view of value. Philosophers have debated this question for millennia, or at least decades (depending on how we count), and haven’t reached agreement, so I think that, absent some methodological revolution, settling it is hopeless. (Though you clarifying your own thinking, or arriving at a view you feel more confident in yourself, both seem feasible.)
I’ve got a (very slowly) in-progress multipart essay attempting to definitively answer this question without recourse to (what we normally mean by) intuition: http://www.valence-utilitarianism.com/posts/choose-your-preference-utilitarianism-carefully-part-1