Consider John Rawls’ grass-counter case: imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns. Suppose she then does spend her time counting blades of grass and is miserable while doing so. On the subjectivist view, her life is going well for her. I think her life is going poorly for her, because she is unhappy.
I think the example might seem absurd because we can’t imagine finding satisfaction in counting blades of grass; it seems like a meaningless pursuit. But is it any more meaningful in any objective sense than doing mathematics (in isolation, assuming no one else would ever benefit)? The objectivist might say that this is exactly the point, but the subjectivist could just respond that it doesn’t matter as long as the individual is (more) satisfied.
Furthermore, I think life satisfaction and preference satisfaction are slightly different. If we’re talking about life satisfaction rather than preference satisfaction, what matters is not an overriding desire (which sounds more like addiction), but whether, upon reflection, they are (more) satisfied with the choices they make and with their preferences for those choices. If we’re talking about preference satisfaction, people can also have preferences over their preferences. A drug addict might be compelled to use drugs but prefer not to be. In this case, does the mathematician prefer to have different preferences? If they don’t, then the example might not be so counterintuitive after all. If they do, then the subjectivist can object in a way that’s compatible with their subjectivist intuitions.
Also, a standard objection to hedonistic (or, more broadly, experiential) views is wireheading or the experience machine, which I’m sure you’re aware of, but I’d like to point them out for everyone else here. People don’t want to sacrifice the pursuits they find meaningful to be put into an artificial state of continuous pleasure, and they certainly don’t want that choice made for them. Of course, you could wirehead people or put them in experience machines in ways that satisfy their preferences (by changing those preferences or by simulating things that satisfy them), and people will object to that too.
The objectivist might say that this is exactly the point, but the subjectivist could just respond that it doesn’t matter as long as the individual is (more) satisfied.
Yes, the subjectivist could bite the bullet here. I doubt many (if any) subjectivists would deny that this is a somewhat unpleasant bullet to bite.
Life satisfaction and preference satisfaction are different—the former refers to a judgement about one’s life, the latter to one’s preferences being satisfied in the sense that the world goes the way one wants it to. I think the example applies to both views. Suppose the grass counter is satisfied with his life and things are going the way he wants them to go: it still doesn’t seem that his life is going well. You’re right that preference satisfactionists often appeal to ‘laundered’ preferences—they have to prefer what their rationally ideal self would prefer, or something—but it’s hard and unsatisfying to spell out what this looks like. Further, it’s unclear how that would help in this case: if anyone is a rational agent, presumably Harvard mathematicians like the grass-counter are. What’s more, stipulating that preferences can/must be laundered is also borderline inconsistent with subjectivism: if you tell me that some of my preferences don’t count towards my well-being because they’re ‘irrational’, you don’t seem to be respecting the view that my well-being consists in whatever I say it does.
On the experience machine, this only helps preference satisfactionists, not life satisfactionists: I could plug you into the experience machine such that you judged yourself to be maximally satisfied with your life. If well-being just consists in judging that one’s life is going well, it doesn’t matter how you come to that judgement.
What’s more, stipulating that preferences can/must be laundered is also borderline inconsistent with subjectivism: if you tell me that some of my preferences don’t count towards my well-being because they’re ‘irrational’, you don’t seem to be respecting the view that my well-being consists in whatever I say it does.
I don’t think this need be the case, since we can have preferences that are mutually exclusive in their satisfaction, and having such preferences means we can’t be maximally satisfied. So, if the mathematician’s preference upon reflection is not to count blades of grass (and to do something else) but they have the urge to do so, at least one of these two preferences will go unsatisfied, which detracts from their well-being.
However, this on its own wouldn’t tell us the mathematician is better off not counting blades of grass, and if we did always prioritize rational preferences over irrational ones, or preferences about preferences over the preferences to which they refer, then it would be as if the irrational/lower preferences count for nothing, as you suggest.
On the experience machine, this only helps preference satisfactionists, not life satisfactionists: I could plug you into the experience machine such that you judged yourself to be maximally satisfied with your life. If well-being just consists in judging that one’s life is going well, it doesn’t matter how you come to that judgement.
I agree, although it also doesn’t help preference satisfactionists who only count preference satisfaction/frustration when it’s experienced consciously. It might also not help them if we’re allowed to change people’s preferences, since having preferences that are easier to satisfy might outweigh the frustration that results from having one’s old preferences replaced by, and ignored in favour of, the new ones.
I think the involuntary experience machine and wireheading are problems for all the consequentialist theories with which I’m familiar (at least under the assumption of something like closed individualism, which I actually find to be unlikely).