For whatever it’s worth, my ethical intuitions suggest that optimizing for happiness is not a particularly sensible goal. I personally care relatively little about my self-reported happiness levels, and wouldn’t be very excited about someone optimizing for them.
Kahneman has done some research on this, and if I remember correctly changed his mind publicly a few years ago from his previous position in Thinking, Fast and Slow to a position that values life-satisfaction a lot more than happiness (and life-satisfaction tends to trade off against happiness in many situations).
This was the random article I remember reading about this. Take it with all the grains of salt of normal popular science reporting. Here are some quotes (note that I disagree with the “reducing suffering” part as an alternative focus):
At about the same time as these studies were being conducted, the Gallup polling company (which has a relationship with Princeton) began surveying various indicators among the global population. Kahneman was appointed as a consultant to the project.
“I suggested including measures of happiness, as I understand it – happiness in real time. To these were added data from Bhutan, a country that measures its citizens’ happiness as an indicator of the government’s success. And gradually, what we know today as Gallup’s World Happiness Report developed. It has also been adopted by the UN and OECD countries, and is published as an annual report on the state of global happiness.
“A third development, which is very important in my view, was a series of lectures I gave at the London School of Economics in which I presented my findings about happiness. The audience included Prof. Richard Layard – a teacher at the school, a British economist and a member of the House of Lords – who was interested in the subject. Eventually, he wrote a book about the factors that influence happiness, which became a hit in Britain,” Kahneman said, referring to “Happiness: Lessons from a New Science.”
“Layard did important work on community issues, on improving mental health services – and his driving motivation was promoting happiness. He instilled the idea of happiness as a factor in the British government’s economic considerations.
“The involvement of economists like Layard and Deaton made this issue more respectable,” Kahneman added with a smile. “Psychologists aren’t listened to so much. But when economists get involved, everything becomes more serious, and research on happiness gradually caught the attention of policy-making organizations.
“At the same time,” said Kahneman, “a movement has also developed in psychology – positive psychology – that focuses on happiness and attributes great importance to internal questions like meaning. I’m less certain of that.
[...]
Kahneman studied happiness for over two decades, gave rousing lectures and, thanks to his status, contributed to putting the issue on the agenda of both countries and organizations, principally the UN and the OECD. Five years ago, though, he abandoned this line of research.
“I gradually became convinced that people don’t want to be happy,” he explained. “They want to be satisfied with their life.”
A bit stunned, I asked him to repeat that statement. “People don’t want to be happy the way I’ve defined the term – what I experience here and now. In my view, it’s much more important for them to be satisfied, to experience life satisfaction, from the perspective of ‘What I remember,’ of the story they tell about their lives. I furthered the development of tools for understanding and advancing an asset that I think is important but most people aren’t interested in.
“Meanwhile, awareness of happiness has progressed in the world, including annual happiness indexes. It seems to me that on this basis, what can confidently be advanced is a reduction of suffering. The question of whether society should intervene so that people will be happier is very controversial, but whether society should strive for people to suffer less – that’s widely accepted.
I don’t fully agree with all of the above, but a lot of the gist seems correct and important.
Thanks for this. Let me make three replies.

First, HLI will primarily use life satisfaction scores to determine our recommendations. Hence, if you think life satisfaction does a reasonable job of capturing well-being, I suppose you will still be interested in the outputs.
Second, it’s not yet clear if there would be different priorities if life satisfaction rather than happiness were used as the measure of benefit. Hence, philosophical differences may not lead to different priorities in this case.
Third, I’ve been somewhat bemused by Kahneman’s apparent recent conversion to thinking life satisfaction, rather than happiness, is what matters for well-being. I don’t see why the descriptive claim that people, in fact, try to maximise their life satisfaction rather than their happiness should have any bearing on the evaluative claim about what well-being consists in. To get such a claim off the ground, you’d need something like a ‘subjectivist’ view about well-being, on which well-being consists in whatever people choose their well-being to consist in. Hedonism (well-being consists in happiness) is an ‘objectivist’ view, because it holds that your happiness is good for you whether you think it is or not. See Haybron for a brief discussion of this.
I don’t find subjectivism about well-being plausible. Consider John Rawls’ grass-counter case: imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns. Suppose this person then does spend their time counting blades of grass and is miserable while doing so. On the subjectivist view, this person’s life is going well for them. I think this person’s life is going poorly for them because they are unhappy. I’m not sure there’s much, if anything more, to say about this case: some will think the grass-counter’s life is going well, some won’t.
Has Kahneman actually stated that he thinks life satisfaction is more important than happiness? In the article that Habryka quotes, all he says is that most people care more about their life satisfaction than their happiness. As you say, this doesn’t necessarily imply that he agrees. In fact, he does state that he personally thinks happiness is important.
(I don’t trust the article’s preamble to accurately report his beliefs when the topic is as open to misunderstandings as this one is.)
I’m not sure what Kahneman believes. I don’t think he’s publicly stated well-being consists in life satisfaction rather than happiness (or anything else). I don’t think his personal beliefs are significant for the (potential) view either way (unless one was making an appeal to authority).
Consider John Rawls’ grass-counter case: imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns. Suppose this person then does spend their time counting blades of grass and is miserable while doing so. On the subjectivist view, this person’s life is going well for them. I think this person’s life is going poorly for them because they are unhappy.
I think the example might seem absurd because we can’t imagine finding satisfaction in counting blades of grass; it seems like a meaningless pursuit. But is it any more meaningful in any objective sense than doing mathematics (in isolation, assuming no one else would ever benefit)? The objectivist might say that this is exactly the point, but the subjectivist could just respond that it doesn’t matter as long as the individual is (more) satisfied.
Furthermore, I think life satisfaction and preference satisfaction are slightly different. If we’re talking about life satisfaction rather than preference satisfaction, it’s not an overriding desire (which sounds like addiction), but, upon reflection, (greater) satisfaction with the choices they make and their preferences for those choices. If we are talking about preference satisfaction, people can also have preferences over their preferences. A drug addict might be compelled to use drugs, but prefer not to be. In this case, does the mathematician prefer to have different preferences? If they don’t, then the example might not be so counterintuitive after all. If they do, then the subjectivist can object in a way that’s compatible with their subjectivist intuitions.
Also, a standard objection to hedonistic (or more broadly experiential) views is wireheading or the experience machine, of which I’m sure you’re aware, but I’d like to point them out to everyone else here. People don’t want to sacrifice the pursuits they find meaningful to be put into an artificial state of continuous pleasure, and they certainly don’t want that choice to be made for them. Of course, you could wirehead people or put them in experience machines that make their preferences satisfied (by changing these preferences or simulating things that satisfy their preferences), and people will also object to that.
The objectivist might say that this is exactly the point, but the subjectivist could just respond that it doesn’t matter as long as the individual is (more) satisfied.
Yes, the subjectivist could bite the bullet here. I doubt many(/any) subjectivists would deny this is a somewhat unpleasant bullet to bite.
Life satisfaction and preference satisfaction are different—the former refers to a judgement about one’s life, the latter to one’s preferences being satisfied in the sense that the world goes the way one wants it to. I think the example applies to both views. Suppose the grass counter is satisfied with his life and things are going the way he wants them to go: it still doesn’t seem that his life is going well. You’re right that preference satisfactionists often appeal to ‘laundered’ preferences—they have to prefer what their rationally ideal self would prefer, or something—but it’s hard and unsatisfying to spell out what this looks like. Further, it’s unclear how that would help in this case: if anyone is a rational agent, presumably Harvard mathematicians like the grass-counter are. What’s more, stipulating that preferences can/must be laundered is also borderline inconsistent with subjectivism: if you tell me that some of my preferences don’t count towards my well-being because they are ‘irrational’, you don’t seem to be respecting the view that my well-being consists in whatever I say it does.
On the experience machine, this only helps preference satisfactionists, not life satisfactionists: I could plug you into the experience machine such that you judged yourself to be maximally satisfied with your life. If well-being just consists in judging that one’s life is going well, it doesn’t matter how you come to that judgement.
What’s more, stipulating that preferences can/must be laundered is also borderline inconsistent with subjectivism: if you tell me that some of my preferences don’t count towards my well-being because they are ‘irrational’, you don’t seem to be respecting the view that my well-being consists in whatever I say it does.
I don’t think this need be the case, since we can have preferences that are mutually exclusive in their satisfaction, and having such preferences means we can’t be maximally satisfied. So, if the mathematician’s preference upon reflection is to not count blades of grass (and do something else) but they have the urge to do so, at least one of these two preferences will go unsatisfied, which detracts from their wellbeing.
However, this on its own wouldn’t tell us the mathematician is better off not counting blades of grass, and if we did always prioritize rational preferences over irrational ones, or preferences about preferences over the preferences to which they refer, then it would be as if the irrational/lower preferences count for nothing, as you suggest.
On the experience machine, this only helps preference satisfactionists, not life satisfactionists: I could plug you into the experience machine such that you judged yourself to be maximally satisfied with your life. If well-being just consists in judging that one’s life is going well, it doesn’t matter how you come to that judgement.
I agree, although it also doesn’t help preference satisfactionists who only count preference satisfaction/frustration when it’s experienced consciously. It might also not help them if we’re allowed to change your preferences, since having preferences that are easier to satisfy might outweigh the preference frustration that would result from having your old preferences replaced and ignored in favour of the new ones.
I think the involuntary experience machine and wireheading are problems for all the consequentialist theories with which I’m familiar (at least under the assumption of something like closed individualism, which I actually find to be unlikely).
For whatever it’s worth, my metaethical intuitions suggest that optimizing for happiness is not a particularly sensible goal.
Might just be a nitpick, but isn’t this an ethical intuition, rather than a metaethical one?
(I remember hearing other people use “metaethics” in cases where I thought they were talking about object level ethics, as well, so I’m trying to understand whether there’s a reason behind this or not.)
Hmm, I don’t think so. Though I am not fully sure. Might depend on the precise definition.
It feels metaethical because I am responding to a perceived confusion of “what defines moral value?”, and not “what things are moral?”.
I think “adding up people’s experience over the course of their life determines whether an act has good consequences or not” is a confused approach to ethics, which feels more like a metaethical instead of an ethical disagreement.
However, happy to use either term if anyone feels strongly, or happy to learn that this kind of disagreement falls clearly into either “ethics” or “metaethics”.
I’m by no means schooled in academic philosophy, so I could also be wrong about this.
I tend to think about e.g. consequentialism, hedonistic utilitarianism, preference utilitarianism, lesswrongian ‘we should keep all the complexities of human value around’-ism, deontology, and virtue ethics as ethical theories. (This is backed up somewhat by the fact that these theories’ Wikipedia pages name them ethical theories.) When I think about meta-ethics, I mainly think about moral realism vs moral anti-realism and their varieties, though the field contains quite a few other things, like cole_haus mentions.
My impression is that HLI endorses (roughly) hedonistic utilitarianism, and you said that you don’t, which would be an ethical disagreement. The borderlines aren’t very sharp though. If HLI had asserted that hedonistic utilitarianism was objectively correct, then you could certainly have made a metaethical argument that no ethical theory is objectively correct. Alternatively, you might be able to bring metaethics into it if you think that there is an ethical truth that isn’t hedonistic utilitarianism.
(I saw you quoting Nate’s post in another thread. I think you could say that it makes a meta-ethical argument that it’s possible to care about things outside yourself, but that it doesn’t make the ethical argument that you ought to do so. Of course, HLI does care about things outside themselves, since they care about other people’s experiences.)
This seems reasonable. I changed it to say “ethical”.

Contemporary Metaethics delineates the field as being about:

(a) Meaning: what is the semantic function of moral discourse? Is the function of moral discourse to state facts, or does it have some other non-fact-stating role?
(b) Metaphysics: do moral facts (or properties) exist? If so, what are they like? Are they identical or reducible to natural facts (or properties) or are they irreducible and sui generis?
(c) Epistemology and justification: is there such a thing as moral knowledge? How can we know whether our moral judgements are true or false? How can we ever justify our claims to moral knowledge?
(d) Phenomenology: how are moral qualities represented in the experience of an agent making a moral judgement? Do they appear to be ‘out there’ in the world?
(e) Moral psychology: what can we say about the motivational state of someone making a moral judgement? What sort of connection is there between making a moral judgement and being motivated to act as that judgement prescribes?
(f) Objectivity: can moral judgements really be correct or incorrect? Can we work towards finding out the moral truth?
It doesn’t quite seem to me like the original claim fits neatly into any of these categories.
Specifically, we expect to rely on life satisfaction as the primary metric. This is typically measured by asking “Overall, how satisfied are you with your life nowadays?” (0–10).
I’d be curious to hear examples of questions that HLI thinks would be better than the above for assessing the thing they want to optimize. My assumption was that their work would typically measure things that were close to life satisfaction, rather than transient feelings (“are you happy now?”), because the latter seems very subjective and timing-dependent.
I think of “life satisfaction” as a measure of something like “how happy/content you are with your past + how happy/content you expect to be in the future + your current emotional state coloring everything, as usual”.
(Note that being happy with the past isn’t the same as having been happy in the past—but I don’t think those trade off against each other all that often, especially in the developing world [where many of the classic “happiness traps”, like drugs and video games, seem like they’d be less available].)
Michael: To what extent do you believe that the thing HLI wants to optimize for is the thing people “want” in Kahneman’s view? If you think there are important differences between your definition of happiness and “life satisfaction”, why pursue the former rather than the latter?
Hello Aaron,

In the ‘measuring happiness’ bit of HLI’s website we say

The ‘gold standard’ for measuring happiness is the experience sampling method (ESM), where participants are prompted to record their feelings and possibly their activities one or more times a day.[1] While this is an accurate record of how people feel, it is expensive to implement and intrusive for respondents. A more viable approach is the day reconstruction method (DRM) where respondents use a time-diary to record and rate their previous day. DRM produces comparable results to ESM, but is less burdensome to use (Kahneman et al. 2004).
Further, I don’t think the fact that happiness is subjective or timing-dependent is problematic: what I think matters is how pleasant/unpleasant people feel throughout the moments of their life. (In fact, this is the view Kahneman argued for in his 1999 paper ‘Objective happiness’.)
I was responding to this section, which immediately follows your quote:
While we think measures of emotional states are closer to an ideal measure of happiness, far fewer data of this type is available.
I think emotional states are a quite bad metric to optimize for and that life satisfaction is a much better measure because it actually measures something closer to people’s values being fulfilled. Valuing emotional states feels like a map-territory confusion in a way that Nate Soares tried to get at in his stamp collector post:
Do you see where these naïve philosophers went confused? They have postulated an agent which treats actions like ends, and tries to steer towards whatever action it most prefers — as if actions were ends unto themselves.
You can’t explain why the agent takes an action by saying that it ranks actions according to whether or not taking them is good. That begs the question of which actions are good!
This agent rates actions as “good” if they lead to outcomes where the agent has lots of stamps in its inventory. Actions are rated according to what they achieve; they do not themselves have intrinsic worth.
The robot program doesn’t contain reality, but it doesn’t need to. It still gets to affect reality. If its model of the world is correlated with the world, and it takes actions that it predicts leads to more actual stamps, then it will tend to accumulate stamps.
It’s not trying to steer the future towards places where it happens to have selected the most micro-stampy actions; it’s just steering the future towards worlds where it predicts it will actually have more stamps.
Now, let me tell you my second story:
Once upon a time, a group of naïve philosophers encountered a group of human beings. The humans seemed to keep selecting the actions that gave them pleasure. Sometimes they ate good food, sometimes they had sex, sometimes they made money to spend on pleasurable things later, but always (for the first few weeks) they took actions that led to pleasure.
But then one day, one of the humans gave lots of money to a charity.
“How can this be?” the philosophers asked, “Humans are pleasure-maximizers!” They thought for a few minutes, and then said, “Ah, it must be that their pleasure from giving the money to charity outweighed the pleasure they would have gotten from spending the money.”
Then a mother jumped in front of a car to save her child.
The naïve philosophers were stunned, until suddenly one of their number said “I get it! The immediate micro-pleasure of choosing that action must have outweighed —
People will tell you that humans always and only ever do what brings them pleasure. People will tell you that there is no such thing as altruism, that people only ever do what they want to.
People will tell you that, because we’re trapped inside our heads, we only ever get to care about things inside our heads, such as our own wants and desires.
But I have a message for you: You can, in fact, care about the outer world.
I think I mostly agree with you here, but I’m slightly confused by HLI’s definition of “happiness”—I meant my comment as a set of questions for Michael, inspired by the points you made.