Specifically, we expect to rely on life satisfaction as the primary metric. This is typically measured by asking “Overall, how satisfied are you with your life nowadays?” (0–10).
I’d be curious to hear examples of questions that HLI thinks would be better than the above for assessing the thing they want to optimize. My assumption was that their work would typically measure things that were close to life satisfaction, rather than transient feelings (“are you happy now?”), because the latter seems very subjective and timing-dependent.
I think of “life satisfaction” as a measure of something like “how happy/content you are with your past + how happy/content you expect to be in the future + your current emotional state coloring everything, as usual”.
(Note that being happy with the past isn’t the same as having been happy in the past—but I don’t think those trade off against each other all that often, especially in the developing world [where many of the classic “happiness traps”, like drugs and video games, seem like they’d be less available].)
Michael: To what extent do you believe that the thing HLI wants to optimize for is the thing people “want” in Kahneman’s view? If you think there are important differences between your definition of happiness and “life satisfaction”, why pursue the former rather than the latter?
Hello Aaron,

In the ‘measuring happiness’ bit of HLI’s website we say
The ‘gold standard’ for measuring happiness is the experience sampling method (ESM), where participants are prompted to record their feelings and possibly their activities one or more times a day.[1] While this is an accurate record of how people feel, it is expensive to implement and intrusive for respondents. A more viable approach is the day reconstruction method (DRM) where respondents use a time-diary to record and rate their previous day. DRM produces comparable results to ESM, but is less burdensome to use (Kahneman et al. 2004).
Further, I don’t think the fact that happiness is subjective or timing-dependent is problematic: what matters, I think, is how pleasant or unpleasant people feel throughout the moments of their life. (In fact, this is the view Kahneman argued for in his 1999 paper ‘Objective happiness’.)
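Kahneman’s ‘objective happiness’ is, roughly, momentary affect integrated over time. A minimal sketch of how DRM-style time-diary data might be aggregated into such a score (the `Episode` fields and scoring are illustrative assumptions, not HLI’s actual method):

```python
# Hypothetical DRM-style aggregation: a duration-weighted average of
# momentary affect, following the logic of Kahneman's proposal.
from dataclasses import dataclass

@dataclass
class Episode:
    minutes: float       # how long the episode lasted
    net_affect: float    # e.g. positive minus negative affect, rated afterwards

def duration_weighted_affect(diary: list[Episode]) -> float:
    """Average momentary affect over the day, weighted by episode duration."""
    total_minutes = sum(e.minutes for e in diary)
    if total_minutes == 0:
        raise ValueError("empty diary")
    return sum(e.minutes * e.net_affect for e in diary) / total_minutes

# Example day: a long, mildly unpleasant commute dilutes a short pleasant dinner.
day = [Episode(minutes=90, net_affect=-1.0),   # commute
       Episode(minutes=480, net_affect=0.5),   # work
       Episode(minutes=60, net_affect=2.0)]    # dinner with friends
print(round(duration_weighted_affect(day), 3))  # -> 0.429
```

The point of the weighting is that a brief peak experience counts for little against many hours of mild discomfort, which is exactly what distinguishes this measure from a retrospective life-satisfaction judgment.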
I was responding to this section, which immediately follows your quote:
While we think measures of emotional states are closer to an ideal measure of happiness, far fewer data of this type is available.
I think emotional states are quite a bad metric to optimize for, and that life satisfaction is a much better measure because it tracks something closer to people’s values being fulfilled. Valuing emotional states feels like a map-territory confusion, in a way that I think Nate Soares tried to get at in his stamp collector post:
Do you see where these naïve philosophers went confused? They have postulated an agent which treats actions like ends, and tries to steer towards whatever action it most prefers — as if actions were ends unto themselves.
You can’t explain why the agent takes an action by saying that it ranks actions according to whether or not taking them is good. That begs the question of which actions are good!
This agent rates actions as “good” if they lead to outcomes where the agent has lots of stamps in its inventory. Actions are rated according to what they achieve; they do not themselves have intrinsic worth.
The robot program doesn’t contain reality, but it doesn’t need to. It still gets to affect reality. If its model of the world is correlated with the world, and it takes actions that it predicts leads to more actual stamps, then it will tend to accumulate stamps.
It’s not trying to steer the future towards places where it happens to have selected the most micro-stampy actions; it’s just steering the future towards worlds where it predicts it will actually have more stamps.
Now, let me tell you my second story:
Once upon a time, a group of naïve philosophers encountered a group of human beings. The humans seemed to keep selecting the actions that gave them pleasure. Sometimes they ate good food, sometimes they had sex, sometimes they made money to spend on pleasurable things later, but always (for the first few weeks) they took actions that led to pleasure.
But then one day, one of the humans gave lots of money to a charity.
“How can this be?” the philosophers asked, “Humans are pleasure-maximizers!” They thought for a few minutes, and then said, “Ah, it must be that their pleasure from giving the money to charity outweighed the pleasure they would have gotten from spending the money.”
Then a mother jumped in front of a car to save her child.
The naïve philosophers were stunned, until suddenly one of their number said “I get it! The immediate micro-pleasure of choosing that action must have outweighed —
People will tell you that humans always and only ever do what brings them pleasure. People will tell you that there is no such thing as altruism, that people only ever do what they want to.
People will tell you that, because we’re trapped inside our heads, we only ever get to care about things inside our heads, such as our own wants and desires.
But I have a message for you: You can, in fact, care about the outer world.
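The distinction Soares draws can be sketched in code (all names here are hypothetical, a toy illustration rather than anything from the post): the agent scores actions only by the outcomes its world-model predicts, not by any intrinsic rating of the actions themselves.

```python
# A model-based maximizer: actions have no intrinsic worth; they are ranked
# purely by the predicted state of the world after taking them.
def choose_action(actions, model, state):
    """Pick the action whose predicted outcome contains the most stamps."""
    def predicted_stamps(action):
        # The world-model maps (state, action) -> predicted next state.
        return model(state, action)["stamps"]
    return max(actions, key=predicted_stamps)

# Toy world-model: buying converts money into a stamp, working earns money.
def toy_model(state, action):
    s = dict(state)
    if action == "buy" and s["money"] >= 1:
        s["money"] -= 1
        s["stamps"] += 1
    elif action == "work":
        s["money"] += 1
    return s

state = {"money": 3, "stamps": 0}
print(choose_action(["work", "buy"], toy_model, state))  # -> buy
```

Note that “stamps” appears only as a feature of the predicted world-state; substituting a term like “pleasure register” for it would change the agent into the kind of inward-looking optimizer the naïve philosophers imagine, which is the confusion the quote is pointing at.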
I think I mostly agree with you here, but I’m slightly confused by HLI’s definition of “happiness”—I meant my comment as a set of questions for Michael, inspired by the points you made.