That’s a great report! Three sets of questions:
1) We sometimes distinguish between “experienced utility” and “decision utility” and we know that the two sometimes diverge. Do you know of experiments that tried to explain discrepancies between choice behaviour and reported happiness with affective forecasting errors? Less ambitiously, how much work is there showing that the presence of these biases predicts choice?
2) If a large part of the discrepancies between choice and experienced wellbeing is driven by affective forecasting errors, I should be extremely motivated to become better at affective forecasting. How can I become better at affective forecasting?
3) It seems like the “future anhedonia” bias and the “intensity” bias go in opposite directions. When is each more likely to be operating?
Hi Caspar, thanks for the questions!
1. Yes, there’s definitely some work showing that these errors guide choice, though it’s usually not discussed using the experience vs. decision utility framework (instead, it’s typically framed as expected vs. experienced utility). One example involves the so-called “end of history illusion”, in which people overpay for future experiences (e.g., concert tickets) because they fail to realize that their preferences and feelings will change as much as they actually do. Another example comes from medical decision making: patients often face difficult choices about surgeries, preventative medications, etc. that affect their quality of life, and they often forgo these treatments because they expect the treatments to worsen their happiness more than they actually would.
2. That’s the million-dollar question! To my knowledge, debiasing strategies have largely proven ineffective (though there’s some promising work on overcoming other biases like confirmation bias). I still think “knowledge is power” to some extent, but the research doesn’t entirely back that up. Instead, one really cool paper found that asking others who have actually been in the situation you’re forecasting about is a simple strategy for improving affective forecasting accuracy. So, 1) ask people who have been there, and 2) for repeated decisions, pay close attention to your experienced feelings (and the speed and extent to which they fade) and use those observations as data to inform your predictions the next time you imagine a similar circumstance (a rough sketch of this bookkeeping appears below the list).
3. While I definitely understand why the two seem to conflict, they actually don’t! Intensity bias is when experienced feelings are less intense than predicted feelings (e.g., I predicted that receiving $20 would make me feel 9/10 happiness, but after I received it, it actually made me feel 6/10 happiness). Meanwhile, future anhedonia is when predicted future feelings are less intense than predicted present feelings (e.g., I predict that $20 today will make me feel 9/10 happiness today, but that $20 in 3 months will make me feel only 6/10 happiness then). The two biases compare different pairs of ratings, so both can hold at once (the second sketch below spells out the arithmetic). To clarify, the 6/10 and 9/10 happiness ratings are hypothetical for example’s sake, so I’m not making a point about the actual magnitude of these errors.
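For the “use your own track record” part of #2, here’s a minimal sketch of what I have in mind, with made-up numbers and hypothetical event/variable names (this isn’t from any paper, just an illustration of the bookkeeping):

```python
# Hypothetical log of a repeated decision: what I predicted I'd feel
# (0-10 scale) vs. what I actually felt once the event happened.
forecasts = [
    {"event": "concert", "predicted": 9, "experienced": 6},
    {"event": "concert", "predicted": 8, "experienced": 6},
    {"event": "concert", "predicted": 8, "experienced": 5},
]

# Average forecasting error: positive values mean I systematically
# overpredict how intensely the experience will feel.
errors = [f["predicted"] - f["experienced"] for f in forecasts]
mean_error = sum(errors) / len(errors)

# Naive correction: shave the observed bias off my next raw prediction.
raw_prediction = 9
adjusted_prediction = raw_prediction - mean_error
print(f"Mean overprediction: {mean_error:.1f} points")
print(f"Adjusted forecast for the next concert: {adjusted_prediction:.1f}/10")
```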
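And to make #3 concrete, here’s the same point in code form, again with the hypothetical ratings from the examples above: the two biases are defined over different pairs of ratings, so the same person can show both at once.

```python
# Three ratings of the same $20 windfall, all on a 0-10 happiness scale
# (hypothetical values, as in the examples above).
predicted_now = 9      # "How happy will $20 make me feel today?"
predicted_future = 6   # "How happy will $20 make me feel in 3 months?"
experienced = 6        # How happy the $20 actually made me feel.

# Intensity bias compares a prediction against the actual experience.
intensity_bias = predicted_now - experienced         # 3: overpredicted intensity

# Future anhedonia compares a prediction-for-now against a prediction-for-later.
future_anhedonia = predicted_now - predicted_future  # 3: future feelings seem muted

# Both are positive here: the biases involve different comparisons,
# so they don't pull in opposite directions.
print(intensity_bias, future_anhedonia)  # -> 3 3
```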