3rd-year PPE student at UCL and former President of the EA Society there. I was an ERA Fellow in 2023 and researched historical precedents for AI advocacy.
Charlie Harrison
Hello there,
There are lots of points here. While they are possible, I would suggest they are not particularly common/well-supported in the psychological literature as it stands today.
In addition, I don’t know why these explanations would lead to desensitisation towards positive and negative events.
Hello, Huw!
I can’t think of a good theoretical reason why true effects should fall so significantly – like 40%. That’s striking. The same attenuation result holds, even including income/age/event prevalence.
“Intuitively, if wellbeing saturates at the top end, having a really positive thing happen to me genuinely might not move the needle as much.”
This is true. Another way of saying this is: “the true effects fall as you get happier”. But then, given reported happiness has stayed constant, why would the effects fall?
Hm, I don’t think I agree with you on linearity. Andrew Oswald was writing about this in 2008. One option is that the function is logistic/arctan: i.e., quite concave/flat at high latent happiness levels. That is, you can’t shift reported happiness above a 10 (a ceiling effect), even if you get happier.
In this case: even if the reporting function is non-linear (and assuming true effect sizes are constant), why would the observed effects fall? Because people are getting happier. Again, this is a different way of saying rescaling is happening.
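To make this concrete, here’s a minimal sketch (the arctan reporting function is my own illustrative assumption, not from the paper): if the reporting function is concave at the top, the same latent-scale effect produces a smaller reported-scale effect as baseline happiness rises – observed attenuation with constant true effects.

```python
import math

# Hypothetical reporting function (one illustrative option): latent happiness h
# mapped onto a 0-10 reported scale via arctan, so it is concave/flat at the top.
def report(h):
    return 10 * (math.atan(h) / math.pi + 0.5)  # maps (-inf, inf) onto (0, 10)

true_effect = 1.0  # the same latent-scale shock at every baseline

# The observed (reported-scale) effect shrinks as baseline latent happiness rises,
# even though the true latent effect is constant.
for baseline in [0.0, 1.0, 2.0, 3.0]:
    observed = report(baseline + true_effect) - report(baseline)
    print(f"baseline h = {baseline}: observed effect = {observed:.2f}")
```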
I don’t think you did mention this before...! I think this graph is just for 1 country. Perhaps Japan.
To be honest, I don’t know what to think of the Wolfers/Stevenson objections! My only thought is: differences of, e.g., 0.2 points would look pretty small in comparison to the potential rescaling effects I suggest here.
Thanks, this is interesting. I wonder if this sort of individual-level noise might be smoothed out by large-n experience sampling.
Hello Vasco, thanks!
Calibrating with biological measures. Hm, that could be interesting, albeit labour-intensive …!
I’ve seen this graph a couple of times on the Forum now. I am confused why these lines are going up while LS is generally flat. The one thing that stands out to me is that the timeframes are generally shorter than the multi-decade ones used for most studies on the Easterlin Paradox.
I’d also guess it’d be harder to calibrate the categorical-response happiness question. (This’d certainly be the case if you used my method here.)
On income increasing over time. I discuss this more in the paper. We think that increasing income is the main pathway that rescaling occurs through. So, including it as a control could introduce over-control bias.
Oh, and I rounded from .62 something to .6 for the indexed effect size :)
Rescaling and The Easterlin Paradox (2.0)
I’m currently working on a paper which suggests ‘scale norming’ could lead to quite a large bias/underestimate of average national life satisfaction. Hope to post a version of this on the Forum soon.
Thank you for writing this!
I’d like to selfishly point to a previous post I’ve written on this point: Sometimes, We Can Just Say No
You might find it interesting to compare notes :P
Hey Mo, thanks so much!
I don’t have a particularly strong view on this.
I guess:
First, there are differences in the metrics used – the life satisfaction scale (0–10) is more granular than the 4-category response questions.
Additionally, in the plot from OWID, a lot of the data seems quite short-term – e.g., 10 years or so. Easterlin always emphasises that the paradox holds across the whole economic cycle, but a country might experience continuous growth in the space of a decade.
My overall view – several happiness economists I’ve spoken to basically think the Easterlin Paradox is correct (at least, to be specific: self-reported national life satisfaction is flat in the long-run), so I defer to them.
haha, yes, people have done this! This is called ‘vignette-adjustment’. You basically get people to read short stories and rate how happy they think the character is. There are a few potential issues with this method: (1) vignettes aren’t included in long-term panel data; (2) people might interpret the character’s latent happiness differently based on their own happiness.
All good. Easy to tie yourself in knots with this …
Hi Zachary, yeah, see the other comment I just wrote. I think stretching could plausibly magnify or attenuate the relationship, whilst shifting likely wouldn’t.
While I agree in principle, I think the evidence is that the happiness scale doesn’t compress at one end. There’s a bunch of evidence that people use happiness scales linearly. I refer to Michael Plant’s report (pp20-22 ish): https://wellbeing.hmc.ox.ac.uk/wp-content/uploads/2024/02/2401-WP-A-Happy-Probability-DOI.pdf
Thanks for this example, Geoffrey. Hm, that’s interesting! This has gotten a bit more complicated than I thought.
It seems:
1. Surprisingly, scale stretching could lead to attenuation or magnification depending on the underlying relationship (which is unobserved)
Let h be latent happiness; let LS be reported happiness.
Your example:
So yes, the gradient gets steeper.
Consider another function. (This is also decreasing in h)
i.e., the gradient gets flatter.
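A minimal numerical sketch of this point, using my own stand-in functions (a linear P(h) and a convex P(h) = 1/h – not necessarily the exact examples discussed): under a stretch where T=2 maps LS = h/2, the linear relationship yields a steeper observed gradient, while the convex one yields a flatter one.

```python
# Illustrative sketch (my own example functions): how a scale stretch
# (T=2: LS = h/2, i.e., h = 2*LS, instead of T=1: LS = h) changes the
# gradient of P with respect to reported LS, for two decreasing P(h).

def grad(P_of_LS, LS, eps=1e-6):
    # numerical derivative of P with respect to reported LS
    return (P_of_LS(LS + eps) - P_of_LS(LS - eps)) / (2 * eps)

LS = 4.0  # evaluate both gradients at the same reported score

# Linear underlying relationship: P(h) = 0.5 - 0.05*h
lin_T1 = grad(lambda ls: 0.5 - 0.05 * ls, LS)        # T=1: h = LS
lin_T2 = grad(lambda ls: 0.5 - 0.05 * (2 * ls), LS)  # T=2: h = 2*LS
print(abs(lin_T2) > abs(lin_T1))  # True: gradient steeper under stretching

# Convex underlying relationship: P(h) = 1/h
conv_T1 = grad(lambda ls: 1 / ls, LS)
conv_T2 = grad(lambda ls: 1 / (2 * ls), LS)
print(abs(conv_T2) < abs(conv_T1))  # True: gradient flatter under stretching
```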
2. Scale shifting should always lead to attenuation (if the underlying relationship is negative and convex, as stated in the piece)
Your linear probability function doesn’t satisfy convexity. But the convex case seems more realistic, given the plots from Oswald/Kaiser look less-than-linear, and probabilities are bounded (whilst happiness is not).
Again consider:
T=1: LS = h ⇒ P(h) = 1/LS
T=2: LS = h − 5 ⇔ h = LS + 5 ⇒ P(h) = 1/(LS + 5)
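A quick numerical check of this shifting example (assuming the underlying relationship is P(h) = 1/h, as above): the shifted reporting function flattens the observed gradient at every reported score.

```python
# Shifting example: underlying P(h) = 1/h.
# T=1: LS = h, so P = 1/LS; T=2: LS = h - 5, so P = 1/(LS + 5).
# The observed gradient with respect to LS is flatter at T=2 everywhere.

def grad(P, LS, eps=1e-6):
    # numerical derivative of P with respect to reported LS
    return (P(LS + eps) - P(LS - eps)) / (2 * eps)

for LS in [2.0, 4.0, 6.0, 8.0]:
    g1 = grad(lambda ls: 1 / ls, LS)        # T=1: h = LS
    g2 = grad(lambda ls: 1 / (ls + 5), LS)  # T=2: h = LS + 5
    print(f"LS = {LS}: |grad T1| = {abs(g1):.4f}, |grad T2| = {abs(g2):.4f}")
    assert abs(g2) < abs(g1)  # attenuation under shifting
```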
Overall, I think the fact that the relationship stays the same is some weak evidence against shifting – not stretching. FWIW, in the quality-of-life literature, shifting occurs but there is little stretching.
Sorry – this is unclear.
“If people are getting happier (and rescaling is occurring) the probability of these actions should become less linked to reported LS”
This means, specifically, a flatter gradient (i.e., ‘attenuation’) – smaller in absolute terms. In reality, I found a slightly increasing (i.e., steeper) absolute gradient. I can change that sentence.
I could imagine thinking about “people don’t settle for half-good any more” as a kind of increased happiness
This feels similar to Geoffrey’s comment. It could be that it takes less unhappiness for people to take decisive life action now. But this should mean a flatter gradient (the same direction as rescaling).
And yeah, this points towards culture/social comparison/expectations being more important than absolute £.
Thanks a lot for this. I hadn’t actually come across these terms; that’s super useful. I’ll have to read both these articles when I get a chance, will report back.
Hi Geoffrey,
Thank you!
It’s possible that these 3 exit actions have gotten easier to do over time. Intuitively, though, this would push in the same direction as rescaling: e.g., if getting a divorce is easier, it takes less unhappiness to push me to do it. This would mean the relationship should (also) get flatter. So it’s still surprising that the relationship is constant (or even getting stronger).
Hey Eugene, interesting stuff!
1) “Long-term, AI is very likely a complement; short-term, it may be a substitute”
I wonder why you think this?
2) “Good evidence suggests AI benefits the already skilled”
I feel like the evidence here is quite mixed: e.g., see this article from The Economist: https://www.economist.com/finance-and-economics/2025/02/13/how-ai-will-divide-the-best-from-the-rest
If we treat digital minds like current animal livestock, the expected value of the future could be really bad.
Thank you for this interesting post.
“By assumption, the AI can perfectly substitute for human AI researchers.”
Any idea/intuition about what would happen if you relaxed this assumption?