3rd-year PPE student at UCL and former President of the EA Society there. I was an ERA Fellow in 2023 & researched historical precedents for AI advocacy.
Charlie Harrison
haha, yes, people have done this! This is called ‘vignette-adjustment’. You basically get people to read short stories and rate how happy they think the character is. There are a few potential issues with this method: (1) vignettes aren’t included in long-term panel data; (2) people might interpret the character’s latent happiness differently based on their own happiness.
All good. Easy to tie yourself in knots with this …
Hi Zachary, yeah, see the other comment I just wrote. I think stretching could plausibly magnify or attenuate the relationship, whilst shifting likely wouldn’t.
While I agree in principle, I think the evidence suggests the happiness scale doesn’t compress at one end – there’s a bunch of evidence that people use happiness scales linearly. I’d refer to Michael Plant’s report (pp. 20–22 ish): https://wellbeing.hmc.ox.ac.uk/wp-content/uploads/2024/02/2401-WP-A-Happy-Probability-DOI.pdf
Thanks for this example, Geoffrey. Hm, that’s interesting! This has gotten a bit more complicated than I thought.
It seems:
1. Surprisingly, scale stretching could lead to attenuation or magnification, depending on the underlying relationship (which is unobserved)
Let h be latent happiness; let LS be reported happiness.
Your example (a linear probability of action – writing it generically as P(h) = a − b·h, decreasing in h):
T=1: LS = h ⇒ P = a − b·LS (gradient −b)
T=2 (stretching): LS = h/2 ⇔ h = 2·LS ⇒ P = a − 2b·LS (gradient −2b)
So yes, the gradient gets steeper.
Consider another function, P(h) = 1/h. (This is also decreasing in h.)
T=1: LS = h ⇒ P = 1/LS (gradient −1/LS²)
T=2 (stretching): LS = h/2 ⇔ h = 2·LS ⇒ P = 1/(2·LS) (gradient −1/(2·LS²))
i.e., at any given LS, the gradient gets flatter.
2. Scale shifting should always lead to attenuation (if the underlying relationship is negative and convex, as stated in the piece)
Your linear probability function doesn’t satisfy convexity. But convexity seems more realistic, given the plots from Oswald/Kaiser look less-than-linear, and probabilities are bounded (whilst happiness is not).
Again consider P(h) = 1/h:
T=1: LS = h ⇒ P = 1/LS (gradient −1/LS²)
T=2 (shifting): LS = h−5 ⇔ h = LS+5 ⇒ P = 1/(LS+5) (gradient −1/(LS+5)²)
At any given LS, the gradient shrinks in absolute terms, i.e., attenuation.
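To sanity-check this, here’s a minimal numerical sketch (my own illustration – the functional forms and constants are assumptions, not anything from the data). It evaluates dP/dLS at a fixed reported score under each transformation:

```python
# Sketch: how stretching/shifting changes the LS-gradient of P at a fixed
# reported score. Functional forms and constants are illustrative assumptions.
linear_p = lambda h: 1.0 - 0.1 * h   # linear, decreasing in h (not convex)
convex_p = lambda h: 1.0 / h         # decreasing AND convex in h

def gradient_at(p_of_h, h_of_ls, ls, eps=1e-6):
    """Numerical dP/dLS, where P(LS) = p(h(LS))."""
    p = lambda x: p_of_h(h_of_ls(x))
    return (p(ls + eps) - p(ls - eps)) / (2 * eps)

identity  = lambda ls: ls        # T=1: LS = h
stretched = lambda ls: 2 * ls    # stretching: LS = h/2  =>  h = 2*LS
shifted   = lambda ls: ls + 5    # shifting:   LS = h-5  =>  h = LS+5

for name, p in [("linear", linear_p), ("1/h", convex_p)]:
    print(name,
          "baseline:",  round(gradient_at(p, identity,  3.0), 4),
          "stretched:", round(gradient_at(p, stretched, 3.0), 4),
          "shifted:",   round(gradient_at(p, shifted,   3.0), 4))
# linear: stretching doubles the gradient (magnification); shifting leaves it unchanged.
# 1/h:    stretching halves the gradient and shifting shrinks it (attenuation).
```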
Overall, I think the fact that the relationship stays the same is some weak evidence against shifting – not stretching. FWIW, in the quality-of-life literature, shifting occurs but little stretching.
Sorry – this is unclear.
“If people are getting happier (and rescaling is occurring) the probability of these actions should become less linked to reported LS”
This means, specifically, a flatter gradient (i.e., ‘attenuation’) – smaller in absolute terms. In reality, I found a slightly increasing (absolute) gradient, i.e., a steeper one. I can change that sentence.
I could imagine thinking about “people don’t settle for half-good any more” as a kind of increased happiness
This feels similar to Geoffrey’s comment. It could be that it takes less unhappiness for people to take decisive life action now. But this should mean a flatter gradient (same direction as rescaling).
And yeah, this points towards culture/social comparison/expectations being more important than absolute £.
Thanks a lot for this. I hadn’t actually come across these terms; that’s super useful. I’ll have to read both these articles when I get a chance, will report back.
Hi Geoffrey,
Thank you!
It’s possible that these 3 exit actions have gotten easier to do over time. Intuitively, though, this would push in the same direction as rescaling: e.g., if getting a divorce is easier, it takes less unhappiness to push me to do it. This would mean the relationship should (also) get flatter. So it’s still surprising that the relationship is constant (or even getting stronger).
Are People Happier Than Before? I Tested for “Rescaling” & Found Little Evidence
Hey Eugene, interesting stuff!
1) “Long-term AI is very likely a complement; short-term, it may be a substitute”
I wonder why you think this?
2) “Good evidence suggests AI benefits the already skilled”
I feel like the evidence here is quite mixed: e.g., see this article from the Economist: https://www.economist.com/finance-and-economics/2025/02/13/how-ai-will-divide-the-best-from-the-rest
If we treat digital minds like current animal livestock, the expected value of the future could be really bad.
Great analogy, Alistair.
Hi Aaron,
I’m sorry it’s taken me a little while to get back to you.
In hindsight, the way I worded this was overly strong. Cultural explanations are possible, yes.
I guess I see this evidence as a weak update on a fairly strong prior that the burden of knowledge (BOK) is increasing – given the range of other variables (e.g., age of innovation, levels of specialisation), and similar trends within patent data. For example, you couldn’t attribute increasing collaboration on patents to norms within academia.
I’d be interested to compare # researchers with # papers. The ratio of these two growth rates is key for the returns-to-research parameter from Bloom et al. Do send me this, if you’ve remembered it in the intervening time.
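As a toy illustration of why that ratio matters (the numbers below are invented, not Bloom et al.’s estimates):

```python
# Toy illustration with invented growth rates (not Bloom et al.'s estimates).
# In their framework, research productivity is idea output per researcher,
# so its growth rate is roughly the gap between these two growth rates.
g_papers = 0.03       # hypothetical annual growth in paper output
g_researchers = 0.05  # hypothetical annual growth in researcher headcount
g_productivity = g_papers - g_researchers
print(f"research productivity growth ~ {g_productivity:+.0%}/yr")  # -2%/yr
```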
Charlie
Are New Ideas in AI Getting Harder to Find?
Thanks for writing this, Gideon.
I think the risks around securitisation are real/underappreciated, so I’m grateful you’ve written about them. As I’ve written about, I think the securitisation of the internet after 9/11 impeded proper privacy regulation in the US, and prompted Google towards an explicitly pro-profit business model. (Although this was not a case of macrosecuritisation failure.)
Some smaller points:
“Secondly, its clear that epistemic expert communities, which the AI Safety community could clearly be considered …”
This is argued for at greater length here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4641526
“but ultimately the social construction of the issue as one of security is what is decisive, and this is done by the securitising speech act.”
I feel like this point was not fully justified. It seems likely to me that whilst rhetoric around AGI could contribute to securitisation, other military/economic incentives could be as (or more) influential.
What do you think?
Hi, thanks for this! Any idea how this compares to total costs?
Thanks, Charlie! Seems like a reasonable concern. I feel like a natural response is that hedonic wellbeing is only one factor within life satisfaction. Though, I had a quick look online, and one study suggests they’re pretty strongly correlated (r between 0.8 and 0.9): https://www.sciencedirect.com/science/article/abs/pii/S0167487018305087
Thanks, glad you enjoyed 👍
At least, from the point of view of showing the plot. I’m more skeptical of the line the further out it goes, especially into a region with only a few points.
Fair.
“This data is the part I was nervous about. I don’t see a great indication of ‘leveling off’ in the blue lines. Many have a higher slope than the red lines, and the slope=0 item seems like an anomaly.”
To be clear – there are 2 versions of levelling off:
1) Absolute levelling off: slopes indistinguishable from 0.
2) Relative levelling off: slopes which decrease after the income threshold.
And for both 1) and 2), I am referring to the bottom percentiles. This is the unhappy minority which Kahneman and Killingsworth are referring to. So: the fact that slopes are indistinguishable after the income threshold for p=35, 50, 70 is consistent with the KK findings. The fact the slope increased for the 85th percentile is also consistent with the KK findings. Please look at Figure 1 if you want to double check.
I think there is stronger evidence for 2) than for 1). At percentiles p=5, 10, 15, 20, 25, 30 there was a significant decrease in the slope (2): see below. I agree that occurrences of 1) (i.e. insignificant slopes above £50k) may be because of a lack of data.
I also agree with you that the 0 slope is strange. I found this at the 10th and 30th percentiles. I think the problem might be that there weren’t many unhappy rich people in the sample.
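For what it’s worth, here’s a rough sketch of how the two tests can be run (my own illustration on synthetic data – the variable names, sample, and income gradient are all made up, and this isn’t my original code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (made-up gradient; purely illustrative).
rng = np.random.default_rng(0)
n = 20_000
log_inc = rng.normal(np.log(30_000), 0.6, n)
ls = 2 + 0.8 * log_inc + rng.normal(0, 1.2, n)
df = pd.DataFrame({"ls": ls, "log_inc": log_inc})

# Linear spline in log income with a knot at the £50k threshold.
knot = np.log(50_000)
df["above"] = np.clip(df["log_inc"] - knot, 0, None)  # zero below the knot

for q in [0.05, 0.10, 0.35, 0.50, 0.85]:
    fit = smf.quantreg("ls ~ log_inc + above", df).fit(q=q)
    below = fit.params["log_inc"]     # slope below £50k at this LS percentile
    change = fit.params["above"]      # (2) relative levelling off: change < 0?
    above = below + change            # (1) absolute levelling off: above ≈ 0?
    print(f"p={q:.2f}  below={below:.3f}  change={change:+.3f}  above={above:.3f}")
```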
Thank you! I haven’t used GitHub much before … Next time 🫡
Hey Mo, thanks so much!
I don’t have a particularly strong view on this.
I guess:
First, there are differences in the metrics used – the life satisfaction scale (0–10) is more granular than the 4-category response questions.
Additionally, in the plot from OWID, a lot of the data seems quite short-term – e.g., 10 years or so. Easterlin always emphasises that the paradox holds across the whole economic cycle, but a country might experience continuous growth in the space of a decade.
My overall view: several happiness economists I’ve spoken to basically think the Easterlin Paradox is correct (at least, to be specific: self-reported national life satisfaction is flat in the long run), so I defer to them.