2nd-year PPE student at UCL and outgoing President of the EA Society there. I was an ERA Fellow in 2023 & researched historical precedents for AI advocacy.
Charlie Harrison
Hi, thanks for this! Any idea how this compares to total costs?
Thanks, Charlie! Seems like a reasonable concern. I feel like a natural response is that hedonic wellbeing is only one factor within life satisfaction. Though, I had a quick look online, and one study suggests they’re pretty strongly correlated (r between 0.8 and 0.9) https://www.sciencedirect.com/science/article/abs/pii/S0167487018305087
Thanks, glad you enjoyed 👍
At least, from the point of view of showing the plot. I’m more skeptical of the line the further out it goes, especially into a region with only a few points.
Fair.
This data is the part I was nervous about. I don’t see a great indication of “leveling off” in the blue lines. Many have a higher slope than the red lines, and the slope=0 item seems like an anomaly.
To be clear – there are two versions of levelling off:
1) Absolute levelling off: slopes indistinguishable from 0.
2) Relative levelling off: slopes which decrease after the income threshold.
And for both 1) and 2), I am referring to the bottom percentiles. This is the unhappy minority which Kahneman and Killingsworth are referring to. So: the fact that slopes are indistinguishable from 0 after the income threshold for p=35, 50, 70 is consistent with the KK findings. The fact that the slope increased for the 85th percentile is also consistent with the KK findings. Please look at Figure 1 if you want to double-check.
I think there is stronger evidence for 2) than for 1). At percentiles p=5, 10, 15, 20, 25, 30, there was a significant decrease in the slope (2): see below. I agree that occurrences of 1) (i.e. insignificant slopes above £50k) may be because of a lack of data.
I also agree with you that the 0 slope is strange. I found this at the 10th and 30th percentiles. I think the problem might be that there weren’t many unhappy rich people in the sample.
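For concreteness, here is a minimal sketch (in Python with statsmodels; this is illustrative, not the exact code behind the post) of how both tests can be run as a piecewise-linear quantile regression with a knot at £50k. The DataFrame `df` and its `income`/`happiness` columns are placeholder names:

```python
# Minimal sketch of the two levelling-off tests as a piecewise-linear
# quantile regression with a knot at GBP 50k. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

KNOT = np.log(50_000)  # income threshold on the log scale

def levelling_off_tests(df: pd.DataFrame, q: float):
    log_inc = np.log(df["income"])
    X = sm.add_constant(pd.DataFrame({
        "log_inc": log_inc,                        # slope below the threshold
        "kink": np.clip(log_inc - KNOT, 0, None),  # extra slope above it
    }))
    res = sm.QuantReg(df["happiness"], X).fit(q=q)
    # 2) Relative levelling off: the slope decreases above the knot (kink < 0).
    relative = res.t_test("kink = 0")
    # 1) Absolute levelling off: slope above the knot indistinguishable from 0.
    absolute = res.t_test("log_inc + kink = 0")
    return relative, absolute

# e.g. for the 15th happiness percentile:
# relative, absolute = levelling_off_tests(df, q=0.15)
```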
Thank you! I haven’t used GitHub much before … Next time 🫡
Hey Ozzie!
1) Thank you!
2) < Or, the 15 percentile slopes are far higher than the other slopes > Agreed, this is probably the most robust finding. I feel pretty uncomfortable about translating this into policy or prescriptions about cash transfers, because this stuff was all correlational, and unearned income might affect happiness differently from earned income.
3) < 50k threshold seems arbitrary > This is explained in the second footnote. It is worth over $100k now, I believe.
< I’d also flag that it seems weird to me to extend the red lines so far to the left, when there are so few data points at less than ~3k > Do you mean from an aesthetic point of view, or a statistical one? The KK (2022) paper uses income groups – and uses the midpoints for the regressions – which is why their lines don’t extend back to very low income.
< I’m skeptical of what you can really takeaway after the 50k pound marks. There seems to be a lot of randomness here >
I think this depends on what claim you are making. I think there is pretty strong evidence for relative levelling off – i.e. a significant decrease in the slope for lower percentiles. You can look at the Table for t/p values.
[Edited: didn’t phrase this well.] Though, I agree with you that there is less evidence for absolute levelling off (i.e. 0 slopes above £50k). The fact that the slopes for lower percentiles weren’t significantly positive might be because of a lack of data. The 0 slopes for p=10, 30 seem to corroborate this.
Although, if the problem were a generic lack of observations above £50k, then we wouldn’t see significant positive slopes for the higher percentiles. Perhaps the specific problem was that there weren’t many unhappy rich people in the sample. I will add something to the summary about this.
I haven’t checked for outliers via influence plots or the like.
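For reference, here is a quick hypothetical sketch of what such a check could look like, using statsmodels’ built-in influence plot on an OLS fit (same placeholder `df` as the sketch above):

```python
# Hypothetical outlier/influence check on an OLS fit of happiness on
# log income; the plot flags high-leverage, high-residual observations.
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

X = sm.add_constant(np.log(df["income"]))
ols = sm.OLS(df["happiness"], X).fit()
sm.graphics.influence_plot(ols, criterion="cooks")
plt.show()
```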
4) Yeah, I feel like that would be cool, but it would be better to do on the bigger dataset that Killingsworth used. The useful thing here was applying the same methods to different (worse) data.
Thanks Larks!
Thank you John, I appreciate it :)
Against a Happiness Ceiling: Replicating Killingsworth & Kahneman (2022)
I’d guess less than 1⁄4 of the people had engaged with AIS (e.g. read some books/articles). Perhaps 1⁄5 had heard about EA before. Most were interested in AI, though.
Ah nice! I had forgotten about this Anscombe article, which is where this point came from. Thanks for pointing that out.
Interesting! Makes sense that this is common advice. I’ve heard similar stuff from CBT therapists, as you mention.
That point was fairly anecdotal, and I don’t think it contributes too much to the argument in this section. I place more weight on the Stanford article/Chao-Hwei responses.
I don’t think that the quote you mention is exactly what Singer believes. He’s setting up the problem for Chao-Hwei to respond to. His own view is that “suffering is bad” is a self-evident perception. Perhaps this is subtly different from Singer disliking suffering, or wanting others to alleviate it. Perhaps it’s self-evident in the same way colour is; I think moral realists lean on this analogy sometimes.
Reflections on The Buddhist and the Ethicist
Some thoughts from a University AI Debate
Thank you for writing this piece, Sarah! I think the difference stated above between A) the counterfactual impact of an action or a person, and B) moral praiseworthiness, is important.
You might say that individual actions or lives have large differences in impact, but remain sceptical of the idea of (intrinsic) moral desert/merit – because individuals’ actions are conditioned by prior causes. Your post reminded me a lot of Michael Sandel’s book, The Tyranny of Merit. Sandel takes issue with the attitude of “winners” within contemporary meritocracy who see themselves as deserving of their success. This seems similar to your concerns about hubris amongst “high-impact individuals”.
Thank you Nathan!!
‘Surveillance Capitalism’ & AI Governance: Slippery Business Models, Securitisation, and Self-Regulation
I’m so sorry it’s taken me so long to respond, Mikhail!
<I would like to note that none of that had been met with corporations willing to spend potentially dozens of billions of dollars on lobbying>
I don’t think this is true, for GMOs, fossil fuels, or nuclear power. It’s important to distinguish total lobbying capacity/potential from the actual amount spent on lobbying. Total annual technology lobbying is on the order of hundreds of millions of dollars; the amount allocated to AI lobbying is, by definition, less. This is similar to (or, I suspect, lower than) annual biotechnology lobbying related to GMOs. Annual climate lobbying is over £150 million per year, as I mentioned in my piece. The stakes are also high for nuclear power: as mentioned in my piece, legislation in Germany to extend plant lifetimes in 2010 offered around €73 billion in extra profits for energy companies, and some firms sued for billions of euros after Germany’s reversal. (Though, I couldn’t find an exact figure for nuclear lobbying.)
< none of these clearly stand out to policymakers as something uniquely important from the competitiveness perspective >
I also feel this is too strong. Reagan’s national security advisors were reluctant about his arms control efforts in the 1980s because of national security concerns. Some politicians in Sweden believed nuclear weapons were uniquely important for national security. If your point is that AI is more strategically important than these other examples, then I would agree with you; but as written, the phrasing is too strong.
< AI is more like railroads >
I don’t know if this is true … How strategically important were railroads? And how profitable were they? There seems to have been much more state involvement in railroads than in AI… Though, this could be an interesting case-study project!
< AI is more like CFCs in the eyes of policymakers, but for that, you need a clear scientific consensus on the existential threat from AI >
I agree you need scientific input, but CFCs also saw widespread public mobilisation (as described in the piece).
< incentivising them to address the public’s concerns won’t lead to the change we need >
This seems quite confusing to me. Surely this depends on what the public’s concerns are?
< the loudest voices are likely to make claims that the policymakers will know to be incorrect >
This also seems confusing to me. If you believe that policymakers regularly sort the “loudest voices” from real scientists, then why do you think regulations with “substantial net-negative impact” passed with respect to GMOs/nuclear?
< Also, I’m not sure there’s an actual moratorium on GM crops in Europe >
Yes, by “moratorium” I’m referring to the de-facto moratorium on new approvals of GMOs from 1999 to 2002. In general, though, Europe grows a lot fewer GMOs than other countries: 0.1 million hectares annually versus >70 million hectares in the US. I wasn’t aware Europe imports GMOs from abroad.
Sorry that this is still confusing. 5-15 is the confidence interval/range for the counterfactual impact of protests, i.e. p(event occurs with protests) - p(event occurs without protests) = somewhere between 5 and 15 percentage points. For example, if p(event occurs with protests) = 25% and p(event occurs without protests) = 15%, the counterfactual impact would be 10 percentage points. It is not that p(event occurs with protests) = 5 and p(event occurs without protests) = 15, which wouldn’t make sense.
Thanks for writing this Gideon.
I think the risks around securitisation are real and underappreciated, so I’m grateful you’ve written about them. As I’ve written about, I think the securitisation of the internet after 9/11 impeded proper privacy regulation in the US, and pushed Google towards an explicitly pro-profit business model. (Although, this was not a case of macrosecuritization failure.)
Some smaller points:
This is argued for at greater length here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4641526
I feel like this point was not fully justified. It seems likely to me that, whilst rhetoric around AGI could contribute to securitisation, other military/economic incentives could be as influential (or more so).
What do you think?