Some Reflections on Philosophy Tube’s “The Rich Have Their Own Ethics: Effective Altruism & the Crypto Crash”

[I expect other people to have more valuable reflections. These are mine, and I’m not very confident in many of them.]

Some reflections on parts of a recent Philosophy Tube video by Abigail Thorn on effective altruism and longtermism. Jessica Wen has a summary of the video.

What I liked

The video is very witty and articulate, and it has a pleasant tone that doesn’t take itself too seriously. Abigail Thorn’s criticism is generally well-researched, and I think it’s a good-faith critique of effective altruism and longtermism. For example, she gives a surprisingly accurate portrayal of earning to give, a concept often poorly portrayed in the media.

Where I agree

I found myself agreeing with her on quite a few points, such as that the treatment of AI risk in What We Owe the Future (WWOTF) is pretty thin.[1] Or that effective altruism may have undervalued the importance of ensuring that decisions (or even just discussions) about the shape of the long-run future are not made by a small group of people who can’t even begin to represent the whole of humanity.

Where I’m Confused or Disagree

I’m not surprised that I disagree with or am confused by parts of the criticism. The video is ambitious and covers a lot of ground – FTX, Measurability Bias, Longtermism, Pascal’s Mugging, MrBeast, EA as suspiciously well aligned with business interests, The Precipice versus WWOTF, etc. – within 40 entertaining minutes.

Measurability Bias and Favoring the Short-Term

“EA tends to favor short-term, small-scale interventions that don’t tackle the root of the problems. Some of those short-term interventions don’t last or have long-term negative effects. This is all to say, that it is by no means settled what ‘effective’ actually means. And in fairness to them, some EAs are aware of this and they do talk about it, but none of the EA philosophers I’ve read quite seem to understand the depth of this issue.” – Abigail Thorn

While this may have been an accurate description of early effective altruism, in 2019 – out of all of the most engaged EAs – only 28–32% were clearly working on short-term, ‘easily’ measured research or interventions, although around 70% of funding was still directed towards near-term research and interventions.[5] And a lot of the global health funding goes to large-scale interventions that work at the policy level – such as the Lead Exposure Elimination Project or the RESET Alcohol Initiative.[6]

Deciding on the Future on Our Own

“MacAskill and Ord write a lot about progress and humanity’s potential, but they say almost nothing about who gets to define those concepts. Who gets seen as an expert, who decides what counts as evidence, whose vision of the future gets listened to?” – Abigail Thorn

I agree that we should be very wary of a small, weird, elite group of people having an incredibly outsized ability to shape the future.[2]

But many (most?) EAs in longtermism work on preventing existential risks rather than on designing detailed plans for an ideal future. This does not mean that their research is free from ethical judgments, but the work is focused on ensuring that there is a future anyone can shape at all. It’s therefore plausible that individuals with diverging visions for the future will end up working together to address existential risks.[3]

I’d also note that longtermist philosophers are not making sweeping claims about the optimal shape of the future. The Precipice has a longer section on the “Long Reflection”, about handing off the decision about the shape of the future to others. And MacAskill agrees in WWOTF that even though “it seems unlikely to me that anything like the long reflection will occur. [… W]e can see it as an ideal to try to approximate.”

Conflating longtermism and existential risk reduction

Philosophy Tube is never explicit about who is part of longtermism; it remains some vague group of people doing ‘stuff’. But in reality, most of the people you’d likely include in the longtermist community are working on existential risk. Many of them[4] are not motivated by longtermist ideas, or are motivated only by a common-sense version of (weak) longtermism. Wanting to protect the current world population or their grandchildren’s children is enough motivation for many to work on reducing existential risks.

Pascal’s Mugging

The thought experiment is beautifully presented, but it is unclear to me how Philosophy Tube relates it to longtermism. The argument only works if the positive expected value comes from a huge upside despite an infinitesimal likelihood. Most effective altruists consider existential risk to be much higher than typically envisioned in Pascal’s Mugging examples: Toby Ord estimates a 1 in 6 chance of an existential catastrophe in the next century – slightly higher than the “1 in a trillion” chance mentioned by Abigail Thorn in her version of Pascal’s Mugging.
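To make the structural difference concrete, here is a minimal expected-value sketch (the payoff V and the cost c are illustrative placeholders of my own, not figures from the video or the book):

\[
\text{Mugging:}\quad 10^{-12} \cdot V - c > 0 \;\text{ only if }\; V > 10^{12}\, c,
\qquad
\text{X-risk:}\quad \tfrac{1}{6} \cdot V - c > 0 \;\text{ if }\; V > 6\, c.
\]

In the first case the conclusion is carried entirely by an astronomical payoff; in the second, an ordinary-sized probability does the work, so the ‘mugging’ structure doesn’t apply.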

Possible Nitpick on Reproductive Rights

The part about WWOTF’s handling of reproductive rights issues struck me as potentially misleading.

“Turning now to what I didn’t like so much about the book, you can kind of tell it was written by a man because there is almost zero discussion of reproductive rights. If I was bringing out a book in current year about the moral duties that we have to unborn people, the first thing I would’ve put in it, page 1, 72 point font, ‘Do not use this book to criminalize abortion!’ Maybe he’ll discuss that in a future edition.” – Abigail Thorn

This left me with the impression that MacAskill was simply blasé about abortion. But going back to the book, he does write:

“Of course, whether to have children is a deeply personal choice. I don’t think that we should scold those who choose not to, and I certainly don’t think that the government should restrict people’s reproductive rights by, for example, limiting access to contraception or banning abortion.”

I understand that this might not be forceful enough because it’s in the middle of the book and not literally on “page 1, 72 point font” – but what it says is almost exactly what Philosophy Tube wants it to say. Criticising MacAskill for ‘doing the right thing, but just slightly off’ feels needlessly antagonistic.

A Personal Takeaway

  • In public writing, if you believe a point is important (like not wanting to advocate taking away reproductive rights in any way), it’s not enough to just state the point. You should be very aware of how forcefully the point comes across.

Again, I really enjoyed the video in general. And I’m glad that the first in-depth criticism of EA with a wide reach was made by a channel that spends so much time researching its topics and strives to present tricky issues in an even-handed manner.

  1. ^

    Especially compared to how important it is to many in the existential risk community.

  2. ^

    I would have personally enjoyed a chapter in WWOTF about this problem.

  3. ^

    I’m pretty uncertain about this. There might be ways of smuggling assumptions about morality into work on existential risk reduction that I’m currently not thinking of.

  4. ^

    My guess is most, but I don’t know of any polls.

  5. ^

    Also, I’m confused about which of the interventions EAs favor have had “long-term negative effects”. I would not be surprised if that were the case, but I’ve not seen any concrete examples. The source for this claim doesn’t point to any cases where EAs have caused negative effects; it discusses a pathway through which EAs could have a long-term negative effect.

  6. ^

    And this is not a new thing: see “Effective altruists love systemic change” (from 2015).