Four Concerns Regarding Longtermism

I’m submitting the below as part of the Red Teaming contest. Any prize money won will be donated to the non-profit I founded (the Rikers Debate Project).

--

I think it makes sense to start with who I am, because I’m hoping to offer the somewhat unique perspective of a curious outsider on EA.

I consider myself “EA adjacent.” I have a number of good friends (Josh Morrison, Jay Shooster, Alex Silverstein) who are involved in the movement to varying degrees (then again, because I’m only adjacent, I have no idea whether any of you will know those names). I’m a rationalist by nature, did high-level college debate with decent success, and was part of the Moneyball-era baseball analytics movement (so I’m comfortable at the intersection of numbers and logic). I think all of those qualities give me a proclivity toward EA. These days I’m a rather boring complex commercial litigator who does a good amount of pro bono work. My greatest civic contribution is founding (with Josh Morrison and others) the Rikers Debate Project, which I consider a rather typical, not especially EA-y non-profit.

I’m writing this because I think I’ve absorbed enough about EA via osmosis to provide a halfway intelligible critique. [1] To state my lack of bona fides: I sometimes read EA material by accident, but neither typically nor intentionally. I had to make an account on this forum to post this. I’ve never heard Will MacAskill speak, but multiple people have told me about him. I read Peter Singer before I associated him with EA.

I am generally sympathetic to and supportive of the positions of the EA movement. I mean, who can oppose ambitious people coming together to do efficacious good for the world? It sounds wonderful. In both theory and practice, I care about the movement (from a distance) and hope it succeeds.

The devil is in the details, as always. I offer the following critique in part because I feel guilty that others have done far more to help the world than I have, and the least I can do is share my thoughts to help the movement get better. I caution that the below is based on an outsider’s second-hand perspective. Therefore, it may/will get details wrong. But my hope is that the spirit is right.

I have heard that there is a focus in EA lately on longterm problems. You can read a bit about longtermism in Simon Bazelon’s post here. [2] Intuitively, longtermism makes sense. EA, as a collective movement, has a finite amount of resources in the present day, yet it occupies a unique (and literal) position in time. Viewed as a percentage of all resources from here on out, today’s resources should therefore be used to take advantage of that unique temporal position and maximize future utility returns. To invert the hypo a bit: if the movement could go back in time, then using the money on anything but, say, ending slavery or stopping the Holocaust (or, to be more EA about it, handing out vaccines/cures for the bubonic plague) would be not just foolhardy but ethically disastrous. Therefore, we must focus on fat-tail future risks that threaten future life.

I like this thought a lot. I think it makes sense. I write to offer four discrete (but at times overlapping) concerns/cautions w/r/t longtermism. At most, the implication of these criticisms is that there may be a current overcorrection toward longtermism that should be somewhat corrected back in the direction of the prior distribution. This doesn’t mean we stop focusing on longterm risks (far from it), [3] but simply that we recalibrate the risk-utility curve and potentially allocate more resources toward present-ish causes. At the least, I think these criticisms should be discussed, and persuasive reasons should be offered for why they do not merit serious concern.

  1. Political Capital Concern: There is a compelling case to be made that a wildly successful EA movement could do as much good for the world as almost any other social movement in history. Even if the movement is only marginally successful, so long as the precepts underlying it are somewhat sound, the utility implications are enormous.

    To that end, it is incredibly worthwhile for the movement to be politically and socially successful. If the movement dies in the present moment, it can do little to help future life. But because helping future people seems abstract and foreign to the everyday person who wants help right now, and because future life is easily otherized, the movement is susceptible to the criticism that it’s not actually helping anyone. Indeed, most people in the present day will consider movements that help future life to be the moral equivalent of not helping anyone (this is, obviously, massively wrong, but still an important observation).

    One way to address this political capital concern is to provide direct, tangible utility to present humans. I know this happens in the movement, and my point isn’t to take away from those gains made to help people in the present. Instead, the thought is: when running your utility models, factor this in however you can. Consider that utility delivered from EA resources to present life, when done effectively and messaged well, [4] redounds to the benefit of future life as well.

  2. Social Capital Concern: This one might be a bit meandering, but I’ll get there, hopefully. I think this is probably the most important criticism of my four. This point is rather meta, and I agree with the point made by Michael Nielsen that it is necessary and healthy for the EA movement to practice consistent self-reflection.[5] [6]

    EA proponents should not have to live hermetic lifestyles. In fact, a collective spirit with communal living is healthy and good for the movement. My concern here is not the very tired “EA is a cult” jab, which I find uncompelling. That being said, the movement should be aware of the potential pitfalls that come with this approach, and apply appropriate guardrails.

    Here is my concern regarding the intersection of EA-as-community and longtermism: focusing on longterm problems is probably way more fun than focusing on present ones. [7] Longtermism projects seem inherently more big-picture and academic, detached from the boring mundanities of present reality. There is a related concern that longtermism may fetishize future life, in the sense of seeing ourselves as saviors who will be looked back on by billions in the future with gratitude and outright reverence for caring so much about posterity. [8]

    But that aside, if I am correct that longtermism projects are sexier by nature, then adding communal living/organizing to EA can probably lead a lot of people to use flimsy models to talk and discuss and theorize and pontificate, as opposed to creating tangible utility, so that they can work on cool projects without getting their hands too dirty, all while claiming the mantle of not just the same, but greater, do-gooding. “Head to Africa to do charity work? Like a normie who never read Will MacAskill? Buddy, I’m literally saving a billion lives in 2340 right now.” So individual EA actors, given the social incentives brought on by increased communal living, will want to find reasons to engage in longtermism projects because doing so will increase their social capital within the community. I don’t mean to imply that anyone is doing this consciously or in bad faith, [9] but that doesn’t mean it won’t happen.

    So this concern takes no issue with either longtermism projects or EA-as-community practices on their own. Instead, the concern is that, in tandem, the latter will make people gravitate toward the former not out of disciplined dedication to greatest-utility principles, but simply because longtermism projects seem cool. EA followers may be more resistant to animal spirits than the average joe, but they are not immune.

  3. Muscle Memory Concern: I have founded an organization that helps people. It is a whole lot of work (and I’ve done it outside of my already stressful job). Aside from providing your organization’s core value-add, you need to worry about its day-to-day running (people, finances, legal, etc.). And once the organization is self-sufficient and no longer consistently facing existential threats, it can become increasingly easy to get distracted and focus on new projects that seem exciting and fresh.

    This is why it’s important to have quick muscle memory for getting back to your core value-add (here, the steady conversion of resources into efficiently distributed utility). If you stray and find your organization lacking its typical punch, you want to be able to snap back into it fast, or else the movement can become rigid and stale and may never find its way back to its former self.

    I think this is a reason to avoid a disproportionate emphasis on longtermism projects. Because the efficacy of longtermist work is inherently more difficult to calculate with confidence, it can become quite easy to forget how to provide utility quickly and confidently. Basically: if you read enough AI doomposts, you might forget how to build a malaria net.

  4. Discount Factor Concern: This one is simple. Future life is less likely to exist than current life. I understand the irony here, since longtermism projects seek to make it more likely that future life exists. But inherently you just have to discount the utility of each individual future life. In the aggregate, there’s no question that the utility gains are still enormous. But each individual life should carry some discount based on this less-likely-to-exist factor; a quick sketch of what I mean follows.
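    To make that concrete, here is a minimal formalization (my own notation, nothing canonical from the longtermism literature): the expected utility of the future is

    $$\mathbb{E}[U] = \sum_{i} p_i \, u_i,$$

    where the sum runs over potential future lives, $p_i$ is the probability that life $i$ ever comes to exist, and $u_i$ is that life’s utility conditional on existing. The longtermist case rests on this sum being enormous even after every $p_i < 1$ is applied; my caution is simply that the discount be applied per life, rather than waved away at the aggregate level.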

Anyway, those are my thoughts. I hope they provide some benefit to the community. And I do greatly appreciate the sacrifices people are making to help others! It’s inspiring. Good luck!

  1. ^

    I’m not having anyone edit or review this for me; I’d like all my thoughts and mistakes to be my own.

  2. ^

    Simon has a follow-up post where he discusses a common critique of longtermism: uncertainty. I don’t address that critique here, since I find it unpersuasive; it’s a concern, sure, but one inherent in any longtermist approach. I think it’s best to focus here on ideas that aren’t so obvious.

  3. ^

    “Overcorrection” here does not mean that there should never have been a correction. There should have been. I take the focus on longtermism, relative to the focus beforehand, to be a welcome development.

  4. ^

    Picking effective politicians affiliated with the movement is obviously very important. I’ll attribute the choices on that front so far to, uh, growing pains...

  5. ^

    Michael is correct that part of what makes EA an attractive ideology is the idea that self-reflection and openness to criticism are healthy for the organization. That is a wonderful principle for an organization/community committed to improving, rather than simply consolidating power for individual actors.

  6. ^

    I don’t want to get sidetracked, but I also have to mention that I tend to agree more with this tweet/thread by Alexander Berger than I do with most of Michael’s post. Maybe another post, another day.

  7. ^

    If this is wrong, my entire point fails.

  8. ^

    Hot take: lots of EA people think they’re playing Ender’s Game where (spoiler alert) they actually save humanity in the end.

  9. ^

    There is a related concern here: longtermism projects may be easier to get funding for with weak data, à la tech founders and VC firms over the last few years. But I imagine the movement already considers this seriously.