I haven't yet decided, but it's likely that a majority of my donations will go to this year's donor lottery. I'm fairly convinced by the arguments in favour of donor lotteries [1, 2], and would encourage others to consider them if they're unsure where to give.
Having said that, lotteries produce fewer fuzzies than donating directly, so I may separately give to some effective charities which I'm personally excited about.
Mildly against the Longtermism → GCR shift
Epistemic status: Pretty uncertain, somewhat rambly
TL;DR: Replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics.
Over the last ~6 months I've noticed a general shift amongst EA orgs towards framing work on risks from AI, bio, nukes, etc. less in terms of the logic of longtermism and more in terms of Global Catastrophic Risks (GCRs) directly. Some data points on this:
Open Phil renaming its EA Community Growth (Longtermism) Team to GCR Capacity Building
This post from Claire Zabel (OP)
Giving What We Can's new Cause Area Fund being named "Risk and Resilience," with the goal of "Reducing Global Catastrophic Risks"
Longview-GWWC's Longtermism Fund being renamed the "Emerging Challenges Fund"
Anecdotal data from conversations with people working on GCR / x-risk / longtermist causes
My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no one wants a GCR, even if only thinking about people alive today.
Yet I can't help but feel something is off about this framing. Some concerns (in no particular order):
From a longtermist (~totalist classical utilitarian) perspective, there's a huge difference between ~99% and 100% of the population dying, if humanity recovers in the former case but not the latter. Just looking at GCRs on their own mostly misses this nuance.
(see Parfit's Reasons and Persons for the full thought experiment)
From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn't differentiate between "humanity prevents GCRs and realises 1% of its potential" and "humanity prevents GCRs and realises 99% of its potential"
Preventing an extinction-level GCR might move us from 0% to 1% of future potential, but there's 99x more value at stake in going from the "okay (1%)" future to the "great (100%)" future.
See Aird 2020 for more nuances on this point
From a longtermist (~suffering-focused) perspective, reducing GCRs might be net-negative if the future is (in expectation) net-negative
E.g. if factory farming continues indefinitely, or because reducing GCRs increases the chance of an s-risk
See Melchin 2021 or DiGiovanni 2021 for more
(Note this isn't just a concern for people with suffering-focused ethics)
From a longtermist perspective, a focus on GCRs neglects non-GCR longtermist interventions (e.g. trajectory changes, broad longtermism, patient altruism/philanthropy, global priorities research, institutional reform, etc.)
From a "current generations" perspective, reducing GCRs is probably not more cost-effective than directly improving the welfare of people / animals alive today
I'm pretty uncertain about this, but my guess is that alleviating farmed animal suffering is more welfare-increasing than e.g. working to prevent an AI catastrophe, given the latter is pretty intractable (but I haven't done the numbers)
See discussion here
If GCRs actually are more cost-effective under a "current generations" worldview, then I question why EAs would donate to global health / animal charities (since this is no longer a question of "worldview diversification", just raw cost-effectiveness)
More meta points
From a community-building perspective, pushing people straight into GCR-oriented careers might work in the short term to get resources to GCRs, but could lose the long-run benefits of EA / longtermist ideas. I worry this might worsen community epistemics about the motivation behind working on GCRs:
If the case for GCRs only goes through on longtermist grounds, but longtermism is false, then impartial altruists should rationally switch towards current-generations opportunities. Without a grounding in cause impartiality, however, people won't actually make that switch
From a general virtue ethics / integrity perspective, making this change for PR / marketing reasons alone, without an underlying change in longtermist motivation, feels somewhat deceptive.
As a general rule about integrity, I think it's probably bad to sell people on doing something for reason X when you actually want them to do it for reason Y, and you're not transparent about that
There's something fairly disorienting about the community switching so quickly from [quite aggressive] "yay longtermism!" (e.g. the hype around the launch of WWOTF) to essentially disowning the word longtermism, with very little mention / admission that this happened or why