Movement Collapse Scenarios

Epistemic status: I mostly want to provide a starting point for discussion, not make any claims with high confidence.

Introduction and summary

It’s 2024. The effective altruism movement no longer exists, or is no longer doing productive work, for reasons our current selves wouldn’t endorse. What happened, and what could we have done about it in 2019?

I’m concerned that this question isn’t discussed more often (though CEA briefly speculates on it here). It’s a prudent topic for a movement to be thinking about at any stage of its life cycle, but our small, young, rapidly changing community should be taking it especially seriously—it’s very hard to say right now where we’ll be in five years. I want to spur thinking on this issue by describing four plausible ways the movement could collapse or lose much of its potential for impact. This is not meant to be an exhaustive list of scenarios, nor is it an attempt to predict the future with any sort of confidence—it’s just an exploration of some of the possibilities, and what could logically lead to what.

  • Sequestration: The EAs closest to leadership become isolated from the rest of the community. They lose a source of outside feedback and a check on their epistemics, putting them at a higher risk of forming an echo chamber. Meanwhile, the rest of the movement largely dissolves.

  • Attrition: Value drift, burnout, and lifestyle changes cause EAs to drift away from the movement one by one, faster than they can be replaced. The impact of EA tapers, though some aspects of it may be preserved.

  • Dilution: The movement becomes flooded with newcomers who don’t understand EA’s core concepts and misapply or politicize the movement’s ideas. Discussion quality degrades and “effective altruism” becomes a meaningless term, making the original ideas impossible to communicate.

  • Distraction: The community becomes engrossed in concerns tangential to impact, loses sight of the object level, and veers off track of its goals. Resources are misdirected and the best talent goes elsewhere.

Below, I explore each scenario in greater detail.

Sequestration

To quote CEA’s three-factor model of community building,

Some people are likely to have a much greater impact than others. We certainly don’t think individuals with more resources matter any more as people, but we do think that helping direct their resources well has a higher expected value in terms of moving towards CEA’s ultimate goals.

However,

good community building is about inclusion, whereas good prioritization is about exclusion

and

It might be difficult in practice for us to be elitist about the value someone provides whilst being egalitarian about the value they have, even if the theoretical distinction is clear.

I don’t want to be seen as arguing for any position in the debate about whether and how much to prioritize those who appear most talented—a sufficiently nuanced writeup of my thoughts would distract from my main point here. However, I do want to highlight a possible risk of too much elitism that I haven’t really seen talked about. The terms “core” and “middle” are commonly used here, but I generally find their use conflates level of involvement or commitment with level of prominence or authority. In this post I’ll be using the following definitions:

  • Group 1 EAs are interested in effective altruism and may give effectively or attend the occasional meetup, but don’t spend much time thinking about EA or consider it a crucial part of their identities and their lives.

  • Group 2 EAs are highly dedicated to the community and its project of making the world a better place; they devour EA content online and/or regularly attend meetups. However, they are not in frequent contact with EA decision-makers.

  • Group 3 EAs are well-known community members, or those who have been identified as potentially high-impact and have prominent EAs or orgs like 80K investing in their development as effective altruists.

A sequestration collapse would occur if EA leadership stopped paying much attention to Groups 1 and 2, or became so tone-deaf about putting Group 3 first that everyone else felt alienated and left the movement. Without direction and support, most of Group 1 and some of Group 2 would likely give up on the idea of doing good effectively. The others might try to go it alone, or even try to found a parallel movement—but without the shared resources, coordination ability, and established networks of the original community, they would be unlikely to recapture all the impact lost in the split. Meanwhile, Group 3 would be left with little to no recruitment ability, since most Group 3 EAs pass through Groups 1 and 2 first.

Finally, Group 2 and especially Group 1 act as a bridge between Group 3 and the rest of the world, and the more grounded, less radical perspective they bring may help prevent groupthink, group polarization, and similar dynamics. Without it, Group 3 would be left dangerously isolated and more prone to epistemic errors. Overall, losing Groups 1 and 2 would curtail EA’s available resources and threaten the efficiency with which we used them—possibly forever.

Again, prioritization of promising members is a very hard needle to thread. However, EA leadership should put a great deal of thought and effort into welcoming, inclusive communication and try hard to avoid implying that certain people aren’t valuable. They should also keep an eye on the status and health of the community: if decision-makers get out of touch with the perspectives, circumstances, and problems of the majority of EAs, their best efforts at inclusivity are unlikely to succeed. Prominent EAs should strive to be accessible to members of Groups 1 and 2 and to hear out non-experts’ thoughts on important issues, especially ones concerning community health. Local group organizers should create newcomer-friendly, nonjudgmental spaces and respond to uninformed opinions with patience and respect. We can all work to uphold a culture of basic friendliness and openness to feedback.

Attrition

Over time, some EAs will inevitably lose their sense of moral urgency or stop feeling personally compelled to act against suffering. Some will overwork themselves, experience burnout, and retreat from the community. Some will find that as they grow older and move into new life stages, an altruism-focused lifestyle is no longer practical or sustainable. Some will decide they disagree with the movement’s ideals or the direction it seems to be moving in, or be drawn by certain factors but repelled by others and find that over time their aversion wins out. Each person’s path to leaving the movement will be unique and highly personal. But these one-offs will pose a serious danger to the movement if they accumulate faster than we can bring new people in.
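
To make that last condition concrete, here is a minimal sketch of the underlying arithmetic, assuming a toy model in which a fixed fraction of members drifts away each year while a fixed number of newcomers joins. The function name, parameters, and every number below are invented purely for illustration; none of them are estimates of real attrition or recruitment.

```python
# Toy model of movement size under attrition vs. recruitment.
# All rates and counts below are invented for illustration only.

def project_membership(initial_members: float, annual_attrition_rate: float,
                       annual_recruits: float, years: int) -> list[int]:
    """Project membership year by year, assuming a constant attrition rate
    and a constant number of new recruits per year."""
    members = initial_members
    trajectory = [round(members)]
    for _ in range(years):
        members = members * (1 - annual_attrition_rate) + annual_recruits
        trajectory.append(round(members))
    return trajectory

# 10% annual attrition with 500 recruits per year: a 10,000-person
# community shrinks toward an equilibrium of 500 / 0.10 = 5,000 members.
print(project_membership(10_000, 0.10, 500, years=10))

# Recruitment that matches the outflow (1,000 per year at 10% attrition)
# holds the community steady instead.
print(project_membership(10_000, 0.10, 1_000, years=10))
```

The only point of the sketch is that whether impact merely tapers or collapses outright depends on the balance between the outflow and the inflow (the toy model settles at roughly the number of recruits divided by the attrition rate), not on any individual departure.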

In an attrition collapse scenario, the movement’s impact would taper slowly as people dropped out one by one. EA’s ideas might influence ex-members’ thinking over their lifetimes, and some people might continue to donate substantially to high-impact charities without necessarily following the latest research or making the community a part of their lives. A highly active fraction of EAs would continue to pursue effective altruist goals as people bled away around them, possibly keeping some of the movement’s institutions on life support. If we managed to retain a billionaire or two, we could even continue work like the Open Philanthropy Project’s. But even if we did, our capacity would be greatly reduced and our fundamental ideas and aspirations would die out.

Whether and when to leave the movement is something we should all decide for ourselves, so we shouldn’t fight attrition on the level of individuals. Instead, we should shore up the movement as a whole. EA leadership should keep an eye on the size of the community and be sure to devote enough resources to recruitment to keep EAs off the endangered species list. Local group organizers can maintain welcoming environments for newcomers and make sure they’re creating a warm and supportive community that people enjoy engaging with. We should all work hard to be that community, online and in person.

Dilution

From CEA’s fidelity model:

A common concern about spreading EA ideas is that the ideas will get “diluted” over time and will come to represent something much weaker than they do currently. For example, right now when we talk about which cause areas are high impact, we mean that the area has strong arguments or evidence to support it, has a large scope, is relatively neglected and is potentially solvable.
Over time we might imagine that the idea of a high impact cause comes to mean that the area has some evidence behind it and has some plausible interventions that one could perform. Thus, in the future, adherence to EA ideas might imply relatively little difference from the status quo.
I’m uncertain about whether this is a serious worry. Yet, if it is, spreading messages about EA with low fidelity would significantly exacerbate the problem. As the depth and breadth of ideas gets stripped away, we should expect the ideas around EA to weaken over time, which would eventually cause them to assume a form that is closer to the mainstream.

In a dilution scenario, the movement becomes flooded with newcomers who don’t understand EA’s core concepts and misapply or politicize the movement’s ideas. Discussion quality degrades and so many things fall under the banner of “effective altruism” that it becomes meaningless to talk about. “It’s effective!” starts to look like “It’s healthy!” or “It’s environmentally friendly!”: often poorly thought out or misleading. It becomes much harder to distinguish the signal from the noise. CEA uses the possibility of this scenario as an argument against “low-fidelity” outreach strategies like mass media.

I think it’s possible that EA becoming more mainstream would result in a two-way transfer of ideas. Depending on the scale and specifics of this process, the benefits of slightly improving decision-making in large swaths of society may completely swamp the effects from damage to the original movement. This seems plausible, though not necessarily probable, for global poverty reduction and animal welfare. It seems very unlikely for x-risk reduction, which may succeed or fail based on the quality of ideas of a relatively small number of people.

Could we just shrug and sneak away from the confusion to quietly pursue our original goals? Probably not. People we needed to communicate with would often misunderstand us, interpreting what we said through the lens of mainstream not-quite-EA. It would also be difficult to separate our new brand from the polluted old one, meaning the problem would likely follow us wherever we went.

Assuming we decide a dilution scenario is bad, what can we do to avoid it? As CEA emphasizes, we should make sure to communicate about the movement in high-fidelity ways. That means taking care with how we present EA and avoiding the temptation to misrepresent the movement to make it easier to explain. Experienced EAs should try to be available and approachable for newcomers, ready to correct misconceptions and explain ideas in greater depth. Outreach should focus on long-form and high-bandwidth communication like one-on-ones, and we should grow the movement carefully and intentionally to give each newcomer the chance to absorb EA ideas correctly before they go off and spread them to others.

Distraction

In this collapse scenario, EA remains an active, thriving community, but fails to direct its efforts toward actually producing impact. We’ve followed our instrumental objectives on tangents away from our terminal ones, until we’ve forgotten what we came here to do in the first place.

I’m not talking about the risk that we’ll get caught up in a promising-looking but ultimately useless project, as long as it’s a legitimate attempt to do the most good. Avoiding that is just a question of doing our jobs well. Instead, I’m pointing at something sort of like unintentionally Goodharting EA: optimizing not for the actual goal of impact, but for everything else that has built up around it—the community, the lifestyle, the vaguely related interests. Compare meta traps #2 and #4.

Here are a few examples of how a distraction collapse could manifest:

  • EA busywork:

    • We get so wrapped up in our theories that we forget to check whether work on them will ever affect reality.

    • We chase topics rather than goals. For example, after someone suggests that a hypothetical technology could be EA-relevant, we spend resources investigating whether it could work without really evaluating its importance if it did.

  • Ossification:

    • We focus so much on our current cause areas that we forget to reevaluate them and keep an eye out for better ones, missing opportunities to do the most good.

    • We do things because they’re the kinds of things EAs do, without having actual routes to value in mind, and run projects that don’t have mechanisms to affect the things we say we want to change.

  • Fun shiny things:

    • Gossip about community dynamics and philosophical debate irrelevant to our decisions crowds out discussion of things like study results and crucial considerations.

    • We let the hard work of having an impact slide in favor of the social and cultural aspects of the movement, while still feeling virtuous for doing EA activities.

Distraction is, of course, a matter of degree: we’re almost certainly wasting effort on all sorts of pointless things right now. A collapse scenario would occur only if useless activities crowded out useful ones so much that we lost our potential to be a serious force driving the world toward better outcomes.

In this possible future, impact would taper gradually and subtly as more and more person-hours and funding streams were diverted to useless work. Some people would recognize the dynamic and take their talent elsewhere, worsening the problem through evaporative cooling. The version of EA that remained would still accomplish some good: I don’t think we’d completely abandon bed nets in this scenario. But the important work would be happening elsewhere, or else not happening at all.

A distraction scenario is hard to recognize and avoid. Work several steps removed from the problem is often necessary and valuable, but it can be hard to tell the useless and the useful apart: you can make up plausible indirect impact mechanisms for anything. We may want to spend more time explicitly mapping out our altruistic projects’ routes to impact. It’s probably a good habit of mind to constantly ask ourselves about the ultimate purpose of our current EA-motivated task: does it bottom out in impact, or does it not?

Conclusion

EA is carrying precious cargo: a unique, bold, and rigorous set of ideas for improving the world. I want us to pass this delicate inheritance to our future selves, our children, and their children, so they can iterate and improve on it and create the world we dream of. And I want to save a whole lot of kids from malaria as we sail along.

If the ship sinks, its cargo is lost. Social movements sail through murky waters: strategic uncertainties, scandals and infighting, and a changing zeitgeist, with unknown unknowns looming in the distance. I want EA to be robust against those challenges.

Part of this is simply movement best practices: thinking decisions through carefully, being kind to each other, creating a healthy intellectual climate. It’s also crucial to consider collapse scenarios in advance, so we can safely steer away from them.

Having done this, I have one major recommendation: beware ideological isolation. This is a risk factor for both the sequestration and distraction scenarios, as well as a barrier to good truthseeking in general. Though the community tends to appreciate the value of criticism, we still seem very much at risk of becoming an echo chamber—and to some degree certainly are one already. We tend to attract people with similar backgrounds and thinking styles, limiting the diversity of perspectives in discussions. Our ideas are complex and counterintuitive enough that anyone who takes the time to understand them probably thinks we’re onto something, meaning much of the outside criticism we receive is uninformed and shallow. It’s vital that we pursue our ideas in all the unconventional directions they take us, but at each step the movement becomes more niche and inferential distance grows.

I don’t know what to do about this problem, but I don’t think being passively open to criticism is enough to keep us safe: if we want high-quality analysis from alternate viewpoints, we have to actively seek it out.

Thanks to Vaidehi Agarwalla for the conversation that inspired this post, and to Vaidehi, Taymon Beal, Sammy Fries, lexande, Joy O’Halloran, and Peter Park for providing feedback. All of you are wonderful and amazing people and I appreciate it.

If you’d like to suggest additions to this list, please seriously consider whether talking about your collapse scenario will make it more likely to happen.