CEA on community building, representativeness, and the EA Summit

(This post was written by Kerry Vaughan and Larissa Hesketh-Rowe with contributions from other members of the CEA staff)

There has been discussion recently about how to approach building the EA community, in light of last weekend's EA Summit and this post on problems with EA representativeness and how to solve it. We at CEA thought it would be helpful to share some of our thinking on community building and representativeness in EA.

This post comprises four sections:

  1. Why work to build the EA community? - why we prioritize building the EA community and think this is a promising area for people to work in.

  2. The challenge of prioritization—how prioritizing some activities can present challenges for community building and representativeness.

  3. CEA wants to support other community builders—how we can do better by working with other organizations and individuals.

  4. Our views on representativeness in EA—why we believe EA should be cause-impartial, but CEA's work should be mostly cause-general, and involve more description of community priorities as they are.

Why work to build the EA com­mu­nity?

Ultimately, CEA wants to improve the world as much as possible. This means we want to do things that evidence and reason suggest are particularly high impact.

In order to make progress on understanding the world, or in solving any of the world's most pressing problems, we are going to need dedicated, altruistic people who are thinking carefully about how to act. Those people can have a much higher impact if they are guided by and can add to cutting-edge ideas, have access to the necessary resources (e.g. money), and can coordinate with one another.

Due to this need, we think one way we can have significant impact is by building a global community of people who have made helping others a core part of their lives, and who use evidence and reason to figure out how to do so as effectively as possible.

This is why we consider working on building the EA community a priority path for those looking to have an impact with their career. Work done to bring people or resources into the community, or to help build on our ideas and coordination capacity, can multiply our impact severalfold even if we change our minds about which problems are most pressing in the future.

(You can see some considerations against working on EA community building here.)

The challenge of prioritization

CEA's challenge is prioritization. Given that we have a finite amount of money, staff, and management capacity, we have to choose where to focus our efforts. CEA cannot, on its own, do everything the EA community needs.

This year, we've been primarily focusing on people who have already engaged a lot with the ideas and community associated with effective altruism, so that we can better understand what those people need and help them put their knowledge and dedication to good use. We think of this as analogous to focusing at the bottom of a marketing funnel and getting to know our "core users".

In practice, this has meant focusing on projects like running smaller retreats for people who are already highly engaged with EA and putting more attention on a smaller number of local groups, rather than trying to provide broad support to many.

Our plan has been to get these projects up and running and reliably doing valuable work before expanding our support further up the funnel. At this point, however, we are starting preparations to get more done higher in the funnel. Some valuable actions we'd like to take up the funnel soon include running a broader range of events, funding more projects, and supporting more local groups. To achieve these new goals, we've recently been looking to hire community specialists, events specialists, and an EA Grants Evaluator.

Inevitably, focusing on one area means deprioritizing other things that would also add a lot of value to the EA community. We try to mitigate some of the costs of prioritization by helping other groups provide support instead.

CEA supports other community builders

We generally encourage members of the EA community to get involved in building the EA community, especially in areas that are valuable but currently not prioritized by CEA. Because CEA is currently management- and staff-constrained, the easiest way for us to support others is with funding, branding, and expertise.

Some actions we've taken (or plan to take) to support the work of others include:

  • Providing more than $650,000 to groups and individuals doing local community building (in progress).

  • Re-launching EA Grant applications to the public with a £2,000,000 budget and a rolling application process (to be launched by the end of October 2018).

  • Helping groups run EAGx conferences in their local areas by providing the brand, funding (both for the event and a stipend to organizers), and advice (this year we supported events in Australia, Boston, and the Netherlands).

  • Supporting Rethink Charity's work on LEAN with a $50,000 grant (grant provided).

  • Supporting Charity Entrepreneurship's work to build new EA charities with a $100,000 grant (grant currently being finalized).

  • Supporting the LessWrong 2.0 team with a $75,000 grant.

  • Supporting the EA Summit with a $10,000 grant.

There's certainly more we can do to support the work others are doing, and we'll be on the lookout for more opportunities in the future.

The EA Summit

One recent example of our support for non-CEA community-building efforts is the EA Summit, which took place last weekend. The EA Summit was a small conference for EA community builders, incubated by Paradigm Academy with participation from CEA, Charity Science, and the Local Effective Altruism Network (LEAN), a project of Rethink Charity.

In late June, Peter Buckley and Mindy McTeigue approached Kerry and Larissa to discuss their concerns around a growing bias towards inaction in the EA community and a slowdown in efforts to build a robust, thriving EA community. We decided that these were important problems and that the EA Summit was a good mechanism for addressing them, so we were happy to support the project.

The largest consideration against support was the concern that the Summit was incubated by Paradigm Academy, which is closely connected to Leverage Research. We concluded that this was not a compelling reason to avoid supporting the conference: the EA Summit was a transparent project of clear value to the EA community.

Three CEA staff members attended the conference, with Kerry providing the closing keynote. Our impression was that the conference was a success. Despite being organized on short notice, the event had over 100 attendees, was well run, and ended with an excellent party. Attendees seemed to come away with the message that there are useful projects they can work on that CEA would support, and overall had overwhelmingly positive things to say about the conference.

However, the fact that Paradigm incubated the Summit and that Paradigm is connected to Leverage led some members of the community to express concern or confusion about the relationship between Leverage and the EA community. We will address this in a separate post in the near future. [Edit: We decided not to work on this post at this time.]

EA and Representativeness

One area the EA Summit aimed to address was concern about representativeness in EA, most recently raised by Joey Savoie. The question of how CEA should represent the EA community is one we've thought about and discussed internally for some time. We plan to write a separate post on this, but here is an outline of our thinking so far. We believe the EA Forum should be a place for everyone to share and build upon ideas and models, so we'd love to see discussion of this here.

On representativeness, our current view is that:

  1. The EA community should be cause-impartial, but not cause-agnostic.

  2. CEA's work should be broadly cause-general.

  3. Some of CEA's work should be descriptive of what is happening in the community, but some of our work should also be prescriptive, meaning that it is based on our best guess as to what will have the largest impact.

  4. We're unsure who our work should be representative of.

  5. While we took some steps to address representativeness prior to Joey's post, we welcome suggestions on how we can improve.

The EA community should be cause-impartial:

EA is about figuring out how to do the most good and then doing it. This means we don't favor any particular beneficiaries, approaches, or cause areas from the start, but instead select causes based on an impartial calculation of impact (cause-impartiality). This in turn means we should be both seeking to reduce our uncertainty about the relative impact of different causes and seeking to find new areas that could potentially be even more important (see Three Heuristics for Finding Cause X for some ideas on how this might be done).

Success for the EA community should include a strong possibility that we learn more, change our minds, and therefore no longer work on causes that we once thought were important.

CEA's work should be broadly cause-general:

The reasons we have an EA community, instead of individual communities focused on specific causes, are:

  1. We don't know for certain which causes are most important, and we may discover a new Cause X in the future.

  2. We don't know for certain which approaches to existing causes are most important, and we may discover new approaches in the future.

  3. Despite our uncertainty, we can take actions that are useful across many causes.

CEA's work should be broadly beneficial regardless of one's views on the relative importance of different causes. This is why our mission is to build the EA community. We believe our comparative advantage lies in finding and coordinating with people who can work on important problems.

CEA's work should be both descriptive and prescriptive:

While most of our work is cause-general, there will be cases where we have opportunities to support work in particular cause areas that we currently believe are likely to have the highest impact.

We therefore think it is helpful to make a distinction between aspects of CEA's work that are descriptive and those that are more prescriptive.

Descriptive work aims to reflect what is actually happening in the EA community: the kinds of projects people are working on and the issues people are thinking about. The EA Newsletter is a clear example of this because it includes updates from around the community and from a variety of EA and EA-adjacent organizations.

Other aspects of CEA's work should be prescriptive, meaning that they involve taking a view on where the community should be headed or on which causes are likely to be most important. For example, CEA's Individual Outreach team does things like help connect members of the community with jobs we consider high-impact.

In forums where CEA is providing a resource to the entire EA community (for example, the EA Forum, Effective Altruism Funds, or events like EA Global), our work should tend towards being more descriptive.

We're unsure who our work should be representative of:

One challenge in making our work more representative is that it's unclear what reference class we should use.

On one extreme, we could use all self-identifying EAs as the reference class. This has the downside of potentially requiring that our work address issues that expert consensus indicates are not particularly important.

On the other extreme, we could use the consensus of community leaders as the relevant reference class. This has the downside of potentially requiring that our work not address the issues that the overwhelming majority of community members actually care about.

The best solution is likely some hybrid approach, but it's unclear precisely how such an approach might work.

Soliciting a wider range of viewpoints:

Although we think we should do more to address representativeness concerns, we had already taken some steps in this direction prior to Joey's post.

These included:

  • Consulting ~25 advisors from different fields about EA Global content (already in place).

  • Changing the EA Handbook to be more representative (in progress).

  • Selecting new managers of the Long-Term Future and EA Community Funds (in progress).

We do, however, recognize that when consulting others it's easy to end up selecting for people with similar views, and that this can leave us with blind spots in particular areas. We are thinking about how to expand the range of people we get advice from. While we cannot promise to enact every suggestion, we would like to hear from forum users about what else they might like to see from CEA in this area.