A Vision for Harvard EA, 2018–19

Cullen O’Keefe, President

Google Doc Version (easier to read, with footnotes)

Preface

This document will hopefully provide a short, comprehensible, and concrete model for how I would like to run the Harvard University Effective Altruism Student Group (“HUEASG”) this year.[1] I’m sharing it here so that other EA leaders can use it to the extent they find it useful.[2]

This third draft is the result of synthesizing ideas from the documents below, along with feedback from readers of earlier drafts and conversations with other EA leaders (to whom I am immensely grateful).

Foundational Texts

(Read these first if you haven’t)

● Ales Flidr & James Aung, Heuristics from Running Harvard and Oxford EA Groups

● CEA, A Three-Factor Model of Community Building [hereinafter, Three-Factor Model]

● CEA, The Funnel Model [hereinafter, Funnel Model]

● CEA, A Model of an EA Group [hereinafter, Model of a Group]

● CEA, CEA’s Current Thinking [hereinafter, Current Thinking]

● CEA, Effective Altruism Community Building [hereinafter, Community Building]

Guiding Principles

It would be redundant to formulate a set of values for HUEASG from scratch. Instead, I will stress the importance of remaining aligned with the most recent principles and best practices published by EA leaders (e.g., CEA).[3] CEA’s current definition of EA is: “using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.”[4] CEA lists the following as EA guiding principles:[5]

  • Commitment to Others

  • Scientific Mindset

  • Openness

  • Integrity

  • Collaborative Spirit[6]

Current thinking within CEA endorses a long-termist approach to EA: “We believe that the most effective opportunities to do good are aimed at helping the long-term future.”[7] HUEASG should mirror this,[8] without marginalizing or eschewing short-term causes. Note that this encompasses both prevention of existential risk and “trajectory shifting.”[9]

Some might think that this framing conflicts with our fundamental commitment to cause neutrality.[10] However, as we use it here, “cause-neutral” means roughly “cause-impartial”: “select[ing] causes based on impartial estimates of impact.”[11] Thus, cause neutrality is perfectly compatible with impartially reached cause decidedness.[12] Furthermore, long-termism is an epistemic framing rather than a cause:[13] it encourages us to give substantial consideration to interventions’ long-term effects. “However, we recognize that this argument rests on some significant moral and empirical assumptions, and so we remain uncertain about how valuable the long-term future is relative to other problems. We think that there are other important cause areas, particularly animal welfare and effective global health interventions [even when considering only the short-term effects of these].”[14] Thus, the long-termist framing effects a quantitative change of focus, not a wholesale rejection of any particular cause.[15]

Role of HUEASG Within EA

Generalizing a bit, mainstream EA organizations are not primarily funding-constrained.[16] Meanwhile, important talent gaps persist within major organizations.[17] A related observation is that “some people have the potential to be disproportionately impactful. It appears as though some people may be many orders of magnitude more impactful than average just by virtue of the resources (money, skills, network) they have available.”[18]

Because of these and other factors, most value from our group will likely come from Core EAs: roughly, people who give ≥80% weighting to EA considerations when deciding on a career[19] path.[20] Key routes to value are, therefore: catalyzing,[21] cultivating,[22] and retaining[23] potential Core EAs.

Relatedly, Harvard EA likely creates the most value for the EA community by playing to our comparative advantages. Our main comparative advantage is access to young, promising,[24] and well-connected students.

Since students broadly lack the resources, skills, and knowledge to be immediately useful to the EA community,[25] this all implies that our priorities should be:[26]

  • Catalyzing, cultivating, and retaining Core EAs, with a special emphasis on educating Core EAs on the intellectual/academic foundations of effective altruism[27]

  • Helping Core EAs devise career plans[28]

  • Directing Core EAs to useful resources and upskilling opportunities

Relation to the Funnel Model

Given the above, a major heuristic for HUEASG activities should be how they fit in with the Funnel Model. That is, we should ask ourselves, “How will this activity move people further down the funnel?”[29] “Trying to get a few people all the way through the funnel is more important than getting every person to the next stage.”[30] I explore the implications of this below.

The Top of the Funnel: “Taste of EA”

This idea draws heavily from Flidr & Aung.

“Introduction to EA”-type events have been the default beginning-of-the-year activity for a while. I think we should move away from this model for a number of reasons. First, I worry that “Intro to EA” events are potentially low-fidelity, which can have bad effects on our group’s reputation and on EA generally.[31] Relatedly, since EA comprises a number of nuanced ideas across several different academic disciplines, Intro to EA events that are expansive enough to cover all important points are not very deep, and are likely overwhelming to newcomers.[32]

Flidr & Aung therefore offer the following advice:

[T]hink about outreach efforts as an ‘offer’ of EA where people can get a taste of what it’s about and take it or leave it. It’s OK if someone’s not interested. A useful heuristic James used for testing whether to run an outreach event is to ask “to what extent would the audience member now know whether effective altruism is an idea they would be interested in?” It turned out that many speaker events that Oxford were running didn’t fit this test, and neither did the fundraising campaign.

Don’t “introduce EA”. It’s fine if people don’t come across EA ideas in a particular sequence. First, find entry points that capture a person’s interest. If someone finds EA interesting and likes the community, they will absorb the basics pretty soon.

My own model is that Core EAs usually need a few very important traits:

(1) Dedication to doing good[33]

(2) Human capital (i.e., skills and resources)[34]

(3) Reliability/dependability

(4) Knowledge of topics related to effective altruism[35]

Early outreach efforts should optimize for some combination of (1) and (4).[36] That is, we should aim at enticing people predisposed to doing good and people from fields with a strong track record of producing Core EAs (e.g., philosophy, computer science, economics, biology).[37] Optimization for the remaining traits comes at later stages of the funnel.

Admittedly, I still don’t have a great idea of what this will look like. Combined with insights from Oxford EAs, my experience at HLS suggests that introductory talks and student-org fairs are still productive mass-outreach tools. However, going forward I expect to:

  • put more emphasis on introducing the main motivations for EA (e.g., differential cost-effectiveness, lack of quantification and evidence)

  • put more emphasis on introducing specific, representative projects that EAs do, the value of which is comprehensible to non-EAs

  • put less emphasis on more specific concepts in EA (e.g., scope insensitivity, long-termism)

Huw Thomas suggests that focused outreach to students in historically EA-productive disciplines (e.g., computer science) might also be worthwhile. However, to promote group diversity, broad outreach is still highly desirable.

Next Steps: 1-on-1s

1-on-1s (1:1s) are a good next step for several reasons.[38] First, they begin to screen (albeit very lightly) for the third trait-cluster Core EAs need: reliability/dependability.[39] Second, 1:1s offer a good way to build friendships with potential Core EAs (more later). Third, they offer a good way to communicate EA ideas in a more high-fidelity manner.[40] Finally, they offer a good opportunity to “signpost”: point newcomers to existing EA literature[41] and organizations that suit their interests,[42] thus increasing their knowledge of EA.

The Expanding Core

The final step, of course, is to continue the Funnel process by moving committed individuals towards Core involvement. I admit that I don’t have a solid model for what this should look like. If my hypothesis about what traits a Core EA needs is correct, then the person should be largely self-motivated to continue to learn EA content on their own. If this too is right, then perhaps most of the value org leadership can add comes from:

  • Preventing or slowing down attrition

  • Continuing to signpost for, and network on behalf of,[43] the newcomer

  • Aiding in career planning[44]

  • Activities aimed at increasing follow-through with EA-informed career plans[45]

I imagine that formal events will be less important to this process than developing a sense of community with newer Core members.[46] Getting new and prospective Core EAs to feel at home in, and strongly identify with, EA is likely one of the best ways to get them to remain in the Core. Social events seem like a good way to achieve this. We should also ask current org members what programming they would find valuable.[47]

Implications for Group Structure and Function

Specialize Events and Roles

School groups should map their planning to the Funnel Model.[48] For groups with sufficient personnel, it might make sense to begin to specialize for each stage, too (i.e., have dedicated outreach, 1:1, and Core leads).

Note that, although these are ordinal steps, there is no reason why a group cannot make parallel efforts at each level (including regular 1:1s).[49] That is, the Funnel does not have to map cleanly onto the entire school year, with outreach happening only at the beginning.

Focus on Skill-Building

While we should encourage EAs to embrace other means of upskilling,[50] we should also grow Core EAs’ human capital where doing so plays to our comparative advantages.[51] Along these lines, Ales Flidr suggests “[f]ocusing on the memes of rationality, prioritization and exploring unconventional options [for doing good].”

Strong Focus on Structured EA Social Events

This model puts a strong emphasis on community social events, since I think those are quite likely to effectively move people down the Funnel and retain Core EAs. Such social events, however, should be somewhat structured while still allowing ample time for casual socializing.[52]

Implications for Pledge Work

The foregoing implies that we should put less emphasis on pledge work. I think there’s still a place for such work now,[53] but it should not be the main activity of our groups except insofar as it is a good mechanism for keeping group organizers committed.[54]

Ideally, an EA organization would have programming cleanly differentiated to engage people at all stages of the Funnel (including those unlikely to move further down). In such a world, pledge work would remain valuable as a way to get non-Core EAs (and others) to enact EA principles. Indeed, I think it’s plausible that a lot of EA’s future value will come from changing norms about philanthropy, if not career choice. But right now, we probably lack the personnel to enact such a tiered approach. In the long run, it might be worthwhile to consider creating two subsidiary EA groups: one focused on traditional pledge-type work and one focused on career choice.[55]

Other Avenues for Value

Ales Flidr suggests the following as possible means of creating value with HUEASG:

  • Targeted relationship-building with key professors and their grad students, particularly those who have a good chance

  • Relatedly, studying the people who went through the law school (professors, students) who had the greatest impact on the world: what they did, how they interacted with groups, etc. Similarly, researching current faculty and students.

  • Studying what the most successful groups (by our metrics, i.e., thinking and behavior change) concretely do.

  • Trying to create better working connections with [academic EA institutions like the Research Scholars Programme] and GPI

Endnotes

[1] This encompasses all Harvard schools except for Harvard College.

[2] The following people have been especially influential to my thinking: Frankie Andersen-Wood, James Aung, Chris Bakerlee, Harri Besceli, Ryan Carey, Holly Elmore, Ales Flidr, Eric Gastfriend, Kit Harris, Jacob Lagerros, Ed Lawrence, Darius Meissner, Linh Chi Nguyen, Alex Norman, Huw Thomas, and Hayden Wilkinson. All mistakes are my own.

Unless a direct quotation, references to any of the above do not necessarily imply a direct endorsement, but rather suggest that the idea is related to (including potentially a reply to) their comments or input.

[3] Cf. CEA, Current Thinking (“We believe that individuals will have a greater impact if they coordinate with the community, rather than acting alone.”).

[4] CEA, CEA’s Guiding Principles.

[5] Id.

[6] See also Stefan Schubert & Owen Cotton-Barratt, Considering Considerateness.

[7] CEA, Current Thinking.

[8] H/T Darius Meissner.

[9] See CEA, Current Thinking.

[10] H/T James Aung; Holly Elmore; Huw Thomas.

[11] See Stefan Schubert, Understanding Cause-Neutrality.

[12] Cf. id.

[13] H/T Holly Elmore.

[14] CEA, Current Thinking; H/T Huw Thomas.

[15] H/T Holly Elmore.

[16] See 80K, What Are The Most Important Talent Gaps in the Effective Altruism Community? (mean reported funding constraint of 1.3 out of 4); cf. CEA, Current Thinking (“We believe that CEA can currently be most useful by allocating money to the right projects rather than by bringing in more donors.”).

[17] See 80K, supra note 16. Note that respondents said they would need $250,000 to release the most recent junior hire for three years. This suggests a very high tradeoff in value between career-oriented activities and donation-oriented ones. In practice, I imagine that this means that our resources will virtually always be best spent when optimized for career changes.

[18] CEA, Current Thinking. H/T Darius Meissner.

[19] With earning to give as a plausible EA career. H/T James Aung.

[20] This definition borrows heavily from Ed Lawrence.

[21] Defined as the counterfactual act of considering dedicating one’s career to EA.

[22] Defined as increasing one’s dedication to being or becoming a “core EA.”

[23] Defined as protecting people completely dedicated to being a “core EA” from becoming less involved (e.g., due to burnout, attrition, and value drift).

[24] Reasons Harvard students are promising include high career flexibility compared to other students, access to substantial political and social capital due to Harvard affiliation, access to world-class academic resources, high expected career earnings, and social attitudes conducive to EA. H/T Darius Meissner.

[25] See Flidr & Aung.

[26] A comment from Holly Elmore: “I’m torn between this path and a path of spreading the word through Harvard. My guess is that the latter raises the prestige of EA among influential people, re-anchors people on doing more charity than they previously considered, and increases the chance of finding potential core EAs. The former (the one you have here) seems to make EA into a secret society. There’s something uncomfortable about that to me, but I’m open to it. I think HUEA/HCEA were strongest when we had a lot of public-facing things and a big behind-the-scenes focus on core organizers. The organizers had something to do that was more immediately altruistic than planning their careers, and it kept them in touch with the basics of EA.”

I’m largely in agreement with this. I also agree that long-run changes in cultural attitudes towards philanthropy are potentially an important part of EA’s expected value. However, while I think there continues to be value in those activities—and so we should not eschew them—the foregoing considerations make them seem less valuable compared to developing Core EAs, though still absolutely valuable. We are agreed that direct work is good for boosting morale and keeping people engaged.

[27] See CEA, Community Building. H/T Darius Meissner.

[28] Holly Elmore raised the following questions in a previous draft: “Are we just a feeder for EA orgs or should we consider part of our role to cultivate creative thinking about how to accomplish a lot of good? Or are you just saying we shouldn’t encourage earning to give and instead focus on developing people who are well-versed in and dedicated to EA?”

Regarding the first question, I see “creative thinking about how to accomplish a lot of good” as perfectly compatible with my definition of Core EA. More specifically, being a “Core EA” need not and should not be limited to pursuing current 80K priority career paths. Indeed, more individualized—and by extension less orthodox—guidance might play to our comparative advantage, since we have the opportunity to develop closer relationships with Core EAs than organizations like 80K can. Also, “career” might be defined more loosely than “what one does to pay the bills.”

On the second question, current CEA thinking is that “[w]e think that the community continues to benefit from some people focused on earning-to-give . . . .” CEA, Current Thinking; see also 80K, Career Reviews (recommending or sometimes recommending several careers at least partially for ETG reasons). “Roughly, we think that if an individual is a good fit to work on the most important problems, this should probably be their focus, even if they have a high earning potential. If direct work is not a good fit, individuals can continue to have a significant impact through donations.” CEA, Current Thinking.

[29] Cf. Flidr & Aung (“Engagement is more important than wide-reach.”).

[30] CEA, Model of a Group.

[31] See Flidr & Aung; CEA, Current Thinking; see also Kerry Vaughan, The Fidelity Model of Spreading Ideas.

[32] Cf. Vaughan, supra note 31.

[33] See CEA, Community Building. Thanks to Frankie Andersen-Wood for pushing for clarification of this concept.

[34] H/T Hayden Wilkinson; Darius Meissner.

[35] E.g., evolutionary biology, philosophy of mind, economics, moral philosophy, political science. H/T Darius Meissner.

[36] H/T Darius Meissner.

[37] Thanks to James Aung, Chris Bakerlee, Ed Lawrence, and Huw Thomas for developing this.

[38] Cf. Flidr & Aung (“Default to 1:1's. In hindsight, it is somewhat surprising that 1:1 conversations are not the default student group activity. They have a number of benefits: you get to know people on a personal level, you can present information in a nuanced way, you can tailor recommended resources to individual interests etc. Proactively reach out to members in your community and offer to grab a coffee with them or go for a walk. 1:1's also give you a good yardstick to evaluate how valuable longer projects have to be to be worth executing: e.g. a 7-hour project would have to be at least as valuable as 7 1:1's, other things equal. Caveat: we definitely don’t mean to imply that you should cut all group or larger-scale activities. We will share some ideas for such activities in a follow-up post.”).

[39] The idea being that willingness to sign up for a 1:1 is somewhat indicative of one’s openness to EA and to putting serious thought into doing good generally. This is distinct from screening for moral dedication: Frankie Andersen-Wood and Darius Meissner usefully point out that for many people, moral dedication/conviction develops over time with exposure to, e.g., moral philosophy.

Linh Chi Nguyen and Chris Bakerlee rightly suggest that, while 1:1s are valuable, other modes of outreach and onboarding retain value. One example is allowing people to “nonbindingly sit in a discussion round where they can check out if they like the (people in the) community.” H/T Linh Chi Nguyen.

[40] See Vaughan, supra note 31 (“An example of a high fidelity method of communicating EA would be a lengthy personal conversation. In this context you could cover a large number of ideas in great detail in an environment (face-to-face conversation) that is particularly well-suited to updating.”). But cf. Flidr & Aung (“Don’t teach, signpost. Avoid the temptation to teach EA to people. There’s a lot of great online content, and you won’t be able to explain the same ideas as well or in as much nuance as longform written content, well-prepared talks or podcast episodes.”).

[41] Darius Meissner suggests that lending out EA books might be a good way to do this.

[42] Cf. Flidr & Aung (“Instead of viewing yourself as a teacher of EA, think of yourself as a signpost. Be able to point people to interesting and relevant material on all areas of EA, and remove friction for people learning more by proactively recommending them content. For example, after a 1:1 meeting, message over 3 links that are relevant to their current bottleneck/area of interest.”).

[43] That is, introducing the new Core EA to other Core EAs with relevant interests.

[44] A major uncertainty for me is how to do this in a way that is aligned with 80K’s work. Perhaps there needs to be a single, trustable, well-read HUEASG career advisor who is responsible for staying up-to-date with 80K’s latest thinking and priorities.

[45] Frankie Andersen-Wood suggests activities focused on mental health, skill building, rationality, and connection to EA communities outside of school.

[46] Holly Elmore rightly points to the “onerous work of putting on events that makes the core members 1) actually trust each other and 2) makes the club feel legit.” H/T also Linh Chi Nguyen.

[47] H/T Darius Meissner.

[48] Of course, some events may straddle stages of the Funnel.

[49] H/T James Aung.

[50] H/T Ed Lawrence.

[51] H/T Frankie Andersen-Wood.

[52] H/T Holly Elmore; Linh Chi Nguyen; Chris Bakerlee. I agree with Chris Bakerlee that “events I’ve been to at Eric [Gastfriend]’s are a pretty good model: nominally focused on a specific topic (e.g., the state of EA animal advocacy; end-of-year donation discussion) but allowing plenty of time and opportunity for people to mill about, munch on things, and talk about whatever. Something that would be advertised via facebook event rather than postering the Science Center.”

[53] Mainly because this new model is novel. Although we should feel free to change trajectories and priorities if we think it’s more effective, I think a complete break from pledge organizations right now would damage our relationships with historically supportive organizations. This is probably a bad dynamic to undertake without very good reason to think that we have a better plan. Cf. Schubert & Cotton-Barratt, supra note 6. Such work is also valuable because, anecdotally, a number of Core EAs (e.g., me) have become Core EAs by first being interested in global poverty pledge work. So supporting this on-ramp is an important part of the Funnel too.

[54] H/T Holly Elmore.

[55] I understand Oxford EA is considering this.