Problems with EA representativeness and how to solve it

An issue a lot of EAs have, but one that I think few have formalized in writing, is a concern with cause representation within the EA movement.

The basic idea behind representation is that the materials created as, or aimed at being, part of the public-facing view of EA are in line with what a suitably broad selection of EAs actually think. To take an obvious example, if someone were very pro-Fair Trade and started saying EA was all about Fair Trade, this would not really be representative of what most EAs think (even if this person were convinced that Fair Trade was the best cause within the EA framework). Naturally, in a movement as large as EA there remains a diversity of viewpoints, but I nonetheless think it's fairly easy for experienced EAs to have a sense of what is a common EA view and what is not (although people can definitely be affected by peer-group pressure and city selection in creating a bubble). There have been a lot of implicit and explicit conflicts around this issue, and there are different possible solutions for dealing with it.

First, some examples of clear problems with representativeness.

  • The EA Handbook

  • EA Global

  • Funding gaps

  • EA chapter building

The EA Handbook

The EA Handbook was one of the most public examples of the problem at hand: you can look at the Facebook comments and EA Forum comments to get a sense of what people have said. If I had to summarize that group of people's concerns, it would have to do with representativeness. As one of the comments put it:

“I don’t feel like this handbook represents EA as I understand it. By page count, AI is 45.7% of the entire causes sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn’t count toward the page count), more than half the article was dedicated to why we might not choose this cause area, with much of that space also focused on far-future of humanity. I’d find it hard for anyone to read this and not take away that the community consensus is that AI risk is clearly the most important thing to focus on.”

I personally think people would not have reacted so strongly to the Handbook if it had not seemed to be part of a bigger trend, one I hope to crystallize in this blog post.

EA Global

EA Global is an area that is both public and numerically measurable. If you break down all the talks given in 2018 by cause area, they come to roughly 3 hours of global poverty talks, 4.5 hours of animal welfare talks, and 11.5 hours of x-risk talks. This is not counting the meta talks that could have been about any cause area but were often effectively x-risk-related (e.g. 80,000 Hours' career advice). It also counts the "far future animal welfare" talks as ordinary animal welfare talks. You can also split the program into near-future cause areas, with 5.5 hours dedicated to them, versus far-future cause areas, with 13.5 hours spent on those.
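
To make the split concrete, here is a minimal sketch (in Python) that turns the talk hours into percentages. The hour figures are the rough estimates given above, my own tally rather than official EAG statistics:

```python
# Rough 2018 EA Global talk hours by cause area, as estimated above.
hours = {"global poverty": 3.0, "animal welfare": 4.5, "x-risk": 11.5}

total = sum(hours.values())
for cause, h in hours.items():
    print(f"{cause}: {h} h ({h / total:.0%} of tracked talk time)")
# global poverty: 3.0 h (16% of tracked talk time)
# animal welfare: 4.5 h (24% of tracked talk time)
# x-risk: 11.5 h (61% of tracked talk time)

# The same hours grouped as near-future vs far-future cause areas.
near, far = 5.5, 13.5
print(f"near future: {near / (near + far):.0%}, far future: {far / (near + far):.0%}")
# near future: 29%, far future: 71%
```

On these numbers, roughly three fifths of the cause-specific talk time went to x-risk, even before counting the meta talks mentioned above.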

This concern has held for the last few EAGs, and it is getting more noticeable over time. Part of the reason I only go to every second EAG, and why many of the people I would describe as leaders in EA poverty work do not go at all, is the lack of representation, and thus the lack of draw for EAs who want to talk about other causes. This is a self-perpetuating problem as well: if fewer EAs from a given cause area attend, the events become less and less friendly towards EAs of that cause area. After a couple of years, you could even run a survey and say "well, the average EAG attendee thinks cause X has the highest impact", but that would only be true because everyone with different views had dropped out over time due to frustration and a feeling of disconnection. This is another issue I have talked about with a lot of involved EAs, and it is part of the reason there is interest in a different EA-related conference.

Funding Gaps

Details on funding gaps can be found here. Generally, however, claiming that "the EA movement is not largely funding constrained" is another example of a general trend of implying that what is representative of particular groups of EAs represents the movement as a whole.

Saying that "the far future is funding-filled and thus, if you care about it, you should not favour earning to give as much" is more honest and accurate than claims along the lines of "the whole EA movement is funding-filled".

EA Chapter Building

The final example is harder to quantify, but it's also one I have heard about from quite a few different sources. EA chapter building is currently fairly tightly controlled and focused heavily on the creation of far-future and AI-focused EAs. Again, if an organization is open about this, that is one thing, but I suspect the average EA (unless they have had direct experience with trying to run a chapter) would guess that groups generally discuss all cause areas and are supported similarly, regardless of focus.

While these are not the only examples, I feel they are, sadly, enough to point to a more overarching trend.

I would also like to include some areas where I feel this has not happened. Some good examples:

  • EA Forum

  • EA Facebook jobs

  • EA Wikipedia

  • Doing Good Better

The EA Forum is surprisingly diverse, and the current karma system does not seem to consistently favour one cause area or EA organization over another. As stated in this post, frequent forum users do tend to have a diversity of views. This could change in the future, given the upcoming changes, but currently I see this medium as one of the less controlled systems within EA.

The EA Facebook jobs group has helped a lot of people (including many of the staff currently working at EA organizations) find jobs at a wide range of EA-related organizations. If you take a sample of the job ads, they tend to be dispersed across, and more representative of, the different cause areas.

The EA Wikipedia page currently covers all three causes, along with concepts that most EAs would broadly agree are core to the movement and representative of those within it.

Doing Good Better, much like the Wikipedia page, does not maintain an aggressively single-cause focus throughout the book. Instead, it covers classic EA concepts and issues that almost all EAs would agree with.

How do we know what is representative?

Representativeness is defined as being "typical of a class, group, or body of opinion". So the representativeness of the EA movement would be expressed via what is typical of EAs within the movement. This would ideally be determined via a random sample that covers a large percentage of the EA movement, for example through the EA Survey or by gathering the perspectives of everyone who has signed up to the EA Forum. Both of these would cover a very large percentage of the EA movement relative to more informal measures.

What is representative of EA leaders?

One of the responses against having a representative sample is that perhaps some EAs are better informed than others. To take a more objective criterion: perhaps the average EA who has been involved in the EA movement for 5 years or more is better informed than the average EA who has been involved for 5 days. I think there are ways to determine things like this from more aggregate data (for example, duration of involvement or percentage of income donated might both correlate with being a more involved EA). One could even run a survey which makes sure to sample every organization that over 50% of the broader EA community thinks of as an "EA organization".

While this post does not aim to determine the "perfect" way to sample EAs or EA leaders, it does aim to point in the right direction given the numerous issues with sampling EAs. Clearly, a survey of only the EA leaders within my city (or any other specific location) would be critically biased, as would one with a disproportionate focus on a particular organization. Another unrepresentative sample might be drawn from "EAG leaders", as those leaders are chosen by a single organization and generally hold that organization's cause as salient. This issue is worth another post altogether.

Possible solutions

Have a low but consistent bar for representativeness, allowing multiple groups to put forward competing presentations of EA. For example, anyone can make an EA handbook that's heavily focused on a single cause area and call it an EA handbook.

Pros: This solution is fairly easy to implement, and allows a wide variety of ideas to co-exist and flourish. Things that represent EAs better will naturally become more popular, as they will be shared more throughout the movement.

Cons: Leaves the movement pretty vulnerable to co-option and misrepresentation (e.g. an EA Fair Trade handbook), which could harm movement building and newer people's views of EA.

Have a high and consistent bar for representativeness. For example, if something is branded in a way that suggests it is representative of EA, it must give at least 20% of its space to each cause area (x-risk, animal welfare, poverty) and must not clearly pitch or favour a single organization or approach. Alternatively, some kind of more formal system, based on objective measures from the community, could be put in place (a rough sketch of what such a check might look like is given after the cons below).

Pros: Does not make EA easy to co-opt, and makes sure that the most-seen EA content gives appropriate representation to different ideas.

Cons: Ratios and exact numbers would be hard to calculate and get a sense of, and they would also change over time (e.g. if a new cause were added).
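
As an illustration of what a "more formal system" might look like, here is a minimal sketch assuming the 20% floor suggested above. The function name and the page counts are hypothetical examples, not an existing tool or real data:

```python
# Hypothetical check: does a piece of "representative of EA" content give each
# core cause area at least a minimum share of its space? The 20% floor comes
# from the proposal above; the page counts are invented example numbers.
def meets_representativeness_bar(pages_by_cause, min_share=0.20):
    total = sum(pages_by_cause.values())
    return {cause: pages / total >= min_share for cause, pages in pages_by_cause.items()}

handbook_pages = {"x-risk": 40, "animal welfare": 12, "global poverty": 10}
print(meets_representativeness_bar(handbook_pages))
# {'x-risk': True, 'animal welfare': False, 'global poverty': False}
```

The hard part, as the cons above note, is agreeing on the denominators and keeping the thresholds current as new cause areas are added.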

Community-building organizations could strive for cause indifference. Current EA is built via a few different movement-building organizations, and a case could be made that organizations focused specifically on movement building should strive to be representative or cause indifferent. One way they could do this is through cross-organization consultation before hosting events or publishing materials meant to represent the movement as a whole.

Pros: Reduces the odds of duplicating movement outreach work (e.g. separate AI EA chapters and poverty EA chapters). Increases the odds that, long term, the EA movement will be cause diverse, leading to higher odds of finding Cause X: a cause better than currently existing cause areas that we simply haven't discovered yet.

Cons: Many of the most established EA organizations have a cause focus of some sort. This would be hard to enforce, but it could nonetheless be an ideal worth striving towards.