The almighty Hive will

I’ve been wondering whether EA can’t find some strategic benefits from a) a peer-to-peer trust economy, or b) rational coordination towards various goals. These seem like simple ideas, but I haven’t seen them publicly discussed.

I’ll start from the related and oversimplifying assumptions that

a) there’s a wholly fungible pool of EA money (for want of a better name, let’s call it Gringotts) shared among EAs and EA organisations, and

b) all EAs trust all other EAs as much as they trust themselves, such that we form a megamind (the Hive), and

c) all EAs consider all EA goals to be worthwhile and high value, even if they see some as substantially less so than others, such that we all have basically the same goal (collecting all the Pokemon).

In some cases these assumptions are so flawed as to be potentially fatal, but I think they’re an interesting starting point for some thought experiments—and we can focus on relevant problems with them as we go. But the EA movement is getting large enough that even if these assumptions were only to hold for microcosms of it, we might still be able to get some big wins. So here are some ideas for exploiting our Hivery, in two broad categories:

Building an EA social safety net

1) Intra-Hive insurance

Normal insurance is both inherently wasteful (insurance companies have to spend ages assessing risk to ensure that they make a profit on their rates) and negative expected value for the insuree (who pays for the waste, plus the insurance profits). In a well-functioning Hive seeking all the Pokemon, with a sufficiently sizable Gringotts, each EA could just register things of irreplaceable value to them personally, and if they ever broke/lost/accidentally swallowed the item, job, existential status etc., they would get some commensurate amount of money with few questions asked. That would save Gringotts from the negative expected value (EV) of almost all insurance, give everyone peace of mind, and avoid a lot of time and angst spent dealing with potentially unscrupulous or opaque insurers.
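
To make the waste concrete, here is a toy sketch of the arithmetic. All the numbers (item value, loss probability, overhead and profit rates) are made-up illustrations, not claims about real insurers:

```python
# Illustrative (made-up) numbers: why commercial insurance is negative EV
# for the insured, and why a large shared pool can absorb the risk itself.

def commercial_premium(expected_payout, overhead_rate=0.15, profit_rate=0.10):
    """An insurer charges the expected payout plus overhead and profit margins."""
    return expected_payout * (1 + overhead_rate + profit_rate)

# A laptop worth $1,500 with an assumed 2% annual chance of loss:
expected_payout = 0.02 * 1500          # $30/year of real risk
premium = commercial_premium(expected_payout)

print(f"expected payout: ${expected_payout:.2f}")                        # 30.00
print(f"commercial premium: ${premium:.2f}")                             # 37.50
print(f"annual loss to waste and profit: ${premium - expected_payout:.2f}")  # 7.50

# A pooled fund ("Gringotts") paying claims directly keeps that ~$7.50 per
# item per year, at the cost of bearing the variance itself - negligible
# if the pool is large relative to any single claim.
```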

In practice this is only as good an idea as our simplifying assumptions combined, and creates some dubious incentives, so might be a pipe dream. Still, if the EA community really is, or could be, a lot closer to the assumed ideal than society at large, it seems like there could be room for some small-scale operations like this—for example EA organisations offering such pseudo-insurance to their staff, and large-scale donors offering it to the EA organisations.

One way to potentially strengthen the trust requirement would be to develop an opt-in EA reputation system on an EA app or website, much like the ratings for Uber drivers. If it felt uncomfortable, obviously you wouldn’t have to get involved, but it could allow a fairly straightforward tier-based system of what you were eligible for based on your rating (probably weighted by other factors, like how many people had voted). You could also add some weighting for people currently working at EA organisations, though it might be too limiting to make that a strong prerequisite (Earn-to-Givers might want to insure themselves so they could safely give a higher proportion, for example). As with normal insurance it would create moral hazard problems, but hopefully with some intelligent but low-cost reputation management this could still be a big net positive for Gringotts.

Personally, I think the reputation app is a really cool idea even if it never got used for anything substantial, but I’m prepared to be alone in that.

1.1) Guaranteed income pool for entrepreneurial EAs

This is much like insurance, with similar limitations and much the same potential for mitigating them, except here it’s based on the idea that entrepreneurialism is one of the highest-EV earning pathways, but because of the ultra-high risks it’s out of reach to anyone who can’t get insta-VCed or support themselves for several months. As with insurance, Gringotts is big enough that it doesn’t really suffer from risks that affect a single person. In this case though, a further factor is that the EA would need to demonstrate some degree of competence to ensure that they were actually doing something positive EV—and to the extent that they could do so, they might be able to get funding from regular pathways.

Something similar might also be useful for people interested in starting EA charities, before the stage where they might be eligible for a substantial grant from e.g. GiveWell or Open Phil. Again, I’m not sure such a window exists, but it seems worth looking at for people from poorer backgrounds.

2) Low interest loans

Loans have all the waste and negative EV of insurance, except that you get the money straight away—and there’s no question about whether you get it. This maybe makes them a stronger candidate for Gringotts coverage, since it removes one risk factor. Relatedly, they also avoid the incentive-distorting effects of insurance, removing another.

In the real world, loans also require a credit rating check, which can be based on some quite arbitrary criteria—being unable to guarantee repayments because you’re poor, whether you use a credit card or a debit card, or even whether you’re registered to vote. And given the relatively low number of factors the credit rating relies on, there would probably be a lot of random noise in it even if they were all sensible.

Lastly, with a normal loan, something has necessarily gone wrong for the creditor if a repayment is missed. Gringotts, on the other hand, might sometimes be content for a debtor to miss repayments if the money nonetheless went towards gathering a lot of Pokemon, or even if it had been wisely spent on an ultimately doomed venture.

3) A Hive existential space network

Couchsurfing may already be A Thing, but there might be some opportunities for making it a smoother experience given a robust trust network. Also, since sleeping isn’t the only mode of existence, ‘sleeping spaces’ aren’t the only form of existential spaces; given how many EAs work remotely, there’s probably also a lot of demand for working spaces. EAs with more modest living accommodation could also offer solo or duo (etc.) working spaces—in the former case, if they would normally work elsewhere. It might even be helpful to have co-working spaces in a fairly close area with explicitly differing cultures (e.g. one being mostly silent, the other having music or ambient sound, or freer conversation, or with people working on similar project types, people with similar—or deliberately disparate—skills, people bringing children, etc.).

Given the psychological benefits for some of us of having a separate space for living and working, combined with the emotional benefits of a short commute, EAs who live near each other might even benefit from just swapping homes for the working day.

4) EA for-profits offering discounts on VAT-applied goods and services

At the moment there are few EA for-profits, and many of those mainly offer services to disadvantaged subgroups rather than to other EAs. Nonetheless, in future we might see a proliferation of EA startups, even if the only sense in which they’re EA is a strong effective-giving culture among their founders. In such a case, if the goods/services they offer are VAT (or similar) taxable, it would provide an incentive for them to offer heavy discounts to other EA organisations and/or EAs—since the lower the cost, the less Gringotts would leak in service tax.

Gringotts could incentivise this with one of the strategies above, though there might be legal implications. Nonetheless, the Hive would surely benefit from finding out exactly what the legal limits are and exploring the possibilities of going right up to them.

Maximising the value of EA employees

5) EA organisations partially substituting salaries with benefits

Every time an EA working at an EA org buys something the org could have bought, Gringotts loses the income tax on whatever they’ve bought. In the UK at least, there’s a tax-free threshold of £11,500, but in an ideal world everything EA employees would want to spend money on above that threshold would be bought for them by EA organisations. More realistically, to keep things relatively egalitarian and maintain sensible incentives, the ideal might be to pay for anything EA employees would need to maintain a healthy lifestyle. An initial laundry list of candidates I put together:

  • accommodation (not necessarily just for employees—we might ultimately be able to build peer-to-org or org-to-org existential space networks, per point 3 above)
  • bills
  • travel to and from work
  • a gym membership (or some equivalent physical activity for people who find the gym too sterile)
  • out-of-work education
  • electronic equipment for unrestricted (within reason) personal use
  • clothes
  • pension contributions
  • toiletries
  • food
  • medical supplies

I know some of these are already offered by some EA organisations (and many for-profits, come to that), and there will surely be legal restrictions on how much money you can spend on employees like this without it getting taxed. But the potential savings are so big that, again, the Hive should surely explore and share knowledge of the exact legal boundaries.

6) Employees of EA organisations not donating

Since every such donation is made with an EA’s taxed income, the same considerations as in 5) apply: every time an EA donates from their salary, Gringotts loses the tax value of the donation. The simplest way to avoid this would be for EAs to just ask for a 10% lower salary (or whatever donation proportion they would imagine themselves having made) than they would have done for a comparable job elsewhere.

This would potentially redistribute money among causes, since EAs working at one org might not think it’s actually the best one. But unless the proportion redistributed would be more than the average income tax on an EA salary (somewhere in the vicinity of 20% seems like a plausible estimate), this would be an iterated prisoner’s dilemma: any individual could move more money to their cause of choice by requesting a higher income, but the fewer of us did so, the more money would end up with all the causes. And it feels like a Hive of cooperating altruists should be able to deal with one little wafer-thin prisoner’s dilemma…
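
The trade-off above can be sketched numerically. Under the post’s rough ~20% average-tax assumption (and ignoring complications like Gift Aid or deductions), the break-even effectiveness multiplier works out to about 1.25×:

```python
# A toy comparison (assumed numbers) of two ways for an EA-org employee
# to move money to a different cause:
#  a) take a salary cut: the org keeps the gross amount, untaxed
#  b) take full salary, pay income tax, donate the net amount elsewhere

TAX_RATE = 0.20          # rough average income tax, per the post
gross_redirect = 5_000   # gross salary the employee could forgo (made up)

# a) Salary cut: the full gross amount stays with the employer's cause.
kept_by_own_org = gross_redirect

# b) Donate after tax: only the net amount reaches the other cause.
donated_elsewhere = gross_redirect * (1 - TAX_RATE)

# Defecting (donating elsewhere) only wins if you think the other cause
# is at least this many times as effective as your own org:
breakeven_multiplier = kept_by_own_org / donated_elsewhere
print(breakeven_multiplier)  # 1.25
```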

In cases where individuals are working for an EA org but feel that other organisations are substantially more than 20% more effective than their own, it feels like they should often prefer just earning to give. There are numerous possible exceptions—for example if you feel the multiplier on the other org is higher than 20% but you wouldn’t earn enough elsewhere to multiply to a net plus, or you’re planning to move to another EA organisation but are working in your current job to gain skills and reputation. Such motivations would have intra-EA signalling costs, though, since they imply both that you’re defecting in the prisoner’s dilemma and that you don’t value the work of the people around you that highly. Ironically, it might actually look bad for an EA employee to admit to charitable donations.

Even so, the extra-EA signalling costs of not giving could conceivably outweigh both the intra-EA signals and the tax savings of doing so. If we believe this, an alternative approach would be to have EA orgs explicitly run donation-directing schemes. Each org could contribute to a pool of money it planned to redirect, whose size depended on its number of staff and staff salaries. Then each employee could direct some proportion of it to the cause of their choice; the weight of their direction could either be proportional to the difference between their salary and the max salary they could have asked for or, more diplomatically, just equal for each employee. That way the money would still be distributed in much the same proportion as it currently is, but without being taxed—and EAs could still be said to be donating in some sense at least (and would still have an incentive to keep abreast of what’s going on elsewhere in the EA world).
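
A minimal sketch of such a scheme, using the “diplomatic” equal weighting per employee; the employee names, causes, and pool size are all invented for illustration:

```python
# Each employee directs an equal share of the org's redirect pool to the
# cause of their choice; shares for the same cause accumulate.

def direct_pool(pool, directions):
    """Split `pool` equally among employees. `directions` maps
    employee name -> chosen cause; returns cause -> total directed."""
    share = pool / len(directions)
    totals = {}
    for employee, cause in directions.items():
        totals[cause] = totals.get(cause, 0.0) + share
    return totals

allocations = direct_pool(
    30_000,
    {"alice": "AMF", "bob": "AMF", "carol": "MIRI"},
)
print(allocations)  # {'AMF': 20000.0, 'MIRI': 10000.0}
```

The salary-weighted variant the post mentions would just replace the equal `share` with a weight proportional to each employee’s forgone salary.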

7) Basing salary on expected career trajectory

Similarly to the previous idea, if I’m working at an EA organisation but expect that in the near future I’ll end up working in the private sector—either because I’m earning to give, because I’m trying to build career capital, or for any number of other possible reasons—it doesn’t make sense for me to get a substantial amount more than I need to live on at the EA org and then give a lot of money away after I transition. Better to earn less now and give slightly less later.

Again, this follows from taxation—whether I later pay back the tax on the extra money I earned at the EA org or not, Gringotts will be that much poorer (because I’m part of it). It also compounds to the extent that you agree with the haste consideration—the money saved now could be worth substantially more than the money you give later.

If you’re moving from the private sector into an EA org, the same strategy would probably make sense in reverse—you would keep more now and ask for a commensurately lower salary from the EA org—though the effect would be less clear and less pronounced because of the haste consideration. Also, the haste consideration suggests that if you’re never expecting to work at an EA organisation, it might be better to donate a declining proportion of your income (or rather, to donate in such a way as to increase the amount you keep for yourself over time, holding the net lifetime amount you expect to donate constant). Since this front-loads your donations, it also has the side benefit of making future burnout less costly to Gringotts, and perhaps also less tempting for future-you.
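
One way to picture a front-loaded schedule is a geometrically declining series that sums to a fixed lifetime total. The decay rate, horizon, and lifetime budget below are arbitrary illustrative choices, not recommendations:

```python
# Yearly donations decline geometrically but always sum to the same
# fixed lifetime total, so giving is front-loaded (haste consideration)
# without changing the net lifetime amount donated.

def declining_schedule(lifetime_total, years, decay=0.9):
    """Return a list of yearly donations forming a geometric series
    (ratio `decay`) scaled to sum exactly to `lifetime_total`."""
    weights = [decay ** t for t in range(years)]
    scale = lifetime_total / sum(weights)
    return [w * scale for w in weights]

donations = declining_schedule(100_000, years=10, decay=0.9)
print(round(donations[0]))   # first year: the largest donation
print(round(donations[-1]))  # final year: the smallest
assert abs(sum(donations) - 100_000) < 1e-6  # lifetime total preserved
```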

This strategy is fairly high risk for an individual, in that if you suddenly need to pay for urgent medical assistance or some other emergency expenditure earlier in life, you might find yourself unable to afford it—but that’s just the sort of issue that could be mitigated or even resolved by Hive insurance.

It would also ‘lose’ the interest you’d have earned on the money you’d kept earlier, but you can account for that when calculating future donations. The effect will be dominated by the tax savings, and in any case the money will still have been earning a (greater) return on investment through its EA use elsewhere in the Hive.

One complicating factor is that commercial employers will sometimes offer a salary based on the size of your current one, so taking a low salary from an EA org might harm future earning prospects. A possible remedy—if it wasn’t perceived as dishonest, and assuming the EA is leaving their organisation openly and on good terms with it—would be for them to briefly take a higher salary just as they started hunting for their next job. Personally I think this would be a poetic antidote to an obnoxious practice, but wider public opinion might disagree with me.

8) Offering clear financial security to all EA employees

Seemingly contrariwise to the above, but bear with me…

EA employees will be more productive if they aren’t dealing with financial insecurity, since such insecurity has high costs in both time and mental health.

According to 80K’s talent gap survey, even a junior hire is worth about $83,000 per year on average (that’s the median—the mean is much higher) to their EA organisation. If we take this literally, then a) EA organisations could comfortably test the effect of doubling (or more) the offered salaries on the number and quality of applications, and perhaps more realistically b) they could afford to offer sufficiently high rates to even their most junior employees that money isn’t a substantial limiting factor in their lives.

What ‘isn’t a substantial limiting factor’ means is obviously fairly vague, but it seems like if an EA is, e.g., spending a lot of time commuting, waiting for dated hardware to run, eating a lot of cheap unhealthy food, not participating in healthy hobbies, or otherwise losing time or health to save money, then it will impede their productivity. Again taking the above survey at an admittedly naive face value, it would be worth the average EA org spending up to $830 more per year to increase a junior employee’s productivity by just 1% (perhaps more if the employee’s productivity increase would compound over their career).
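
The back-of-envelope above is just one multiplication, shown here using the survey figure from the post (taken at face value, as the post itself cautions against):

```python
# Break-even yearly spend on an employee's wellbeing: any spend below
# employee_value * productivity_gain pays for itself.

def breakeven_spend(employee_value_per_year, productivity_gain):
    """Max yearly spend that is repaid by the given productivity gain."""
    return employee_value_per_year * productivity_gain

# $83,000/year median junior-hire value, 1% productivity gain:
print(round(breakeven_spend(83_000, 0.01), 2))  # 830.0
```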

We should probably be sceptical of such striking survey results—nonetheless, there’s room to be more conservative and still see the potential for gain here. In an ideal world, the financial security offered could mostly come from the benefits and insurance discussed above—i.e. at a ~20% discount.

Lastly, this reasoning argues only for the option of higher salaries/benefits—many EAs on very low salaries seem perfectly able and willing to get by on them—and only for people who would otherwise be below whatever financial threshold would allow them to stop feeling constrained or anxious in daily life.

I’m aware that some EA organisations are already implementing some form of these strategies, but they’re far from universally adopted. Perhaps this is because they’re bad ideas—this was quite an off-the-cuff post—but I haven’t really heard substantial discussion of any of them, so let’s have it now. And if there’s any mileage in the core assumptions, I’d hope such discussion will reveal several more ways we can use our almighty collective will.

Full disclosure—I work for an EA organisation (Founders Pledge), so some of these strategies would potentially benefit me. But hopefully they’d benefit FP still more.

Thanks to Kirsten Horton and John Halstead for some great feedback on this post.