The Future of Earning to Give

[I have medium confidence in the broad picture, and somewhat lower confidence in the specific pieces of evidence. I’m likely biased by my commitment to an ETG strategy.]

Earning to Give (ETG) should be the default strategy for most Effective Altruists (EAs).

Five years ago, EA goals were pretty clearly constrained a good deal by funding. Today, there’s almost enough money going into far-future causes that vetting and talent constraints have become at least as important as funding. That led to a multi-year trend of increasingly downplaying ETG, which was initially appropriate but has now gone too far.

Nothing in this post should be interpreted to discourage people from devoting a year or two of their life, at some early stage, to searching for ways that they can do better than ETG. A 10% chance of becoming a good AI safety researcher, or the founder of the next AMF, is worth a good deal of attention.

I’m assuming for the purposes of this post that nonhuman animals have negligible moral importance, since I’m mainly aiming at people who focus on human wellbeing. If I were to alter that assumption, I’d be a lot more uncertain about whether any specific cause should get more funding, but I’d also see many additional funding opportunities that look like low-hanging fruit.

I’ll also assume that we should expect there will be less low-hanging fruit in the future, so that philanthropic money should be spent soon. That assumption ought to be somewhat controversial, and I’ll discuss the alternative near the end of this post.

I will try to err in this post in the direction of being too pessimistic about our ability to distinguish good charities, in order to demonstrate that the value of ETG is not strongly dependent on the wisdom of donors.

Is there an efficient market in charity?

If there are well-endowed charities that are sufficiently wise and altruistic (Open Philanthropy and the Gates Foundation?), then maybe ETG is unimportant because we can count on them to do all the funding.

I find that less plausible than the idea that the two best VCs can fund all the startups that need funding. The VC world has better incentives than the EA world, and maybe better feedback. Yet I still see little reason for confidence that the right startups are being funded.

Also, philanthropy in general has a track record which suggests mediocre, but improving, efficiency. Those seem to be key points of the original arguments that GiveWell, Peter Singer, and Will MacAskill made when helping to create the EA movement. It would be somewhat surprising if a pattern involving billions of dollars disappeared quickly after being criticized.

These reasons suggest we should have a strong prior that philanthropic institutions are not close to being as efficient as the stock market.

If we had wise billionaires who were funding all worthwhile charities, then most EAs should do direct work. But we should expect any really small group of funders to have quirks, blind spots, and selfish desires to avoid spending weirdness points. So we should expect them to leave funding gaps that can be filled by somewhat average EAs, and also plenty of need for further vetting by above-average EAs.

Near-term opportunities

Here are some educated guesses about which EA charities can productively use more money this year:

  • ALLFED

  • GiveDirectly

  • AMF

  • CFAR

  • Ought

  • MAPS

Open Philanthropy’s plan to fund at most half of the needs of most good charities creates a presumption that some of them won’t be fully funded, unless there’s some other large charity that seems willing and able to evaluate them. I see some large charities that partly qualify, but I don’t see them as having a broad enough scope to fund all the opportunities of this type.

It’s possible that some of these charities have been expanding as fast as their skill/manpower allows, and look underfunded because donors wait to donate until the charities need funds. But this seems to require a somewhat implausible degree of wisdom on the part of donors. I’m confident that the startup world hasn’t worked that well, so why should it work better with young charities?

Maybe those opportunities will be fully funded soon. Many people should look a decade or so into the future before deciding whether to pursue an ETG strategy. Also, it would be nice to know whether ETG will become irrelevant if the Gates Foundation spends most of its money optimally and soon.

So I’ve decided that these near-term opportunities aren’t all that important to my main point, and I’ll focus instead on a longer-term outlook.

Bigger, harder causes

I won’t try to identify the best causes in this section. Instead, I’ll describe causes that are beneficial enough and verifiable enough to justify an ETG strategy. I expect that by the time there are enough donors to fund the causes in this section, there will be more wisdom available to donors, who will find opportunities that are better than many of these. So please take this section as being somewhat closer to a worst-case analysis than to a prediction of what EAs will fund.

One or two of these opportunities may surprise me by being cheap to solve, but I expect I’ve chosen hard enough problems that some of them will be expensive to solve.

Prizes

My first category is prizes for medical advances.

For example, an institution might offer $50 billion for a cure for aging, or for a general-purpose cure for cancer (maybe with partial payments for significant progress).

No, I don’t mean offering rewards for a drug that would delay aging or cancer by a few months; there’s plenty of money going into that already (mostly treating Western diseases or a small subset of cancers). Practically all of that appears to be following a paradigm that shows little promise of curing aging. I mean something more audacious, in the sense that Aubrey de Grey’s approach is audacious.

Producing a treatment that cures aging is likely more expensive and failure-prone than producing a drug that yields a small benefit in a large number of people, yet we’ve got a system that rewards the two about the same. That means there’s little pressure to focus research on cures for aging. Prizes can be much more result-oriented than current funding of medical research, so I expect them to provide opportunities to redirect some of that research to the most valuable cures.

Aging is an area where it’s unreasonably hard for most of us to evaluate whether any one research program is doing a good job, and I suspect that most investors in this area are doing poorly. But with prizes, the funders only need to be able to evaluate results well after they’ve been achieved, and whether they’re giving the prizes to the people who are responsible for those results. There still needs to be somebody with skill at predicting which research will succeed, but it can be a VC-style expert who is reacting to good financial incentives.

Prizes aren’t as efficient as direct grants to the best research programs, so it takes a fair amount of humility for a funder to choose prizes. I expect it requires an unusual person to adopt arrogant goals like curing aging, without also having arrogant beliefs about their understanding of which strategies are worth funding.

Sarah Constantin estimates that aging research is competitive with GiveWell’s current top charities. I’ll be pessimistic here, and estimate that it will be more like 1/20 as cost-effective as GiveWell’s charities, due to a combination of inherently low tractability and our poor ability to identify the best research strategies. That seems likely to look competitive after the next $10 billion of low-hanging philanthropic fruit has been picked.
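
To make that concrete, here’s the implied back-of-envelope arithmetic, as a sketch: the $5k-per-life figure comes from my guess in the conclusion below, and the 1/20 ratio is the pessimistic estimate above.

```python
# Implied cost per life-equivalent for aging research, under this post's
# pessimistic assumptions (both inputs are rough guesses, not measurements).
givewell_cost_per_life = 5_000   # $ per life saved, low end of my guess below
relative_effectiveness = 1 / 20  # my pessimistic estimate for aging research

cost_per_life_equivalent = givewell_cost_per_life / relative_effectiveness
print(f"${cost_per_life_equivalent:,.0f} per life-equivalent")  # $100,000
```

On those numbers, aging prizes look unattractive while $5k opportunities remain, and competitive once the marginal cost of saving a life rises toward $100k.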

Ok, some of you are probably saying I’m not being pessimistic enough. Maybe aging and cancer are intractable problems, and prizes for them won’t attract any legitimate treatments. I don’t have a simple way to convince you to trust my intuitions about their tractability.

It’s likely that there are some other medical problems that are tractable, and where multi-billion dollar prizes would be productive.

If a cure for aging is intractable, then there’s likely plenty of room for improvement in quality of life for the elderly. Note that hunter-gatherers seem to not get certain debilitating age-related diseases such as diabetes and dementia, and the elderly tend to remain active until a few days before death (see Lindeberg’s Food and Western Disease).

Mental health seems like another area where there’s large room for improvement, but also large uncertainty about what strategies are tractable. Alas, I feel rather uncertain whether progress in mental health will be constrained by money or by something else.

I admit that the difficulty of choosing the right kinds of prizes weakens my argument by a modest amount. Yet even if half the prize money goes to poorly thought-out goals, this approach will still shift medical research into directions that focus much more on maximizing benefits than is currently the case.

Affordable drugs

My next idea is Michael Kremer’s proposal for drug patent buyouts (or see this version that’s a bit more oriented toward laymen), under which a wealthy institution buys most drug patents and puts them in the public domain. This would dramatically reduce the problems associated with patent monopolies.
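
Here’s a minimal sketch of the mechanism as I understand Kremer’s proposal; the 2x markup and 10% sale probability are illustrative assumptions, not figures I’m quoting from the paper.

```python
import random

# Sketch of a Kremer-style patent buyout (illustrative parameters).
MARKUP = 2.0     # assumed ratio of a drug's social value to its private value
SALE_PROB = 0.1  # small fraction of patents actually sold, to keep bids honest

def buy_out(bids: list[float]) -> str:
    """One buyout: an auction establishes the patent's private value."""
    private_value = max(bids)
    if random.random() < SALE_PROB:
        # Occasionally the high bidder really gets the patent, so that
        # bidding above a patent's true value is costly.
        return f"sold to high bidder for ${private_value:,.0f}"
    # Usually the charity buys at a markup and opens the patent to everyone.
    return f"bought for ${private_value * MARKUP:,.0f}; placed in public domain"

print(buy_out(bids=[1.2e9, 1.5e9, 2.0e9]))
```

The occasional random sales are what keep the auction honest: a bidder who overstates a patent’s value sometimes has to pay for it.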

For example, drug companies sometimes try to sell drugs to poor countries at a relatively low price, and recoup their drug development costs by charging much higher prices in wealthy countries. Alas, that leads to drug smuggling. This makes it expensive to sell drugs in poor countries, likely leading to a situation where people in poor countries can’t afford drugs that they would be able to afford if they could credibly guarantee not to resell them. Patent buyouts can eliminate this perversity for a substantial fraction of drugs.

This strategy has some risks [1] if you try to implement it with a budget that’s comparable to the market value of the drugs, so I’m reluctant to recommend attempting it with a budget as small as that of the Gates Foundation.

Global warming

Global warming is likely to cause widespread harm under standard forecasts, and we should worry more about small risks of a larger catastrophe from unexpected weather changes that might be triggered by warming.

There are a number of interventions that seem promising, such as preventing deforestation, reforestation, and albedo enhancement.

I don’t know which interventions of this kind ought to be funded, but this seems like an obvious candidate for spending $10+ billion per year if we’re running low on other philanthropic opportunities.

Seasteads / Charter Cities

For $100 billion or so, we could build a bunch of new territories that would provide people with more options to move away from regions with bad weather or bad governments. I.e. seasteads, or maybe charter cities if they’re politically feasible.

In keeping with my pessimistic assumptions, I’ll ignore the standard hopes for seasteads, and assume for this post that seasteads will mainly just provide real estate and fairly average governance, in order to enable the world’s less fortunate people to lift themselves up to something close to the global average.

UBI

A universal basic income has some potential to protect against technological unemployment, and to do a relatively efficient job of eliminating poverty, without the incentive problems of means-testing.

I’m not suggesting a political movement, since I’m trying to err in this post on the side of pessimism about our ability to identify good institutions, and it’s too easy for political movements to end up being bent toward other goals.

This might be achieved by a large expansion of GiveDirectly.

There have been some concerns that GiveDirectly has problems with money going to positional goods. I’m unsure whether we should be concerned about that, but I expect those problems would diminish if GiveDirectly expanded to giving money to everyone in a given village, or larger region.

Manna is another example of a strategy that might lead to a UBI, although I don’t want to endorse that particular organization.

Major Funders

Some of these causes may be taken on by major governments, making ETG irrelevant for those causes. But governments don’t have a particularly great track record compared to the best charities. I’m betting that governments will either ignore some big causes of this nature, or will bungle the solutions in ways that leave needs for more EA money.

Why isn’t the Gates Foundation funding these? Here are some guesses:

  • the foundation expects to find enough better strategies to use up all their money before the low-hanging fruit is exhausted [requires moderate optimism about the foundation’s abilities].

  • the foundation expects that many of their recipients can’t handle more money effectively today, but that some recipients will soon expand their ability to handle more.

  • the strategies aren’t prestigious enough, or look too weird.

  • they are following standard philanthropic procedures for deciding where to spend money, and something like peer pressure has discouraged them from evaluating the alternatives.

  • the foundation is unwilling to admit that there are important limits to how many projects their employees can supervise. Switching from direct grants to prizes would bypass some of those limits, at the cost of requiring more humility than I expect from someone who makes a career out of evaluating charities.

  • See also some comments by Carl Shulman.

More low-hanging fruit next year?

How would the value of ETG be affected if, instead, we assume that we’ll find better giving opportunities in the future?

It implies a greater need for people to look for those opportunities, but still, the history of philanthropy suggests that only a tiny number of people succeed in creating a better opportunity than philanthropy previously had.

“Generate new charity” is much less amenable to decomposition into easy parts than, say, microprocessor design, so if many people attempt it, it will end up more like a competition to create the next Google or Facebook.

Maybe it’s good for a hundred new people each year to enter some sort of competition to find/create the next EA charity, or try to do x-risk research, even if less than one per year will succeed. But most of these people should notice after a few years that they’re not better at it than the people who created AMF, FHI, etc., and fall back on another strategy.

“The best way to save lives or reduce suffering” should be expected to produce a much narrower set of answers than “something consumers will pay for”, so we should expect there to be a much smaller number of EA charities than good businesses.

Are EAs more productive at direct work?

I imagine that there is a large fraction of EAs who expect to be more productive in direct work than in an ETG role. But I’m not too clear why we should believe that. The skills and manpower needed by EA organizations appear to be a small subset of the total careers that the world needs, and it would seem an odd coincidence if the comparative advantage of people who believe in EA happens to overlap heavily with the needs of EA organizations.

Remember that EA principles suggest that you should donate to approximately one charity (i.e. the current best one). The same general idea applies to the need for talent: there are a relatively small number of tasks that stand out as unusually in need of more talent. Talent is more complex than money, so it doesn’t mean there’s only one kind of talent that matters. But a heuristic which treats most talent as equally valuable seems as suspicious to me as a heuristic that treats all charities as equally valuable.
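
As a toy illustration of that one-charity logic (all numbers made up): if a donor’s budget is too small to change any charity’s marginal value, concentrating on the best margin beats splitting.

```python
# Toy model: lives saved per dollar at the margin (made-up numbers),
# assuming a small donor can't move any charity's marginal value.
margins = {"charity_A": 1 / 5_000, "charity_B": 1 / 8_000}
budget = 10_000

concentrated = budget * max(margins.values())
split = sum((budget / len(margins)) * m for m in margins.values())
print(f"concentrated: {concentrated:.2f} lives, split: {split:.2f} lives")
# concentrated: 2.00 lives, split: 1.62 lives
```

The analogous argmax reasoning is why I expect the demand for talent to concentrate on a few unusual tasks.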

I can imagine that some of the hopes for doing direct work are due to selfish desires to signal one’s EA credentials. That kind of selfishness seems fine if done honestly, but I want to keep that goal separate from altruistic goals.

What about vetting?

Don’t we need more people doing vetting, and isn’t that more like direct work than it is like ETG?

I’m fairly confident that a suboptimal amount of vetting is being done. But vetting more vetters doesn’t seem any easier than vetting charities that do object-level work. If anything, it’s harder.

As an analogy: as an investor, I believe there’s lots of money to be made by good VCs and VC-like funds, and I’ve seen a fair number of opportunities to invest in VC or similar funds. Yet I haven’t invested in any such funds, because the ones that want more money are ones that I don’t expect to be able to evaluate with a reasonable amount of effort.

When investors become eager to trust new VC funds, as often happens near stock market peaks, amateurs enter the field and run VC funds without acquiring much skill, and end up with poor returns on investment. If EAs were that eager to trust new people to disburse charitable donations, the same kind of problem would arise, and would be harder to detect, since charitable donors don’t have feedback mechanisms that are as hard to fake as getting double their investment back.

Conclusion

There are some tough calls that need to be made, by anyone doing ETG, about comparing safe donations to donations that look more promising but run higher risks of biased evaluation. Still, there is a range of plausible-looking answers for which we ought to be moderately confident that ETG will remain valuable for quite some time.

Most likely there will be a trillion dollars or more in opportunities remaining for the foreseeable future. Trillions per year if we include a fully charity-driven UBI, but I’ll guess that we’ll end up instead with a patchwork of basic income programs that are funded by charities in some nations, and funded by governments in others.
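
For a rough sense of scale on that “trillions per year” figure, here’s a back-of-envelope sketch; both inputs are illustrative assumptions.

```python
# Scale check on a charity-funded basic income (illustrative inputs).
recipients = 4e9         # assume roughly half the world's population enrolls
transfer_per_year = 500  # assume ~$1.40/day, near GiveDirectly-scale transfers

total = recipients * transfer_per_year
print(f"${total / 1e12:.0f} trillion per year")  # $2 trillion
```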

My best guess is that the current marginal cost of saving a human life is around $5k to $10k, and that under mildly optimistic assumptions about the growth of EA-style charity, it will rise to $100k in a decade or so. $100k is well within the range of costs that lead me to encourage ETG.

It’s quite possible that an ETG strategy will produce poor results, but it sure looks like the most likely source of failure is poor choices of where to donate, not a shortage of low-hanging fruit.

Or maybe AI will render this all irrelevant soon. But that’s not an ETG-specific risk.

Footnote

[1] - [Highly technical point, which most readers shouldn’t worry about:] The joint randomization for substitutes works well if there’s unlimited money to buy patents. I’m worried about what happens when a charity with a $10 billion budget tries to buy a patent that’s worth about $5 billion. If patent holders with several $2 billion patents claim, with some exaggeration, that those other drugs are substitutes for the drug being bought, then the charity faces problems with either being unable to afford the purchase, or political fallout from unfairly(?) undercutting sales of some of the drugs. I’m unclear whether this causes problems in realistic cases.