Estimating the Philanthropic Discount Rate

Cross-posted to my web­site. I have tried to make all the for­mat­ting work on the EA Fo­rum, but if any­thing doesn’t look right, try read­ing on my web­site in­stead.

Summary

  • How we should spend our philan­thropic re­sources over time de­pends on how much we dis­count the fu­ture. A higher dis­count rate means we should spend more now; a lower dis­count rate tells us to spend less now and more later.

  • We (prob­a­bly) should not as­sign less moral value to fu­ture be­ings, but we should still dis­count the fu­ture based on the pos­si­bil­ity of ex­tinc­tion, ex­pro­pri­a­tion, value drift, or changes in philan­thropic op­por­tu­ni­ties.

  • Ac­cord­ing to the Ram­sey model, if we es­ti­mate the dis­count rate based on those four fac­tors, that tells us how quickly we should con­sume our re­sources[1].

  • We can de­crease the dis­count rate, most no­tably by re­duc­ing ex­is­ten­tial risk and guard­ing against value drift. We still have a lot to learn about the best ways to do this.

  • Ac­cord­ing to a sim­ple model, im­prov­ing our es­ti­mate of the dis­count rate might be the top effec­tive al­tru­ist pri­or­ity.

Introduction

Effec­tive al­tru­ists can be­come more effec­tive by care­fully con­sid­er­ing how they should spread their al­tru­is­tic con­sump­tion over time. This sub­ject re­ceives some at­ten­tion in the EA com­mu­nity, but a lot of low hang­ing fruit still ex­ists, and EAs could prob­a­bly do sub­stan­tially more good by fur­ther op­ti­miz­ing their con­sump­tion sched­ules (for our pur­poses, “con­sump­tion” refers to money spent try­ing to im­prove the world).

So, how should al­tru­ists use their re­sources over time? In 1928, Frank Ram­sey de­vel­oped what is now known as the Ram­sey model. In this model, a philan­thropic ac­tor has some stock of in­vested cap­i­tal that earns in­ter­est over time. They want to know how to max­i­mize util­ity by spend­ing this cap­i­tal over time. The key ques­tion is, at what rate should they spend to max­i­mize util­ity?

(Fur­ther sup­pose this philan­thropic ac­tor is the sole fun­der of a cause. If other ac­tors also fund this cause, that sub­stan­tially changes con­sid­er­a­tions be­cause you have to ac­count for how they spend their money[2]. For the pur­poses of this es­say, I will as­sume the cause we care about only has one fun­der, or that all fun­ders can co­or­di­nate.)

Specifically, we assume the actor's capital grows according to a constant (risk-free) interest rate r. Additionally, we discount future utility at some rate δ, so that if performing some action this year would produce 1 utility, next year it will only give us e^(−δ) (approximately 1 − δ) discounted utility. The actor then needs to decide at what rate to consume their capital.

Total utility equals the sum of discounted utilities at each moment in time. In mathematical terms, we write it as

U = ∫₀^∞ e^(−δt) u(c(t)) dt

where c(t) gives the amount of resources to be consumed (that is, spent on altruistic endeavors) at time t, and u(c) gives the utility of consumption.

This model makes many sim­plifi­ca­tions—see Ram­sey (1928)[3] and Greaves (2017)[4] for a de­tailing of the re­quired as­sump­tions, of both an em­piri­cal and a philo­soph­i­cal na­ture. To keep this es­say rel­a­tively sim­ple, I will take the Ram­sey model as given, but it should be noted that chang­ing these as­sump­tions could change the re­sults.

It is common to assume that actors have constant relative risk aversion (CRRA), which means their level of risk aversion doesn't change based on how much money they have. Someone with logarithmic utility of consumption has CRRA, as does anyone whose utility function looks like u(c) = c^(1−η) / (1 − η) for some constant η.

An actor with CRRA maximizes utility by following this consumption schedule[3:1]:

c(t) = ((δ + (η − 1)r) / η) · x(t)

where x(t) is the size of the portfolio at time t, r is the interest rate, and η is the elasticity of marginal utility. Higher η indicates greater risk aversion. η = 1 corresponds to logarithmic utility.

(Origi­nal re­sult is due to Ram­sey (1928), but credit to Philip Tram­mell[5] for this spe­cific for­mu­la­tion.)

The scale factor (δ + (η − 1)r)/η tells us what proportion of the portfolio to spend during each period in order to maximize utility. A higher discount rate means we should spend more now, while a lower discount rate tells us to save more for later. Intuitively, if we discount the future more heavily, that means we care relatively less about future spending, so we should spend more now (and vice versa).

According to the Ramsey model, following a different consumption schedule than the above results in sub-maximal utility. If we spend too much early on, we prevent our assets from growing as quickly as they should. And if we spend too little, we don't reap sufficient benefits from our assets. Therefore, we would like to know the value of δ so we know how to optimally spread our spending over time. (The parameters r and η matter as well, but in this essay, I will focus on δ.)
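
To make this concrete, here is a minimal sketch in Python of the spending fraction implied by the schedule above. The parameter values (a 5% return, logarithmic utility) are illustrative assumptions of mine, not estimates from this essay:

```python
def spending_fraction(delta, r, eta):
    """Optimal fraction of the portfolio to consume each period
    under the CRRA consumption schedule given above."""
    return (delta + (eta - 1) * r) / eta

# Illustrative values only: 5% risk-free return, logarithmic utility (eta = 1).
for delta in [0.001, 0.005, 0.02]:
    frac = spending_fraction(delta, r=0.05, eta=1)
    print(f"discount rate {delta:.1%} -> spend {frac:.1%} of assets per year")
```

With logarithmic utility the spending fraction simply equals the discount rate; with η > 1, the interest rate matters as well.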

If we have a pure time prefer­ence, that means we dis­count fu­ture util­ity be­cause we con­sider the fu­ture less morally valuable, and not be­cause of any em­piri­cal facts. Ram­sey called a pure time prefer­ence “eth­i­cally in­defen­si­ble.” But even if we do not ad­mit any pure time prefer­ence, we may still dis­count the value of fu­ture re­sources for four core rea­sons:

  1. All re­sources be­come use­less (I will re­fer to this as “eco­nomic nul­lifi­ca­tion”).

  2. We lose ac­cess to our own re­sources.

  3. We con­tinue to have ac­cess to our own re­sources, but do not use them in a way that our pre­sent selves would ap­prove of.

  4. The best in­ter­ven­tions might be­come less cost-effec­tive over time as they get more heav­ily funded, or might be­come more cost-effec­tive as we learn more about how to do good.

(“Re­sources” can in­clude money, stocks, gold, or any other valuable and spend­able as­set. I will mostly treat re­sources as equiv­a­lent to money.)

In the next sec­tion, I ex­plain why we might care about the long-run dis­count rate in ad­di­tion to the cur­rent dis­count rate. In “Break­ing down the cur­rent dis­count rate”, I con­sider the cur­rent dis­count rate in terms of the above four core rea­sons and roughly es­ti­mate how much we might dis­count based on each rea­son. In “Break­ing down the long-run dis­count rate”, I do the same for the dis­count rate into the dis­tant fu­ture. In “Can we change the dis­count rate?”, I briefly in­ves­ti­gate the value of re­duc­ing the dis­count rate as an effec­tive al­tru­is­tic ac­tivity. Similarly, in “Sig­nifi­cance of mis-es­ti­mat­ing the dis­count rate”, I find that sim­ply im­prov­ing our es­ti­mate of the dis­count rate could pos­si­bly be a top effec­tive al­tru­ist cause. Fi­nally, the con­clu­sion pro­vides some take­aways and sug­gests promis­ing ar­eas for fu­ture re­search.

In this es­say, I deal with some com­pli­cated sub­jects that de­serve a much more de­tailed treat­ment. I provide an­swers to ques­tions when­ever pos­si­ble, but these an­swers should be in­ter­preted as ex­tremely pre­limi­nary guesses, not con­fi­dent claims. The pri­mary pur­pose of this es­say is merely to provide a start­ing point for dis­cus­sion and raise some im­por­tant and ne­glected re­search ques­tions.

This es­say ad­dresses the philan­thropic dis­count rate, refer­ring speci­fi­cally to the dis­count rate that effec­tive al­tru­ists should use. This re­lates to the eco­nomic con­cept of the so­cial dis­count rate, which (to sim­plify) is the rate at which gov­ern­ments should dis­count the value of fu­ture spend­ing. Effec­tive al­tru­ists tend to have sub­stan­tially differ­ent val­ues and be­liefs than gov­ern­ments, re­sult­ing in sub­stan­tially differ­ent dis­count rates. But if we know the so­cial dis­count rate, we can use it to “re­verse-en­g­ineer” the philan­thropic dis­count rate by sub­tract­ing out any fac­tors gov­ern­ments use that we do not be­lieve philan­thropists should care about, and then adding in any fac­tors gov­ern­ments tend to ne­glect (e.g., per­haps we be­lieve most peo­ple un­der­es­ti­mate the prob­a­bil­ity of ex­tinc­tion). For now, I will not at­tempt this ap­proach, but this would make a good sub­ject for fu­ture re­search. For a more de­tailed sur­vey of the so­cial dis­count rate and the con­sid­er­a­tions sur­round­ing it, see Greaves (2017)[4:1].

When at­tempt­ing to make pre­dic­tions, I will fre­quently re­fer to Me­tac­u­lus ques­tions. Me­tac­u­lus is a web­site that “poses ques­tions about the oc­cur­rence of a va­ri­ety of fu­ture events, on many timescales, to a com­mu­nity of par­ti­ci­pat­ing pre­dic­tors” with the aim of helping hu­man­ity make bet­ter pre­dic­tions. It has a rea­son­ably im­pres­sive track record. Although Me­tac­u­lus’ short-term track record might not ex­trap­o­late well to the long-term ques­tions refer­enced in this es­say, the ag­gre­gated pre­dic­tions made by Me­tac­u­lus are prob­a­bly more re­li­able than un­in­formed guesses[6]. Me­tac­u­lus pre­dic­tions can change over time as more users make pre­dic­tions, so the num­bers I quote in this es­say might not re­flect the most up-to-date in­for­ma­tion. In or­der to avoid dou­ble-count­ing my per­sonal opinion, I have not reg­istered my own pre­dic­tions on any of the linked Me­tac­u­lus ques­tions.

Sjir Hoeij­mak­ers, se­nior re­searcher at Founders Pledge, has writ­ten a similar es­say about how we should dis­count the fu­ture. I read his post be­fore pub­lish­ing this, but I wrote this es­say be­fore I knew he was work­ing on the same topic, so any over­lap in con­tent is co­in­ci­den­tal.

Sig­nifi­cance of a de­clin­ing long-run dis­count rate

The ba­sic Ram­sey model as­sumes a fixed dis­count rate. But it seems plau­si­ble that the dis­count rate de­clines over time. How does that af­fect how we should al­lo­cate our spend­ing across time?

In short, we should spend more when the dis­count rate is high, and de­crease our rate of spend­ing as the dis­count rate falls. See Ap­pendix for proof.

The pace of this decline in spending heavily depends on model assumptions. If we use a smoothly declining discount rate (as in the Appendix), the optimal consumption rate does not have a closed-form solution, but we can verify numerically that with reasonable parameters, the optimal rate at time t = 0 only slightly exceeds the optimal long-run rate (e.g., 0.11% vs. 0.10% for one such parameterization). But if we use a discrete state-based model (as in Trammell[5:1] section 3), under some reasonable parameters, the current consumption rate equals the current discount rate.

Given these rea­son­able but con­flict­ing mod­els, it is un­clear how much we should con­sume to­day as a func­tion of the cur­rent and long-run dis­count rates. More in­ves­ti­ga­tion is re­quired, but un­til then, it makes sense to at­tempt to es­ti­mate both the cur­rent and long-run dis­count rates.

Ad­di­tion­ally, some ar­gu­ments sug­gest that we do not live at a par­tic­u­larly in­fluen­tial time. If true, that means most es­ti­mates of the cur­rent dis­count rate are way too high, the cur­rent rate prob­a­bly re­sem­bles the long-run rate, and the long-run rate should be used in calcu­lat­ing op­ti­mal con­sump­tion.

Break­ing down the cur­rent dis­count rate

In this part, I ex­am­ine some plau­si­ble rea­sons why each of the four types of events (eco­nomic nul­lifi­ca­tion, ex­pro­pri­a­tion, value drift, change in op­por­tu­ni­ties) could oc­cur, and roughly rea­son about how they should fac­tor into the dis­count rate.

Eco­nomic nullification

An eco­nomic nul­lifi­ca­tion event is one in which all our re­sources be­come worth­less. Let’s break this down into three cat­e­gories: ex­tinc­tion, su­per­in­tel­li­gent AI, and eco­nomic col­lapse. Other types of events might re­sult in eco­nomic nul­lifi­ca­tion, but these three seem the most sig­nifi­cant.

Extinction

Even if we do not pri­ori­tize ex­tinc­tion risk re­duc­tion as a top cause area[7], we should fac­tor the prob­a­bil­ity of ex­tinc­tion into the dis­count rate. In pos­si­ble fu­tures where civ­i­liza­tion goes ex­tinct, we have no way of cre­at­ing value.

We only have very rough es­ti­mates of the prob­a­bil­ity of ex­tinc­tion. I will cite three sources that ap­pear to give among the best-qual­ity es­ti­mates we have right now.

  1. Pam­lin and Arm­strong (2015), 12 Risks That Threaten Hu­man Civ­i­liza­tion es­ti­mated a 0.13% prob­a­bil­ity of ex­tinc­tion in the next cen­tury from all causes ex­clud­ing AI, and a 0-10% chance of ex­tinc­tion due to AI[8].

  2. Sand­berg and Bostrom (2008)’s Global Catas­trophic Risks Sur­vey es­ti­mated a 19% prob­a­bil­ity of ex­tinc­tion be­fore 2100, based on a sur­vey of par­ti­ci­pants at the Global Catas­trophic Risks Con­fer­ence.

  3. “Database of ex­is­ten­tial risk es­ti­mates (or similar)”, a Google Doc com­piled by Michael Aird, in­cludes a list of pre­dic­tions on the prob­a­bil­ity of ex­tinc­tion. As of 2020-06-19, these pre­dic­tions (ex­clud­ing the two I already cited) give a me­dian an­nual prob­a­bil­ity of 0.13% and a mean of 0.20% (see my copy of the sheet for calcu­la­tions)[9].

Th­ese es­ti­mates trans­late into an an­nual ex­tinc­tion prob­a­bil­ity of 0.0013% to 0.26%, de­pend­ing on which num­bers we use.
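
As a sanity check on these conversions, here is a minimal sketch assuming a constant, independent annual hazard (the ~92-year horizon for the survey figure is my own assumption about how the conversion was done):

```python
def annual_prob(total_prob, years):
    """Constant annual probability implied by a cumulative probability over a horizon."""
    return 1 - (1 - total_prob) ** (1 / years)

# 0.13% over the next century (Pamlin and Armstrong 2015)
print(f"{annual_prob(0.0013, 100):.4%} per year")  # ~0.0013%

# 19% by 2100 (Sandberg and Bostrom 2008), assuming a ~92-year horizon from 2008
print(f"{annual_prob(0.19, 92):.2%} per year")     # ~0.23%, near the top of the quoted range
```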

For more, see Rowe and Beard (2018), "Probabilities, methodologies and the evidence base in existential risk assessments", particularly the appendix, which provides a list of estimates of the probability of extinction or related events[10].

Michael Aird (2020), “Database of ex­is­ten­tial risk es­ti­mates” (an EA Fo­rum post ac­com­pa­ny­ing the above-linked spread­sheet), ad­dresses the fact that we only have ex­tremely rough es­ti­mates of the ex­tinc­tion prob­a­bil­ity. He re­views some of the im­pli­ca­tions of this fact, and ul­ti­mately con­cludes that at­tempt­ing to con­struct such es­ti­mates is still worth­while. I think he ex­plains the rele­vant is­sues pretty well, so I won’t ad­dress this prob­lem other than to say that I ba­si­cally en­dorse Aird’s anal­y­sis.

Su­per­in­tel­li­gent AI

If we de­velop a su­per­in­tel­li­gent AI sys­tem, this could re­sult in ex­tinc­tion. Alter­na­tively, it could re­sult in such a fan­tas­ti­cally pos­i­tive out­come that any money or re­sources we have now be­come use­less. Even though a “friendly” AI does not con­sti­tute an ex­is­ten­tial threat, it could still put us in a situ­a­tion where ev­ery­one’s money loses its value, so we should in­clude this pos­si­bil­ity in the dis­count rate.

AI Impacts reviewed AI timeline surveys, in which AI experts estimated their probabilities of seeing human-level AI by a certain date. We can use these survey results to calculate the implied annual probability of artificial general intelligence, P(AGI)[11].

Let's take the 2013 FHI survey as an example. This survey gives a median estimated 10% chance of AGI by 2020 and a 50% chance by 2050. A 10% chance between 2013 and 2020 suggests an annual probability of 1.37%, and a 50% chance between 2013 and 2050 implies a 1.11% annual probability.

The 10% and 50% estimates given by each of the surveys reviewed by AI Impacts imply annual probabilities ranging from a minimum of 0.56% to a maximum of 1.78%, with a mean of 1.13% and a standard deviation of 0.32 percentage points.

Three rel­a­tively re­cent sur­veys asked par­ti­ci­pants for pre­dic­tions rather than prob­a­bil­ities, and these im­ply P(AGI) rang­ing from 0.51% to 1.78%.

Me­tac­u­lus pre­dicts that AGI has a 50% chance of emerg­ing by 2043 (with 168 pre­dic­tions), im­ply­ing a 2.97% an­nual prob­a­bil­ity of AGI.
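
That annual figure follows from assuming a constant annual probability of AGI between now and 2043; treating 2020 as the baseline year is my assumption:

```python
# 50% chance of AGI by 2043, measured from 2020: 23 years of a constant
# annual hazard p satisfying (1 - p)^23 = 0.5.
p = 1 - 0.5 ** (1 / 23)
print(f"{p:.2%} per year")  # ~2.97%
```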

A su­per­in­tel­li­gent AI could lead to an ex­tremely bad out­come (ex­tinc­tion) or an ex­tremely good one (post-scarcity), or it could land us some­where in the mid­dle, where we can still use our re­sources to im­prove the world, and there­fore money has value. Or the AI might be able to use our ac­cu­mu­lated re­sources to con­tinue pro­duc­ing value—in fact, this seems likely. So we should only treat the prob­a­bil­ity of AGI as a dis­count in­so­far as we ex­pect it to re­sult in ex­tinc­tion or post-scarcity.

What is the prob­a­bil­ity of an ex­treme out­come (good or bad)? Again, we do not have any good es­ti­mates of this. As an up­per bound, we can sim­ply as­sume a 100% chance that a su­per­in­tel­li­gent AI re­sults in an ex­treme out­come. Com­bin­ing this with the AI Im­pacts sur­vey re­view gives an es­ti­mated 1.78% an­nual prob­a­bil­ity of an ex­treme out­come due to AI, equat­ing to a 1.78% dis­count fac­tor.

As a lower bound, assume only extinction can result in extreme outcomes, and that the extreme upside (post-scarcity) cannot happen. Taking the upper end of Pamlin and Armstrong (2015)'s estimate of extinction risk from AI (up to 10% over the next century) gives a 0.1% annual probability of extinction, and thus a 0.1% annual probability of an extreme outcome due to AI. So based on these estimates, our discount factor due to AI falls somewhere between 0.1% and 2.97% (or possibly lower), and this may largely or entirely overlap with the discount factor due to extinction.

Me­tac­u­lus gives a 57% prob­a­bil­ity (with 77 pre­dic­tions) that an AGI will lead to a “pos­i­tive tran­si­tion.” Müller & Bostrom (2016)[12] sur­veyed AI ex­perts and came up with a 78% prob­a­bil­ity on a similar re­s­olu­tion. This gives us some idea of to what ex­tent the dis­count due to AGI over­laps with the dis­count due to ex­tinc­tion.

We could spend time ex­am­in­ing plau­si­ble AI sce­nar­ios and how these im­pact the dis­count rate, but I will move on for now. For more on pre­dic­tions of AI timelines (and the prob­lems thereof), see Muehlhauser (2015), What Do We Know about AI Timelines?

Eco­nomic collapse

Money could be­come use­less if the global econ­omy ex­pe­riences a catas­trophic col­lapse, even if civ­i­liza­tion ul­ti­mately re­cov­ers.

Depend­ing on the na­ture of the event, it may be pos­si­ble to guard against an eco­nomic col­lapse. For ex­am­ple, hy­per­in­fla­tion de­stroys the value of cash and bonds, but might leave stocks, gold, and real es­tate rel­a­tively un­af­fected, so in­vestors in these as­sets could still pre­serve (some of) their wealth.

We have seen some coun­tries ex­pe­rience se­vere eco­nomic tur­moil, such as Ger­many af­ter WWI and Zim­babwe in 2008, but these would not have re­sulted in com­plete loss of cap­i­tal for a highly di­ver­sified in­vestor (i.e., one who holds some gold or other real as­sets).

Almost any severe economic collapse would result in a near-total loss of resources rather than a complete loss. We should only discount future worlds where we see a complete loss, because any partial loss of capital can get rolled into the interest rate.

Pam­lin and Arm­strong (2015) in­clude catas­trophic eco­nomic col­lapse as one of their 12 risks that threaten civ­i­liza­tion, but do not provide a prob­a­bil­ity es­ti­mate.

Ex­pro­pri­a­tion and value drift

Ob­vi­ously, ex­pro­pri­a­tion and value drift are not the same thing. But over longer time pe­ri­ods, it is not always clear whether an old in­sti­tu­tion ceased to ex­ist due to out­side forces or be­cause its lead­ers lost fo­cus.

I am not aware of any de­tailed in­ves­ti­ga­tions on the rate of in­sti­tu­tional failure. Philip Tram­mell stated on the 80,000 Hours Pod­cast:

I did a cur­sory look at what seemed to me like the more rele­vant foun­da­tions and in­sti­tu­tions that were set up over the past thou­sand years or some­thing. [...] I came up with a very ten­ta­tive value drift/​ex­pro­pri­a­tion rate of half a per­cent per year for ones that were ex­plic­itly aiming to last a long time with a rel­a­tively well defined set of val­ues.

Ac­cord­ing to Sand­berg (n.d.)[13], na­tions have a 0.5% an­nual prob­a­bil­ity of ceas­ing to ex­ist. Most in­sti­tu­tions don’t last as long as na­tions, but an in­sti­tu­tion that’s de­signed to be long-last­ing might out­last its sovereign coun­try. So per­haps we could in­fer an in­sti­tu­tional failure rate of some­where around 0.5%.

Expropriation

Ac­cord­ing to Dim­son, Marsh, and Staunton’s Global In­vest­ment Re­turns Year­book 2018 (hence­forth “DMS”), from 1900 to 2018, only two ma­jor coun­tries (out of 23) ex­pe­rienced a na­tion­wide ex­pro­pri­a­tion of gov­ern­ment as­sets: Rus­sia and China (in both cases be­cause of a com­mu­nist rev­olu­tion). This gives a his­tor­i­cal an­nual 0.05% prob­a­bil­ity of ex­pro­pri­a­tion when coun­tries are weighted by mar­ket cap­i­tal­iza­tion (0.07% when coun­tries are equal-weighted).
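
The equal-weighted figure is just two events spread across the countries and years in the sample; the cap-weighted figure additionally requires each country's share of world market capitalization, which I do not reproduce here:

```python
# Two nationwide expropriations (Russia, China) across 23 countries, 1900-2018.
events = 2
country_years = 23 * 118
print(f"{events / country_years:.2%} per country per year")  # ~0.07% (equal-weighted)
```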

Both ex­pro­pri­a­tion events oc­curred in un­sta­ble coun­tries that DMS clas­sify as hav­ing been “emerg­ing” at the time (defined as hav­ing a GDP per cap­ita un­der $25,000, ad­justed for in­fla­tion). Thus, it seems in­vestors have some abil­ity to pre­dict in ad­vance whether their coun­try has a par­tic­u­larly high risk of ex­pro­pri­a­tion. We can prob­a­bly as­sume that de­vel­oped coun­tries such as the United States have an ex­pro­pri­a­tion risk of less than 0.05% be­cause no de­vel­oped-coun­try ex­pro­pri­a­tions oc­curred in DMS’s sam­ple.

Note that some other coun­tries (such as Cuba) did ex­pro­pri­ate cit­i­zens’ funds, but are not in­cluded in DMS. DMS’s sam­ple cov­ers 98% of world mar­ket cap, so the re­main­ing coun­tries mat­ter lit­tle on a cap-weighted ba­sis. Fur­ther­more, if in­vestors can pre­dict in ad­vance that they live in a high-risk coun­try, this holds dou­bly so for fron­tier mar­kets like Cuba.

So it seems the risk of na­tion­wide ex­pro­pri­a­tion in de­vel­oped coun­tries is so small that it’s a round­ing er­ror com­pared to other fac­tors like value drift.

What about the risk that your per­sonal as­sets are ex­pro­pri­ated? If gov­ern­ments only ex­pro­pri­ate as­sets from cer­tain peo­ple or in­sti­tu­tions, the risk to any par­tic­u­lar in­di­vi­d­ual is rel­a­tively small, sim­ply be­cause that in­di­vi­d­ual will prob­a­bly not be among the tar­geted group. But as these sorts of events do not ap­pear in stock mar­ket re­turns, we can­not es­ti­mate the risk based on DMS data, and the risk is harder to es­ti­mate in gen­eral. As in­di­vi­d­ual ex­pro­pri­a­tion hap­pens fairly rarely, I would ex­pect that in­vestors ex­pe­rience greater risk from na­tion­wide ex­pro­pri­a­tion. As a naive ap­proach, we could dou­ble the 0.05% figure from be­fore to get a 0.1% all-in an­nual prob­a­bil­ity of ex­pro­pri­a­tion, al­though I sus­pect this over­states the risk.

More fre­quently, gov­ern­ments seize some but not all of cit­i­zens’ as­sets, for ex­am­ple when the United States gov­ern­ment forced all cit­i­zens to sell their gold at be­low-mar­ket rates. Such events do not ex­is­ten­tially threaten one’s fi­nan­cial po­si­tion, so they should not be con­sid­ered as part of the ex­pro­pri­a­tion rate for our pur­poses.

Me­tac­u­lus pre­dicts that donor-ad­vised funds (DAFs) have a some­what higher prob­a­bil­ity of ex­pro­pri­a­tion, al­though this is based on a limited num­ber of pre­dic­tions, and it only ap­plies to philan­thropists who use DAFs.

In­vestors can pro­tect against ex­pro­pri­a­tion by domi­cil­ing their as­sets in mul­ti­ple coun­tries. Prob­a­bly the safest le­gal way to do this is to buy for­eign real es­tate, which is the most difficult as­set for gov­ern­ments to ex­pro­pri­ate. But in gen­eral, in­vestors can­not eas­ily shield their as­sets from ex­pro­pri­a­tion. In Deep Risk, William Bern­stein con­cludes that the benefits of avoid­ing ex­pro­pri­a­tion prob­a­bly do not jus­tify the costs for in­di­vi­d­ual in­vestors. The same is prob­a­bly true for philan­thropists.

Value drift

When dis­cussing value drift, we must dis­t­in­guish be­tween in­di­vi­d­u­als and in­sti­tu­tions. Both types of ac­tors must make de­ci­sions about how to use their money over time, but they ex­pe­rience sub­stan­tially differ­ent con­sid­er­a­tions. Most ob­vi­ously, in­di­vi­d­u­als can­not con­tinue donat­ing money for mul­ti­ple gen­er­a­tions.

For the pur­poses of this es­say, we care more about the in­sti­tu­tional rate of value drift:

  1. Effec­tive al­tru­ist in­sti­tu­tions have much more money. In­deed, suffi­ciently wealthy in­di­vi­d­u­als typ­i­cally cre­ate in­sti­tu­tions to man­age their money.

  2. In­so­far as in­di­vi­d­u­als have a higher value drift rate, they can miti­gate this by giv­ing their money to long-lived in­sti­tu­tions. (Although for many in­di­vi­d­u­als, most of their dona­tions will come from fu­ture in­come, and donat­ing fu­ture in­come now poses some challenges, to say the least.)

  3. In­di­vi­d­ual effec­tive al­tru­ists typ­i­cally share val­ues and goals with many other peo­ple. A sin­gle in­di­vi­d­ual ceas­ing to donate to a cause al­most never ex­is­ten­tially threat­ens the goals of that cause.

That said, I will briefly ad­dress in­di­vi­d­ual value drift. We don’t know much about it, but we have some in­for­ma­tion:

  1. According to the 2018 EA Survey, 40% of Giving What We Can pledge-signers do not report keeping up with the pledge (although this is partially due to lack of reporting).

  2. An anal­y­sis of the 2014-2018 EA Sur­veys sug­gests about a 60% 4-5 year sur­vival rate.

  3. A poll of one in­di­vi­d­ual’s con­tacts found a 45% 5-year sur­vival rate.

Each of these sources sug­gests some­thing like a 10% an­nual value drift rate. This is much higher than any other rate es­ti­mated in this es­say. On the bright side, one sur­vey found that wealthier in­di­vi­d­u­als tend to have a lower rate of value drift, which means the dol­lar-weighted value drift rate might not be quite as bad as 10%.
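
For the two survival-rate figures, a minimal sketch of the conversion to an annual rate, assuming a constant annual probability of drifting away:

```python
def annual_drift(survival_rate, years):
    """Annual value drift rate implied by an N-year survival rate,
    assuming a constant annual probability of drifting."""
    return 1 - survival_rate ** (1 / years)

print(f"{annual_drift(0.60, 4.5):.0%} per year")  # ~11% (60% survival over 4-5 years)
print(f"{annual_drift(0.45, 5):.0%} per year")    # ~15% (45% survival over 5 years)
```

Both land roughly in the 10–15% range.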

For long-lived in­sti­tu­tions, it’s hard to mea­sure the value drift rate in iso­la­tion. We can more eas­ily mea­sure the com­bined ex­pro­pri­a­tion/​value drift rate. As dis­cussed above, some pre­limi­nary ev­i­dence sug­gests a rate of about 0.5%. Fur­ther in­ves­ti­ga­tion could sub­stan­tially re­fine this es­ti­mate.

Changes in opportunities

I've saved the best for last, because changes in opportunities appear to be the most important factor in the discount rate.

First, I should note that it doesn’t re­ally make sense to model the rate of changes in op­por­tu­ni­ties as part of the dis­count rate. Fu­ture util­ity doesn’t be­come less valuable due to changes in op­por­tu­ni­ties; rather, money be­comes less (or more) effec­tive at pro­duc­ing util­ity. It might make more sense to treat changes in op­por­tu­ni­ties as part of the util­ity func­tion[14], or to cre­ate a sep­a­rate pa­ram­e­ter for it. Per­haps we can spend money on re­search to im­prove the value of fu­ture op­por­tu­ni­ties, and we could ac­count for this. Un­for­tu­nately, that would prob­a­bly mean we no longer have a closed-form solu­tion for the op­ti­mal con­sump­tion rate. So for the sake of mak­ing the math eas­ier, let’s pre­tend it makes sense to in­clude changes in op­por­tu­ni­ties within the dis­count rate, and as­sume the rate of change is fixed and we can’t do any­thing to change it. A fu­ture pro­ject can re­lax this as­sump­tion and see how it changes re­sults.

Our top causes could get bet­ter over time as we learn more about how to do good, or they could get worse as the best causes be­come fully funded. We have some rea­son to be­lieve both of these things are hap­pen­ing. Which effect is stronger?

Let’s start by look­ing at GiveWell top char­i­ties, where we have a par­tic­u­larly good (al­though nowhere near perfect) idea of how much good they do.

This table lists the most cost-effec­tive char­ity for each year ac­cord­ing to GiveWell’s es­ti­mates, in terms of cost per life-saved equiv­a­lent (CPLSE). The “real” column ad­justs each CPLSE es­ti­mate to Novem­ber 2015 dol­lars.

Year   Organization                  CPLSE (nominal)   CPLSE (real)
2012   Against Malaria Foundation    $2004             $2066
2013   Against Malaria Foundation    $3401             $3463
2014   Deworm the World              $1625             $1633
2015   Against Malaria Foundation    $1783             $1783
2016   Deworm the World              $901              $886
2017   Deworm the World              $851              $819
2018   Deworm the World              $652              $592
2019   Deworm the World              $480              $443

We can­not take these ex­pected value es­ti­mates liter­ally, but they might tell us some­thing about the di­rec­tion of change.

GiveWell does not provide cost-effectiveness estimate spreadsheets for earlier years, but its earlier estimates tended to be lower, e.g., "under $1000 per infant death averted" for VillageReach in 2009. For a time, GiveWell's estimates increased due to (according to GiveWell) excessive optimism in the earlier calculations. However, the estimates have been near-monotonically decreasing since 2013 (every year except 2014-2015). Metaculus predicts (with 117 predictions) that the 2021 real cost-effectiveness estimate will lie between the values for 2018 and 2019, suggesting a positive but small change in cost. It predicts (with 49 predictions) that GiveWell's 2031 real cost-effectiveness estimate will be $454, nearly the same as 2019, implying that Metaculus expects GiveWell's estimates to stabilize.
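
One rough way to summarize the table's direction of change is the implied average annual change in the real CPLSE figures, keeping in mind that these are estimates we cannot take literally and that much of the movement reflects revisions to GiveWell's calculations rather than genuine changes in opportunities:

```python
# Average annual change in real cost per life-saved equivalent,
# from the 2013 peak ($3463) to 2019 ($443). A naive summary only.
start, end, years = 3463, 443, 6
annual_change = (end / start) ** (1 / years) - 1
print(f"{annual_change:.0%} per year")  # roughly -29% per year over this particular period
```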

Has the in­creased cost-effec­tive­ness come from an im­prove­ment in the top char­i­ties’ pro­grams, or sim­ply from changes in es­ti­mates? I did not ex­am­ine this in de­tail, but ac­cord­ing to GiveWell’s 2018 changelog, the im­prove­ments in De­worm the World oc­curred pri­mar­ily due to a re­duc­tion in cost per child de­wormed per year. Per­haps we should clas­sify this more as an op­er­a­tional im­prove­ment than as learn­ing, but it falls in the same gen­eral cat­e­gory.

What about the value of find­ing new top char­i­ties? Ac­cord­ing to GiveWell, its cur­rent recom­mended char­i­ties are prob­a­bly more cost-effec­tive than its 2011 top recom­men­da­tion of VillageReach. Since 2014, GiveWell has not found any char­i­ties that it ranks as more cost-effec­tive than De­worm the World, but we should ex­pect some non­triv­ial prob­a­bil­ity that it finds one in the fu­ture.

Other cause ar­eas have a much weaker knowl­edge base than global poverty. Even if top global poverty char­i­ties were get­ting less cost-effec­tive over time due to limited learn­ing, I would still ex­pect us to be able to find in­ter­ven­tions in an­i­mal welfare or ex­is­ten­tial risk that work sub­stan­tially bet­ter than our cur­rent best ideas. Th­ese cause ar­eas prob­a­bly have a rel­a­tively high an­nual “learn­ing rate”, which we should sub­tract from the dis­count rate (pos­si­bly re­sult­ing in a nega­tive dis­count).

Under plausible assumptions, some cause areas could have a learning rate on the order of 10% (translating to a −10% discount), or could have a 10% rate of opportunities disappearing.

Com­bined estimate

This sec­tion sum­ma­rizes all the es­ti­mates given so far. I came up with these based on limited in­for­ma­tion, and they should not be taken as re­li­able. But this can give us a start­ing point for think­ing about the dis­count rate.

Category                     Rate
extinction                   0.001% – 0.2%
superintelligent AI          0.001% – 3%
economic collapse            ?
expropriation                0% – 0.05%
institutional value drift    0.5%
individual value drift       10%
changes in opportunities     −10% – 10%

Re­call that the es­ti­mate for su­per­in­tel­li­gent AI does not in­di­cate chance of de­vel­op­ing AI, but the chance that AI is de­vel­oped and money be­comes use­less as a re­sult.

Ad­ding these up gives an in­sti­tu­tional dis­count rate of 0.5% – 2.3%, ex­clud­ing the dis­count due to changes in op­por­tu­ni­ties. In­tro­duc­ing this ex­tra dis­count dra­mat­i­cally widens the con­fi­dence in­ter­val.

My cur­rent best guess:

  1. Philan­thropists who pri­ori­tize global poverty ex­pe­rience a slightly pos­i­tive dis­count due to changes in op­por­tu­ni­ties, and prob­a­bly ex­pect a rel­a­tively low prob­a­bil­ity of ex­tinc­tion, sug­gest­ing an all-in dis­count rate of around 0.5% – 1%.

  2. Philan­thropists who pri­ori­tize more ne­glected cause ar­eas ex­pe­rience a sub­stan­tially pos­i­tive learn­ing rate, and there­fore a nega­tive all-in dis­count rate. This sug­gests con­sump­tion should be post­poned un­til the learn­ing rate sub­stan­tially diminishes, al­though in prac­tice, there is no clear line be­tween “con­sump­tion” and “do­ing re­search to learn more about how to do good.”

Break­ing down the long-run dis­count rate

Eco­nomic nullification

Again, let’s con­sider three pos­si­ble causes of eco­nomic nul­lifi­ca­tion: ex­tinc­tion, su­per­in­tel­li­gent AI, and eco­nomic col­lapse.

Extinction

If we use a mod­er­ately high es­ti­mate for the cur­rent prob­a­bil­ity of ex­tinc­tion (say, 0.2% per year), it seems im­plau­si­ble that this prob­a­bil­ity could re­main at a similar level for thou­sands of years. A 0.2% an­nual ex­tinc­tion prob­a­bil­ity trans­lates into a 1 in 500 mil­lion chance that hu­man­ity lasts longer than 10,000 years. Hu­man­ity has already sur­vived for about 200,000 years, so on pri­ors, this tiny prob­a­bil­ity seems ex­tremely sus­pect.

Pam­lin and Arm­strong (2015)’s more mod­est es­ti­mate of 0.0013% trans­lates to a more plau­si­ble 88% chance of sur­viv­ing for 10,000 years, and a 27% chance of mak­ing it 100,000 years.
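
These survival probabilities follow directly from compounding a constant annual hazard:

```python
def survival(annual_extinction_prob, years):
    """Probability of surviving a constant annual extinction probability for a given horizon."""
    return (1 - annual_extinction_prob) ** years

print(f"{survival(0.002, 10_000):.1e}")      # ~2e-09, i.e., about 1 in 500 million
print(f"{survival(0.000013, 10_000):.0%}")   # ~88% over 10,000 years
print(f"{survival(0.000013, 100_000):.0%}")  # ~27% over 100,000 years
```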

One of these three claims must be true:

  1. The an­nual prob­a­bil­ity of ex­tinc­tion is quite low, on the or­der of 0.001% per year or less.

  2. Cur­rently, we have a rel­a­tively high prob­a­bil­ity of ex­tinc­tion, but if we sur­vive through the cur­rent cru­cial pe­riod, then this prob­a­bil­ity will dra­mat­i­cally de­crease.

  3. The cur­rent rel­a­tively high prob­a­bil­ity of ex­tinc­tion will main­tain in­definitely. There­fore, hu­man­ity is highly likely to go ex­tinct over an “evolu­tion­ary” times­pan (10,000 to 100,000 years), and all but guaran­teed not to sur­vive (some­thing like 1 in a googol chance) over a “ge­olog­i­cal” time scale (10+ mil­lion years).

In “Are we liv­ing at the most in­fluen­tial time in his­tory?” (2018), Will MacAskill offers some jus­tifi­ca­tion for (but does not strongly en­dorse) the first claim on this list. The sec­ond claim seems to rep­re­sent the most com­mon view among long-term-fo­cused effec­tive al­tru­ists.

If we ac­cept the first or sec­ond claim, this im­plies ex­is­ten­tial risk has nearly zero im­pact on the long-run dis­count rate. The third claim al­lows us to use a non­triv­ial long-term dis­count due to ex­is­ten­tial risk. I find it the least plau­si­ble of the three—not be­cause of any par­tic­u­larly good in­side-view ar­gu­ment, but be­cause it seems un­likely on pri­ors.

Su­per­in­tel­li­gent AI

With AGI, we can con­struct the same ternary choice that we did with ex­tinc­tion:

  1. We have a low an­nual prob­a­bil­ity of de­vel­op­ing AGI.

  2. The prob­a­bil­ity is cur­rently rel­a­tively high, but will de­crease over time.

  3. The prob­a­bil­ity is high and will re­main high in per­pe­tu­ity.

Again, I find the third op­tion the least plau­si­ble. Surely if we have not de­vel­oped su­per­in­tel­li­gent AI af­ter 1000 years, there must be some fun­da­men­tal bar­rier pre­vent­ing us from build­ing it. In this case, I find the first op­tion im­plau­si­ble as well. Based on what we know about AI, it seems the prob­a­bil­ity that we de­velop it in the near fu­ture must be high (for our pur­poses, a 0.1% an­nual prob­a­bil­ity qual­ifies as high). The Open Philan­thropy Pro­ject agrees with this view, claiming “a non­triv­ial like­li­hood (at least 10% with mod­er­ate ro­bust­ness, and at least 1% with high ro­bust­ness) that trans­for­ma­tive AI will be de­vel­oped within the next 20 years.”

If we ac­cept one of the first two claims, then we should use a low long-run dis­count rate due to the pos­si­bil­ity of de­vel­op­ing su­per­in­tel­li­gent AI.

Eco­nomic collapse

Un­like in the pre­vi­ous cases, I find it at least some­what plau­si­ble that the prob­a­bil­ity of catas­trophic eco­nomic col­lapse could re­main high in per­pe­tu­ity. Over the past sev­eral thou­sand years, many parts of the world have ex­pe­rienced pe­ri­ods of ex­treme tur­moil where most in­vestors lost all of their as­sets. Although in­vestors to­day can more eas­ily di­ver­sify globally across many as­sets, this in­creased global­iza­tion plau­si­bly also in­creases the prob­a­bil­ity of a wor­ld­wide col­lapse.

Un­like ex­tinc­tion, and prob­a­bly un­like the de­vel­op­ment of AGI, a global eco­nomic col­lapse could be a re­peat­able event. If civ­i­liza­tion as we know it ends but hu­man­ity sur­vives, we could slowly re­build so­ciety and even­tu­ally re-es­tab­lish an in­ter­con­nected global econ­omy. And if we can es­tab­lish a global econ­omy for a sec­ond time, it can prob­a­bly also col­lapse for a sec­ond time. Per­haps civ­i­liza­tion could ex­pe­rience 10,000-year long “mega cy­cles” of tech­nolog­i­cal de­vel­op­ment, global­iza­tion, and col­lapse.

This is not to say I am con­fi­dent that the fu­ture will look like this. I merely find it some­what plau­si­ble.

Let’s say we be­lieve with 10% prob­a­bil­ity that the fu­ture will ex­pe­rience a catas­trophic eco­nomic col­lapse on av­er­age once ev­ery 10,000 years. This trans­lates into a 0.001% an­nual prob­a­bil­ity of eco­nomic col­lapse. This prob­a­bly mat­ters more than the long-run prob­a­bil­ity of ex­tinc­tion or AGI, but is still so small as to not be worth con­sid­er­ing for our pur­poses.

Ex­pro­pri­a­tion and value drift

Based on his­tor­i­cal ev­i­dence, it ap­pears that in­sti­tu­tions’ abil­ity to pre­serve them­selves or their val­ues fol­lows some­thing like an ex­po­nen­tial dis­tri­bu­tion: as we look back fur­ther in time, we see dra­mat­i­cally fewer in­sti­tu­tions from that time that still ex­ist to­day. Thus, it seems plau­si­ble that the rate of value drift could re­main sub­stan­tially greater than zero in the long run.

Ex­pro­pri­a­tion/​value drift might not fol­low an ex­po­nen­tial curve—we know ex­tremely lit­tle about this. An ex­po­nen­tial dis­tri­bu­tion seems plau­si­ble on pri­ors, but it also seems plau­si­ble that the rate could de­crease over time as in­sti­tu­tions learn more about how to pre­serve them­selves. Similarly, or­ga­ni­za­tions that avoid value drift will tend to gain power over time rel­a­tive to those that don’t. On this ba­sis, we might ex­pect the value drift rate to de­cline over time as value-sta­ble in­sti­tu­tions gain an in­creas­ing share of the global mar­ket.

Changes in opportunities

In the long run, the learn­ing rate must ap­proach 0. There must be some best ac­tion to take, and we can never do bet­ter than that best ac­tion. Over time, we will gain in­creas­ing con­fi­dence in our abil­ity to iden­tify that best ac­tion. Either we even­tu­ally con­verge on the best ac­tion, or we hit some up­per limit on how much it’s pos­si­ble to learn. Either way, the learn­ing rate must ap­proach 0.

We can also expect giving opportunities to get worse over time as the best opportunities become fully funded. The utility of donations might asymptote toward the utility of general consumption—that is, in the long run, you might not be able to do more good by donating money than you can by spending it on yourself. Or new opportunities might continue to emerge, and might even get better over time. It seems conceivable that they could continue getting better in perpetuity, although I'm not sure how that would work. But in any case, the available opportunities cannot get worse in perpetuity. Money might have less marginal utility in the future as people become better off, but the Ramsey model already accounts for this in the η parameter—for example, η = 1 indicates logarithmic utility of money, which means exponentially growing people's wealth only linearly increases utility.
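
As a quick illustration of that last point: under logarithmic utility, every tenfold increase in wealth adds the same fixed amount of utility.

```python
import math

# ln(10 * w) = ln(w) + ln(10): each tenfold increase in wealth adds ~2.3 utils.
for wealth in [1e3, 1e4, 1e5, 1e6]:
    print(f"wealth {wealth:>10,.0f} -> log utility {math.log(wealth):.2f}")
```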

Com­bined estimate

In sum­mary:

  • The out­side view sug­gests a low long-run ex­tinc­tion rate.

  • It’s hard to say any­thing of sub­stance about the long-run rate of eco­nomic col­lapse or ex­pro­pri­a­tion/​value drift.

  • It seems the rate of changes in op­por­tu­ni­ties must ap­proach 0.

It seems plau­si­ble that value drift is the largest fac­tor in the long run, which per­haps sug­gests a 0.5% long-run dis­count rate if we as­sume 0.5% value drift. But this es­ti­mate seems much weaker than the (already-weak) ap­prox­i­ma­tion for the cur­rent dis­count rate.

Can we change the dis­count rate?

So far, we have as­sumed we can­not change the dis­count rate. But the cause of ex­is­ten­tial risk re­duc­tion fo­cuses on re­duc­ing the dis­count rate by de­creas­ing the prob­a­bil­ity of ex­tinc­tion. Pre­sum­ably we could also re­duce the ex­pro­pri­a­tion and value drift rates if we in­vested sub­stan­tial effort into do­ing so.

The sig­nifi­cance of re­duc­ing value drift

Effec­tive al­tru­ists in­vest sub­stan­tial effort in re­duc­ing ex­is­ten­tial risk (al­though, ar­guably, so­ciety at large does not in­vest nearly enough). But we know al­most noth­ing about how to re­duce value drift. Some re­search has been done on value drift among in­di­vi­d­u­als in the effec­tive al­tru­ism com­mu­nity, but it’s highly pre­limi­nary, and I am not aware of any com­pa­rable re­search on in­sti­tu­tional value drift.

Ar­guably, ex­is­ten­tial risk mat­ters a lot more than value drift. Even in the ab­sence of any philan­thropic in­ter­ven­tion, peo­ple gen­er­ally try to make life bet­ter for them­selves. If hu­man­ity does not go ex­tinct, a philan­thropist’s val­ues might even­tu­ally ac­tu­al­ize, de­pend­ing on their val­ues and on the di­rec­tion hu­man­ity takes.

Un­der most (but not all) plau­si­ble value sys­tems and be­liefs about the fu­ture di­rec­tion of hu­man­ity, ex­is­ten­tial risk looks more im­por­tant than value drift. The ex­tent to which it looks more im­por­tant de­pends on how much bet­ter one ex­pects the fu­ture world to be (con­di­tional on non-ex­tinc­tion) with philan­thropic in­ter­ven­tion than with its de­fault tra­jec­tory.

A sam­pling of some be­liefs that could af­fect how much one cares about value drift:

  1. If eco­nomic growth con­tinues as it has but we do not see any trans­for­ma­tive events (such as de­vel­op­ment of su­per­in­tel­li­gent AI), global poverty will prob­a­bly dis­ap­pear in the next few cen­turies, if not sooner.

  2. Even if hu­man­ity erad­i­cates global poverty, we might con­tinue dis­valu­ing non-hu­man an­i­mals’ well-be­ing and sub­ject­ing them to great un­nec­es­sary suffer­ing. Philan­thropic efforts in the near term could sub­stan­tially al­ter this tra­jec­tory.

  3. Some peo­ple, par­tic­u­larly peo­ple in­ter­ested in AI safety, be­lieve that if we avoid ex­tinc­tion, we will al­most cer­tainly de­velop a friendly AI which will carry all sen­tient life into par­adise. If that’s true, we re­ally only care about pre­vent­ing ex­tinc­tion, and par­tic­u­larly about en­sur­ing we don’t make an un­friendly AI.

  4. It might be crit­i­cally im­por­tant to do a cer­tain amount of AI safety re­search be­fore AGI emerges, and this re­search might not hap­pen with­out sup­port from effec­tive al­tru­ist donors.

Beliefs #1 and #3 im­ply rel­a­tively less con­cern about value drift (com­pared to ex­tinc­tion), while #2 and #4 im­ply rel­a­tively more.

Note that even if you ex­pect good out­comes to be re­al­ized in the long run, you still care about how value drift im­pacts philan­thropists’ abil­ity to do good in the next few decades or cen­turies.

I do not think it is obvious that reducing the probability of extinction does more good per dollar than reducing the value drift rate, which naively suggests the effective altruist community should invest relatively more in reducing value drift. But I find it plausible that, upon further analysis, it would become clear that existential risk matters much more.

Aside: I spent some time con­struct­ing an ex­plicit quan­ti­ta­tive model of the sig­nifi­cance of value drift ver­sus ex­is­ten­tial risk. I will not re­pro­duce the model here, but it bore out the in­tu­ition that the ra­tio (im­por­tance of value drift):(im­por­tance of ex­tinc­tion risk) is ba­si­cally pro­por­tional to the ra­tio (welfare of fu­ture wor­lds by de­fault):(welfare of fu­ture wor­lds with philan­thropic in­ter­ven­tion), with some con­sid­er­a­tion given to the prob­a­bil­ities of ex­tinc­tion and value drift.

Re­duc­ing risk by cre­at­ing mul­ti­ple funds

Un­like self-in­ter­ested in­vestors, philan­thropists don’t just care about how much money they have. They also care about the as­sets of other value-al­igned peo­ple. This al­lows philan­thropists to pro­tect against cer­tain risks in ways self-in­ter­ested in­vestors can­not.

To miti­gate ex­pro­pri­a­tion risk, differ­ent value-al­igned philan­thropists can in­vest their as­sets in differ­ent coun­tries. To some ex­tent, this already hap­pens au­to­mat­i­cally: if Alice lives in France and Bob lives in Aus­tralia, and they share the same val­ues, they already nat­u­rally split their as­sets be­tween the two coun­tries. If, say, France un­der­goes a com­mu­nist rev­olu­tion and na­tion­al­izes all cit­i­zens’ as­sets, Bob still has his port­fo­lio, so Alice and Bob have only lost half the money they care about. If enough value-al­igned philan­thropists ex­ist across many coun­tries, to­tal ex­pro­pri­a­tion can prob­a­bly only oc­cur in the case of an eco­nomic nul­lifi­ca­tion-like event, such as the for­ma­tion of a one-world com­mu­nist gov­ern­ment.

The same ap­plies to value drift. If a set of philan­thropic in­vestors share val­ues but one mem­ber of the group be­comes more self­ish over time, only a small por­tion of the col­lec­tive al­tru­is­tic port­fo­lio has been lost. It seems to me that the prob­a­bil­ity of value drift is mostly in­de­pen­dent across in­di­vi­d­u­als, al­though I can think of some ex­cep­tions (e.g., if ties weaken within the effec­tive al­tru­ism com­mu­nity, this could in­crease the over­all rate of value drift). There­fore, the prob­a­bil­ity of to­tal value drift rapidly de­creases as the num­ber of philan­thropists in­creases. But there’s still the pos­si­bil­ity that the EA com­mu­nity as a whole could ex­pe­rience value drift.

We should con­sider the spe­cial case where as­set own­er­ship is fat tailed—that is, a small num­ber of al­tru­ists con­trol al­most all the wealth. In prac­tice, wealth does fol­low a fat-tailed dis­tri­bu­tion, with the Open Philan­thropy Pro­ject con­trol­ling a ma­jor­ity of (ex­plic­itly) effec­tive al­tru­ist as­sets, and large donors con­sti­tut­ing a much big­ger frac­tion of the pie than small donors[15]. As­set con­cen­tra­tion sub­stan­tially in­creases the dam­age caused by ex­pro­pri­a­tion or value drift. The larger philan­thropists can miti­gate this by giv­ing their money to smaller ac­tors, effec­tively di­ver­sify­ing against value drift/​ex­pro­pri­a­tion risk. Although gifts of this sort are tech­ni­cally fea­si­ble and do oc­cur in small por­tions, large philan­thropists rarely (if ever) dis­tribute the ma­jor­ity of their as­sets to other value-al­igned ac­tors for the pur­pose of re­duc­ing con­cen­tra­tion risk. I would guess they do not dis­tribute their funds pri­mar­ily be­cause (1) large philan­thropists do not trust oth­ers to per­sis­tently share their val­ues, (2) they do not trust oth­ers to do a good job iden­ti­fy­ing the best giv­ing op­por­tu­ni­ties, and (3) they do not take con­cen­tra­tion risk par­tic­u­larly se­ri­ously. At the least, large philan­thropists should take con­cen­tra­tion risk more se­ri­ously, al­though I do not know what to do about the other two points.

If large philan­thropists do want to spread out their money, it makes sense that they should take care to en­sure they only give it to com­pe­tent, value-al­igned as­so­ci­ates.

Alter­na­tively, in­sti­tu­tions can di­ver­sify by spin­ning off sep­a­rate or­ga­ni­za­tions. This avoids the com­pe­tence and value-al­ign­ment prob­lems be­cause they can form the new or­ga­ni­za­tions with ex­ist­ing staff mem­bers, but it in­tro­duces a new set of com­pli­ca­tions.

Ob­serve that even when as­sets are dis­tributed across mul­ti­ple funds, ex­pro­pri­a­tion and value drift still re­duce the ex­pected rate of re­turn on in­vest­ments in a way that look­ing at his­tor­i­cal mar­ket re­turns does not ac­count for. This is a good trade—de­creas­ing the dis­count rate and de­creas­ing the in­vest­ment rate by the same amount prob­a­bly in­creases util­ity in most situ­a­tions—but it isn’t as good as elimi­nat­ing the risks en­tirely.

Re­lat­edly, wealthy in­di­vi­d­u­als of­ten cre­ate foun­da­tions to man­age their dona­tions, which (among other benefits) re­duces value drift by pro­vid­ing checks on dona­tion de­ci­sions (by in­volv­ing paid staff in the de­ci­sions, or by psy­cholog­i­cally re­in­forc­ing com­mit­ment to al­tru­is­tic be­hav­ior). Con­vert­ing wealthy-in­di­vi­d­ual money into foun­da­tion money prob­a­bly works ex­tremely well at de­creas­ing the value drift rate, and for­tu­nately, it’s already com­mon prac­tice.

What about in­di­vi­d­ual value drift?

As we saw, the ex­ist­ing (limited) ev­i­dence sug­gests about a 10% value drift rate among in­di­vi­d­ual effec­tive al­tru­ists. When in­di­vi­d­u­als stop donat­ing, this does not con­sti­tute a com­plete loss of cap­i­tal be­cause other value-al­igned al­tru­ists can con­tinue to provide fund­ing; but it does hurt the effec­tive in­vest­ment rate of re­turn.

Imag­ine if philan­thropists could in­vest in an as­set with 10 per­centage points higher re­turn than the mar­ket (at the same level of risk). That would rep­re­sent a phe­nom­e­nal op­por­tu­nity. But that’s ex­actly what we can get by re­duc­ing the value drift rate. We can’t get the in­di­vi­d­ual value drift rate all the way down to 0%, but it’s so high right now that we could prob­a­bly find a lot of im­pact­ful ways to re­duce it. Re­duc­ing this rate from 10% to 5% might re­quire less effort than re­duc­ing the prob­a­bil­ity of ex­tinc­tion from (say) 0.2% to 0.19%. Th­ese num­bers are not based on any mean­ingful anal­y­sis, but they seem plau­si­ble given the ex­treme ne­glect­ed­ness of this cause area.
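
A minimal sketch of why that drag is so large, assuming value drift acts like a constant annual loss rate on the pool of committed altruistic capital (the 30-year horizon and 5% market return are my own illustrative assumptions):

```python
# Fraction of altruistically committed capital remaining after 30 years,
# and the expected return net of value drift.
market_return = 0.05
for drift in [0.10, 0.05, 0.00]:
    remaining = (1 - drift) ** 30
    net_return = (1 + market_return) * (1 - drift) - 1
    print(f"drift {drift:.0%}: {remaining:.1%} of committed capital remains, "
          f"net return {net_return:+.1%} per year")
```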

Marisa Jur­czyk offers some sug­ges­tions on fu­ture re­search that could help re­duce in­di­vi­d­ual value drift.

Sig­nifi­cance of mis-es­ti­mat­ing the dis­count rate

As Weitz­man (2001)[16] wrote, “the choice of an ap­pro­pri­ate dis­count rate is one of the most crit­i­cal prob­lems in all of eco­nomics.” Chang­ing the es­ti­mated dis­count rate sub­stan­tially changes the im­plied op­ti­mal be­hav­ior.

Some might ar­gue that we sim­ply can­not es­ti­mate the dis­count rate, and it re­mains fun­da­men­tally un­know­able. While I agree that we have no idea what dis­count rate to use, I do not be­lieve we should equiv­o­cate be­tween (1) the rad­i­cally un­cer­tain state of knowl­edge if we don’t think about the dis­count rate at all, (2) the highly un­cer­tain state of knowl­edge if we think about it a lit­tle bit, and (3) what our state of knowl­edge could be if we in­vested much more in es­ti­mat­ing the dis­count rate. Philan­thropists’ be­hav­ior nec­es­sar­ily en­tails some (im­plicit) dis­count rate; it is bet­ter to use a poor es­ti­mate than no es­ti­mate at all.

Aird (2020), “Database of ex­is­ten­tial risk es­ti­mates”, ar­gues for the im­por­tance of bet­ter es­ti­mat­ing the prob­a­bil­ity of ex­tinc­tion. Our es­ti­mates for value drift and changes in op­por­tu­ni­ties ap­pear even rougher than for ex­tinc­tion, so work­ing on im­prov­ing these might be eas­ier and there­fore more cost-effec­tive.

Some eco­nomic liter­a­ture ex­ists on es­ti­mat­ing the dis­count rate (such as Weitz­man (2001)[16:1], Nord­haus (2007)[17], and Stern (2007)[18]), but philan­thropists do not always dis­count for the same rea­sons as self-in­ter­ested ac­tors, so for our pur­poses, these es­ti­mates provide limited value.

How much should we value marginal re­search on es­ti­mat­ing the philan­thropic dis­count rate?

Ex­tended Ram­sey model with es­ti­mated dis­count rate

In­tu­itively, it seems that mis-es­ti­mat­ing the dis­count rate could re­sult in sub­stan­tially wrong de­ci­sions about how much to spend vs. save, and this could mat­ter a lot. Some quan­ti­ta­tive anal­y­sis with a sim­ple model sup­ports this in­tu­ition.

In the in­tro­duc­tion, I pre­sented the Ram­sey model as a sim­ple the­o­ret­i­cal ap­proach for de­ter­min­ing how to spend re­sources over time. Let’s re­turn to this model. Ad­di­tion­ally, let’s as­sume we ex­pe­rience log­a­r­ith­mic util­ity of con­sump­tion, be­cause do­ing so pro­duces the sim­plest pos­si­ble for­mula for the con­sump­tion sched­ule.

An actor maximizes utility by following this consumption schedule[3:2]:

c(t) = δ · x(t),  with  x(t) = x(0) e^((r − δ)t)

δ gives the proportion of assets to be consumed each period[19], and x(t) tells us the size of the portfolio at time t (recall that r is the investment rate of return). According to the chosen set of assumptions, the optimal consumption rate exactly equals the discount rate.

Suppose a philanthropist attempts to follow this optimal consumption schedule. Suppose they estimate the discount rate as δ̂, which might differ from the true δ. In that case, the philanthropist's total long-run utility is given by

U(δ̂) = ∫₀^∞ e^(−δt) ln(δ̂ x(0) e^((r − δ̂)t)) dt = ln(δ̂ x(0)) / δ + (r − δ̂) / δ²

To see how quickly utility increases as we move δ̂ closer to δ, we should look at the derivative of utility with respect to δ̂:

dU/dδ̂ = 1 / (δ δ̂) − 1 / δ²

What does this mean, ex­actly?

Suppose we have a choice between (1) moving δ̂ closer to δ or (2) improving how effectively we use money by changing our utility function from u(c) to u(b·c) for some increasing "impact factor" b. When should we prefer (1) over (2)?

We should prefer improving δ̂ whenever utility increases faster by decreasing |δ̂ − δ| than by increasing b, that is, whenever |dU/dδ̂| > |dU/db| for some particular values of δ, δ̂, and b (using absolute values because we only care about the magnitude of change, not the direction).

The formula for dU/dδ̂ is hard to comprehend intuitively. But if we plug in some values for δ, δ̂, and b, we see that |dU/dδ̂| exceeds |dU/db| for most reasonable inputs. For example, with plausible values of δ, a mis-estimate of 0.3 percentage points makes |dU/dδ̂| orders of magnitude larger than |dU/db|, and even a much closer estimate still favors improving δ̂. Therefore, according to this model, improving δ̂ looks highly effective.
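
A minimal numerical sketch of that comparison, using the log-utility formulas above with illustrative values of my own choosing (true δ = 0.3%, estimated δ̂ = 0.6%, b = 1), which are not numbers taken from elsewhere in this essay:

```python
# Sensitivity of total utility to the estimated discount rate vs. the impact factor,
# using dU/d(delta_hat) = 1/(delta * delta_hat) - 1/delta**2 and, with u(b*c) = ln(b*c),
# dU/db = 1/(b * delta).
delta, delta_hat, b = 0.003, 0.006, 1.0

dU_ddelta_hat = 1 / (delta * delta_hat) - 1 / delta**2
dU_db = 1 / (b * delta)

print(f"|dU/d(delta_hat)| = {abs(dU_ddelta_hat):,.0f}")  # ~55,556
print(f"|dU/db|           = {abs(dU_db):,.0f}")          # ~333
```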

We also care about the rate at which we can improve δ̂ and b. Presumably, moving δ̂ closer to δ becomes something like exponentially more difficult over time—we could model this process as |δ̂ − δ| ∝ e^(−kE), where E is effort spent researching the correct discount rate and k is some constant. Then we need a function for the difficulty of increasing the impact factor b, perhaps one of a similar form.

Ultimately, we would need a much more complicated formulation to somewhat-accurately model our ability to improve the discount rate, and we cannot draw strong conclusions from the basic Ramsey model. But in our simple model, |dU/dδ̂| is much larger than |dU/db| for reasonable parameters, which does at least hint that improving our estimate of the discount rate—and adjusting our spending schedules accordingly—could be a highly effective way of increasing utility, especially given the weakness of our current estimates, and how much low-hanging fruit probably still exists. This preliminary result seems to justify spending a substantially larger fraction of altruistic resources on estimating δ.

A plan for a (slightly) more re­al­is­tic model

The model in the previous section assumes that a philanthropist can choose between saving and consumption at each moment in time, and can also spend out of an entirely separate budget to improve δ̂. This makes the optimization problem easier, but doesn't really make sense.

Under a more realistic model, the philanthropist can choose between three options: (1) saving, (2) consumption, and (3) improving δ̂. That is, research on estimating the discount rate comes out of the same budget as general consumption.

Under this model, the philanthropist wishes to maximize

U = ∫₀^∞ e^(−δt) u(c(t)) dt

with the constraint that c(t) cannot be a function of δ; it can only be a function of δ̂. Additionally, we can define a function δ̂(Y(t)) giving the best estimate of δ as a function of Y(t), where Y(t) gives cumulative spending on determining δ up to time t.

Solv­ing this prob­lem re­quires stronger calcu­lus skills than I pos­sess, so I will leave it as an open ques­tion for fu­ture re­search.

Some other use­ful model ex­ten­sions:

  • Allow the philan­thropist to in­vest in risky as­sets. As a start­ing point, see Levhari and Srini­vasan (1969), Op­ti­mal Sav­ings Un­der Uncer­tainty.

  • Make the discount rate a function of resources spent on reducing it (such as via x-risk research). That is, treat $\delta$ as $\delta(x)$, where x denotes cumulative spending on reducing the discount rate.

Weitz­man-Gol­lier puzzle

Ac­cord­ing to Gol­lier and Weitz­man (2010), in the face of un­cer­tainty about the dis­count rate, “[t]he long run dis­count rate de­clines over time to­ward its low­est pos­si­ble value.” There ex­ists some dis­agree­ment in the eco­nomic liter­a­ture as to whether the dis­count rate should trend to­ward its low­est or its high­est pos­si­ble value. This dis­agree­ment is known as the Weitz­man-Gol­lier puz­zle (WGP). I have not stud­ied this dis­agree­ment well enough to have an in­formed opinion, but Greaves (2017)[4:2] claims “there is a wide­spread con­sen­sus” that “some­thing like” the low­est pos­si­ble long-run dis­count rate should be used.

How much we care about this puzzle for the purposes of this essay depends on how we interpret long-term discount rates. If current consumption is only a function of the current discount rate, then the WGP doesn't matter. If instead we believe that the long-run rate affects how much we should consume today, then Weitzman-Gollier becomes relevant. I already argued that we should expect the discount rate to decline over time (e.g., as extinction risk decreases and institutions become more robust), so the Weitzman-Gollier result provides an additional argument in favor of treating the long-run discount rate as declining.

Some ar­gu­ments against pri­ori­tiz­ing im­prov­ing the dis­count rate estimate

Argument from long-term convergence: Over a sufficiently long time horizon, our estimate will presumably converge on the true discount rate, even if we don't invest much in figuring it out. At that time, and in perpetuity after that, we can follow the optimal spending rate. If we prioritize figuring out $\delta$ now, that only helps us between now and the time when we would have figured it out anyway. (But on the other hand, improving our estimate in the short term could still increase utility by a lot.)

Argument from intuitive meaningfulness: Improving our estimate of the discount rate feels somehow less meaningful than actively reducing the discount rate (e.g., by reducing the risk of extinction). In some sense, by improving our estimate, we aren't really doing anything. Obviously we do increase expected utility by better spreading our spending over time, but this doesn't feel like the same sort of benefit as improving the effectiveness of our spending, or expanding the community to increase the pool of donations. Even if the Ramsey model supports improving $\hat{\delta}$ as possibly the most effective intervention, this model entails a lot of assumptions, so we should pay attention to intuitions that contradict the model.

Argument from model uncertainty: Causes like global poverty prevention look good across many models and even many value systems (although we don't really know if global poverty prevention is even net positive). Under the Ramsey model, improving $\hat{\delta}$ still looks good across a lot of value systems (it benefits you to improve the spending schedule no matter what utility function you use), but we don't know if it holds up in non-Ramsey-like models. Furthermore, it's a new idea that has not been subjected to much scrutiny.

Ar­gu­ment from mar­ket effi­ciency: Ac­cord­ing to the effi­cient mar­ket hy­poth­e­sis (EMH), the cor­rect dis­count rate should be em­bed­ded in mar­ket prices. Mar­ket forces don’t always ap­ply to philan­thropic ac­tors, but it seems plau­si­ble that some­thing like a weaker ver­sion of EMH might still hold. Thus, we might ex­pect the “philan­thropic mar­ket” to ba­si­cally cor­rectly de­ter­mine the dis­count rate, even if no in­di­vi­d­ual ac­tor has high con­fi­dence in their par­tic­u­lar es­ti­mate. On the other hand, in prac­tice, the philan­thropic mar­ket ap­pears far less effi­cient than the for-profit sec­tor (or else the effec­tive al­tru­ist ap­proach would be much more pop­u­lar!).

Ap­ply­ing the im­por­tance/​tractabil­ity/​ne­glect­ed­ness framework

Let’s qual­i­ta­tively con­sider im­prov­ing the dis­count rate and see how it fits in the im­por­tance/​tractabil­ity/​ne­glect­ed­ness frame­work.

Importance

If we use philan­thropic re­sources slightly too slowly, we lose out on the benefits of this marginal con­sump­tion, and con­tinue los­ing out ev­ery year in per­pe­tu­ity (or at least un­til we cor­rect our es­ti­mate of the dis­count rate).

If we use re­sources too quickly, this eats into po­ten­tial in­vest­ment re­turns, de­creas­ing the size of our fu­ture port­fo­lio and ham­string­ing philan­thropists’ abil­ity to do good in the fu­ture.

Un­der the Ram­sey model, slightly re­fin­ing the dis­count rate es­ti­mate greatly in­creases util­ity. But the pre­vi­ous sec­tion does provide some ar­gu­ments against the im­por­tance of a cor­rect dis­count rate.

Improving our estimate of the discount rate only matters in situations where we provide all the funding for a cause, or where we can coordinate with all (or most) other funders. If we only control a small portion of funds and other funders do not follow optimal consumption, then we simply want to bring overall spending closer to the optimal rate, which requires us to consume either all or none of our resources. In this situation, we do not need a precise estimate of the discount rate; we only need to know whether other funders use a discount rate that's too low or too high. But we do care about the exact rate in smaller causes (probably including existential risk, and possibly farm animal welfare) where we can coordinate with other donors.

Tractability

Es­ti­mat­ing the dis­count rate ap­pears much eas­ier than, say, end­ing global poverty. I can eas­ily come up with sev­eral ways we could im­prove our es­ti­mate:

  • Bet­ter sur­veys or stud­ies on the prob­a­bil­ity of ex­tinc­tion, or bet­ter at­tempts to syn­the­size an es­ti­mate out of ex­ist­ing sur­veys.

  • Research on historical movements, to learn more about why they failed or succeeded.

  • The­o­ret­i­cal re­search on how philan­thropists should con­sume as a func­tion of the dis­count rate.

  • The­o­ret­i­cal re­search on how to break down the dis­count rate.

This sug­gests we could sub­stan­tially im­prove our es­ti­mate with rel­a­tively lit­tle effort.

Neglectedness

Some aca­demic liter­a­ture ex­ists on es­ti­mat­ing the dis­count rate, al­though much of this liter­a­ture doesn’t en­tirely ap­ply to effec­tive al­tru­ists. Within EA, I am only aware of one prior at­tempt to es­ti­mate the dis­count rate (from Tram­mell[5:2]), and this was only given as a rough guideline. Even within academia, one could fairly de­scribe this area of re­search as ne­glected; within EA, it has barely even been men­tioned. The sheer ne­glect­ed­ness of this is­sue sug­gests that even a tiny amount of effort could sub­stan­tially im­prove our es­ti­mate.

All things con­sid­ered, it seems likely to me that the effec­tive al­tru­ism com­mu­nity sub­stan­tially un­der-in­vests in try­ing to de­ter­mine the cor­rect dis­count rate, but the sim­ple ex­ten­sion to the Ram­sey model per­haps over­states the case.

Conclusion

In this es­say, I have re­viewed a num­ber of philan­thropic op­por­tu­ni­ties that, ac­cord­ing to the sim­plis­tic Ram­sey model, could sub­stan­tially im­prove the world. Some of these are already widely dis­cussed in the EA com­mu­nity, oth­ers re­ceive a lit­tle at­ten­tion, and some are barely known at all. Th­ese op­por­tu­ni­ties in­clude:

  1. Re­duc­ing ex­is­ten­tial risk.

  2. Re­duc­ing in­di­vi­d­ual value drift.

  3. Im­prov­ing the abil­ity of in­di­vi­d­u­als to del­e­gate their in­come to value-sta­ble in­sti­tu­tions.

  4. Mak­ing ex­pro­pri­a­tion and value drift less threat­en­ing by spread­ing al­tru­is­tic funds more evenly across ac­tors and coun­tries.

  5. Re­duc­ing the in­sti­tu­tional value drift/​ex­pro­pri­a­tion rate.

  6. More ac­cu­rately es­ti­mat­ing the dis­count rate in or­der to know how best to use re­sources over time.

Be­fore writ­ing this es­say, I cre­ated some ba­sic mod­els of the cost-effec­tive­ness of each of these. The mod­els are suffi­ciently com­pli­cated, and provide suffi­ciently lit­tle ex­plana­tory value, that I will not pre­sent them here. Suffice it to say the mod­els sug­gest that #6—im­prov­ing the es­ti­mate of the dis­count rate—does the most good per dol­lar spent. Ob­vi­ously this heav­ily de­pends on model as­sump­tions (and my mod­els made a lot of as­sump­tions). The take­away is that, based on what we cur­rently know, any of these six op­por­tu­ni­ties could plau­si­bly rep­re­sent the best effec­tive al­tru­ist cause right now.

Let’s briefly ad­dress each of these op­por­tu­ni­ties.

Ex­is­ten­tial risk already re­ceives much at­ten­tion in the EA com­mu­nity, so I have lit­tle to add.

A few EAs have writ­ten about in­di­vi­d­ual value drift, most no­tably Marisa Jur­czyk, who also pro­vided some qual­i­ta­tive sug­ges­tions for how to re­duce value drift. But, as Jur­czyk noted, “[t]he study of EAs’ ex­pe­riences with value drift is rather ne­glected, so fur­ther re­search is likely to be highly im­pact­ful and benefi­cial for the com­mu­nity.”

If in­di­vi­d­u­als want to del­e­gate their dona­tions to in­sti­tu­tions, they run into the prob­lem that most of their dona­tions come from fu­ture in­come, and they can­not move this in­come from the fu­ture to the pre­sent. Donors have a few op­tions for “lev­er­ag­ing” dona­tions, but none of them look par­tic­u­larly fea­si­ble. If we iden­ti­fied bet­ter ways to help in­di­vi­d­u­als del­e­gate their fu­ture dona­tions, that could provide a lot of value.

To my knowl­edge, the idea of spread­ing al­tru­is­tic funds has never been mean­ingfully dis­cussed. It poses sub­stan­tial challenges in prac­tice, and I can see why in­sti­tu­tions gen­er­ally don’t want to do it. But I do think this idea has po­ten­tial if we can figure out how to make it work.

Many types of in­sti­tu­tions, not just effec­tive al­tru­ists, should care about re­duc­ing the in­sti­tu­tional value drift/​ex­pro­pri­a­tion rate. It’s pos­si­ble that there already ex­ists liter­a­ture on this sub­ject, al­though I’m not aware of any. More re­search in this area could prove highly valuable.

I dis­cussed im­prov­ing our es­ti­mate of the dis­count rate in the pre­vi­ous sec­tion. Ac­cord­ing to my pre­limi­nary in­ves­ti­ga­tion, this could be a highly im­pact­ful area of re­search.

This table pro­vides my (ex­tremely) rough guesses as to the im­por­tance, tractabil­ity, and ne­glect­ed­ness of these cause ar­eas rel­a­tive to each other. When I say, for ex­am­ple, that I be­lieve ex­is­ten­tial risk has low ne­glect­ed­ness, that’s rel­a­tive to the other causes on this list, not in gen­eral. (Ex­is­ten­tial risk is highly ne­glected com­pared to, say, de­vel­oped-world ed­u­ca­tion.)

| Cause area | Importance | Tractability | Neglectedness |
| --- | --- | --- | --- |
| existential risk | high | low | low |
| individual value drift | low | medium | medium |
| delegating individuals' donations | low | medium | medium |
| spreading altruistic funds | medium | high | high |
| institutional value drift/expropriation | medium | medium | medium |
| estimating discount rate | medium | high | high |

(While re­vis­ing this es­say, I ba­si­cally com­pletely re-did this table twice. My opinion might com­pletely change again by next week. So don’t treat these as well-in­formed guesses.)

Fi­nally, ques­tions that merit fu­ture in­ves­ti­ga­tion:

  1. What im­pli­ca­tions do we get if we change var­i­ous model as­sump­tions?

  2. How does the dis­count rate for effec­tive al­tru­ists com­pare to the more tra­di­tional so­cial dis­count rate, and what is the sig­nifi­cance of this com­par­i­son? What do we get if we at­tempt to de­rive our dis­count rate from the so­cial dis­count rate?

  3. How should we de­rive op­ti­mal con­sump­tion from the cur­rent and long-term dis­count rates?

  4. What coefficient of relative risk aversion and what investment rate of return (r) should be used? Should we expect them to change in the long run?

  5. Why do effec­tive al­tru­ist or­ga­ni­za­tions re­port such high dis­count rates?

Liter­a­ture already ex­ists on some of these, e.g., Hakans­son (1970)[20] on mod­ify­ing the Ram­sey model to al­low for risky in­vest­ments. Fu­ture work could re­view some of this liter­a­ture and draw im­pli­ca­tions for effec­tive al­tru­ists’ be­hav­ior.

Thanks to Mindy McTeigue and Philip Tram­mell for pro­vid­ing feed­back on this es­say.

Ap­pendix: Proof that spend­ing should de­crease as the dis­count rate decreases

In the basic Ramsey model, the discount factor (call it D(t)) is given by $D(t) = e^{-\delta t}$. If we generalize the discount factor and allow it to obey any function, we can rewrite total utility as

$$\int_0^\infty D(t)\, u(c(t))\, dt.$$
Let $\delta(t) = -D'(t)/D(t)$ be the discount rate implied by the discount factor. (Observe that when $D(t) = e^{-\delta t}$, this gives $\delta(t) = \delta$.) We want the discount rate to decline with time. Many possible functions could give a declining discount rate; for the sake of illustration, we can use one in which the discount rate gradually decreases over time to a minimum value, with a scale parameter determining how rapidly it decreases. The corresponding discount factor is similar to the "Gamma discount" used by Weitzman (2001)[16:2][21]; one concrete example is given below.
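Specifically, writing $\delta_{\min}$ for the long-run minimum and $\beta$ for the scale parameter (this particular functional form is my illustrative choice):

$$\delta(t) = \delta_{\min} + \frac{\beta}{t}, \qquad D(t) \propto t^{-\beta}\, e^{-\delta_{\min} t},$$

so the discount rate falls toward $\delta_{\min}$ as t grows, and $\beta$ controls how much extra discounting applies at early times.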

Un­der this dis­count rate, the op­ti­mal con­sump­tion rate de­clines over time. We can prove this by fol­low­ing the same proof steps as Tram­mell[5:3], but us­ing a differ­ent dis­count fac­tor.

Trammell defines y(t) as "the resources allocated at time 0 for investment until, followed by spending at, t." He observes that utility is maximized when the derivative of discounted utility with respect to y(t) equals some constant k, and then solves for y(t). We can follow the same steps with a generalized time-dependent discount factor: observing that the y(t) must sum to the initial capital allows us to solve for k, and plugging in the declining-rate discount factor from above and solving the integral gives a closed-form expression involving the Gamma function. Plugging this back into the formula for y(t), and observing that spending at time t equals $y(t)e^{rt}$, gives optimal consumption c(t) up to a constant factor. (The algebra is sketched below.)

Comparing optimal consumption under the variable-discount model with optimal consumption under the fixed-discount model, the variable-discount consumption path picks up an additional factor that decays over time, so (at least for t > 1) it grows more slowly than the fixed-discount path. The fixed-discount case has a constant consumption rate, so the variable-discount case must have a decreasing consumption rate.
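Here is the algebra, assuming logarithmic utility and the illustrative discount factor $D(t) = t^{-\beta} e^{-\delta_{\min} t}$ from above, with initial capital $X_0$ (again my notation). The problem is to

$$\text{maximize } \int_0^\infty D(t)\,\ln\!\big(y(t)\,e^{rt}\big)\,dt \quad \text{subject to} \quad \int_0^\infty y(t)\,dt = X_0.$$

Setting the derivative of the integrand with respect to y(t) equal to a constant k gives $D(t)/y(t) = k$, so $y(t) = D(t)/k$. The budget constraint then gives

$$k = \frac{1}{X_0}\int_0^\infty t^{-\beta} e^{-\delta_{\min} t}\,dt = \frac{\Gamma(1-\beta)}{X_0\,\delta_{\min}^{\,1-\beta}} \qquad (0 < \beta < 1),$$

so that

$$c(t) = y(t)\,e^{rt} = \frac{X_0\,\delta_{\min}^{\,1-\beta}}{\Gamma(1-\beta)}\; t^{-\beta}\, e^{(r-\delta_{\min})t} \;\propto\; t^{-\beta}\, e^{(r-\delta_{\min})t}.$$

In the fixed-discount case ($\beta = 0$), the same steps give $c(t) \propto e^{(r-\delta)t}$; the extra factor $t^{-\beta}$ is what makes consumption grow more slowly under the declining discount rate.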

Some brief ob­ser­va­tions about this vari­able-dis­count model:

  1. When the scale parameter is zero, it behaves identically to the fixed-discount case with $\delta$ equal to the minimum discount rate.

  2. Like the fixed-dis­count model, when , the model sug­gests we should save in­definitely and never con­sume. This con­di­tion does not de­pend on t—that is, this model will never recom­mend con­sum­ing for a while and then ceas­ing con­sump­tion once the dis­count rate drops be­low a cer­tain level.

  3. Op­ti­mal con­sump­tion at time 0 is not defined be­cause .

  4. Know­ing op­ti­mal con­sump­tion does not tell us the op­ti­mal con­sump­tion rate. I do not be­lieve the op­ti­mal con­sump­tion rate has a closed-form solu­tion.

  5. The op­ti­mal con­sump­tion sched­ule de­pends on what one con­sid­ers the “start time”, and one’s be­liefs about op­ti­mal con­sump­tion can be in­con­sis­tent across time. Loewen­stein and Pr­elec (1992)[22] dis­cuss this and other re­lated is­sues. How­ever, this prob­lem does not se­ri­ously af­fect the model as I have por­trayed it[23].

Notes


  1. The Ram­sey model also de­pends on two other pa­ram­e­ters: the in­ter­est rate and the elas­tic­ity of marginal util­ity of con­sump­tion. Those pa­ram­e­ters are be­yond the scope of this es­say. ↩︎

  2. I won't go into detail, but we have good theoretical reasons to expect most actors to spend impatiently, so for most causes, we plausibly want to invest all our money because other actors already over-spend according to our values. See Trammell[5:4] for more. ↩︎

  3. Ram­sey (1928). A Math­e­mat­i­cal The­ory of Sav­ing. ↩︎ ↩︎ ↩︎

  4. Greaves (2017). Dis­count­ing for pub­lic policy: A sur­vey. ↩︎ ↩︎ ↩︎

  5. Tram­mell (2020). Dis­count­ing for Pa­tient Philan­thropists. Work­ing pa­per (un­pub­lished). Ac­cessed 2020-06-17. ↩︎ ↩︎ ↩︎ ↩︎ ↩︎

  6. See Mul­lins (2018), Ret­ro­spec­tive Anal­y­sis of Long-Term Fore­casts. This re­port found that “[a]ll fore­cast method­olo­gies provide more ac­cu­rate pre­dic­tions than un­in­formed guesses.” ↩︎

  7. In fact, if we do pri­ori­tize re­duc­ing ex­is­ten­tial risk, the model as pre­sented in this es­say does not work, be­cause the dis­count rate due to ex­tinc­tion is no longer a con­stant. ↩︎

  8. The re­port gave point prob­a­bil­ity es­ti­mates for all causes other than AI. But for AI, it gave a prob­a­bil­ity range, be­cause “Ar­tifi­cial In­tel­li­gence is the global risk where least is known” (p. 164). ↩︎

  9. I calcu­lated these sum­mary statis­tics with­out re­gard to the qual­ity of the in­di­vi­d­ual pre­dic­tions. Two of the in­di­vi­d­ual pre­dic­tions pro­vided lower bounds, not point pre­dic­tions, but I treated them as point pre­dic­tions any­way. ↩︎

  10. Note that the pro­vided hy­per­link goes to a work­ing ver­sion of the pa­per, be­cause as far as I can tell, the fi­nal pa­per is not available for free on­line. ↩︎

  11. Some peo­ple dis­t­in­guish be­tween su­per­in­tel­li­gent AI and AGI, where the lat­ter merely has hu­man-level in­tel­li­gence, not su­per­hu­man-level. For sim­plic­ity, I treat the two terms as in­ter­change­able. ↩︎

  12. Müller & Bostrom (2016). Fu­ture Progress in Ar­tifi­cial In­tel­li­gence: A Sur­vey of Ex­pert Opinion. ↩︎

  13. Sand­berg (n.d.). Every­thing is tran­si­tory, for suffi­ciently large val­ues of “tran­si­tory.” ↩︎

  14. Op­por­tu­ni­ties get­ting worse with in­creased spend­ing is ac­counted for by the con­cav­ity of the util­ity func­tion. But it might make sense to only in­clude EA spend­ing in the util­ity func­tion, and treat other par­ties’ spend­ing as a sep­a­rate pa­ram­e­ter. ↩︎

  15. Wealth in general is fat-tailed, but it appears even more fat-tailed in EA, where the single largest donor controls more than half the wealth. As of this writing, the richest person in the world controls "only" 0.03% of global wealth ($113 billion out of $361 trillion). ↩︎

  16. Weitz­man (2001). Gamma Dis­count­ing. ↩︎ ↩︎ ↩︎

  17. Nord­haus (2007). The Challenge of Global Warm­ing: Eco­nomic Models and En­vi­ron­men­tal Policy. ↩︎

  18. Stern Re­view (2007). The Eco­nomics of Cli­mate Change. ↩︎

  19. Tech­ni­cally this is a con­tin­u­ous model so there are no dis­crete pe­ri­ods, but you know what I mean. ↩︎

  20. Hakans­son (1970). Op­ti­mal In­vest­ment and Con­sump­tion Strate­gies Un­der Risk for a Class of Utility Func­tions. ↩︎

  21. A proper dis­count fac­tor should rep­re­sent a prob­a­bil­ity dis­tri­bu­tion, which means it should have D(0) = 1 and should in­te­grate to 1; but these de­tails don’t mat­ter for the pur­poses of this proof. ↩︎

  22. Loewen­stein and Pr­elec (1992). Ano­ma­lies in In­tertem­po­ral Choice: Ev­i­dence and an In­ter­pre­ta­tion. ↩︎

  23. The tra­di­tional prob­lem of hy­per­bolic dis­count­ing is that it causes one’s prefer­ences to change over time, even if no in­for­ma­tion changes. For ex­am­ple, given the choice be­tween re­ceiv­ing $100 in six months’ time and $120 in seven months, peo­ple tend to choose the lat­ter. But if you wait six months and then ask them if they’d rather re­ceive $100 now or $120 in a month, they gen­er­ally choose the former, even though fun­da­men­tally this is the ex­act same choice.

    The model un­der dis­cus­sion in this es­say does not suffer from this prob­lem. In tra­di­tional hy­per­bolic dis­count­ing, dis­count rates de­cline as a func­tion of their dis­tance from the pre­sent. But in this model, dis­count rates de­cline as a re­sult of changes in facts about re­al­ity, in­de­pen­dent of the time of con­sid­er­a­tion. That is, al­though dis­count rates de­crease hy­per­bol­i­cally, ac­tors at differ­ent points in time agree on the value of the dis­count rate at any par­tic­u­lar time, be­cause that dis­count rate is a func­tion of the ex­tinc­tion/​ex­pro­pri­a­tion/​value drift risk, not of pure time prefer­ence. ↩︎