Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift

This post is motivated by Joey’s post on ‘Empirical data on value drift’ and some of the comments.

Introduction

“And Harry remembered what Professor Quirrell had said beneath the starlight: Sometimes, when this flawed world seems unusually hateful, I wonder whether there might be some other place, far away, where I should have been…

And Harry couldn’t understand Professor Quirrell’s words, it might have been an alien that had spoken, (...) something built along such different lines from Harry that his brain couldn’t be forced to operate in that mode. You couldn’t leave your home planet while it still contained a place like Azkaban. You had to stay and fight.”

Harry Potter and the Methods of Rationality

I use the terms value drift and lifestyle drift in a broad sense to mean internal or external changes leading you to lose most of the expected altruistic value of your life.

  • Value drift is internal; it describes changes to your value system or motivation.

  • Lifestyle drift is external; the term captures changes in your life circumstances leading to difficulties in implementing your values.

Internally, value drift could occur by ceasing to see helping others as one of your life’s priorities (losing the ‘A’ in EA), or losing the motivation to work on the highest-priority cause areas or interventions (losing the ‘E’ in EA). Externally, lifestyle drift could occur (as described in Joey’s post) by giving up a substantial fraction of your effectively altruistic resources for non-effectively altruistic purposes, thus reducing your capacity to do good. Concretely, this could involve deciding to spend a lot of money on buying a (larger) house, having a (fancier) wedding, traveling around the world (more frequently or expensively), etc.

Of course, changing your cause area or intervention to something that is equally or more effective within the EA framework does not count as value drift. Note that even if your future self were to decide to leave the EA community, as long as you still see ‘helping others effectively’ as one of your top priorities in life, it might not constitute value drift. You don’t need to call yourself an EA to have a large impact. But I am convinced that EA as a community helps many members uphold their motivation for doing the most good.

Why this is important for altruists

There is a difference between the potential altruistic value and the expected altruistic value you may achieve over the course of your lifetime. Risks of value or lifestyle drift may make you lose most of the expected altruistic value of your life, thus preventing you from realizing a large fraction of your potential altruistic value.

Most of the potential altruistic value of EAs lies in the medium to long term, when more and more people in the community take up highly effective career paths and build the professional expertise to reach their ‘peak productivity’ (likely in their 40s). However, if value and lifestyle drift are common, most of an EA’s expected altruistic value lies in the short to medium term, because many of the people currently active in the community will cease to be interested in doing the most good long before they reach their peak productivity.

This is why, speaking for myself, losing my altruistic motivation or giving up a large fraction of my altruistic resources in the future would amount to a small moral tragedy for my present self. I think that as EAs we can reasonably have a preference for our future selves not to abandon our fundamental commitment to altruism or effectiveness.

What you can do to reduce risks of value drift and lifestyle drift:

Caveat: the following suggestions are all very tentative and largely based on my intuition of what I think will help me avoid value drift; please take them with a large grain of salt. I acknowledge that other people function differently in some respects, that some of the suggestions below will not have beneficial effects for many people, and that some could even be harmful. Also keep in mind that some of the suggestions might involve trade-offs with other goals. A toy example to illustrate the point: it might turn out that getting an EA tattoo is a great commitment mechanism; however, it could conflict with the goal (among others) of spending your limited weirdness points wisely, and might have negative effects on how EA is perceived by people around you. Please reflect carefully on your personal situation before adopting any of the following.

  • Beware of falling prey to cognitive biases when thinking about value drift: You probably systematically underestimate a) the likelihood of changing significantly in the future (i.e. the end-of-history illusion) and b) the role that social dynamics play in your motivation. There is a danger in believing both that your fundamental values will not change or that you have control over how they will change, and that your mind works radically differently from other people’s (e.g. the atypical mind fallacy or bias blind spot) – for instance, that your motivation is grounded more in rational arguments and less in social dynamics than other people’s motivations are. In particular, beware of base rate neglect when thinking that the risk of value drift occurring to you personally is very low; Joey’s post provides a very rough base rate for orientation.

  • Surround yourself with value-aligned people: There is a saying that you become the average of the five people closest to you. Therefore, surround yourself with people who motivate and inspire you in your altruistic pursuits. From this perspective, it seems especially beneficial to spend time with other EAs to keep up and regain your motivation; though ‘value-aligned’ people don’t have to be EAs, of course. However, beware of groupthink and of surrounding yourself only with people who are very similar to you; as a community we should retain our ability to take the outside view and engage critically with community trends and ideas. If you decide you want to spend more time with value-aligned people or other EAs, here are some concrete ways: making an effort to have regular social interactions with them (e.g. meeting for lunch, dinner or coffee), engaging in or starting your own local EA chapter, attending EA Global conferences or retreats, becoming friends with EAs, completing internships at EA-aligned organisations, getting in touch with value-aligned people and other EAs online and chatting or skyping to exchange ideas, sharing a flat, etc. Avoiding value drift might also increase the importance you should place on living in an EA hub, such as the Bay Area, London, Oxford or Berlin, or other places with a supportive community.

  • Discount the expected value of your longer-term altruistic plans by the probability that they will never be realised due to value or lifestyle drift (see Joey’s post for a very rough base rate). This consideration might lead you to place relatively more weight on achieving near-term impact or on reducing risks of value drift. However, a counter-consideration is that your future self will have more skills, knowledge and resources to do good, which could make capacity building in the near term extremely valuable. Attempt to balance these considerations – the risk of value drift tomorrow against the risk of underinvesting in building your capacity today. (A toy calculation after this list illustrates how strongly such discounting can bite.)

  • Make reducing risks of value and lifestyle drift a top altruistic priority: Think about whether you agree that most of the potential social impact of your life lies several years or decades in the future. If yes, then thinking about risks of value drift in your own life, and implementing concrete steps to reduce them, is likely to be among the highest expected value activities for you in the short term. I expect that learning more about the causes of value drift on the individual level has a high moral value of information, since it makes it easier to anticipate and avoid future life circumstances that contribute to it. Joey’s post indicates that value drift occurs for various reasons, many of which seem to be circumstantial rather than stemming from disagreement with fundamental EA principles (e.g. moving to a new city without a supportive EA community, transitioning from university to the workforce, finding a non-EA partner and investing heavily in the relationship, marrying, having kids, etc.).

  • Think about what your priorities are in life: There are many different ways to lead a happy and fulfilling life. A subset of those ways revolve around altruism, and a subset of these count as effectively altruistic. Be careful not to sacrifice your long-term happiness to short-term altruistic goals – being unhappy with your way of life, even one that is doing a ton of good in the short term, is a reliable way to lose your motivation and pivot over time. There are ways to live a very happy and fulfilled life that is also dedicated to EA principles.

  • Confront yourself with your major motivational sources regularly: This is related to the above point. For example, talk to other EAs about what motivates you and them, reread your preferred book by your favourite moral philosopher, watch motivating talks or read motivating articles (a quick shout-out for Nate Soares’ ‘On Caring’), or revisit whatever increased your motivation to become an EA in the first place. In addition, consider writing a list of personalised motivational affirmations for yourself that you read regularly or when feeling low and unmotivated. When considering (re-)watching emotionally salient videos (e.g. slaughterhouse videos), please bear in mind that this can have traumatic effects on some people and might thus be counterproductive.

  • Send your future self letters: describing a) your altruistic motivation, b) wishes for how you should live your life in the years to come, and including c) concrete resources (e.g. the new EA Handbook) to re-learn and potentially regain motivation. Consider adding d) a list of ways in which your present self would accept value changes, to prevent your future self from rationalising value drift after the fact (e.g. value changes resulting from your future self being better informed, say, about moral philosophy, and overall more rational – as opposed to purely circumstantial value drift).

  • Conduct (semi-)annual reviews and planning: By evaluating how your life is going according to your own priorities, goals and values, you can know whether you are still on track to achieving them or whether you should make changes to the status quo.

  • Really make bodily and mental health a priority: This is particularly important for the EA community, which is focused on (self-)optimization and where some people might be tempted in the short run to work very hard and long hours, reduce sleep, neglect nutrition and exercise, and do other things that are neither healthy nor sustainable in the long run. Experiment with and implement practices in your life to reduce the chance of a future (mental) health breakdown, which would a) be very bad in itself, b) radically limit your ability to do good in the short term, and c) could cause a reshuffling of your priorities or act as a Schelling point for your future self to disengage from EA. Julia Wise offers great advice on self-care and burnout prevention for EAs.

  • Make doing good enjoyable: This is related to the above point on mental health. By finding ways to make engaging in altruistic behaviour enjoyable, you create a positive emotional association with the activity, which should help you keep up the commitment in the long run. On the flipside, be careful when engaging in altruistic activities that you have (strong) negative associations with. Julia Wise writes: “effective altruism is not about driving yourself to a breakdown. We don’t need people making sacrifices that leave them drained and miserable. We need people who can walk cheerfully over the world”. A further advantage of finding ways to combine effective altruism with ‘having fun’ or ‘being cheerful’ is that it will likely make EA much more attractive to others. Concretely, you might want to try the following: many activities are more fun in a group than alone, so engage in altruistic endeavours together with others if possible. Attempt to associate EA in your life not just with work, but also with socialising, friendship and fun. Make sure not to overwork yourself, and keep in mind that “the important lesson of working a lot is to be comfortable with taking a break” (from Peter Hurford’s ‘How I Am Productive’).

  • Do good directly: You might want to consider keeping habits of doing good directly, even in cases where these are not top-priority do-gooding activities by themselves. I believe this can help keep up and increase your internal motivation to engage in altruistic activities, as well as cultivate a sense of ‘being an altruistic person’. For example, you could live veg*an, live frugally, donate some amount of money every year (even if the sums are small) and keep up to date with cause area and charity recommendations when making your donation decisions. However, as a counter to this point, I have met someone who argued that spending willpower on low-impact activities might lead to ego depletion (note that this effect is disputed) or compassion fatigue for some people, thereby decreasing their motivation to engage in high-impact behaviour. Regarding career choice, you might see reducing risks of value drift as one reason to place a higher weight on direct work or research within an EA-aligned organisation, relative to other options such as earning to give or building career capital.

  • Consider ‘locking in’ part of your donation or career plans: While the flexibility to change your plans and retaining future option value are important considerations, in some cases making hard-to-reverse decisions could be beneficial for avoiding value drift. Application for career planning: be wary of building very general career capital for a long time, “particularly if the built capacity is broad and leaves open appealing non-altruist paths”, as Joey writes. Instead, you might consider specialising and building more narrow, EA-focused career capital (which is endorsed by 80,000 Hours for people focusing on top-priority paths anyway). However, in this article Ben Todd discusses some counterarguments to locking in your career decisions too early. Application for donations: consider putting your donations in a donor-advised fund instead of a savings account, and potentially take a donation pledge (see the point below). Joey writes: “that way even if you become less altruistic in the future, you can’t back out on the pledged donations and spend it on a fancier wedding or a bigger house”.

  • Consider taking the Giving What We Can pledge: For me, the ‘lock-in’ aspect of the pledge as a commitment device was among the strongest reasons to take it. It is worth pointing out, though, that taking the pledge could have downsides for some people (e.g. losing flexibility and falling prey to the overjustification effect; for details, read Michael Dickens’ post).

  • Commit yourself publicly: This is another form of ‘lock-in’. For example, you could participate in an EA group, write articles describing EA and your motivation to dedicate your life to doing the most good, post about this on social media, talk to other people about EA, be public about your EA career and donation plans, wear EA t-shirts, etc. The idea behind this is to engineer peer pressure for your future self, and a potential loss of social status that could come with abandoning EA principles; I believe this works (subconsciously) for many as a motivational driving force to stay engaged. For this strategy to work, what matters seems to be what you think your peers think of you, rather than what they actually think of you. Having said that, I encourage fostering a social norm among EAs not to shame or blame others when value drift occurs to them, in line with the overall recommendation for EAs to be especially nice and considerate.

  • Regularly engage with EA content: Have habits in place to regularly engage with content of some form that helps you keep up your motivation or increases your knowledge of how to do the most good. For example, try subscribing to EA newsletters or RSS feeds (e.g. the EA Newsletter, 80,000 Hours, Animal Charity Evaluators, the Open Philanthropy Project, GiveWell, the EA Forum feed), listening to EA and rationalist podcasts (e.g. the 80,000 Hours podcast, Rationally Speaking), reading EA Forum articles, befriending and following other (effective) altruists on Facebook, reading EA or rationality blogs (e.g. see this list of EA blogs, LessWrong, Slate Star Codex), reading utopian fiction, etc.

  • Relationships: For those looking for a partner, I endorse the recommendation of generally just choosing whoever makes you happiest. For most people this includes finding partners who share their values anyway. It is worth pointing out that avoiding value drift might give you an additional reason to place some weight on finding partners who share your values and wouldn’t pressure you in the long term to give up your altruistic commitments, or make it much harder to implement them. Concretely, you might consider looking for partners via platforms that allow you to share a lot about yourself and don’t match you with people with opposing values (e.g. OkCupid).

  • Apply findings of behavioural science research: I suspect that there are relevant insights from the research on nudging and on successful habit creation and retention (e.g. see these articles: one & two) that can be applied to help you avoid long-term value drift. One way to use nudges to make yourself engage in a desired altruistic behaviour is to make that behaviour the default option. For instance, you might set up automated, recurring donations (i.e. donating as the default option) or, Joey writes, “ask your employer to automatically donate a pre-set portion of your income to charity before you even see it in your bank account”. As another example, by working for an EA-aligned organisation you can make high-impact direct work or research your default option.
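
To make the earlier suggestion of discounting long-term plans concrete, here is a minimal sketch of the arithmetic in Python. The constant, independent annual drift risk and the 10% figure are purely illustrative assumptions on my part, not estimates drawn from Joey’s data.

```python
# Toy model (hypothetical numbers): discount future altruistic value by the
# probability that value drift has occurred before that value is realised.

def drift_adjusted_value(yearly_value: float, years: int,
                         annual_drift_risk: float) -> float:
    """Sum each year's value, weighted by the probability of still being
    committed at that point (assuming a constant, independent drift risk
    per year)."""
    total = 0.0
    survival = 1.0  # probability of not having drifted yet
    for _ in range(years):
        survival *= 1.0 - annual_drift_risk
        total += survival * yearly_value
    return total

# With an illustrative 10% annual drift risk, 20 years of work producing one
# 'unit' of value per year yield roughly 7.9 units instead of 20, and the
# final year is weighted by 0.9**20, i.e. about 0.12.
print(drift_adjusted_value(yearly_value=1.0, years=20, annual_drift_risk=0.10))
```

The specific numbers matter less than the shape: under a constant drift risk, value far in the future shrinks geometrically, which is why reducing that risk can itself be a high expected value activity.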

What EA organisations can do to deal with value and lifestyle drift:

  • Encourage norms of considerateness, friendliness and welcomingness within the EA community; this is beneficial in its own right, but also helps keep the motivation levels of community members high.

  • Conduct further research on the causes of value and lifestyle drift and how to avoid them. An obvious starting point is researching the EA ‘reference class’, i.e. looking at the value drift experiences of other social movements. I acknowledge that many EA organisations have already spent significant effort on similar research projects (e.g. the Open Philanthropy Project, Sentience Institute). In particular, there might be ways for Rethink Charity to expand the EA Survey to gather more rigorous data on value drift (selection effects are obviously problematic – the people whose values drifted the most will likely not participate in the survey).

  • Continue to support and expand opportunities for community members to surround themselves with other great people, e.g. by organising EAG(x) conferences and EA retreats, supporting local chapters and creating friendly and welcoming online communities (such as this forum or EA Facebook groups).

  • Incorporate the findings of research on value drift into EA career advice, especially when recommending careers whose value will only be realized decades in the future. Rob Wiblin has already indicated that 80,000 Hours is considering incorporating this into their discussion of discount rates.

I would highly appreciate your suggestions in the comments for concrete ways to reduce risks of value drift.

I warmly thank the following people for providing input, suggestions and comments on this post: Joey Savoie, Pascal Zimmer, Greg Lewis, Jasper Götting, Aidan Goth, James Aung, Ed Lawrence, Linh Chi Nguyen, Huw Thomas, Tillman Schenk, Alex Norman, Charlie Rogers-Smith.

[Edit, May 2019: Updated my definition of value and lifestyle drift above and added a section on why I believe this topic ought to be a priority for altruists.]