Empirical data on value drift
Why It's Important to Know the Risk of Value Drift
Value drift is the idea that, over time, people become less motivated to do altruistic things. It should not be confused with changing cause areas or methods of doing good. There is strong precedent for drift in related areas, both ethical commitments (such as being vegetarian) and behaviors that take sustained willpower (such as staying a healthy weight).
Value drift seems very likely to be a concern for many EAs, and if it is a major concern, it should substantially affect career and donation plans.
For example, if value drift rarely happens, putting money into a savings account with the intent of donating it might be basically as good as putting it into a donor-advised fund. However, if the risk of value drift is higher, a dollar in a savings account is more likely to later be used for non-altruistic reasons, and thus not nearly as good as a dollar put into a donor-advised fund, where it's very hard not to donate it to a registered charity.
In a career context, a plan such as building career capital for 8 years and then moving into an altruistic job looks much better if value drift is rare than if it is common. The more common value drift is, the stronger near-term focused impact plans are relative to longer-term focused impact plans. For example, you might get an entry-level position at a charity and build capacity by getting work experience. This can be slower at building your CV than getting a degree or working in a low-impact but high-prestige field, but it has impact right away, which matters more if the risk of value drift is high.
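To make the tradeoff concrete, here is a minimal sketch (my illustration, not from the post itself; the 3x payoff and 10% annual drift risk are made-up numbers) of how an assumed constant annual drift risk discounts a delayed-impact plan:

```python
# Illustrative only: expected impact of a delayed plan, discounted by the
# probability of still being altruistically motivated when the payoff arrives.
def expected_delayed_impact(impact_if_followed_through, years, annual_drift_risk):
    p_still_altruistic = (1 - annual_drift_risk) ** years
    return impact_if_followed_through * p_still_altruistic

direct_now = 1.0  # baseline: start direct work today
delayed = expected_delayed_impact(3.0, years=8, annual_drift_risk=0.10)
print(delayed)  # ~1.29: a 3x payoff in 8 years barely beats acting now
```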
The Data
Despite its relevance to these important questions, value drift has rarely been discussed or studied. One reason it is so under-studied is that getting good data takes a long time.
I have been in the EA movement for ~5 years. I decided to pool some data from contacts who I met in my first year of EA. I only included people who would have called themselves EAs for 6 months or longer (I would not include someone who was only into EA for a month and then disappeared), and who took some sort of EA action (working for an EA org, taking the GWWC pledge, running an EA group). I also only included people who I knew and kept in touch with well enough to know what happened to them (even if they left the EA movement). It is ultimately a convenience sample, but it was based on working for 4 current EA orgs and living in 4 different countries over that time, so it's not focused on a single location or organization.
I also broke the groups down into ~10% donors and ~50% donors, because I have often heard people being more or less concerned about one of these groups vs the other. These broad groups are not just focused on people doing earning to give. Someone who is working heavy hours for an EA organization and making most of their life decisions with EA as their number one priority would be counted in the 50% group. Someone running an EA chapter who makes decisions with EA as a factor, but prioritizes other factors above it, would be put in the 10% group. The percentages are rough proxies for how important EA is in these people's lives, not strictly financial donations. I did not count changing cause areas as value drift (e.g. changing from donating 10% to MIRI to AMF), only different levels of overall altruistic involvement.
The results over 5 years are as follows:
16 people were ~50% donors → 9/16 stayed around 50%
22 people were ~10% donors → 8/22 stayed around 10%
No one moved from the 10% category to the 50% category, and I only counted fairly noticeable changes (if someone changed their donations from 50% to 40%, I would not have the resolution to notice).
Value drift was high across both groups, with roughly 50% of the population drifting over 5 years. I talked to many of those people about value drift and their thoughts on long-term altruism, and most of them, like most people I talk to now, had previously been very confident that they would stay altruistic in the long term. The reasons people value drifted were a mix, with no clear consistent source, although life changes were a large factor for many (e.g. moving from university to the workforce, changing cities or workplaces, marrying, or having kids).
Interestingly, this data also sheds a little light on concerns about "pushing yourself too hard" vs "taking it too easy on yourself", with generally more involved or dedicated people value drifting noticeably less (~30% vs ~60%).
Discussion
Overall, these results seem pretty scary to me, especially since there's a natural selection effect where I tend to make friends who are more dedicated and drift apart from the ones who leave the movement. It's also worth noting that there have not been particularly major controversies or problems in the EA movement that would cause more value drift over this period of time than any other. Historically, many EAs have been young non-family-starters, so we've arguably been seeing a period of artificially low value drift that is not sustainable as the movement gets older and goes through standard life changes.
Of course, the data could be of much better quality, and I wish it were measured in a more rigorous way. I would be keen to see any more data that anyone else has along these lines. Despite quality concerns, I still think we can draw some conclusions from it, particularly given that people are already effectively drawing conclusions about the likelihood of value drift from no data at all. I also spoke to a few EAs who have been around the movement a while, and this data broadly fit their intuitions, which gives me more confidence that it's not 100% off the mark.
The implications of this data are that people should be cautious of deferring impact to later, and should set up commitment devices to help them stick to what they care about.
One example: be wary of building capacity for very long periods of time, particularly if the built capacity is broad and leaves open appealing non-altruistic paths. Instead, see if you can build capacity in a way that also does good in the moment, such as getting work experience with non-profits.
For instance, if you want to do direct impact, volunteering for an organization and demonstrating the quality of your work is often better than having a degree, especially an unrelated one. It's also substantially faster and does good directly. Degrees are largely a way to signal that you're a hard worker with a decent amount of intelligence, and if all you are is a CV in somebody's inbox, that signal is very important. However, if you've been working alongside them for months, they'll already know these traits of yours.
Another way to build career capital with value drift in mind is to get experience and credentials that make it harder to work in a non-altruistic area. This could be getting a degree in development economics instead of economics generally, or working for prestigious nonprofits instead of other prestigious organizations. Option value is great if the risk of value drift is low, but if it's high, option value makes it easier for you to slip. It's like only having healthy food in the house: if the only easy options are also altruistic, you're much more likely to stick it out for the long haul.
If your primary path to impact is donations and you want to keep value drift in mind, but you don't know where you want to give yet, don't save those donations. Put them into a donor-advised fund. That way, even if you become less altruistic in the future, you can't back out of the committed donations and spend the money on a fancier wedding or a bigger house. You can also set up monthly donations, or ask your employer to automatically donate a preset portion of your income to charity before you even see it in your bank account.
Overall, if 50% of the EAs I met 5 years ago have value drifted, this should factor into your plans. Nobody thinks they'll value drift, just like no teenager with a fast metabolism thinks they'll be the one who gains weight by middle age. By all means, indulge in junk food every once in a while and don't constantly stress about calories, but put some time into setting up your life to make it easier to reach for a banana instead of the ice cream, or, in this case, the altruistic path instead of the less altruistic one.
For a deeper dive into concrete ways to reduce value drift, check out this post.
The reference class I've always used when casually thinking about something like "value drift" is the original CEA team from 2011.
Here's my summary of the public information relevant to their "EA dedication" today (please do comment with additional relevant public info):
Will MacAskill - if anyone is the face of EA, it's probably Will
Toby Ord - FHI research fellow
Nick Beckstead - OPP program officer (GCRs)
Michelle Hutchinson - GPI operations director
Holly Morgan - EA London strategy director
Mark Lee - unknown
Tom Ash - Rethink Charity board member
Matt Wage - I think is still donating ~50%
Ben Todd - 80k CEO
Tom Rowlands - unknown
Niel Bowerman - 80k coach
Robbie Shade - the homepage of his website (last updated in 2017 AFAICT) simply says, "I'm Robbie Shade, and I work at Google. I'm interested in effective altruism, and maximising the good I can do through my career."
Matt Gibb - unknown
Richard Batty - 80k researcher as of Oct 2017, but doesn't appear to be currently
Sally Murray - International Growth Centre senior country economist
Rob Gledhill - unknown
Andreas Mogensen - GPI research fellow
If I had to sum that up I'd say: ~75% of the CEA founding team (n=17) are still highly dedicated to doing the most good, 6.5 years on.
If early involvement and higher involvement/dedication are correlated (which I suspect they are), this data fits well with the following observation:
The CEA founding team seems like the absolute best case for value drift, because to found CEA one must have a much higher baseline inclination towards EA than the average person. They also probably have a lot of power, which helps them control their environment, while many EAs would be forced into non-EA lifestyles by factors beyond their control. So a 25% drift rate in the original CEA team feels scarier to me than 40-70% among average EAs.
I'm not so convinced of this. I think the framing of "this was the founding team" was a little misleading: in 2011 all of us were volunteers and students. The bar was closer to doing ~5 hours a week of volunteering for EA for ~1 year. Obviously students are typically in a uniquely good position for having time to volunteer. But it's not clear all the people on this list had uniquely large amounts of power. Also, I think situational effects were still strong: I felt it made a huge difference to what I did that I made a few friends who were very altruistic and had good ideas of how to put that into practice. I don't think we can assume that all of us on this list would have displayed similarly strong effective altruist inclinations without having met others in the group.
I think that's basically right, though I also have the intuition that drift from the very early days will be higher, since at that point it was undecided what EA even was, and everyone was new and somewhat flung together.
This is really helpful, thanks.
It's interesting to note that it's now two years later, and I don't think the picture above has really changed.
So the measured marginal drift rate is ~0%.
On the previous estimate of 25% leaving after 6.5 years, that's about 5% per year, which would have predicted 1.4 extra people leaving in two years.
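(A quick sketch of that arithmetic, with assumed numbers; the ~1.4 figure above presumably reflects slightly different rounding choices:)

```python
# Back-of-the-envelope only: annualize "25% left after 6.5 years", then
# project two further years for the remaining members.
n_founders = 17
cumulative_drift = 0.25

# Geometric annualization: solve (1 - r) ** 6.5 = 1 - 0.25 for r.
annual_rate = 1 - (1 - cumulative_drift) ** (1 / 6.5)
print(f"annualized drift rate: {annual_rate:.1%}")  # ~4.3%, i.e. roughly 5%

remaining = n_founders * (1 - cumulative_drift)  # ~12.75 still involved
expected_leavers = remaining * (1 - (1 - annual_rate) ** 2)
print(f"expected leavers over 2 years: {expected_leavers:.1f}")  # ~1.1
```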
Of course these are tiny samples, but I think our expectation should be that "drift" rates decrease over time. My prior is that if someone stays involved from age 20 to age 30, then there's a good chance they stay involved for the rest of their career. I guess my best guess should be that they stay involved for another 10 years.
If I eyeball the group above, my guess is that this pattern also holds if we look back further, i.e. there was more drift in the early years among people who had been involved for less time.
One small comment on the original analysis: in addition to how long someone has already been involved, I expect "degree of social & identity involvement" to be a bigger predictor of staying involved than "claimed level of dedication", e.g. I'd expect someone who works at an EA org to be more likely to stay involved than someone who says they intend to donate 50% but doesn't have any good friends in the community. It would be cool to try an analysis based more around that factor, and it might reveal a group with lower drop-out rates. The above analysis with CEA is better on these grounds but could still be divided further.
That doesn't seem right - since this comment was made, Holly's gone from being EA London strategy director to not really identifying with EA, which is more like the 5% per year.
Since the comment was made, Rob Gledhill has returned to CEA as the CBG Programme Manager. (Not totally confident that they are the same person though)
They are.
Thanks for collecting the data Joey! Really useful.
i) I'm not sure whether "value drift" is a good term to describe loss of motivation for altruistic actions. I'm also not sure whether the data you collected is a good proxy for loss of motivation for altruistic actions.
To me, the term value drift implies that the values of the drifting person are less important to them than they used to be, as opposed to being harder to implement. Your data is consistent with both interpretations. I also wouldn't say someone who still cares as much about their values but finds it harder to stay motivated has "value drifted".
If we observe someone moving to a different location and then contributing less EA-wise, this can have multiple causes. Maybe their values actually changed, maybe they lost motivation, or maybe EA contributions have just become harder to do because there's less EA information and fewer people to do projects with around.
As the EA community, we should treat people who share EA's goals and values but find it hard to act on them very differently from people who simply no longer share our goals and values. Those groups require different responses.
ii) This is somewhat tangential to the post, but since having kids came up as a potential reason for value drifting, I'd like to mention how unfortunate it can be for people who have had kids if other EAs assume they have value drifted as a result.
I've had a lot of trouble within the last year in EA spaces after having a baby. EAs around me constantly assume that I suddenly don't care anymore about having a high impact and might just want to be a stay-at-home parent. This is incredibly insulting and hurtful to me, especially if it comes from people whom I have known for a long time and who should know this would completely go against my (EA & feminist) values. Particularly bitter is how gendered this assumption is. My kids' dad (also an EA) never gets asked whether he wants to be a stay-at-home parent now.
I really had expected the EA community to be better at this. It also makes me wonder how many opportunities to contribute I might have missed out on. The EA community often relays information about opportunities only informally; if someone is assumed not to be interested in contributing, information about opportunities is much less likely to reach them. Thus the belief that EAs will contribute much less once they have kids might turn into a self-fulfilling prophecy.
I agree regarding implementation difficulties; particularly long-term ones (e.g. losing a visa for a place you were living with a big EA community) can muddy the waters a lot. It's hard to get into the details, but I would generally consider someone not drifted if it was a clearly capacity-affecting thing (e.g. they got carpal tunnel) but outside of that they are working on the same projects they would have wanted to in all cases.
A more nuanced view might be to break it down into: "Value change away from EA" - changing fundamental ethical views, e.g. coming to value people within your country more than those outside it. "Action change away from EA" - changing one of the fundamental applications of your still similarly held values, e.g. you still think being veg is good, but you are no longer veg due to moving to a different, less conducive living situation.
With short- and long-term versions of both, and with it being pretty likely that "value change" would lead to "action change" over time, I used value drift as a catch-all for both of the above. It's also how I have commonly heard the term used, but I am open to changing it to something more descriptive.
"As the EA community, we should treat people who share EA's goals and values but find it hard to act on them very differently from people who simply no longer share our goals and values. Those groups require different responses."
I strongly agree. These seem to be very different groups. I also think you could break it down even further into "EAs who rationalize doing a bad thing as the most ethical thing" and "EAs who accept as humans that they have multiple drives they need to trade off between". Most of my suggestions in the post are aimed at actions one could take now that reduce both "action change" and "value change". Once someone has changed I am less sure about the way forward, but I think that could warrant more EA thought (e.g. how to re-engage someone who was disconnected for logistical reasons).
On ii)
Sorry to hear you have had trouble with the EA community and children. I think it's one of the life changes that EAs generally update too strongly on, and assuming that a person (of any gender) will definitely value drift upon having children is clearly incorrect. Personally, I have found the EAs I have spoken to who have kids to be unusually reflective about its effects on them compared to other similar life changes, perhaps because it has been more talked about in EA than, say, partner choice or moving cities. When a couple who plans to have kids has kids and changes their life around that in standard/expected ways, I do not see that as value drift from their previous state (of planning to have kids and planning life changes around that).
I also think people will run into problems pretty quickly if they assume that every time someone goes through a life change the person will change radically and become less EA. I see it intuitively as more of a Bayesian prior. If someone has been involved in EA for a week and then is not involved for 2 weeks, it might be sane to consider the possibility that they are not coming back. On the flip side, if an EA has been involved for years and was not involved for 2 weeks, people would think nothing of it. The same holds true for large life changes. It's more about the person's long-term pattern of behavior and a combined "overall" perspective.
My list of concerns about a new trend of EAs "relaying information about opportunities only informally" is so long it will have to be reserved for a whole other blog post.
I still think you're focusing too much on changed values as opposed to implementation difficulties (I consider lack of motivation an example of those).
I think it's actually usually the other way around - action change comes first, and then value change results from it. This also seems true for the hypothetical Alice in your comment above. AFAIK it's a known psychology result that people don't really base their actions on their values, but instead derive their values from their actions.
All in all, I consider the ability to have a high EA impact much more related to someone's environment than to someone's "true self with the right values". I would therefore frame the focus on how to get people to have a high impact somewhat differently: how can we set up supportive environments so people are able to execute the necessary actions for having a high impact?
And not: how can we lock people in so they don't change their values - though the actual answers to those questions might not be that different.
I second being sorry about the trouble with EAs and kids. Having kids does make it more difficult to be a 50% EA, but there are definitely examples, such as Julia Wise/Jeff Kaufman, Toby Ord/Bernadette Young, and myself. As for the gendered response, about 3% of US stay-at-home parents are dads. But one time I thought through my friends, and it was 50%! Granted, they were pretty left-leaning, but so is EA. As an aside, now that young women make more money than young men (largely because women go to college at higher rates than men), if we made the decision just based on money, the majority of stay-at-home parents could be dads.
Thanks very much for doing this.
Could you possibly say more (i.e. as much as you can) about why people left? Moving city, leaving university or starting a family don't have to stop someone being an EA. More explanation seems needed. For instance, "X moved city" by itself doesn't really explain what happened, whereas "X moved city, didn't know any EAs and lost motivation without group support" or "Y started a family and realised they wanted a higher quality of life than they could find working for an EA org" do. Putting this in dating terms, one reason people sometimes give when they break up with someone is "I'm moving to city Z and it would never work", but that's not quite a sufficient/honest reason, which would be "I'm moving to Z and this will make things sufficiently hard that I want to stop. If I liked you a lot more I'd suggest we do long distance; but I don't like you that much, so we're breaking up". I'd want to know if people stopped "believing" in EA, kept thinking it was important but lost motivation, or something else.
Equally, I'd be interested if you did a survey of the people who stayed and asked why they stayed, to see what the differences were. If the explanations from the remainers and the leavers are consistent with each other, then they don't provide any explanatory power.
I'd add the (usual) proviso that people don't really know why they do what they do, and self-reports are to be treated with some suspicion. It's generally more useful to see what people do rather than listen to what they say.
Finally, it would be interesting to compare these retention ratios to other things - religion, using a given tech product, dieting, etc. It strikes me that 50% retention after 5 years might be pretty good in some sense, though I agree it's also worrying put another way.
So I want to be pretty careful about going into details, but I can mix some stories together to make a plausible-sounding story based on what I have heard. Please keep in mind this story is a fiction based off a composite of case studies I've witnessed, not a real example of any particular person.
Say Alice is an EA. She learns about it in her first year of college. She starts by attending an EA event or two and eventually ends up being a member of her university chapter and pretty heavily reading the EA forum. She takes the GWWC pledge, and a year later she takes a summer internship at an EA organization. During this time she identifies strongly with the EA movement and considers it one of her top priorities. Sadly, while Alice is away at her internship her chapter suffers, and when she gets back she hits a particularly rough year of school; due to long-term concerns, she prioritizes school over setting the chapter back up, mainly thinking about her impact. The silver lining is that at the end of this rough year she starts a relationship. The person is smart and well suited, but does not share her charitable interest. Over time she stops reading the EA content she used to, and the chapter never gets started again. After her degree ends she takes a job in consulting that she says will give her career capital, but she has a sense her heart is not as into EA as it once was. She knows a big factor is that her boyfriend's family would approve of a more normal job than a charity-focused one, plus she is confident she can donate and have some impact that way. Her first few paychecks she rationalizes as needing to move out and get established. The next few go to building up a safe 6-month runway. The donations never happen. There's always some reason or another to put it off, and EA seems so low on the priorities list now, just a thing she did in college, like playing a sport. Alice ends up donating a fairly small amount to effective charities (a little over 1%). Her involvement was at its peak when she was in college, and she knows her college self would be disappointed. Each choice made sense at the time. Many of them even follow traditional EA advice, but the end result is that Alice does not really feel she is an EA anymore. She has many other stronger identities. In this story, with different recommendations from the EA movement and different choices from Alice, she could have ended up doing earning to give and donating a large percentage long term, or working with an EA org long term, but instead she "value drifted".
Many aspects of this story sound kind of like things that have happened to me to make me less hardcore. I definitely still strongly affiliate with EA, donate ~15% / $30K, and spend about 20 hrs/week on EA projects, but my idealistic college EA self expected me to donate ~$100K/yr by now or work 60 hrs/week full-time on EA projects. I'm unsure how "bad" a "value drift" this is, but it's definitely short of my full potential.
Maybe your idealistic college EA self's expectations were never that likely, so you shouldn't beat yourself up about them.
Thanks. I don't feel guilty about it. I just chose a different life. EA is still very important to me, but not as important as it once was. I think a lot of it is, like Joey said, the slow build-up of small path changes over time.
Good.
If you feel you've become much less EA, I wonder what many others who were very into it must feel. From the outside you seem extremely involved - .impact/Rethink Charity do a huge amount with limited resources, and it seems like you do substantial volunteering with them, which doesn't seem like putting little of yourself into EA. Thanks for what you do.
Ah, that's great. Thanks very much for that. I think "dating a non-EA" is a particularly dangerous (/negative-impact?) phenomenon we should probably be talking about more. I also know someone, A, whose non-EA-inclined partner, B, was really unhappy that A wasn't aiming to get a high-paying professional job, and it really wrenched A from focusing on trying to do the most useful stuff. Part of the problem was that B's family wanted B to be dating a high earner.
This comment comes across as a tad cult-y.
I did think that while writing it, and it worried me too. Despite that, the thought doesn't strike me as totally stupid. If we think it's reasonable to talk about commitment devices in general, it seems like one we ought to talk about in particular in one's choice of partner. If you want to do X, finding someone who supports you towards your goal of achieving X seems rather helpful, whereas finding a partner who will discourage you from achieving X seems unhelpful. Nevertheless, I accept one of the obvious warning signs of being in a cult is the cult leaders telling you to date only people inside the cult lest you get "corrupted"...
A particular word choice that put me at unease is calling "dating a non-EA" "dangerous" without qualifying this word properly. It is more precise to say that something is "good" or "bad" for a particular purpose than to just call it "good" or "bad"; just the same with "dangerous". If you call something "dangerous" without qualification or other context, this leaves an implicit assumption that the underlying purpose is universal and unquestioned, or almost so, in the community you're speaking to. In many cases it's fine to assume EA values in these sorts of statements - this is an EA forum, after all. Doing so for statements about value drift appears to support the norm that people here should want to stay with EA values forever, a norm which I oppose.
Haha, yeah, that was my take. I think the best norm to propagate is "go out with whoever makes you happy".
I think there should be no norm here; we should simply consider the fact that dating a non-EA may cause value drift before making decisions. Being altruistic sometimes means making sacrifices to your happiness. If having less money, less time and no children can be amongst the possible sacrifices, I see no reason why limiting the set of possible romantic partners could not be one as well. People are diverse. Maybe someone would rather donate less money but abstain from dating non-EAs, or even abstain from dating at all. One good piece of writing related to the subject is http://briantomasik.com/personal-thoughts-on-romance/
Males having a "dating EAs only" rule is also dangerous (for the health of the community) when 70% of the community identifies as male and only 26% as female. It'd promote unhealthy competition. What's more, communities are not that big in many cities, which for many people would make the choice very limited, especially since we should probably avoid flirting with newcomers because that might scare them away.
Maybe the partner doesn't have to be an EA to prevent value drift; maybe the important thing is that the partner is supportive of EA-type sacrifices. I'll put this as a requirement in my online dating profiles. I think that people who are altruistic (but not necessarily EAs) are especially likely to be supportive.
To flip this one on its head: I think counterfactually for most EAs it could actually be "better" for the world at large to date non-EAs, because of the drastic increase in impact that can typically be expected if you convince your lover of EA - which to me on balance seems more likely than value drift from dating a non-EA, if you are in fact a committed EA. However, I think if you have long-term relationships exceeding 2 years, then value drift becomes far more of an issue:
< 2 year relationship. Value drift potential = low. Convert lover to EA potential = very high
Suffice to say, my current girlfriend is now much more EA-minded, and I have received messages from my ex that she still eats less meat even after she stopped dating me (I'll take her word for it). I know my behaviour has been very strongly impacted by the people I've dated, so there's no reason to assume vice versa doesn't happen.
Fun-fact: I use this as an excuse to argue with my girlfriend that clearly I should be dating many many girls short-term for obvious EA-reasons.
This is a useful analysis, I expect it will be incorporated into our discussion of discount rates in the career guide.
Perhaps I missed it but how many of the 7 who left the 50% category went into the 10% category rather than dropping out entirely?
I think this is a direction Julia and I could have gone around 2011. We didn't donate for a year (Julia was in grad school, I took a pay cut to work at a startup trying to maximize risk-neutral returns) and it would have been easy to drift away.
This also fits my experience.
A few other implications if value drift is more of a concern:
Movement growth looks a lot less impactful in absolute terms
It's an open question whether this means we should focus our movement-building efforts on a minority of people particularly likely to stay engaged, or expand our numbers more to offset attrition (depending on the details of your model)
Selecting primarily for high skill/potential-impact levels may be a mistake, as a person who is among the very highest skill levels but who decides to do something else may well produce zero impact. There are, no doubt, more implications.
Indeed. I think there is a whole set of implications of value drift when it comes to movement building, particularly recruiting younger people who will not create huge amounts of good for a while.
Upvoted because this is an important topic I've seen little discussion of. Although you take pains to draw attention to the limitations of this data set, these caveats aren't included in the conclusion, so I'd be wary of anyone acting on this verbatim. I'd be interested in seeing drop-out rates in other social movements to give a better idea of the base rate.
I agree. Other movement data would be interesting. The most relevant data I have seen is various veg rate studies (which generally show ~80% dropout overall, or the average person staying veg ~4 years), e.g. https://animalcharityevaluators.org/research/dietary-impacts/vegetarian-recidivism/
What percent of those who drifted from the 50% category ended up in the 10% category instead of out of the movement entirely?
And would the graph of the number of people remaining in the 50% category over time look roughly linear or was drifting concentrated at the beginning or near the end? What about for the 10% category?
I did not break down the data that way when I made it, but a quick look would suggest ~75% moved from 50% to 10% and drifting was mildly concentrated at the beginning.
So, to confirm, are you saying that maybe 5 out of the 7 people who moved out of the 50% category moved into the 10% category? I think it's important to get clarity on this, since until encountering this comment I was interpreting your post (perhaps unreasonably) as saying that those 7 people had left the EA community entirely. If in fact only a couple of people in that class left the community, out of a total of 16, that's a much lower rate of drift than I was assuming, and more in line with anonymous's analysis of value drift in the original CEA team.
Very interesting. As you say, this data is naturally rough, but it also roughly agrees with my own available anecdata (my impression is somewhat more optimistic, although attenuated by likely biases). A note of caution:
The framing in the post generally implies value drift is essentially value decay (e.g. it is called a "risk", and value drift is compared to unwanted weight gain, poor diet, etc.). If so, then value drift/decay should be something to guard against, and maybe precommitment strategies ("lashing oneself to the mast") seem a good idea, like how we might block social media, not keep sweets in the house, and so on.
I'd be slightly surprised if the account of someone who "drifted" would often fit well with the sort of thing you'd expect someone to say if (e.g.) they failed to give up smoking or lose weight. Taking the strongest example, I'd guess someone who dropped from 50% to ~10% after marrying and starting a family would say something like: "I still think these EA things are important, but now I have other things I consider more morally important still (i.e. my spouse and my kids). So I need to allocate more of my efforts to these, and thus I can provide proportionately less to EA matters".
It is much less clear whether this person would think they've made a mistake in allocating more of themselves away from EA, either at t2-now (they don't regret that they now have a family which takes their attention away from EA things), or at t1-past (if their previous EA-self could forecast them being in this situation, they would not be disappointed in themselves). If so, these would not be options that their t1-self should be trying to shut off, as (all things considered) the option might be on balance good.
I am sure there are cases where "life gets in the way" in a manner it is reasonable to regret. But I would be chary if the only story we can tell for why someone would be "less EA" is essentially a greater or lesser degree of moral failure; disappointed if suspicion attaches to EAs starting a family or enjoying (conventional) professional success; and cautious about pre-commitment strategies which involve closing off or greatly hobbling aspects of one's future which would be seen as desirable by common-sense morality.
You discuss a case where there is regret from the perspective of both t1 and t2, and a case where there is regret from neither perspective. These are both plausible accounts. But there's also a third option that I think happens a lot in practice: regret at t1 about the projected future in question, and less/no regret at t2. So the t2-self may talk about "having become more wise" or "having learned something about myself," while the t1-self would not be on board with this description and would consider the future in question an unfortunate turn of events. (Or the t2-self could even acknowledge that some decisions in the sequence were not rational, but that from their current perspective, they really like the way things are.)
The distinction between moral insight and failure of goal preservation is fuzzy. Taking precautions against goal drift is a form of fanaticism and commonsense heuristics speak against that. OTOH, not taking precautions seems like not taking the things you currently care about seriously (at least insofar as there are things you care about that go beyond aspects related to your personal development).
Unfortunately, I don't think there is a safe default. Not taking precautions is tantamount to deciding to be okay with potential value drift. And we cannot just say we are uncertain about our values, because that could mean mistaking uncertainty for underdetermination. There are meaningful ways of valuing further reflection about one's own values, but those kinds of "indirect" values, where one values further reflection, can also suffer from (more subtle) forms of goal drift.
This is so necessary and helpful. This is a significant update for me toward a donor advised fund (and also reinforces my current practice of donating regularly rather than saving to donate).
This data to me suggests that the EA community may have made some mistakes in modeling our decisions as more rational than they are. Specifically, whether broad career capital makes sense depends a lot on whether we are rational and will optimize or whether we need commitment devices. Maybe we all need more of a behavioral econ update.
I think if people promise you that they'll do something, and then they don't answer when you ask if they did it, it's quite probable they did not do the thing.
Do you have any opinion on the role of community or social ties in preventing value drift, in addition to individualized commitment mechanisms like the GWWC Pledge?
Social ties seem quite important, particularly close ones (best friends, partners, close co-workers).
The social circle thing might interact in an interesting way with the apparently common insecurity of not being "EA enough". Suppose I think of myself as an EA, but due to random life fluctuation I find myself not being "EA enough" for some time. This makes me feel like an imposter at EA events, which makes me go to them less, which decreases my social ties to other EAs, which decreases my motivation for EA work, which makes me do less EA work, which makes me feel like more of an imposter at EA events. This feedback-loop theory suggests that drifting out of EA social circles and having one's values drift are often intertwined phenomena.
Of course, that's just a guess. It seems like it would be valuable to get some anonymized stories from ex-EAs to see what is really going on.
Anyway, I think commenting on forums like this one can be good. Reading what other EAs are working on gets me excited about EA stuff, and leaving comments is a low-effort way to feel helpful. I don't typically feel like an imposter when I do this, because it usually seems like sharing my perspective would be valuable even if I were a complete non-EA.
What's your impression of how positively correlated close social ties are with staying altruistic among the individuals you surveyed?
My anecdata is that it's very high, since people are heavily influenced by such norms and (imagined) peer judgement.
Cutting the other way, however, people who are brought into EA _by_ such social effects (e.g. because they were hanging around friends who were EA, so they became involved in EA too, rather than in virtue of always having had intrinsic EA belief and motivation) would be much more vulnerable to value drift once those social pressures change. I think this is behind a lot of the cases of value drift I've observed.
When I was systematically interviewing EAs for a research project, this distinction between social-network EAs and always-intrinsic EAs was one of the clearest and most important distinctions that arose. One might imagine that social-network EAs would be disproportionately less involved, more peripheral members, whereas the always-intrinsic EAs would be more core, but actually the tendency was roughly the reverse. The social-network EAs were often very centrally positioned in higher staff positions within orgs, whereas the always-intrinsic EAs were often off independently doing whatever they thought was most impactful, without being very connected.
It appears the best of both worlds might be to seed local EA presence where the initial social network is composed of individuals who were always intrinsically motivated by EA and who were also friends. I wouldn't be surprised if that's the story behind many local EA communities which became well-organized independently of one another. Of course, if this is the key to building local EA presences as social networks with lower rates of value drift, the kind of data we're collecting so far won't be applicable to what we want to learn for long. The anecdata of EAs who have been in the community since before there was significant investment in and direction of movement growth won't be relevant when we're trying to systematize that effort in a goal-driven fashion. As EA enters another stage of organization as a movement, it's a movement structured fundamentally differently from how it organically emerged from a bunch of self-reflective do-gooders finding each other on the internet 5+ years ago.
Does a donor-advised fund let you deduct money you put into the fund from your taxes? If so, that is a huge reason to use them.
Yes it does, and indeed that is another huge pro of them compared to a normal savings fund. There are some cons: they are often cumbersome to first set up and require a fairly large minimum deposit. But overall it's something I wish more EAs considered.
The other disadvantage of donor-advised funds is that they often have restricted investment options. However, my financial advisor finally found one with investment freedom, called the Community Foundation in Boulder (you don't have to be in Boulder, Colorado to use it, but you would need to be in the US to get the tax deduction).
For people reading this post now as part of the decade review, I think this article was useful to get people thinking about this issue, but the more comprehensive data in this later post is more useful for actually estimating the rate of drop out.
Thank you very much for this important work. This should be an important consideration for everyone and an important factor in career planning. I'll make sure to say something about it in our local EA group at some point.
First of all, thanks for this post - I think it's really valuable to get a realistic sense of how these beliefs play out over the long term.
Like others in the comments, though, I'm a little critical of the framing and skeptical of the role of commitment devices. In my mind, we can view commitment devices as essentially being anti-cooperative with our future selves. I think we should default to viewing these attempts as suspicious, similarly to how we would caution against acting anti-cooperatively towards any other non-EAs.
Implicit is the assumption that if we change, it must be for "bad" reasons. It's natural enough - clearly we can't think of any good reasons, otherwise we would already have changed - but it lacks imagination. We may learn of a reason why things are not as we thought. Limiting your options according to your current knowledge or preferences means limiting your ability to flourish if the world turns out to be very different from your expectations.
More abstractly, imagine that you heard about someone who believed that doing X was a really good idea, and then three years later, believed that doing X was not a good idea. Without any more details, who do you think is most likely to be correct?
(At the same time, I think we're all familiar with failing to achieve goals because we failed to commit to them, even when we knew they were worth it, so there can be value in binding yourself. It's also good signalling, of course. But such explanations or justifications need to be strong enough to overcome the general prior based on the above argument.)
GWWC says 4.8% per year attrition. If we take the OP data as a half-life of 5 years with exponential decay, that is ~13% attrition per year, which would mean an expected duration of being an EA of about eight years. I think I remember reading somewhere that GWWC was only assuming three years of donations, so eight years sounds a lot better to me. Another thought is that the pledge has been compared with marriage, so we could look at the average duration of a marriage. When I looked into this, it appeared to be fairly bimodal, with many ending relatively quickly, but many lasting till death do they part. GWWC argues that consistent exponential decay would be too pessimistic. If we believe the 13% per year attrition, that means we need to recruit 13% more people each year just to stay the same size.
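A minimal sketch of that arithmetic, assuming the constant-exponential-attrition model described above (the 5-year half-life comes from the OP's data; everything else follows from it):

```python
# Hedged back-of-the-envelope: constant exponential attrition implied by a
# 5-year half-life (roughly 50% of the OP's sample drifted over 5 years).
half_life_years = 5
annual_attrition = 1 - 0.5 ** (1 / half_life_years)
print(f"annual attrition: {annual_attrition:.1%}")  # ~12.9%, the ~13% cited

# With a constant annual attrition rate p, expected duration is 1/p.
expected_years = 1 / annual_attrition
print(f"expected duration: {expected_years:.1f} years")  # ~7.7, the "eight years"

# To stay the same size, recruitment must offset attrition each year.
print(f"needed recruitment: {annual_attrition:.1%} of current size per year")
```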
I'd like to see this. I have some data on this from the EA Survey and intend to follow up on something similar later this year.
Please do share that data when you get a chance. You guys have a lot of fascinating data in those survey results, and while I understand you have limited time/resources, it would be a shame to see them go untapped.
Thanks. Not publishing what I have on this is a 2017 regret of mine and I hope not to repeat it in 2018.
Good work, it's great to have any numbers on this at all. Given these are acquaintances, I wonder if you could follow up to try to get some reasons from the drifters. I would like to be able to classify the reasons for changes in behaviour into one of the following two buckets: 1) I am lazier and more self-centred than in the past; 2) I was young and naive, and I know better now.
In combating potential future value drift, we are considering tactics to essentially coerce our future selves. If we are confident the drift is because of 1), then I think this coercion is merited, but if it's 2), then maybe we are compounding an error?
Excellent post! I think value drift is one of the largest challenges for local groups: many people who seemed enthusiastic don't show up after a couple of times, and it's hard to keep them motivated to keep going for the highest-expected-value option in the long term.
The thing is, how do you communicate the risk of value drift to others who are at risk? There is the problem of base rate neglect/bias blind spot: people think the risk does not apply to them. For example, multiple people have expressed that they don't understand that I took the giving pledge to commit my future self to this, while I believe I might otherwise not act on my (current) values.
Thanks for this, Joey!
I'd be very keen to see more thorough data on this, for example:
To what extent is 80k's pivot away from recommending management consulting due to value drift?
My impression is that one of the reasons to focus less on GWWC has been attrition (i.e. value drift in these terms). Does anyone have access to those figures?
Would e.g. CEA or 80k be able to carry out a retrospective study on this?
Even more awesome would be to conduct a longitudinal cohort study on the topic.
I think this is something we may look at with the 2018 EA Survey, hopefully in cooperation with GWWC and 80K and leveraging their data as well.
summary: changes in people's "values" aren't the same as changes in their involvement in EA, and this analysis treats the two as the same thing; also, some observations from my own friend group on value changes vs. retention
It sounds like no differentiation was made here between "lowered involvement in EA and change in preferences" and "lowered involvement in EA while remaining equally altruistic", given the wording used in "The Data" section.
I can think of 3 people I've known who were previously EAs (by your six-months involvement definition) and then left, but who remained as altruistic as before, and two more who really liked the core ideas but bounced off within the first month and remained as altruistic as before. There are 2 I know (both met the six-months involvement measure) who left and ended up being less altruistic afterwards.
Which, really, is irrelevant, since you'd need a much more systematic effort at data collection to reach any serious conclusions about "value drift", but changes in people's "values" aren't the same as changes in their involvement in EA. I'm sure there's some non-EA literature on values and value change you'd benefit from engaging with.
(The two who became less altruistic were riding a surge of hope related to transhumanism that died down with time, and they left when that went out; the other five left for some mix of disliking the social atmosphere of EA and thinking it ineffective at reaching its stated goals. These are very different types of reasons to leave EA! I put scare quotes around "values" and "value drift" because I find it more informative to look at what actions people take rather than what values are important to them.)
I think the 10% versus 50% descriptions are useful, and I'm surprised I have not seen them before on the forum, except for my comment here. In that comment, I was arguing that free time could be defined as 40 hours a week, so if you volunteer effectively four hours a week, that would make you a 10% EA. But this also means that if you donate 50% and spend 50% of your free time effectively (like I try to do), you would be a 100% EA. Another way is having an EA job (which typically pays half of market salary, so it is like donating 50%) that is nominally 40 hours a week but actually working 60 hours a week, so it is like volunteering half of your "free" time. Then it would be nice clean orders of magnitude. But 100% is not very common, and it could be misleading, so 50% is ok.
If you gave 60% of your income, would that make you a 110% EA? If so, I think that mostly just highlights that this metric should not be taken too seriously. (I was going to criticize it on more technical grounds, but I think doing so would give legitimacy to the idea that people should compare their own "numbers" with each other, which seems likely to be a bad idea.)
Correct - to make this physically realistic (not able to exceed 100%), you would need to say that someone who donates 10% of their money and does no volunteering is dedicating 5% of their total "potential effort." But it is more intuitive to say that a GWWC pledger is a "10%" EA.
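One possible formalization of this metric as I read it (the equal weighting of money and time and the 40-hour free-time baseline are my assumptions, not the commenter's exact proposal):

```python
# Sketch: dedication as the average of income share donated and the share
# of an assumed 40-hour weekly "free time" budget volunteered effectively.
def ea_dedication(donation_fraction, volunteer_hours_per_week,
                  free_hours_per_week=40):
    time_fraction = volunteer_hours_per_week / free_hours_per_week
    return 0.5 * donation_fraction + 0.5 * time_fraction

print(ea_dedication(0.10, 0))   # 0.05: the "5% of potential effort" GWWC case
print(ea_dedication(0.50, 20))  # 0.50: donates half, volunteers half of free time
```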
I don't think the source of value drift is mysterious.
https://www.youtube.com/watch?v=YmQicNmhZmg
https://www.youtube.com/watch?v=nIRvNxykvTQ
Can we get the 1-3 sentence summary before committing to 43min of talks?
Oh, I meant those as references, thinking many had already seen them, rather than "watch these". They are both talks about the motivations for altruism, given at EAG 2014-2015 by economists. tl;dr: if you're surprised that altruistic intentions tend to drift with age, or by other types of predictable value shifts, your model is probably not taking into account some things known by economists.
This doesn't add much to the conversation. Obviously people get over-excited by EA, and the personal and philosophical opportunities it provides to make an impact will lead lots of people to be overconfident in their long-term commitment; they'll turn out not to be as altruistic as they think. The OP is already concerned about a default state of people becoming less altruistic over time, and focuses on how we can keep ourselves more altruistic than we'd otherwise tend to be, long-term, through things like commitment mechanisms. So theories of psychology which don't specify the mechanisms by which commitment devices fail aren't precise enough to be useful in answering the question of what to do about value drift to our satisfaction.
I wasn't commenting on the overall intention but on the enumerations of causal levers outlined by the economists in the talks. I was objecting to the frame that these causal levers are obfuscated. I think presenting them as such is a way around them being low status to talk about directly.
Thanks for the context. That makes a lot of sense. I've undone my downvote on your parent comment, upvoted it, and also upvoted the above. (I think it's important, as awkward as it might be, for rationalists and effective altruists to explicate their reasoning at various points throughout their conversation, and how they update at the end, to create a context of rationalists intending their signals to be clear and received without ambiguity. It's hard to get humans to treat each other with excellence, so if our monkey brains force us to treat each other like mere reinforcement learners, rationalists might as well be transparent and honest about it.)
It would appear the causal levers aren't obfuscated. Which ones do you expect are the most underrated?