This is an interesting coincidence. I’m someone who read and was influenced by EA blogs around 2014-2015, after working for an NGO for a few years. I was influenced enough to factor it into my decision to leave my job and go back to school to become a Nurse Practitioner. (As evidenced by the fact that nursing and advanced practice nursing aren’t highly recommended pathways on 80,000 Hours, it’s fair to say I weighed what I read on EA sites alongside my own appraisals of priority areas, beliefs/attitudes, and individual circumstances.)
Despite being in Boston, arguably an EA hub, I didn’t engage with the community there during my time in school. Although the NGO I worked for wasn’t EA, having a peer group that was concerned about various issues and recognized the need to deviate from mainstream culture when it came to matters of earnings and consumption, definitely counted for something. Compared to grad school, where classmates surely had diverse motivations for choosing the career path, the greatest common denominator revealed itself to be “achieving a comfortably middle to upper-middle class lifestyle and the ability to ‘help people.’”
I can tell you my values drifted. Concepts such as marginal impact, and the fact that clinicians’ marginal impact is much smaller than most believe, are threatening to front-line clinicians—so I avoided those topics with peers. Surely many classmates were more conventionally status-oriented than my previous peers at the non-profit. As I tried to learn about the more conservative norms regarding attire expected in some private practices (as opposed to the NGO where I worked, which was quite informal), I was exposed to more of such consumption-as-status messaging. After breaking up with a long-term significant other, who at least understood my beliefs and attitudes about EA and simpler living, and starting to date other professionals in a high-COL city, I definitely found myself thinking about more of my income going towards self-presentation. With significant student loan debt from the program, my first job will definitely prioritize salary more than I’d otherwise like.
All the while, I always had EA in the back of my mind. I listened to MacAskill’s book in audio format at some point during my graduate program. As someone who seems to be more interested in managing downside risk, combined with the fact that my previous work was in behavior change/nudging, I was always concerned that I would not ‘come back’ to the community, or that I would succumb to the norms of the dominant consumer culture.
On reflection, I think that is part of the reason I wasn’t deeply engaged with the community—I seem to be more concerned with making sure I have some kind of significant impact than with trying to maximize impact. I’ve long been concerned about an imagined future self succumbing to burnout, resentment, or alienation. I wondered how my future self might cope if I invest heavily in a problem area that turns out to be, for one reason or another, no longer a high-impact area. It’s safe to say that I’m rather risk-intolerant.
Moving forward, I do plan to re-engage with the community, especially in person, because I now appreciate how alienating it can be to not have peers who understand your deeply held values. I hope this post adds value; rather than codify the challenges and protective factors, I thought it best to just concretely describe my experience and leave it here for interpretation.
Thank you for this detailed description of your experience!
I would guess that many other people in the EA community have a similar story to tell about the challenge of self-presentation/conspicuous consumption, as well as the ease with which you can drift when you find a new partner/friend group. I’m trying to understand value drift better, and this comment added value for me.
Thanks for writing this—it seems worthwhile to be strategic about potential “value drift”, and this list is definitely useful in that regard.
I have the tentative hypothesis that a framing with slightly more self-loyalty would be preferable.
In the vein of Denise_Melchin’s comment on Joey’s post, I believe most people who appear to have value “drifted” will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think. In the vein of The Replacing Guilt series, I don’t think that attempting to override these other values is generally sustainable for long-term motivation.
This hypothesis would point away from pledges or ‘locking in’ (at least for the sake of avoiding value drift) and, I think, towards a slightly different framing of some suggestions: for example, rather than spending time with value-aligned people to “reduce the risk of value drift”, we might instead recognize that spending time with value-aligned people is an opportunity to both meet our social needs and cultivate one’s impactfulness.
Thanks for your comment! I agree with everything you have said and like the framing you suggest.
I believe most people who appear to have value “drifted” will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously
This is what I tried to address though you have expressed it more clearly than I could!
As some others have pointed out as well, it might make sense to differentiate between ‘value drift’ (i.e. change of internal motivation) and ‘lifestyle drift’ (i.e. change of external factors that make implementation of values more difficult). I acknowledge that, as Denise’s comment points out, the term ‘value drift’ is not ideal in the way that Joey and I used it and that:
As the EA community we should treat people sharing goals and values of EA but finding it hard to act towards implementing them very differently to people simply not sharing our goals and values anymore. Those groups require different responses. (Denise_Melchin comment).
However, it seems reasonable to me to be concerned about, and attempt to avoid, both value and lifestyle drift, and in many cases it will be hard to draw a line between the two (as changes in lifestyle likely precipitate changes in values, and the other way around).
I’d like to introduce a few considerations as an “older” EA (I am 43 now):
Scope of measurement: Joey’s post was based on 5-year data. As Joey mentioned, “it would take a long time to get good data”. However, it may well be that expanding the time scope would yield very different results. It is possible that a graph plotting a typical EA’s degree of involvement/commitment with the movement would not look like a horizontal line but rather like a zigzag. I base this on purely anecdotal evidence, but I have seen many people (including myself) recover interests, hobbies, passions, etc. once their children are older. I am quite new to the movement, but there is no way that 10 years ago I would have put in the time I am now devoting to EA. If I had started my involvement in college—supposing EA had been around—you could have seen a sharp decline during my thirties (and tagged that as value drift)… without knowing there would be a sharp increase in my forties.
Expectations: This is related to my previous point. Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions. Keeping the initial engagement levels constant sounds good in theory, but it may not be the best strategy in the long run (e.g. potentially leading to burnout, etc). Maybe we should think of “engagement fluctuations” as something natural and to be expected instead of something dangerous that must be fought against.
EA interaction styles: If and as the median age of the community goes up, we may need to adapt the ways in which we interact (or rather add to the existing ones). It can be much harder for people with full-time jobs and children to attend regular meetings or late afternoon “socials”. How can we make it easier for people that have very strong demands on their time to stay involved without feeling that they are missing out or that they just can’t cope with everything? I don’t have an answer right now, but I think this is worth exploring.
The overall idea here is that instead of fighting an uneven involvement/commitment across time it may be better to actually plan for it and find ways of accommodating it within a “lifetime contribution strategy”. It may well be that there is a minimum threshold below which people completely abandon EA. If that it so I suggest we think of ways of making it easy for people to stay above that threshold at times when other parts of their lives are especially demanding.
It is possible that a graph plotting a typical EA’s degree of involvement/commitment with the movement would not look like a horizontal line but rather like a zigzag.
It would be very encouraging if this is a common phenomenon and many people who ‘drop out’ might potentially come back to EA ideals at some point. It provides a counterexample to something I have commented earlier:
It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to ‘changing the world for the better’. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): “If you’re not a socialist at the age of 20 you have no heart. If you’re not a conservative at the age of 40, you have no head”.
Regarding your related point:
Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions (...) and find ways of accommodating it within a “lifetime contribution strategy”
I strongly agree with this, which was my motivation to write the post in the first place! I don’t think constant involvement/commitment to (effective) altruism is necessary to maximise your lifetime impact. That said, it seems like for many people there is a considerable chance to never ‘find their way back’ to this commitment after they spent years/decades in non-altruistic environments, on starting a family, on settling down etc. This is why I’d generally think people with EA values in their twenties should consider ways to at the least stay loosely involved/updated over the mid- to long-term to reduce the chance of this happening. So it provides a great example to hear that you actually managed to do just that! In any case, more research is needed on this—I somewhat want to caution against survivorship bias, which could become an issue if we mostly talk to the people who did what is possibly exceptional (e.g. took up a strong altruistic commitment in their forties or having been around EA for a long time).
Good points. If I were doing a write up on this subject it would be something like this:
“As the years go by, you will likely go through stages during which you cannot commit as much time or other resources to EA. This is natural and you should not interpret lower-commitment stages as failures: the goal is to maximize your lifetime contributions and that will require balancing EA with other goals and demands. However, there is a risk that you may drift away from EA permanently if your engagement is too low for a long period of time. Here are some tools you can use to prevent that from happening:”
Wonderful post! This is easily the best resource I’m aware of on ways to reduce value drift, and I anticipate sharing it with a lot of people over the years.
In my view, one of the most threatening risks to EA is value drift—not collectively, but in the sense that many of the community’s most devoted members gradually lose interest and leave. There are a lot of people whose names you can see all over the 80K/GWWC websites from material produced a few years ago, but who are no longer involved in EA in any kind of public capacity (and may not be involved at all). We’re still growing, on net, but if getting older tends to lead to drift, I can imagine us hitting a point where so many people “age out” that growth drops to roughly zero.
If you decide you want to spend more time with value aligned people / other EAs, here are some concrete ways...
Something that wasn’t in your list: Helping the people who are already in your life become aligned with your values, or with the idea that you should keep your values.
The latter seems easier; it’s tough to get a random person to become truly interested in EA, but any close friend should care somewhat about your sticking to your plans and meeting your goals.
If my most religious friend told me they’d stopped going to church and felt “meh” about it, I’d be concerned for them even as an atheist, because the change might indicate that they were struggling with their life in general. If I decided to stop giving money to charity, I’d hope that my non-EA friends wouldn’t simply let the matter drop, and would at least gently ask questions that would prompt me to engage with my own beliefs and come up with a good reason that I’d abandoned something which was previously very important to me.
Send your future self letters...
My version of this is keeping a journal, where I sometimes address “Future Aaron” but mostly focus on recording my beliefs/feelings as they are on any given day, trusting that Future Aaron will read those entries and feel connected to me. I haven’t yet struggled with value drift, but I have seen my journal help me recover past states of mind to become more excited/inspired/etc. I hope that it will also reduce the odds that I drift away from EA over time.
There are a lot of people whose names you can see all over the 80K/GWWC websites from material produced a few years ago, but who are no longer involved in EA in any kind of public capacity (and may not be involved at all)
Do you know if anyone’s debriefed these folks?
Could be interesting to systematically interview people like this, to learn more about why people distance from EA & to see if any generalizable trends appear.
What you’re calling “value drift,” Evangelical Christians call “backsliding.” The idea is you’ve taken steps toward a countercultural lifestyle in line with your values, but now you’re sliding back toward the mainstream—for an Evangelical Christian, an example would be binge drinking with friends. Backsliding is common and Evangelicals use many of the techniques listed above to counteract it.
Evangelicals heavily emphasize community. Christians are encouraged to attend services, join a small group Bible study, socialize with each other, and marry other Christians.
I also remember being encouraged to establish good habits and stick with them—for example, reading the Bible every morning.
We also, of course, begin with a public commitment to Christianity. And community members will pull you aside and have a chat with you (read: judge you) if they think you’re in danger of backsliding.
I’ve seen all of these strategies work, although some have undesirable side effects.
For people who worry that the list sounds onerous I am happy to report that having done many of the items my life feels better, not worse. I’d say the biggest negative has been a reduction in how much I feel I can relate to people on more normal life paths, but this feels like an additional benefit in many ways since I wind up spending more time with people doing other non-standard things.
Thank you, Joey, for gathering those data. And thank you, Darius, for providing us with these suggestions for reducing this risk. I agree that further research on the causes of value drift and how to avoid it is needed. If the phenomenon is explained correctly, that could be a great asset to EA community building. But regardless of this explanation, your suggestions are valuable.
It seems to be a generally complex problem, because retention encapsulates the phenomenon in which a person develops an identity, skill set, and consistent motivation or dedication to significantly change the course of their life. CEA, in their recent model of community building, framed it as resources, dedication, and realization.
Decreasing retention is also observed in many social movements. Some insights about how it happens can be culled from the sociological literature. Although it is still underexplored and the sociological analysis might be of mediocre quality, it might still be useful to have a look at it. For example, this analysis implies that a “movement’s ability to sustain itself is a deeply interactive question predicted by its relationship to its participants: their availability, their relationships to others, and the organization’s capacity to make them feel empowered, obligated, and invested.”
The reasons for value drift away from EA seem as important to understanding the process as the value drift that led to EA in the first place. E.g., in Joey’s post he gave an illustrative story of Alice. What could explain her value drift is the fact that people during their first year of college are more prone to social pressure and the need for belonging. That could have made her become an EA and then drift when she left college and her EA peers. So “surround yourself with value-aligned people” for the whole course of your life. That also stresses the importance of the untapped potential of local groups outside the main EA hubs. For this reason, it’s worth considering even if, in the case of outreach, we shouldn’t rush to translate effective altruism.
About the data itself: we might be making wrong inferences trying to explain those data, because they show only a fraction of the process. Maybe if we observed the curve of engagement, it would fluctuate over a longer period of time, e.g. 50% in the first 2-5 years, 10% in the 6th year, 1% for the next 2-3, and then coming back to 10%, 50%, etc.? We might hypothesize that life situation influences the baseline engagement for a short period (1 month-3 years). Analogous to changes in the baseline of happiness and the influence of life events explained by hedonic adaptation, maybe we have something like altruistic adaptation, which changes after a significant life event (changing cities, marriage, etc.) and then comes back to baseline.
Additionally, since the level of engagement in EA and other significant variables do not correlate perfectly, the data could also be explained by regression to the mean. If some of the EAs were hardcore at the beginning, they will tend to be closer to the average on a second measurement, so from 50% to 10%, and those from 10% to 1%. Anyhow, the likelihood that value drift is real is higher than that it’s not.
More could be done about value drift on the structural level; e.g., it might also be explained by the main bottlenecks in the community itself, like the Mid-Tier Trap (e.g. too good for running a local group, but not good enough to be hired by the main EA organizations → multiple unsuccessful job applications → frustration → dropping out).
Because the mechanism of value drift would determine the strategies to minimize its risk or harm, and because the EA community might not be representative of other social movements, we should systematically and empirically explore those and other factors in order to find the 80⁄20 of long-lasting commitment.
More could be done about value drift on the structural level; e.g., it might also be explained by the main bottlenecks in the community itself, like the Mid-Tier Trap (e.g. too good for running a local group, but not good enough to be hired by the main EA organizations → multiple unsuccessful job applications → frustration → dropping out).
Doing effective altruistic things ≠ Doing Effective Altruism™ things
All the main Effective Altruism orgs together employ only a few dozen people. There are two orders of magnitude more people interested in Effective Altruism. They can’t all work at the main EA orgs.
There are lots of highly impactful opportunities out there that aren’t branded as EA—check out the career profiles on 80,000 Hours for reference: academia, politics, tech startups, doing EtG in random places, etc.
We should be interested in having as high an impact as possible and not in ‘performing EA-ness’.
I do think that EA orgs dominate the conversations within the EA sphere, which can lead to this unfortunate effect where people quite understandably feel that the best thing they can do is work there (or at an ‘EA approved’ workplace like DeepMind or Jane Street)—or nothing. That’s counterproductive and sad.
A potential explanation: it’s difficult for people to evaluate the highly impactful positions in other fields. Therefore the few organisations and firms we can all agree on are Effectively Altruistic get a disproportionate amount of attention and ‘status’.
As a community, we should try to encourage people to find the highest-impact opportunity for them out of many possible options, of which only a tiny fraction is working at EA orgs.
That also stresses the importance of untapped potential of local groups outside the main EA hubs.
Yep, I see engaging people & keeping up their motivation in one location as a major contribution of EA groups to the movement!
maybe we have sth like altruistic adaptation, that changes after a significant live event (changing the city, marriage etc.) and then comes back to baseline.
This is an interesting suggestion, though I think it unlikely. It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to ‘changing the world for the better’. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): “If you’re not a socialist at the age of 20 you have no heart. If you’re not a conservative at the age of 40, you have no head”.
More could be done about the vale drift on the structural level, e.g. it might be also explained by the main bottlenecks in the community itself, like the Mid-Tire Trap
This is a valuable and under-discussed point that I endorse!
Idea: the local group organisers might use something like spaced repetition to invite busy community members [say, people who are pursuing a demanding job to increase their career capital] to the social events.
Anki’s “Again”, “Hard”, “Good”, “Easy” might map to “1-on-1 over coffee in a few weeks”, “Invite to the upcoming event and pay more attention to the person”, “Invite person to the social event in 3mo”, “Invite person to the event in 6mo or to the EAG”.
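The mapping above can be sketched as a tiny scheduler. This is a minimal illustration, not a real policy: the rating names mirror Anki’s buttons, and the specific delays are assumptions taken loosely from the comment (a few weeks, the upcoming event, ~3 months, ~6 months).

```python
from datetime import date, timedelta

# Hypothetical mapping from Anki-style ratings to re-invitation delays.
# The delay values are illustrative assumptions, not a tested policy.
INVITE_DELAYS = {
    "again": timedelta(weeks=3),   # 1-on-1 over coffee in a few weeks
    "hard": timedelta(weeks=6),    # invite to the upcoming event, pay more attention
    "good": timedelta(weeks=13),   # invite to a social event in ~3 months
    "easy": timedelta(weeks=26),   # invite in ~6 months or to the EAG
}

def next_invite(last_contact: date, rating: str) -> date:
    """Return the date to next reach out, given how engaged the member seemed."""
    return last_contact + INVITE_DELAYS[rating]

print(next_invite(date(2019, 5, 1), "good"))  # → 2019-07-31
```

As with spaced repetition proper, the idea is simply that lower engagement triggers a shorter interval before the next, lower-effort touchpoint.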
Oh, underrated comment from 3 years ago. One problem, however, is that you don’t want too many connections to go through you specifically, since it’ll overload you and possibly replace other connections they might form. People don’t have infinite bandwidth for connections, and if they only have room for one EA friend, say, you don’t want to take up that slot long-term. You may not want to permanently set yourself up as the linchpin.
In addition to Darius’s suggestions, I recommend using Murphyjitsu to generate your personal list of failure modes. Imagine yourself one/five/ten years from now, no longer being an EA. Ask yourself: what happened? Then try to think of ways to prevent this from happening.
Daniel Gambacorta has discussed value drift in two episodes of his Global Optimum Podcast (one & two) and recommends the following, which I found really helpful:
“Choose effective altruist endeavors that also grant you selfish benefits. There are a number of standard human motivators. Status, friends, mates, money, fame. When these things are on the line work actually gets done. Without these things it’s a lot harder. If your effective altruism gets you none of the things that you selfishly want, that’s going to make things harder on you. If your plan is to go off into a cave, do something brilliant and never get credit for it, your plan’s fatal flaw is you won’t actually do it. If you can’t get things you selfishly want through effective altruism, you are liable to drift towards values that better enable you to get what you selfishly want. We humans are extremely good at fulfilling selfish goals while being self-deceived about it. With this in mind, you might pick some EA endeavor which is impactful but also gets you some standard things that humans want, because you are a human and you probably want the standard things other humans want. Even if the endeavor that grants you selfish benefits is less impactful in the abstract, this could be outweighed by the chance that you actually do it, and also how much more productive you will be when you work on something that is incentivized. If you do something that grants you significant selfish benefits, you just have to watch out for optimizing for those benefits instead of effective altruism, which would of course defeat the purpose.”
There’s probably something to be gained by investigating this further, but I would guess that most cases of value drift are due to a loss of willpower and motivation, rather than an update of one’s opinion. I think the term ‘value drift’ is a bit ambiguous here, because I think the stuff you mention is something we don’t really want to include in whatever term we use.
Now that I think about it, I think what really makes the difference here are deeply held intuitions about the range of our moral duty, for which ‘changing your mind’ doesn’t always seem appropriate.
Thanks, Tom! I agree with you that, all else being equal,
solutions that destroy less option value are preferable
though I still think that in some cases the benefits of hard-to-reverse decisions can outweigh the costs.
It seems strange to override what your future self wants to do, if you expect your future self to be in an equally good epistemic position. If anything, future you is better informed and wiser...
This seems to assume that our future selves will actually make important decisions purely (or mostly) based on their epistemic status.
However, as CalebWithers points out in a comment:
I believe most people who appear to have value “drifted” will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think.
If this is valid (as it seems to me), then many of the important decisions of our future selves are a result of some more or less conscious psychological drives rather than an all-things-considered, reflective and value-based judgment. It is very hard for me to imagine that my future self could ever decide to stop being altruistic or caring about effectiveness on the basis of being better informed and more rational. However, I find it much more plausible that other psychological drives could bring my future self to abandon these core values (and find a rationalization for it). To be frank, though I generally appreciate the idea of ‘being loyal to and cooperating with my future self’, it seems to me that I place a considerably lower trust in the driving motivations of my future self than many others. From my perspective now, it is my future self that might act disloyally with regards to my current values, and that is what I want to find ways to prevent.
It is worth pointing out that in the whole article and this comment I mostly speak about high-level, abstract values such as a fundamental commitment to altruism and to effectiveness. This is what I don’t want to lose and what I’d like to lock in for my future self. As illustrated by RandomEA’s comment, I would be much more careful about attempting to tie-myself-to-the-mast with respect to very specific values such as discount rates between humans and non-human animals, specific cause area or intervention preferences etc.
It’s not enough to place a low level of trust in your future self for commitment devices to be a good idea. You also have to put a high level of trust in your current self :)
That is, if you believe in moral uncertainty, and believe you currently haven’t done a good job of figuring out the “correct” way of thinking about ethics, you may think you’re likely to make mistakes by committing and acting now, and so be willing to wait, even in the face of a strong chance your future self won’t even be interested in those questions anymore.
Say a person could check a box and commit to being vegan for the rest of their life—do you think that would be an ethical/good thing for someone to do, given what we know about average recidivism among vegans?
It could turn out to be bad. For example, say she pledges in 2000 to “never eat meat, dairy, or eggs again.” By 2030, clean meat, dairy, and eggs become near universal (something she did not anticipate in 2000). Her view in 2030 is that she should be willing to order non-vegan food at restaurants since asking for vegan food would make her seem weird while being unlikely to prevent animal suffering. If she takes her pledge seriously and literally, she is tied to a suboptimal position (despite only intending to prevent loss of motivation).
This could happen in a number of other ways:
She takes the Giving What We Can Further Pledge* intending to prevent herself from buying unnecessary stuff but the result is that her future self (who is just as altruistic) cannot move to a higher cost of living location.
She places her donation money into a donor-advised fund intending to prevent herself from spending it non-altruistically later but the result is that her future self (who is just as altruistic) cannot donate to promising projects that lack 501(c)(3) status.
She chooses a direct work career path with little flexible career capital intending to prevent herself from switching to a high earning career and keeping all the money but the result is that her future self (who is just as altruistic) cannot easily switch to a new cause area where she would be able to have a much larger impact.
It seems to me that actions that bind you can constrain you in unexpected ways despite your intention being to only constrain yourself in case you lose motivation. Of course, it may still be good to constrain yourself because the expected benefit from preventing reduced altruism due to loss of motivation could outweigh the expected cost from the possibility of preventing yourself from becoming more impactful. However, the possibility of constraining actions ultimately being harmful makes me think that they are distinct from actions like surrounding yourself with like-minded people and regularly consuming EA content.
It seems strange to override what your future self wants to do,
I think you’re just denying the possibility of value drift here. If you think it exists, then commitment strategies could make sense. If you don’t, they won’t.
I disagree—I think you can believe “value drift” exists and also allow your future self autonomy.
My current “values” or priorities are different from my teenage values, because I’ve learned and because I have a different peer group now. In ten years, they will likely be different again.
Which “values” should I follow: 16-year-old me, 26-year-old me, or 36-year-old me? It’s not obvious to me that the right answer is 26-year-old me (my current values).
In particular, there might be ways for Rethink Charity to expand the EA survey to gather more rigorous data on value drift (selection effects are obviously problematic – the people whose values drifted the most will likely not participate in the survey).
An easy way to gather a pool of “value drifted” people to survey could be to look at previous iterations of the EA survey and identify people who filled out the survey at some point in the past, but haven’t filled it out in the past N years. Then you could email them a special survey asking why they haven’t been filling out the survey, perhaps offering a chance to win an Amazon gift card as an incentive, and include questions about sources of value drift.
I’m confused about what is happening here. I remember reading this article a year ago, and most of the comments are almost exactly one year old. But for some reason the date of the post is “8th May 2019” and the post is on the first page of the forum, where it says that it was posted 8 days ago. I guess there is some kind of bug in the forum that caused the date of the post to be wrong.
I believe if you save something as a draft and then re-publish it, it changes the publication date. Darius, is that maybe what happened? If you know the original publication date, the moderators can change it to the original.
Yes, this is what happened. There are cases where it’s good to be able to have the date adjust (e.g. if you accidentally publish a post before it’s finished and want to edit and repost), but in this case, it was unintentional. I’ll change the date.
One thing I find really helpful for remaining consistent in my values is introspection, followed by writing the results down in a note, both a physical one and a text file on my PC. I have observed that this strategy really works for me, both for figuring out who I am and for keeping my actions consistent with it over long periods of time. I still have 70% of the notes I wrote 5 years ago, and 100% of the most important ones that form the core of all my values.
Good article in lots of ways. I’m perhaps slightly put off by the sheer amount of info here: I don’t feel like I can take all of this in easily, given my own laziness and the number of goals I’m already trying to prioritise. Not sure there’s an easy solution to that (maybe some sort of top two or three suggestions?), but it feels like a bit of an information overload. Thanks for writing it though Darius, I enjoyed it :)
Personally, if I were to simplify this post down to its top two pieces of advice: 1) focus on doing good now; 2) surround yourself with people who will keep encouraging you to do good long term.
If it gets people away from cultish movements with morally questionable ideologies, value drift is a good thing.
If you’re a college kid who drinks the Kool-Aid and then outgrows it over time, all the more power to that future self.
Grounding your spending in your own wellbeing has high information value; the purchasing power allocated to your own preferences gives tangible feedback inside your own brain: you know what brings you utility and what doesn’t, which purchases you like and which you dislike.
Compare this with giving money to strangers who merely promise to make the world a better place based on lots of highly questionable empirical assumptions and even more questionable moral axioms. Surely you can see the difference in information value.
Frankly, I am shocked that there are people who give 50% of their income away to Effective Altruism; the social dynamics and moral uncertainties surrounding the movement don’t even remotely justify such a speculative investment.
This is an interesting coincidence. I’m someone who read and was influenced by EA blogs around 2014-2015, after working for an NGO for a few years. I was influenced enough to factor it into my decision to leave my job and go back to school to become a Nurse Practitioner. (As evidenced by the fact that nursing and advanced practice nursing aren’t highly recommended pathways on 80k hrs, it’s fair to say I weighed what I read on EA sites alongside my own appraisals of priority areas, beliefs/attitudes, and individual circumstances.)
Despite being in Boston, arguably an EA hub, I didn’t engage with the community there during my time in school. Although the NGO I worked for wasn’t EA, having a peer group that was concerned about various issues and recognized the need to deviate from mainstream culture in matters of earnings and consumption definitely counted for something. In grad school, by contrast, classmates surely had diverse motivations for choosing the career path, but the greatest common denominator revealed itself to be “achieving a comfortably middle to upper-middle class lifestyle and the ability to ‘help people.’”
I can tell you my values drifted. Concepts such as marginal impact, and the fact that clinicians’ marginal impact is much smaller than most believe, are threatening to front-line clinicians, so I avoided those topics with peers. Many classmates were more conventionally status-oriented than my previous peers at the non-profit. As I tried to learn the more conservative norms regarding attire expected in some private practices (as opposed to the NGO where I worked, which was quite informal), I was exposed to more of this consumption-as-status messaging. After breaking up with a long-term significant other, who at least understood my beliefs and attitudes about EA and simpler living, and starting to date other professionals in a high-COL city, I definitely found more of my income going towards self-presentation. With significant student loan debt from the program, my first job will definitely prioritize salary more than I’d otherwise like.
All the while, I always had EA in the back of my mind. I listened to MacAskill’s book in audio format at some point during my graduate program. As someone who seems to be more interested in managing downside risk, combined with the fact that my previous work was in behavior change/nudging, I was always concerned that I would not ‘come back’ to the community and would instead succumb to the norms of the dominant consumer culture.
On reflection, I think that is part of the reason I wasn’t deeply engaged with the community: I seem to be more concerned with making sure I have some kind of significant impact than with trying to maximize impact. I’ve long been concerned about an imagined future self succumbing to burnout, resentment, or alienation. I wondered how my future self might cope if I invested heavily in a problem area that turned out, for one reason or another, to no longer be a high-impact area. It’s safe to say that I’m rather risk-intolerant.
Moving forward, I do plan to re-engage with the community, especially in person, because I now appreciate how alienating it can be to not have peers who understand your deeply held values. I hope this post adds value; rather than codify the challenges and protective factors, I thought it best to just concretely describe my experience and leave it here for interpretation.
Thank you for this detailed description of your experience!
I would guess that many other people in the EA community have a similar story to tell about the challenge of self-presentation/conspicuous consumption, as well as the ease with which you can drift when you find a new partner/friend group. I’m trying to understand value drift better, and this comment added value for me.
Thanks for writing this—it seems worthwhile to be strategic about potential “value drift”, and this list is definitely useful in that regard.
I have the tentative hypothesis that a framing with slightly more self-loyalty would be preferable.
In the vein of Denise_Melchin’s comment on Joey’s post, I believe most people who appear to have value “drifted” will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think. In the vein of The Replacing Guilt series, I don’t think that attempting to override these other values is generally sustainable for long-term motivation.
This hypothesis would point away from pledges or ‘locking in’ (at least for the sake of avoiding value drift) and, I think, towards a slightly different framing of some suggestions: for example, rather than spending time with value-aligned people to “reduce the risk of value drift”, we might instead recognize that spending time with value-aligned people is an opportunity to both meet our social needs and cultivate one’s impactfulness.
Thanks for your comment! I agree with everything you have said and like the framing you suggest.
This is what I tried to address though you have expressed it more clearly than I could! As some others have pointed out as well, it might make sense to differentiate between ‘value drift’ (i.e. change of internal motivation) and ‘lifestyle drift’ (i.e. change of external factors that make implementation of values more difficult). I acknowledge that, as Denise’s comment points out, the term ‘value drift’ is not ideal in the way that Joey and I used it and that:
However, it seems reasonable to me to be concerned about, and attempt to avoid, both value and lifestyle drift, and in many cases it will be hard to draw a line between the two (as changes in lifestyle likely precipitate changes in values and vice versa).
Great posts, Joey and Darius!
I’d like to introduce a few considerations as an “older” EA (I am 43 now):
Scope of measurement: Joey’s post was based on 5-year data. As Joey mentioned, “it would take a long time to get good data”. However, it may well be that expanding the time scope would yield very different results. It is possible that a graph plotting a typical EA’s degree of involvement/commitment with the movement would not look like a horizontal line but rather like a zigzag. I base this on purely anecdotal evidence, but I have seen many people (including myself) recover interests, hobbies, passions, etc. once their children are older. I am quite new to the movement, but there is no way that 10 years ago I would have put in the time I am now devoting to EA. If I had started my involvement in college (supposing EA had been around), you could have seen a sharp decline during my thirties (and tagged that as value drift) without knowing there would be a sharp increase in my forties.
Expectations: This is related to my previous point. Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions. Keeping the initial engagement levels constant sounds good in theory, but it may not be the best strategy in the long run (e.g. potentially leading to burnout, etc). Maybe we should think of “engagement fluctuations” as something natural and to be expected instead of something dangerous that must be fought against.
EA interaction styles: If and as the median age of the community goes up, we may need to adapt the ways in which we interact (or rather add to the existing ones). It can be much harder for people with full-time jobs and children to attend regular meetings or late afternoon “socials”. How can we make it easier for people that have very strong demands on their time to stay involved without feeling that they are missing out or that they just can’t cope with everything? I don’t have an answer right now, but I think this is worth exploring.
The overall idea here is that instead of fighting an uneven involvement/commitment across time it may be better to actually plan for it and find ways of accommodating it within a “lifetime contribution strategy”. It may well be that there is a minimum threshold below which people completely abandon EA. If that it so I suggest we think of ways of making it easy for people to stay above that threshold at times when other parts of their lives are especially demanding.
Great points, thanks for raising them!
It would be very encouraging if this is a common phenomenon and many people who ‘drop out’ might potentially come back to EA ideals at some point. It provides a counterexample to something I have commented earlier:
Regarding your related point:
I strongly agree with this, which was my motivation to write the post in the first place! I don’t think constant involvement/commitment to (effective) altruism is necessary to maximise your lifetime impact. That said, it seems like for many people there is a considerable chance they never ‘find their way back’ to this commitment after they spend years/decades in non-altruistic environments, on starting a family, on settling down etc. This is why I’d generally think people with EA values in their twenties should consider ways to at least stay loosely involved/updated over the mid- to long-term to reduce the chance of this happening. So it is a great example to hear that you actually managed to do just that! In any case, more research is needed on this; I somewhat want to caution against survivorship bias, which could become an issue if we mostly talk to the people who did what is possibly exceptional (e.g. took up a strong altruistic commitment in their forties or stayed around EA for a long time).
Good points. If I were doing a write up on this subject it would be something like this:
“As the years go by, you will likely go through stages during which you cannot commit as much time or other resources to EA. This is natural and you should not interpret lower-commitment stages as failures: the goal is to maximize your lifetime contributions and that will require balancing EA with other goals and demands. However, there is a risk that you may drift away from EA permanently if your engagement is too low for a long period of time. Here are some tools you can use to prevent that from happening:”
Wonderful post! This is easily the best resource I’m aware of on ways to reduce value drift, and I anticipate sharing it with a lot of people over the years.
In my view, one of the most threatening risks to EA is value drift—not collectively, but in the sense that many of the community’s most devoted members gradually lose interest and leave. There are a lot of people whose names you can see all over the 80K/GWWC websites from material produced a few years ago, but who are no longer involved in EA in any kind of public capacity (and may not be involved at all). We’re still growing, on net, but if getting older tends to lead to drift, I can imagine us hitting a point where so many people “age out” that growth drops to roughly zero.
Something that wasn’t in your list: Helping the people who are already in your life become aligned with your values, or with the idea that you should keep your values.
The latter seems easier; it’s tough to get a random person to become truly interested in EA, but any close friend should care somewhat about your sticking to your plans and meeting your goals.
If my most religious friend told me they’d stopped going to church and felt “meh” about it, I’d be concerned for them even as an atheist, because the change might indicate that they were struggling with their life in general. If I decided to stop giving money to charity, I’d hope that my non-EA friends wouldn’t simply let the matter drop, and would at least gently ask questions that would prompt me to engage with my own beliefs and come up with a good reason that I’d abandoned something which was previously very important to me.
My version of this is keeping a journal, where I sometimes address “Future Aaron” but mostly focus on recording my beliefs/feelings as they are on any given day, trusting that Future Aaron will read those entries and feel connected to me. I haven’t yet struggled with value drift, but I have seen my journal help me recover past states of mind to become more excited/inspired/etc. I hope that it will also reduce the odds that I drift away from EA over time.
Do you know if anyone’s debriefed these folks?
Could be interesting to systematically interview people like this, to learn more about why people distance themselves from EA and to see if any generalizable trends appear.
What you’re calling “value drift,” Evangelical Christians call “backsliding.” The idea is you’ve taken steps toward a countercultural lifestyle in line with your values, but now you’re sliding back toward the mainstream—for an Evangelical Christian, an example would be binge drinking with friends. Backsliding is common and Evangelicals use many of the techniques listed above to counteract it.
Evangelicals heavily emphasize community. Christians are encouraged to attend services, join a small group Bible study, socialize with each other, and marry other Christians.
I also remember being encouraged to establish good habits and stick with them—for example, reading the Bible every morning.
We also, of course, begin with a public commitment to Christianity. And community members will pull you aside and have a chat with you (read: judge you) if they think you’re in danger of backsliding.
I’ve seen all of these strategies work, although some have undesirable side effects.
For people who worry that the list sounds onerous I am happy to report that having done many of the items my life feels better, not worse. I’d say the biggest negative has been a reduction in how much I feel I can relate to people on more normal life paths, but this feels like an additional benefit in many ways since I wind up spending more time with people doing other non-standard things.
Thank you, Joey, for gathering those data. And thank you, Darius, for providing us with the suggestions for reducing this risk. I agree that further research on the causes of value drift and how to avoid it is needed. If the phenomenon is explained correctly, that could be a great asset to EA community building. But regardless of this explanation, your suggestions are valuable.
It seems to be a complex problem in general, because retention encapsulates the phenomenon in which a person develops an identity, skill set, and consistent motivation or dedication to significantly change the course of their life. CEA, in their recent model of community building, framed it as resources, dedication, and realization.
Decreasing retention is also observed in many social movements. Some insights about how it happens can be culled from the sociological literature. Although this is still underexplored and the sociological analysis might be of mediocre quality, it might still be useful to have a look at it. For example, this analysis indicates that a “movement’s ability to sustain itself is a deeply interactive question predicted by its relationship to its participants: their availability, their relationships to others, and the organization’s capacity to make them feel empowered, obligated, and invested.”
Additional aspects of value drift to consider on an individual level that might not be relevant to other social movements: mental health and well-being, pathological altruism, purchasing fuzzies and utilons separately.
The reasons for value drift away from EA seem as important to understanding the process as the value drift that led to EA. E.g., in Joey’s post, he gave an illustrative story of Alice. What could explain her value drift is the fact that people during their first year of college are more prone to social pressure and the need for belonging. That could make her become an EA, and then drift when she left college and her EA peers. So “surround yourself with value-aligned people” for the whole course of your life. That also stresses the untapped potential of local groups outside the main EA hubs. For this reason, it’s worth considering whether, in the case of outreach, we shouldn’t rush to translate effective altruism.
About the data itself: we might be making wrong inferences in trying to explain those data, because they show only a fraction of the process. Maybe if we observed the curve of engagement over a longer period of time, it would fluctuate, e.g. 50% in the first 2-5 years, 10% in the 6th year, 1% for the next 2-3, and then coming back to 10%, 50%, etc. We might hypothesize that life situation influences the baseline engagement for a short period (1 month to 3 years). Analogous to changes in the baseline of happiness and the influence of life events explained by hedonic adaptation, maybe we have something like altruistic adaptation, which changes after a significant life event (changing city, marriage, etc.) and then comes back to baseline.
Additionally, since the level of engagement in EA and other significant variables do not correlate perfectly, the data could also be explained by regression to the mean: if some of the EAs were hardcore at the beginning, they will tend to be closer to the average on a second measurement, so from 50% to 10%, and those at 10% to 1%. Anyhow, the likelihood that value drift is real is higher than that it’s not.
More could be done about value drift on the structural level; e.g., it might also be explained by bottlenecks in the community itself, like the Mid-Tier Trap (e.g. too good for running a local group, but not good enough to be hired by the main EA organizations → multiple unsuccessful job applications → frustration → drop out).
Because the mechanism of value drift would determine the strategies to minimize its risk or harm, and because the EA community might not be representative of other social movements, we should systematically and empirically explore those and other factors in order to find the 80/20 of long-lasting commitment.
Doing effective altruistic things ≠ Doing Effective Altruism™ things
All the main Effective Altruism orgs together employ only a few dozen people. There are two orders of magnitude more people interested in Effective Altruism. They can’t all work at the main EA orgs.
There are lots of highly impactful opportunities out there that aren’t branded as EA—check out the career profiles on 80,000 Hours for reference. Academia, politics, tech startups, doing EtG in random places, etc.
We should be interested in having as high an impact as possible and not in ‘performing EA-ness’.
I do think that EA orgs dominate the conversations within the EA sphere, which can lead to this unfortunate effect where people quite understandably feel that the best thing they can do is work there (or at an ‘EA approved’ workplace like D pmind or J n Street), or nothing. That’s counterproductive and sad.
A potential explanation: it’s difficult for people to evaluate the highly impactful positions in other fields. Therefore the few organisations and firms we can all agree on are Effectively Altruistic get a disproportionate amount of attention and ‘status’.
As a community, we should try to encourage people to find the highest-impact opportunity for them out of many possible options, of which only a tiny fraction is working at EA orgs.
Thanks for your comment, Karolina!
Yep, I see engaging people & keeping up their motivation in one location as a major contribution of EA groups to the movement!
This is an interesting suggestion, though I think it unlikely. It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to ‘changing the world for the better’. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): “If you’re not a socialist at the age of 20 you have no heart. If you’re not a conservative at the age of 40, you have no head”.
This is a valuable and under-discussed point that I endorse!
Idea: the local group organisers might use something like spaced repetition to invite busy community members [say, people who are pursuing a demanding job to increase their career capital] to the social events.
Anki’s “Again”, “Hard”, “Good”, “Easy” might map to “1-on-1 over coffee in a few weeks”, “Invite to the upcoming event and pay more attention to the person”, “Invite person to the social event in 3mo”, “Invite person to the event in 6mo or to the EAG”.
Another possible metaphor here is exponential backoff.
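The scheduling idea above can be sketched in a few lines. This is a hypothetical illustration only: the intervals, rating names, and the `next_invite_weeks` helper are all made up for the sake of the example (this is not an actual Anki or forum API), with an exponential-backoff rule for members who stay engaged with little contact.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical intervals (in weeks) for each organiser rating of how a
# member's last interaction went, loosely mirroring Anki's four buttons.
BASE_WEEKS = {"again": 3, "hard": 6, "good": 12, "easy": 26}

def next_invite_weeks(rating: str, last_interval: Optional[int] = None) -> int:
    """Return the number of weeks to wait before the next invitation."""
    if rating == "easy" and last_interval:
        # Member stays engaged without much contact: back off
        # exponentially, capping at roughly a year.
        return min(last_interval * 2, 52)
    return BASE_WEEKS[rating]

def next_invite_date(rating: str, today: date,
                     last_interval: Optional[int] = None) -> date:
    """Turn the interval into a concrete date for the organiser's calendar."""
    return today + timedelta(weeks=next_invite_weeks(rating, last_interval))
```

The doubling rule is where the exponential-backoff metaphor comes in: each successful low-touch period earns a longer gap, while a rough patch resets the member to frequent contact.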
Oh, underrated comment from 3 years ago. One problem, however, is that you don’t want too many connections to go through you specifically, since it’ll overload you and possibly replace other connections they might form. People don’t have infinite bandwidth for connections, and if they only have room for one EA friend, say, you don’t want to take up that slot long-term. You may not want to permanently set yourself up as the linchpin.
In addition to Darius’s suggestions, I recommend using Murphyjitsu to generate your personal list of failure modes. Imagine yourself one/five/ten years from now, no longer being an EA. Ask yourself: what happened? Then try to think of ways to prevent this from happening.
Daniel Gambacorta has discussed value drift in two episodes of his Global Optimum Podcast (one & two) and recommends the following, which I found really helpful:
“Choose effective altruist endeavors that also grant you selfish benefits. There are a number of standard human motivators. Status, friends, mates, money, fame. When these things are on the line work actually gets done. Without these things it’s a lot harder. If your effective altruism gets you none of the things that you selfishly want, that’s going to make things harder on you. If your plan is to go off into a cave, do something brilliant and never get credit for it, your plan’s fatal flaw is you won’t actually do it. If you can’t get things you selfishly want through effective altruism, you are liable to drift towards values that better enable you to get what you selfishly want. We humans are extremely good at fulfilling selfish goals while being self-deceived about it. With this in mind, you might pick some EA endeavor which is impactful but also gets you some standard things that humans want, because you are a human and you probably want the standard things other humans want. Even if the endeavor that grants you selfish benefits is less impactful in the abstract, this could be outweighed by the chance that you actually do it, and also how much more productive you will be when you work on something that is incentivized. If you do something that grants you significant selfish benefits, you just have to watch out for optimizing for those benefits instead of effective altruism, which would of course defeat the purpose.”
This strikes me as incredibly good advice.
There’s probably something to be gained by investigating this further, but I would guess that most cases of value drift are due to a loss of willpower and motivation rather than an update of one’s opinion. I think the term value drift is a bit ambiguous here, because the stuff you mention is something we don’t really want to include in whatever term we use. Now that I think about it, I think what really makes the difference here are deeply held intuitions about the range of our moral duty, for which ‘changing your mind’ doesn’t always seem appropriate.
Thanks, Tom! I agree with you that all else being equal
though I still think that in some cases the benefits of hard-to-reverse decisions can outweigh the costs.
This seems to assume that our future selves will actually make important decisions purely (or mostly) based on their epistemic status. However, as CalebWithers points out in a comment:
If this is valid (as it seems to me), then many of the important decisions of our future selves are a result of some more or less conscious psychological drives rather than an all-things-considered, reflective and value-based judgment. It is very hard for me to imagine that my future self could ever decide to stop being altruistic or caring about effectiveness on the basis of being better informed and more rational. However, I find it much more plausible that other psychological drives could bring my future self to abandon these core values (and find a rationalization for it). To be frank, though I generally appreciate the idea of ‘being loyal to and cooperating with my future self’, it seems to me that I place a considerably lower trust in the driving motivations of my future self than many others do. From my perspective now, it is my future self that might act disloyally with regard to my current values, and that is what I want to find ways to prevent.
It is worth pointing out that in the whole article and this comment I mostly speak about high-level, abstract values such as a fundamental commitment to altruism and to effectiveness. This is what I don’t want to lose and what I’d like to lock in for my future self. As illustrated by RandomEA’s comment, I would be much more careful about attempting to tie myself to the mast with respect to very specific values such as discount rates between humans and non-human animals, specific cause area or intervention preferences etc.
It’s not enough to place a low level of trust in your future self for commitment devices to be a good idea. You also have to put a high level of trust in your current self :)
That is, if you believe in moral uncertainty, and believe you currently haven’t done a good job of figuring out the “correct” way of thinking about ethics, you may think you’re likely to make mistakes by committing and acting now, and so be willing to wait, even in the face of a strong chance your future self won’t even be interested in those questions anymore.
Say a person could check a box and commit to being vegan for the rest of their life: do you think that would be an ethical/good thing for someone to do, given what we know about average recidivism in vegans?
It could turn out to be bad. For example, say she pledges in 2000 to “never eat meat, dairy, or eggs again.” By 2030, clean meat, dairy, and eggs become near universal (something she did not anticipate in 2000). Her view in 2030 is that she should be willing to order non-vegan food at restaurants since asking for vegan food would make her seem weird while being unlikely to prevent animal suffering. If she takes her pledge seriously and literally, she is tied to a suboptimal position (despite only intending to prevent loss of motivation).
This could happen in a number of other ways:
She takes the Giving What We Can Further Pledge*, intending to prevent herself from buying unnecessary stuff, but the result is that her future self (who is just as altruistic) cannot move to a higher cost-of-living location.
She places her donation money into a donor-advised fund, intending to prevent herself from spending it non-altruistically later, but the result is that her future self (who is just as altruistic) cannot donate to promising projects that lack 501(c)(3) status.
She chooses a direct work career path with little flexible career capital, intending to prevent herself from switching to a high-earning career and keeping all the money, but the result is that her future self (who is just as altruistic) cannot easily switch to a new cause area where she would be able to have a much larger impact.
It seems to me that actions that bind you can constrain you in unexpected ways despite your intention being to only constrain yourself in case you lose motivation. Of course, it may still be good to constrain yourself because the expected benefit from preventing reduced altruism due to loss of motivation could outweigh the expected cost from the possibility of preventing yourself from becoming more impactful. However, the possibility of constraining actions ultimately being harmful makes me think that they are distinct from actions like surrounding yourself with like-minded people and regularly consuming EA content.
*Giving What We Can does not push people to take the Further Pledge.
I think you’re just denying the possibility of value drift here. If you think it exists, then commitment strategies could make sense. If you don’t, they won’t.
I disagree—I think you can believe “value drift” exists and also allow your future self autonomy.
My current “values” or priorities are different from my teenage values, because I’ve learned and because I have a different peer group now. In ten years, they will likely be different again.
Which “values” should I follow: 16-year-old me, 26-year-old me, or 36-year-old me? It’s not obvious to me that the right answer is 26-year-old me (my current values).
An easy way to gather a pool of “value drifted” people to survey could be to look at previous iterations of the EA survey and identify people who filled out the survey at some point in the past, but haven’t filled it out in the past N years. Then you could email them a special survey asking why they haven’t been filling out the survey, perhaps offering a chance to win an Amazon gift card as an incentive, and include questions about sources of value drift.
I’m confused about what is happening here. I remember reading this article a year ago, and most of the comments are almost exactly one year old. But for some reason the date of the post is “8th May 2019”, and the post appears on the first page of the forum, where it says it was posted 8 days ago. I guess some kind of bug in the forum caused the date of the post to be wrong.
I believe if you save something as a draft and then re-publish it, it changes the publication date. Darius, is that maybe what happened? If you know the original publication date, the moderators can change it to the original.
Yes, this is what happened. There are cases where it’s good to be able to have the date adjust (e.g. if you accidentally publish a post before it’s finished and want to edit and repost), but in this case, it was unintentional. I’ll change the date.
One thing I find really helpful for remaining consistent in my values is introspection, followed by writing the results down in a note, both on paper and in a text file on my PC. I’ve observed that this strategy really works for me, both for figuring out who I am and for keeping my actions consistent with that over long periods of time. I still have 70% of the notes I wrote 5 years ago, and 100% of the most important ones that form the core of all my values.
Good article in lots of ways. I’m perhaps slightly put off by the sheer amount of info here: I don’t feel like I can take all of this in easily, given my own laziness and the number of goals I already prioritise. Not sure there’s an easy solution to that (maybe a list of the two or three top suggestions?), but this does feel like a bit of an information overload. Thanks for writing it though, Darius, I enjoyed it :)
Personally, if I were to simplify this post down to the top two pieces of advice, they would be: 1) focus on doing good now, and 2) surround yourself with people who will keep encouraging you to do good long-term.
Value drift is not necessarily a bad thing.
If it gets people away from cultish movements with morally questionable ideologies, value drift is a good thing.
If you’re a college kid who drinks the Kool-Aid and then outgrows it over time, all the more power to that future self.
Grounding your spending in your own wellbeing has high information value: purchasing power allocated to your own preferences gives you tangible feedback inside your own brain, since you learn firsthand which purchases bring you utility and which don’t.
Compare this with giving money to strangers who merely promise to make the world a better place based on lots of highly questionable empirical assumptions and even more questionable moral axioms. Surely you can see the difference in information value.
Frankly, I am shocked that there are people who give away 50% of their income to effective altruist causes; the social dynamics and moral uncertainties surrounding the Effective Altruism movement don’t even remotely justify such a speculative investment.