The EA movement’s values are drifting. You’re allowed to stay put.
Context: A slew of slightly disorganized thoughts I’ve had in mind for some time, but I’m not sure how obvious the ideas are to others. Relatively unpolished but posting anyway.
Years ago, I did a research project on value drift in EA. One of the things I asked participants was what would make them leave the movement. A couple of them mentioned the possibility of the EA movement’s values drifting.
At the time, this seemed kind of weird to me. Where could EA values drift to anyway? The only two values I could think of that define EA are effectiveness and altruism, and it seemed weird that we wouldn’t notice if one or both fell through.
As I’ve watched the movement change and grow over the past six years, I’ve realized it’s a little more complicated than that. There are lots of different ways people’s values affect how they pursue Effective Altruism. And I can’t help but notice that the values of the core of the EA movement are shifting. I’m hardly the first person to point out that things are changing, but most of what I’ve seen focuses on the increased funding. I see a few more differences:
Longtermism — and perhaps implicitly, x-risk avoidance — as a dominant value, if not the only value. (This one’s been elaborated on at length already.)
Less risk aversion. EA’s drastically increased involvement in policy and politics seems like the most obvious example, but in my opinion the poor optics of highly paid junior roles at “altruistic” nonprofits is an underrated risk too.
Similarly, a lower proportion of funding towards evidence-backed ideas, and more funding for long-shot projects, where much or most of the expected value comes from a small chance of big success.
Less impartiality. This is the one that concerns me the most. As more funding goes to community-building, a lot of money seems to be going to luxuries EAs don’t need, which feels very far from EA’s origins.
I’m not saying these changes are all bad, but they certainly do not reflect the values of the movement as a whole, and I want things to stay that way. But I worry that as the movement moves more and more firmly toward these values, others might be ostracized for staying where the movement was ten years, five years, or even one year ago.
My main point is: I think EAs think that leaving EA is value drift, and value drift is bad, and therefore leaving EA is a bad thing. But with the changes we’re seeing, people might be leaving EA as a means of staying within their values. And that’s okay.
By no means do I think that old EA values are superior to new ones. I’m grateful for the people challenging the status quo in EA and leading to progress. But I also want to make sure we still value the community members who don’t change their minds because of what’s trending in EA, and who still hold the values that we thought were good values 10 years ago and are likely still good values today.
Am in agreement with most of your post, except for one thing: calling these changes to our values.
The following is the beginnings of a chain of thinking that isn’t fully fleshed out, but I hope is useful. All word choices are probably suboptimal. I don’t hold the implications of these views very strongly or at all, I’m mostly trying to puzzle things out and provide arguments I think are somewhat strong or compelling.
All the things you mention don’t seem like values to me; they seem more like strategies or approaches to doing good.
“Core” values are things like truth-seekingness, epistemic humility or maximizing impact or something, whereas for example “cause neutrality” and by extension “longtermism” are downstream of that.
But we also have “secondary values” (terrible wording), which are influenced by our core values, our worldview, and specific (cognitive) beliefs about how the world works (these influence each other but are somewhat independent).
I can see a version of EA where the core values → longtermism chain becomes replaced with just longtermism as a default (just as, in current EA, the core values → helping people in developing countries chain is something of a default; I don’t think people very often come into EA strongly opposing this value set. This isn’t a bad thing: these are the low-hanging fruit).
Why are core & secondary values important to distinguish?
People who are on board with the changes don’t see the shifted values as conflicting with the core values; they see them as a natural progression of the core values. Just as we thought that “everyone matters” leads to “donate to help improve the lives of poor people in developing countries”, so too does “everyone matters” lead to “future people should be our priority”.
Implication: people reading this post may say “this isn’t value drift”
I think our core values are really important and the real glue of our community, a glue that will withstand the test of time and ideally let us adapt, change, and grow as we get new information and update our beliefs.
Maybe this is too idealistic, and in practice saying “but we share the same core values”, even if true, is simply not enough.
In practice, the level of secondary values can be more useful: maybe technical AI safety researchers and farmed animal welfare advocates just don’t have that much in common, or the inferential distance is a bit too great in terms of their models of the world, impact, risk aversion, etc.
Maybe related is that even for ideal expected utility maximizers, values and subjective probabilities are impossible to disentangle by observing behavior. So it’s not always easy to tell what changes are value drift vs epistemic updates.
While I understand the point you’re making, the comment you linked is (to my non-STEM mind) pretty hard to parse. Would you be able to give a less technical, more ELI5 explanation?
Sure, here’s the ELI12:
Suppose that there are two billionaires, April and Autumn. Originally they were funding AMF because they thought working on AI alignment would be 0.01% likely to work and solving alignment would be as good as saving 10 billion lives, which is an expected value of 1 million lives, lower than you could get by funding AMF.
After being in the EA community a while they switched to funding alignment research for different reasons.
April updated upwards on tractability. She thinks research on AI alignment is 10% likely to work, and solving alignment is as good as saving 10 billion lives.
Autumn now buys longtermist moral arguments. Autumn thinks research on AI alignment is 0.01% likely to work, and solving alignment is as good as saving 10 trillion lives.
Both of them assign the same expected utility to alignment: 1 billion lives. As such they will make the same decisions. So even though April made an epistemic update and Autumn a moral one, we cannot distinguish them from behavior alone.
This extends to a general principle: actions are driven by a combination of your values and subjective probabilities, and any given action is consistent with many different combinations of utility function and probability distribution.
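The April/Autumn arithmetic above can be checked in a few lines of Python. This is just a sketch of the example in the text; the numbers come from the example and the function name is mine:

```python
def expected_lives_saved(p_success: float, value_if_success: float) -> float:
    """Expected value of funding alignment research, in lives saved."""
    return p_success * value_if_success

# Original shared view: 0.01% chance of success, worth 10 billion lives.
original = expected_lives_saved(0.0001, 10e9)   # ~1 million lives

# April: epistemic update (tractability up to 10%, same moral weight).
april = expected_lives_saved(0.10, 10e9)        # ~1 billion lives

# Autumn: moral update (same tractability, value up to 10 trillion lives).
autumn = expected_lives_saved(0.0001, 10e12)    # ~1 billion lives

print(f"original: {original:,.0f}, April: {april:,.0f}, Autumn: {autumn:,.0f}")
```

Since April and Autumn compute the same expected value, any observer who only sees funding decisions cannot tell which of them updated on probabilities and which on values.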
As a second example, suppose Bart is an investor who makes risk-averse decisions (say, invests in bonds rather than stocks). He might do this for two reasons:
He would get a lot of disutility from losing money (maybe it’s his retirement fund)
He irrationally believes the probability of losing money is higher than it actually is (maybe he is biased because he grew up during a financial crash).
These different combinations of probability and utility inform the same risk-averse behavior. In fact, probability and utility are so interchangeable that professional traders (just about the most calibrated, rational people with regard to the probability of losing money, and who are only risk-averse for reason 1) often model financial products as if losing money is more likely than it actually is, because it makes the math easier.
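The Bart example can also be made concrete. In this sketch (all numbers are hypothetical), one agent has accurate beliefs but a concave utility function, while the other has linear utility but pessimistic beliefs; both end up preferring the safe bond:

```python
import math

WEALTH = 100
STOCK_UP, STOCK_DOWN = 150, 60   # stock outcomes; the bond just keeps WEALTH

def prefers_bond(p_down: float, utility) -> bool:
    """True if the agent's expected utility favors the safe bond over the stock."""
    eu_stock = (1 - p_down) * utility(STOCK_UP) + p_down * utility(STOCK_DOWN)
    return utility(WEALTH) > eu_stock

# Reason 1: accurate beliefs (p_down = 0.5) but concave (log) utility,
# so losses hurt more than equivalent gains help.
print(prefers_bond(0.5, math.log))      # risk-averse via utility

# Reason 2: linear utility but pessimistic beliefs (p_down = 0.6):
# expected wealth is 0.4*150 + 0.6*60 = 96 < 100.
print(prefers_bond(0.6, lambda w: w))   # risk-averse via probability

# For contrast: a risk-neutral agent with accurate beliefs takes the
# stock, since 0.5*150 + 0.5*60 = 105 > 100.
print(prefers_bond(0.5, lambda w: w))
```

Both parameterizations produce the same bond-buying behavior, so the choice alone doesn’t reveal whether it stems from the utility function or the probability estimate.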
Thanks, this is helpful, and potentially a useful top-level post.
Very valid! I guess I’m thinking of this as shifts in how EA “values” [verb] rather than in EA’s “values” [noun]. I think most if not all of the most abstract values EA holds are still in place, but the distinction between core and secondary values is important.
This was mainly a linguistic comment because I find that sometimes people disagree with a post if the terminology used is wrong, so I wanted to get ahead of that. I think I probably could have been more clear that I think you’ve identified something important and true here, and I am somewhat concerned about how memes spread and wouldn’t want people who haven’t updated along those lines to feel less like they are part of the EA community.
I agree with this goal hierarchy framework. It’s super useful to appreciate that many of one’s personal goals are just extrapolations of, and mental shortcuts for, more distilled upstream goals.
While I agree with Vaidehi’s comments on whether “value drift” is the right descriptor, I think it’s true that the distribution of in-practice priorities has probably shifted.
As someone who endorses the overall shift towards longtermist priorities, I still do agree with this post. I think it’s important people be thinking for themselves and not getting tugged along with social consensus.
I think the challenge is that the recent changes can be described in a number of different ways:
Object-level changes to the fields, disciplines, or industries we focus on, which is a priorities shift
Changes in attitudes and behaviors regarding spending, which maybe could be described as a lifestyle shift (and relatedly, the increasing importance ascribed to EA time, which could be a bit of a values shift)
A more ambitious and less risk-averse attitude, which is maybe a culture shift
I’m not quite sure how I’d summarise these changes in one phrase or word, but these things in combination do create a certain… “aesthetic” that feels coherent: I could create a “2022 EA starter pack” meme that would probably capture the above pretty accurately.
Suggestion to change to “Less proportion of funding towards”, since the total amount of funding to e.g. GiveWell-backed charities has increased; there is still more funding going towards that overall (at least as of now, though that may change in the near future).
Good catch, thank you!
Personally, I dislike the title framing on at least two counts:
As Vaidehi mentioned, I don’t think that “values” have shifted, but rather that the implementation/interpretation of fairly steady values (e.g., truth-seeking/open-mindedness, impact-maximization) has shifted or at least expanded.
“Drift” has a connotation (if not outright denotation) of “drifting [away] [[from the ideal/target]]” whereas I think that it’s mainly just fair to say “interpretations are shifting.”
One person’s Value Drift is another person’s Bayesian Updating
Finally: +1 for posting anyway, I appreciate it. I find alternative framings of ideas I’ve heard before, and things I don’t fully agree with, really useful (more so than ideas I agree with, actually) for teasing out what I actually think and clarifying my thoughts on complicated topics.
I strongly agree with this!
It’s ok to leave EA, but I really hope these community members feel welcome to stay here. Just donating to GiveWell charities is still effective altruism. “Big tent” effective altruism is very important (particularly right now).