EA is Insufficiently Value Neutral in Practice

I think EA should be a value neutral movement. That is, it should be a large umbrella for folks seeking to do effective good based on what they value. This means some folks in EA will want to be effective at doing things they think are good but you think are not, and vice versa. I think this is not only okay but desirable, because EA should be in the business of effectiveness and good doing, not deciding for others what they should think is good.

Not everyone agrees. Comments on a few recent posts come to mind that indicate there’s a solid chunk of folks in EA who think the things they value are truly best, not just their best attempt at determining what things are best. Some evidence:

On the one hand, it’s good to ask if the object-level work we think is good actually does good by our values. And it’s natural to come up with theories that try to justify which things are good. And yet in practice I find EA leaves out a lot of potential cause areas that people value and could pursue more effectively.

To get really specific about this, here are some cause areas that are outside the Overton window for EAs today but that matter to some people in the world and that they could reasonably want to pursue more effectively:

  • spreading a religion like Christianity that teaches that those who don’t convert will face extreme suffering for eternity

  • changing our systems of organizing labor to be more humane, e.g. creating a communist utopia

  • civilizing “barbarian” peoples

  • engaging in a multigenerational program to improve the human genome via selective breeding

All of these ideas, to my thinking, are well outside what most EAs would tolerate. If I were to write a post about how the most important cause area is spreading Buddhism to liberate all beings from suffering, I don’t think anyone would take me very seriously. If I were to do the same but for spreading Islam to bring peace to all peoples, I’d likely get stronger opposition.

Why? Because EA is not in practice value neutral. This is not exactly a novel insight: many EAs, and especially some of the founding EAs, are explicitly utilitarians of one flavor or another. This is not a specific complaint about EAs, though: this is just how humans are by default. We get trapped by our own worldviews and values, suffer from biases like the typical mind fallacy, and are quick to oppose things that stand in opposition to our values because it means we, at least in the short term, might get less of what we want.

Taking for granted what we think is good is a heuristic that served our ancestors well, but I think it’s bad for the movement. We should take things like metaethical uncertainty and the unilateralist’s curse (and the meta-unilateralist’s curse?) seriously. And if we do so, that means leaving open the possibility that we’re fundamentally wrong about what would be best for the world, or what “best” even means, or what we would have been satisfied with “best” having meant in hindsight. Consequently, I think we should be more open to EAs working towards things that they think are good because they value them even though we might personally value exactly the opposite. This seems more consistent with a mission of doing good better rather than doing some specific good better.

The good news is people in EA already do this. For example, I think x-risks are really important and dominate all other concerns. If I had $1bn to allocate, I’d allocate all of it to x-risk reduction and none of it to anything else. Some people would think this is a tragedy because people alive today could have been saved using that money! I think the even greater tragedy is not saving the much larger number of potential future lives! But I can co-exist in the EA movement alongside people who prioritize global health and animal welfare, and if that is possible, we should be able to tolerate even more people who value things even more unlike what we value, so long as what they care about is effective marginal good doing, whatever they happen to think good is.

As I see it, my allies in this world aren’t so much the people who value what I value. Sure, I like them. But my real allies are the people who are willing to apply the same sort of methods to achieve their ends, whatever their ends may be. Thus I want these people to be part of EA, even if I think what they care about is wrong. Therefore, I advocate for a more inclusive, more value neutral EA than the one we have today.

ETA: There’s a point that I think is important but I didn’t make explicit in the post. Elevating it from the comments:

It’s not that I think EAs must support things they disagree with at the object level, but that metaethical uncertainty implies we should have an uncomfortable willingness to “help our ‘enemies’” at the meta level even as we might oppose them at the object level.

To expand a bit, I analogize this to supporting free speech in a sort of maximalist way. That is, not only do I think we should have freedom of speech, but also that we should help people make the best arguments for things they want to say, even if we disagree with those things. We can disagree on the object level, but at the meta level we should all try to benefit from common improvements to processes, reasoning, etc.

I want disagreements over values to stay firmly rooted at the object level if possible, or maybe only one meta level up. Go up enough meta levels to the concept of doing effective good, for whatever you take good to be, and we become value neutral. For example, I want an EA where people help each other come up with the best case for their position, even if many find it revolting, and then disagree with that best case on the object level rather than trying to do an end run around actually engaging with it and sabotaging it by starving it at the meta level. As far as I’m concerned, elevating the conflict past the object level is cheating and epistemically dishonest.