EA is Insufficiently Value Neutral in Practice
I think EA should be a value neutral movement. That is, it should be a large umbrella for folks seeking to do effective good based on what they value. This means that some folks in EA will want to be effective at doing things they think are good but you think are not good, and vice versa. I think this is not only okay but desirable, because EA should be in the business of effectiveness and good doing, not deciding for others what things they should think are good.
Not everyone agrees. Comments on a few recent posts come to mind that indicate there’s a solid chunk of folks in EA who think the things they value are truly best, not just their best attempt at determining what things are best. Some evidence:
I posted a question about effective giving for abortion rights in America. It was controversial to say the least.
A structurally identical question that was not so controversial because it was more within the realm of things most EAs value.
There’s a lot of heat in the comments around the idea that EAs might seek office as Republicans rather than Democrats in the US.
The longstanding debate among EA’s various cause areas about what matters more: animal welfare, global poverty, x-risks, biorisk, etc.
On the one hand, it’s good to ask whether the object-level work we think is good actually does good by our values. And it’s natural to come up with theories that try to justify which things are good. And yet in practice I find EA leaves out a lot of potential cause areas that some people value and could pursue more effectively.
To get really specific about this, here are some cause areas that are outside the Overton window for EAs today but that matter to some people in the world, and that those people could reasonably want to pursue more effectively:
spreading a religion like Christianity that teaches that those who don’t convert will face extreme suffering for eternity
changing our systems of organizing labor to be more humane, e.g. creating a communist utopia
civilizing “barbarian” peoples
engaging in a multigenerational program to improve the human genome via selective breeding
All of these ideas, to my thinking, are well outside what most EAs would tolerate. If I were to write a post about how the most important cause area is spreading Buddhism to liberate all beings from suffering, I don’t think anyone would take me very seriously. If I were to do the same but for spreading Islam to bring peace to all peoples, I’d likely get stronger opposition.
Why? Because EA is not in practice value neutral. This is not exactly a novel insight: many EAs, and especially some of the founding EAs, are explicitly utilitarians of one flavor or another. This is not a specific complaint about EAs, though: this is just how humans are by default. We get trapped by our own worldviews and values, suffer from biases like the typical mind fallacy, and are quick to oppose things that stand in opposition to our values because it means we, at least in the short term, might get less of what we want.
Taking what we think is good for granted is a heuristic that served our ancestors well, but I think it’s bad for the movement. We should take things like metaethical uncertainty and the unilateralist’s curse (and the meta-unilateralist’s curse?) seriously. And if we do so, that means leaving open the possibility that we’re fundamentally wrong about what would be best for the world, or what “best” even means, or what we would have been satisfied with “best” having meant in hindsight. Consequently, I think we should be more open to EAs working towards things that they think are good because they value them, even though we might personally value exactly the opposite. This seems more consistent with a mission of doing good better rather than doing some specific good better.
The good news is people in EA already do this. For example, I think x-risks are really important and dominate all other concerns. If I had $1bn to allocate, I’d allocate all of it to x-risk reduction and none of it to anything else. Some people would think this is a tragedy because people alive today could have been saved using that money! I think the even greater tragedy is not saving the much larger number of potential future lives! But I can co-exist in the EA movement alongside people who prioritize global health and animal welfare, and if that is possible, we should be able to tolerate even more people who value things even more unlike what we value, so long as what they care about is effective marginal good doing, whatever they happen to think good is.
As I see it, my allies in this world aren’t so much the people who value what I value. Sure, I like them. But my real allies are the people who are willing to apply the same sort of methods to achieve their ends, whatever their ends may be. Thus I want these people to be part of EA, even if I think what they care about is wrong. Therefore, I advocate for a more inclusive, more value neutral EA than the one we have today.
ETA: There’s a point that I think is important but I didn’t make explicit in the post. Elevating it from the comments:
It’s not that I think EAs must support things they disagree with at the object level, but that metaethical uncertainty implies we should have an uncomfortable willingness to “help our ‘enemies’” at the meta level even as we oppose them at the object level.
To expand a bit, I analogize this to supporting free speech in a sort of maximalist way. That is, not only do I think we should have freedom of speech, but also that we should help people make the best arguments for things they want to say, even if we disagree with those things. We can disagree on the object level, but at the meta level we should all try to benefit from common improvements to processes, reasoning, etc.
I want disagreements over values to stay firmly rooted at the object level if possible, or maybe only one meta level up. Go up enough meta levels to the concept of doing effective good, for whatever you take good to be, and we become value neutral. For example, I want an EA where people help each other come up with the best case for their position, even if many find it revolting, and then disagree with that best case on the object level rather than trying to do an end run around actually engaging with it and sabotaging it by starving it at the meta level. As far as I’m concerned, elevating the conflict past the object level is cheating and epistemically dishonest.
I like the post. Well-written and well-reasoned. Unfortunately, I don’t agree — not at all.
A (hopefully) useful example, inspired by Worley’s thoughts, my mother, and Richard’s stinging question, respectively. Look at the following causes:
X-risk Prevention
Susan G. Komen Foundation
The Nazi Party
All three of the above would happily accept donations. Those who donate only to the first would probably view the values of the second cause as merely different from their own values, but they’d probably view the values of the third cause as opposing their own set of values.
Someone who donates to x-risk prevention might think that breast cancer awareness isn’t a very cost-effective form of charity. Someone who values breast cancer awareness might think that extinction from artificial intelligence is absurd. They wouldn’t mind the other “wasting their money” on x-risk prevention/breast cancer awareness — but both would (hopefully) find that the values of the Nazi Party are in direct opposition to their own values, not merely adjacently different.
The dogma that “one should fulfill the values one holds as effectively as possible” ignores the fundamental question: what values should one hold? Since ethics isn’t a completed field, EA sticks to — and should stick to — things that are almost unquestionably good: animals shouldn’t suffer, humanity shouldn’t go extinct, people shouldn’t have to die from malaria, etc. Not too many philosophers question the ethical value of preventing extinction or animal suffering. Benatar might be one of the few who disagrees, but even he would still probably say that relieving an animal’s pain is a good thing.
TL;DR: Doing something bad effectively… is still bad. In fact, it’s not just bad; it’s worse. I’d rather the Nazi Party were an ineffective mess of an institution than a data-driven, streamlined organization. This post seems to emphasize the E and ignore the A.
Agreed entirely. There is a large difference between “We should coexist alongside not maximally effective causes” and “We should coexist with causes we actively oppose.” I think a good test for this would be:
You have one million dollars, and you can only do one of two things with it—you can donate it to Cause A, or you can set it on fire. Which would you prefer to do?
I think we should be happy to coexist with (and encourage effectiveness for) any cause for which we would choose to donate the money. A longtermist would obviously prefer a million dollars go to animal welfare than be wasted. Given this choice, I’d rather a million dollars go to supporting the arts, feeding local homeless people, or improving my local churches, even though I’m not religious. But I wouldn’t donate this money to the Effective Nazism idea that other people have mentioned—I’d rather it just be destroyed. Every dollar donated to them would be a net bad for the world in my opinion.
Hmm, I think these arguments comparing to other causes are missing two key things:
they aren’t sensitive to scope
they aren’t considering opportunity cost
Here’s an example of how that plays out. From my perspective, the value of the very large number of potential future lives dwarfs basically everything else. Like the value of worrying about most other things is close to 0 when I run the numbers. So in the face of those numbers, working on anything other than mitigating x-risk is basically equally bad from my perspective because that’s all missed opportunity in expectation to save more future lives.
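To make that concrete, here is a toy version of that arithmetic with entirely hypothetical numbers (illustrative placeholders, not anyone’s actual estimates): suppose a marginal grant shifts extinction probability by \(\Delta p = 10^{-10}\) and there are \(N = 10^{16}\) potential future lives at stake, while the same money could save roughly \(10^{3}\) present lives directly. Then

\[
\mathbb{E}[\text{future lives saved}] \approx \Delta p \cdot N = 10^{-10} \times 10^{16} = 10^{6} \;\gg\; 10^{3} \approx \mathbb{E}[\text{present lives saved}],
\]

so on these assumptions everything else rounds to nearly zero by comparison.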
But I don’t actually go around deriding people who donate to breast cancer research as if they had donated to Nazis, even though, when compared in scope to mitigating x-risks and the missed opportunity for more x-risk mitigation, they did approximately similarly “bad” things from my perspective. Why?
I take their values seriously. They have a right to value what they want, even if I disagree. I don’t personally have to help them, but I also won’t oppose them unless they come into object-level conflict with my own values.
Actually, that last sentence makes me realize a point I failed to make in the post! It’s not that I think EAs must support things they disagree with at the object level, but that metaethical uncertainty implies we should have an uncomfortable willingness to “help our ‘enemies’” at the meta level even as we oppose them at the object level.
“As I see it, my allies in this world aren’t so much the people who value what I value. Sure, I like them. But my real allies are the people who are willing to apply the same sort of methods to achieve their ends, whatever their ends may be.”
This to me seems absurd. Let us imagine two armies, one from country A and the other from country B, both of which use air raids as their methodology. The army from country A wants to invade country B and vice versa. Do you view these armies as being allies because they use the same methodology?
In an important sense, yes!
To take an example of opposing armies, consider the European powers between, say, 1000 CE and 1950 CE. They were often at war with each other. Yet they were clearly allies in the sense that they agreed the European way was best and that, in various conflicts, some Europeans should clearly win and not others. This was clear during, for example, the various wars between powers to preserve monarchy and Catholic rule. If I’m Austria, I still want to fight the neighboring Catholic powers ruled by a king to gain land, but I’d rather be fighting them than Protestant republics!
As I see it, an object-level battle does not necessarily make someone my enemy; they may in fact be my willing ally once we step back from object-level concerns. Phrased in terms of ideas: every time, I’d rather make friends with folks who apply similar methods of rationality and epistemology but disagree with me on object-level conclusions than with people who happen to agree with me but don’t share my methods, because I can talk and reason with people who share my methods. If the object-level-agreeing, method-disagreeing “allies” turn on me, I have no recourse to shared methods.
Can you define methodology? If you are defining the term so broadly that monarchy, Catholic rule, and republics are methodologies, then you don’t have to bite the bullet on the “effective Nazi” objection. You can simply say, “fascism is a methodology I oppose.” However, at this point the term seems so broad that your objection to EA fails to have meaning.
I don’t think this example holds up to historical scrutiny, but it’s so broad I don’t know how to argue on that front, so I’m simply going to agree to disagree.
You can seek to understand other people’s philosophical assumptions and work within those parameters.
Would you really want to ally with Effective Nazism?
Strict value neutrality means not caring about the difference between good and evil. I think the “altruism” part of EA is important: it needs to be directed at ends that are genuinely good. Of course, there’s plenty of room for people to disagree about how to prioritize between different good things. We don’t all need to have the exact same rank ordering of priorities. But that’s a very different thing from value neutrality.
I’d bite the bullet and say “yes”. I disagree with Nazism, but to be intellectually consistent I have to accept that even beliefs about what is good that I find personally unpalatable deserve consideration. This is very similar to my stance on free speech: people should be allowed to say things that I disagree with, and I’m generally in favor of making it easier for people to say things, including things I disagree with.
To your point about not caring about the difference between good and evil, this sort of misses the point I’d like to make. How do you know what is good and evil? Well, you made some value judgment, and that judgment is yours. Even if you’re a moral realist, the fact remains that you’re discovering moral facts and can be mistaken about the facts. Since all we have access to is the claims people make about what they believe is best, we’re limited in how prescriptive we can be without risking, e.g., punishing ourselves if moral fashion changes.
A) No — to be intellectually consistent, you wouldn’t merely have to claim that Nazism deserves consideration. You’d have to actively support an anti-Semitic person donating to the Nazi Party and ensuring that it functions as efficiently as possible to eradicate Jewish people.[1] Correct me if I’m wrong, but your post didn’t seem to stop at wanting just a discussion of values — it pushed for action to increase the effectiveness of whatever values someone else held, even if those values are counter to your own.
B) Why do you think beliefs you find personally unpalatable deserve consideration — or, at least, how much consideration is necessary? Was the Holocaust insufficient consideration of the ideals of Nazism? Do you believe we should leave the Final Solution on the table as a way of pursuing ethical good in the world? These aren’t “gotcha” questions — given that you responded “yes” to Richard’s incisive question, I’d legitimately like to see how far your intellectual consistency will take you.
Agreed. This is a key question, and I think Richard avoids this thorny problem in his comment. However, the fact that the field of ethics hasn’t come to a conclusion about which system of values we should hold doesn’t imply a free-for-all. We may not (yet?) know what is objectively good and evil, or even whether ethics is objective or exists in the first place, but we can still aim for the good and away from the bad.
I’m excited to hear out your answer — you have a lot of interesting takes, and you have an easy-to-follow writing style.
I’m Jewish. I’m a descendant of Holocaust survivors. My father is a Holocaust scholar. I’m attending a conference on the Holocaust tomorrow. I’m not offended by the employment of the Nazi Party as an example, but if someone else is, I’d be happy to edit this post and change the example to something else — either shoot me a direct message or simply reply to this chain.
To your footnote, I’m not sure how many people are directly uncomfortable, but I do find arguments that roughly boil down to “but what about Nazis?” lazy, as they try to run around the discussion by pointing to a thing that will make most readers go “Nazis bad, I agree with whatever says ‘Nazis bad’ most strongly!” This doesn’t mean thinking Nazis are bad is an unreasonable position, only that it looms so large it swamps many people’s ability to think clearly.
Rationalists tend to taboo comparing things to Nazis or using Nazis as an example for this reason, but not all EAs are rationalists, and Nazism is a specific point in idea space that almost everyone will agree is bad. I’m also pretty sure we could cook up even worse views that even more people would disagree with (cf. the baby eaters of Three Worlds Collide).
?
Maybe the formatting of your comment cut off the later portions. It seems like your response to my comment only included a discussion of my end note. To be clear, my end note was meant as merely a side-conversation, only tangentially related to the main body of the comment.
I’ll be generous in assuming that it was merely a formatting error — I wouldn’t want to assume that you ignored the main points of my comment in favor of writing only about my relatively unimportant end note.
I await your response to the content of my comment! :)
There is (or, at least, ought to be) a big gap between “considering” a view and “allying” with it. If you’re going to ally with any view no matter its content, there’s no point in going to the trouble of actually thinking about it. Thinking is only worthwhile if it’s possible to reach conclusions that differ depending on the details of what’s considered.
Of course we’re fallible, but that doesn’t entail radical skepticism (see: any decent intro philosophy text). Whatever premises you think lead to the conclusion “maybe Nazism is okay after all,” you should have less confidence in those philosophical premises than in the opposing conclusion that actually, genocide really is bad. So those dubious premises can’t rationally be used to defeat the more-credible opposing conclusion.
Thank you for this. I think it’s worth discussing which kinds of moral views are compatible with EA. For example, in chapter 2 of The Precipice, Toby Ord enumerates five moral foundations for caring about existential risk (also discussed in this presentation).
So I find it strange and disappointing that we make little effort to promote longtermism to people who don’t share the EA mainstream’s utilitarian foundations.
Similarly, I think it’s worth helping conservationists figure out how to conserve biodiversity as efficiently as possible, perhaps alongside other values such as human and animal welfare, even though biodiversity is not something inherently valued by utilitarianism and seems to conflict with improving wild animal welfare (WAW). I have moral uncertainty as to the relative importance of biodiversity and WAW, so I’d like to see society try to optimize both and come to a consensus about how to navigate the tradeoffs between the two.