I like the post. Well-written and well-reasoned. Unfortunately, I don’t agree — not at all.
A (hopefully) useful example, inspired by Worley’s thoughts, my mother, and Richard’s stinging question, respectively. Look at the following causes:
X-risk Prevention
Susan G. Komen Foundation
The Nazi Party
All three of the above would happily accept donations. Those who donate only to the first would probably view the second cause's values as merely different from their own, but the third's as actively opposed to them.
Someone who donates to x-risk prevention might think that breast cancer awareness isn’t a very cost-effective form of charity. Someone who values breast cancer awareness might think that extinction from artificial intelligence is absurd. They wouldn’t mind the other “wasting their money” on x-risk prevention/breast cancer awareness — but both would (hopefully) find that the values of the Nazi Party are in direct opposition to their own values, not merely adjacently different.
The dogma that “one should fulfill the values one holds as effectively as possible” ignores the fundamental question: what values should one hold? Since ethics isn’t a completed field, EA sticks to — and should stick to — things that are almost unquestionably good: animals shouldn’t suffer, humanity shouldn’t go extinct, people shouldn’t have to die from malaria, etc. Not too many philosophers question the ethical value of preventing extinction or animal suffering. Benatar might be one of the few who disagrees, but even he would still probably say that relieving an animal’s pain is a good thing.
TL;DR: Doing something bad effectively… is still bad. In fact, it’s not just bad; it’s worse than doing it ineffectively. I’d rather the Nazi Party were an ineffective mess of an institution than a data-driven, streamlined organization. This post seems to emphasize the E and ignore the A.
Agreed entirely. There is a large difference between “We should coexist alongside causes that aren’t maximally effective” and “We should coexist with causes we actively oppose.” I think a good test for this would be:
You have one million dollars, and you can only do one of two things with it—you can donate it to Cause A, or you can set it on fire. Which would you prefer to do?
I think we should be happy to coexist with (and encourage effectiveness for) any cause to which we would choose to donate the money. A longtermist would obviously prefer that a million dollars go to animal welfare rather than be wasted. Given this choice, I’d rather a million dollars go to supporting the arts, feeding local homeless people, or improving my local churches, even though I’m not religious. But I wouldn’t donate this money to the Effective Nazism idea that other people have mentioned—I’d rather it just be destroyed. In my opinion, every dollar donated to them would be a net bad for the world.
Hmm, I think these comparisons to other causes are missing two key things:
they aren’t sensitive to scope
they aren’t considering opportunity cost
Here’s an example of how that plays out. From my perspective, the value of the very large number of potential future lives dwarfs basically everything else; the value of worrying about most other things is close to 0 when I run the numbers. So in the face of those numbers, working on anything other than mitigating x-risk is basically equally bad from my perspective, because it’s all missed opportunity, in expectation, to save more future lives.
But I don’t actually go around deriding people who donate to breast cancer research as if they had donated to Nazis, even though, when compared in scope to mitigating x-risks and the opportunity cost of not mitigating more of them, they did approximately similarly “bad” things from my perspective. Why?
I take their values seriously. I don’t agree, but they have a right to value what they want. I don’t personally have to help them, but I also won’t oppose them unless they come into object-level conflict with my own values.
Actually, that last sentence makes me realize a point I failed to make in the post! It’s not that I think EAs must support things they disagree with at the object level, but that metaethical uncertainty implies we should have an uncomfortable willingness to “help our ‘enemies’” at the meta level even as we might oppose them at the object level.