Hmm, I think these arguments comparing to other causes are missing two key things:
they aren’t sensitive to scope
they aren’t considering opportunity cost
Here’s an example of how that plays out. From my perspective, the value of the very large number of potential future lives dwarfs basically everything else; when I run the numbers, the value of worrying about most other things is close to 0. So in the face of those numbers, working on anything other than mitigating x-risk is roughly equally bad from my perspective, because it’s all missed opportunity, in expectation, to save more future lives.
But I don’t actually go around deriding people who donate to breast cancer research as if they had donated to Nazis, even though, measured by scope and by the forgone opportunity to mitigate x-risk, they did something approximately as “bad” from my perspective. Why?
I take their values seriously. I don’t agree, but they have a right to value what they want. I don’t personally have to help them, but I also won’t oppose them unless they come into object-level conflict with my own values.
Actually, that last sentence makes me realize a point I failed to make in the post! It’s not that I think EAs must support things they disagree with at the object level, but that metaethical uncertainty implies we should have an uncomfortable willingness to “help our ‘enemies’” at the meta level even as we oppose them at the object level.