Thanks for your post, I broadly agree with your main point, and I really value the emphasis on transparency and honesty when introducing people to EA.
That said, I think the way you frame cause neutrality doesn’t quite match how most people in the community understand it.
To me, cause neutrality doesn’t mean giving equal weight to all causes. It means being open to any cause and then prioritizing them based on principles like scale, neglectedness, and tractability. That process will naturally lead to some causes getting much more attention and funding than others, and that’s not a failure of neutrality, but a result of applying it well.
So when you say we’re “pretending to be more cause-neutral than we are,” I think that’s a bit off. I get that this may sound like semantics, and I agree it’s a problem if people are told EA treats all causes equally, only to later discover the community is heavily focused on a few. But that’s exactly why I think a principle-first framing is important. We should be clear that EA takes cause neutrality seriously as a principle, and that many people in the community, after applying that principle, have concluded that reducing catastrophic risks from AI is a top priority. We should also be clear that this conclusion might change with new evidence or reasoning, even though the underlying approach stays the same.
Strong agree: cause neutrality should not at all imply an even spread of investment. I do in fact think AI is the most pressing cause, given my values and empirical beliefs.
Yes, I agree.
The OP seems to talk about cause-agnosticism (uncertainty about which cause is most pressing) or cause-divergence (focusing on many causes).
But EA is not cause-agnostic or cause-neutral at the moment.