A lot of forms of global utilitarianism do tend to converge on the ‘big 3’ cause areas of Global Health & Development, Animal Welfare, and Global Catastrophic Risks. If you generally value things like ‘saving lives’ or ‘reducing suffering’, you’ll usually end up at one of these (and most people seem to decide between them based on risk tolerance, assumptions about non-human moral value, or tractability—rather than outcome values). Under this perspective, it could be reasonable to dismiss cause areas that don’t fit into this value framework.
But this highlights where I think part of the problem lies, which is that value systems that lie outside of this can be good targets for effective altruism. If you value biodiversity for its own sake, it’s not unreasonable to ask ‘how can we save the greatest number of valuable species from going extinct?’. Or you might be a utilitarian, but only interested in a highly specific outcome, and ask ‘how can I prevent the most deaths from suicide?’. Or ‘how can I prevent the most suffering in my country?’—which you might not even do for value-system reasons, but because you have tax credits to maximise!
I wish EA were more open to this, especially as a movement that recognises the value of moral uncertainty. IMHO, some people in that biodiversity loss thread are a bit too dismissive, and I think we’ve probably lost some valuable partners because of it! But I understand the appeal of wanting easy answers, and not spending too much time overthinking your value system (I feel the same!).
Thanks for the link! The person who posted may not have been a newcomer to EA, but it’s a perfect example of the kind of thread that I was thinking may repel newbies, or slightly discourage them from even asking. I really agree with what you say; there is something to dig into there.
I agree with you that EA often implicitly endorses conclusions, and that this can be pernicious and sometimes confusing to newcomers. Here’s a really interesting debate on whether biodiversity loss should be an EA cause area, for example.