Finally, embracing domain-specific effective altruism diversifies the movement's portfolio of potential impact.
There is no need for a more diverse portfolio. There is no evidence to suggest that there are causes higher in expected value than those already being worked on. If anything, the most effective way to maximise the expected value of the EA portfolio is cause prioritisation research, but that is already considered one of the most impactful causes.
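As a toy illustration of this point (all numbers hypothetical): if per-dollar expected values were known and returns were roughly linear, concentrating everything on the single highest-EV cause would beat any diversified split.

```python
# Toy model (all numbers hypothetical): with known per-dollar expected
# values and linear returns, concentrating the whole budget on the single
# highest-EV cause beats any diversified split.
causes_ev_per_dollar = {"cause_A": 5.0, "cause_B": 3.0, "cause_C": 1.5}
budget = 1_000_000

# Diversified: split the budget equally across all three causes.
diversified_ev = sum(
    ev * budget / len(causes_ev_per_dollar)
    for ev in causes_ev_per_dollar.values()
)

# Concentrated: everything goes to the highest-EV cause.
concentrated_ev = max(causes_ev_per_dollar.values()) * budget

print(f"diversified EV:  {diversified_ev:,.0f}")   # 3,166,667
print(f"concentrated EV: {concentrated_ev:,.0f}")  # 5,000,000
```

In this toy model, diversification only starts to pay once diminishing returns or uncertainty about the estimates enters the picture.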
Even now, within the EA movement, there are disagreements about the highest-potential causes to champion. Indeed, one could argue that domain-specific effective altruist organizations already exist.
People have different values and draw different conclusions from evidence, but this is hardly an argument for branching out into further causes for which most people agree there is little evidence of high impact.
Take, for example, Animal Charity Evaluators (ACE) or the Machine Intelligence Research Institute (MIRI), both of which are considered effective altruist organizations by the Centre for Effective Altruism. Animal welfare and the development of “friendly” artificial intelligence are both considered causes of interest for the EA movement. But how should they be evaluated against each other? And more to the point, if it were conclusively determined that friendly AI was the optimal cause to focus on, would ACE and other animal welfare EA charities shut down to avoid diverting attention and resources away from friendly AI? Or vice versa?
If it were conclusively determined (which is unrealistic) that cause X (in this case friendly AI) is better than cause Y (in this case animal welfare), then yes, everyone who can should switch, since switching would increase their marginal impact.
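A minimal sketch of the switching logic, with made-up numbers: people keep moving from Y to X for as long as one more person in X does more good than the last person in Y was doing.

```python
# Minimal sketch (all numbers hypothetical) of the marginal-impact logic:
# keep moving people from cause Y to cause X while one more person in X
# does more good than the last person in Y was doing.
def marginal_impact(base, workers):
    # Toy diminishing-returns curve: the (n+1)-th worker adds base/(1+n).
    return base / (1 + workers)

workers_x, workers_y = 10, 100   # current allocation (made up)
BASE_X, BASE_Y = 500.0, 200.0    # X judged the more promising cause

while workers_y > 0 and (
    marginal_impact(BASE_X, workers_x) > marginal_impact(BASE_Y, workers_y - 1)
):
    workers_x += 1
    workers_y -= 1

print(workers_x, workers_y)  # settles at 79, 31: the margins have equalised
```

Under strictly linear returns everyone would indeed switch; the diminishing-returns curve here just shows where the flow would stop.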
If you don’t believe that there are other valuable causes out there, or that cause X can be conclusively determined to be better than cause Y, then why do you think cause prioritization research is a valuable use of EA resources?
Yes, I should have phrased these things more clearly.
a) The evidence we currently have suggests that the usual EA causes have a far higher impact than other causes. That is the entire reason EA works on them: they do the most good per unit of time invested.
Indeed, there might be even better causes, but the most effective way to find them is to look for them as efficiently as possible, which is what cause prioritisation research does. Spreading EA-thinking in other domains doesn't provide nearly as much data.
b) I just meant that we probably won't be 100% sure of anything, but I agree that we could find overwhelming evidence for an incredibly high-impact opportunity. Hence the need for cause prioritisation research.
Spreading EA-thinking in other domains doesn’t provide nearly as much data
I really disagree with this. I think it would result in dramatically more data compared to the alternative, especially if each of those domains is doing its own within-cause prioritization.
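To make my claim concrete, here is a hedged toy simulation (every parameter invented): a single central team can only evaluate so many interventions, while many domain-embedded teams doing their own within-cause prioritisation collectively evaluate an order of magnitude more, making them more likely to surface the best option if impact is heavy-tailed.

```python
# Hedged toy simulation (all parameters invented): does distributed,
# within-domain prioritisation surface better interventions than one
# central research team with the same per-team capacity?
import random

random.seed(0)

N_DOMAINS = 10
PER_TEAM_CAPACITY = 50  # interventions each team can evaluate

def best_found(n_evaluated):
    """Screen n interventions; impacts drawn from a heavy-tailed distribution."""
    return max(random.lognormvariate(0, 2) for _ in range(n_evaluated))

central_best = best_found(PER_TEAM_CAPACITY)      # 50 data points total
distributed_best = max(                           # 500 data points total
    best_found(PER_TEAM_CAPACITY) for _ in range(N_DOMAINS)
)

print(f"central best found:     {central_best:.1f}")
print(f"distributed best found: {distributed_best:.1f}")
```

Of course this assumes the central team has no capacity advantage and that impact estimates transfer equally well across domains; the point is only about the volume of data generated.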