Thanks for making this post, I think this sort of discussion is very important.
It seems to me (predictably given the introduction) that far and away the most valuable thing EA has done is the development and promotion of cause prioritisation as a concept.
I disagree with this. Here’s an alternative framing:
EA’s big ethical ideas are 1) reviving strong, active, personal moral duties, 2) longtermism, 3) some practical implications of welfarism that academic philosophy has largely overlooked (e.g. the moral importance of wild animal suffering, mental health, simulated consciousnesses, etc).
I don’t think EA has had many big empirical ideas (by which I mean ideas about how the world works, not just ideas involving experimentation and observation). We’ve adopted some views about AI from rationalists (imo without building on them much so far, although that’s changing), some views about futurism from transhumanists, and some views about global development from economists. Of course there are a lot of people in those groups who are also EAs, but it doesn’t feel like many of these ideas have been developed “under the banner of EA”.
When I think about successes of “traditional” cause prioritisation within EA, I mostly think of things in the former category, e.g. the things I listed above as “practical implications of welfarism”. But I think that longtermism in some sense screens off this type of cause prioritisation. For longtermists, surprising applications of ethical principles aren’t as valuable, because by default we shouldn’t expect them to influence humanity’s trajectory, and because we’re mainly using a maxipok strategy.
Instead, from a longtermist perspective, I expect that the biggest breakthroughs in cause prioritisation will come from understanding the future better, and from identifying levers of large-scale influence that others aren’t already fighting over. AI safety would be the canonical example; the post on reducing the influence of malevolent actors is another good example. However, we should expect this to be significantly harder than the types of cause prioritisation I discussed above. Finding new ways to be altruistic is very neglected. But lots of people want to understand and control the future of the world, and it’s not clear how distinct doing this selfishly is from doing this altruistically. Also, futurism is really hard.
So I think a sufficient solution to the case of the missing cause prioritisation research is: more EAs are longtermists than before, and longtermist cause prioritisation is much harder than other cause prioritisation and doesn’t play to EA’s strengths as much. That said, I do think it’s possible, and I plan to put up a post on this soon.
For longtermists, surprising applications of ethical principles aren’t as valuable, because by default we shouldn’t expect them to influence humanity’s trajectory, and because we’re mainly using a maxipok strategy.
Aiming for maxipok doesn’t mean not influencing the trajectory (if the counterfactual is catastrophe); it just means impact is much harder to measure. And if measuring impact is hard, de-risking becomes more important, because of path-dependency. If we build out one or two particular longtermist cause areas really strongly and with lots of confidence, they’ll have a lot of momentum (organisations and so on), and if we later find out that they’re having negative impact or no impact (or worse, this happens and we never find out), that will be bad.
I agree longtermist cause prioritisation is harder, though I didn’t think your reasons were very well articulated (in particular, I don’t understand why you’re comparing altruism with understanding and controlling the future; that seems like apples and oranges to me, and surely it’s the intersection of the two that has the market gap), but I don’t think it’s less valuable.