This comment is not directly related to your post: I don’t think the long-run future should be viewed as a cause area. It’s simply where most sentient beings live (or might live), and therefore it’s a potential treasure trove of cause areas (or problems) that should be mined. Misaligned AI leading to an existential catastrophe is an example of a problem that impacts the long-run future, but there are so, so many more. Pandemic risk is a distinct problem. Indeed, there are many more problems even if you’re just thinking about the possible impacts of AI.
I’d go further here and say all three (global poverty, animal rights, and the far future) are best thought of as target populations rather than cause areas. Moreover, the space not covered by these three is basically just wealthy modern humans (WMHs), which seems to be much less of a treasure trove than the other three, because WMHs have the most resources, far more than the other three populations. (Potentially there are also medium-term future beings as a distinct population, depending on where we draw the lines.)
I think EA would probably be discovering more things if we focused on looking not for new cause areas but for new specific intervention areas, comparable to individual health support for the global poor (e.g. antimalarial nets, deworming pills), individual financial help for the global poor (e.g. unconditional cash transfers), individual advocacy of plant-based eating (e.g. leafleting, online ads), institutional farmed animal welfare reforms (e.g. cage-free campaigns), technical AI safety research, and general extinction risk policy work.
If we think of the EA cause area landscape in “intervention area” terms, there seems to be a lot more change happening.
I think this is a good point; you may also be interested in Michelle’s post about beneficiary groups, my comment about beneficiary subgroups, and Michelle’s follow-up about finding more effective causes.
I guess this thought is probably implicit in a lot of EA, but I’d never quite heard it stated that way. It should be stated that way more often!
That said, I think it’s not quite precise. There’s a population missing: humans in the not-quite-far-future (e.g. 100 years from now, which I think is not usually included when people say “far future”).
I agree with Jacy. Another point I’d add is that effective altruism is a young movement, one focused on updating and changing its goals as new and better information is integrated into our thinking. This means the various causes, interventions, and research projects in the movement are constantly evolving, which makes them harder to describe.
For example, for a long time in EA, “existential risk reduction” was associated primarily with AI safety. In the last few years, ideas from Brian Tomasik have materialized in the Foundational Research Institute and its focus on “s-risks” (risks of astronomical suffering). At the same time, organizations like Allfed are focused on mitigating existential risks which could realistically occur in the medium-term future, i.e., the next few decades, but the interventions themselves aren’t as focused on the far future, i.e., at least the next few centuries out.
However, x-risk and s-risk reduction dominate in EA, with AI safety research as the favoured intervention and a focus motivated by astronomical stakes. Lumping all of that together could be called a “far future” focus. Meanwhile, 80,000 Hours advocates using the term “long-run future” for a focus on risks extending from the present into the far future, which depends on policy regarding all existential risks, including s-risks.
I think finding accurate terminology for the whole movement to use is a constantly moving target in effective altruism. Obviously, using common language optimally would be helpful, but debating and then coordinating the usage of common terminology also seems like it’d be a lot of effort. As long as everyone is roughly aware of what each other is talking about, I’m unsure how much of a problem this is. It seems professional publications from EA organizations, being longer reports which can afford the space to define terms, should do so. The EA Forum is still a blog and is regarded as lower-stakes, so I think it makes sense to be tolerant of differing terminology here, although of course clarifications or expansions upon definitions should be posted in the comments, as above.
In the same vein as this comment and its replies: I’m disposed to framing the three as expansions of the “moral circle”. See, for example: https://www.effectivealtruism.org/articles/three-heuristics-for-finding-cause-x/