I agree with Jacy. Another point I’d add is that effective altruism is a young movement, one focused on updating and changing its goals as new and better information is integrated into our thinking. This means the movement’s various causes, interventions, and research projects are constantly evolving in ways that make them harder to describe.
For example, for a long time in EA, “existential risk reduction” was associated primarily with AI safety. In the last few years, ideas from Brian Tomasik have materialized in the Foundational Research Institute and its focus on “s-risks” (risks of astronomical suffering). At the same time, organizations like ALLFED are focused on mitigating existential risks that could realistically occur in the medium-term future, i.e., the next few decades, but the interventions themselves aren’t as focused on the far future, i.e., at least the next few centuries.
However, x-risk and s-risk reduction in EA are dominated by AI safety research as the favoured intervention, with a focus motivated by astronomical stakes. Lumping all of that together could be called a “far future” focus. Meanwhile, 80,000 Hours advocates using the term “long-run future” for a focus on risks extending from the present to the far future which depend on policy regarding all existential risks, including s-risks.
I think finding accurate terminology for the whole movement to use is a constantly moving target in effective altruism. Obviously, optimal use of common language would be helpful, but debating and then coordinating the use of common terminology also seems like a lot of effort. As long as everyone is roughly aware of what everyone else is talking about, I’m unsure how much of a problem this is. Professional publications out of EA organizations, being longer reports that can afford the space to define terms, should do so. Since the EA Forum is still a blog, and thus regarded as lower-stakes, I think it makes sense to be tolerant of differing terminology, though of course clarifications or expansions of definitions should be posted in the comments, as above.