If this is derailing the convo feel free to ignore, but what do you mean concretely by the distinction between “near-termist” and “long-termist” cause areas here? Back in spring 2022, when some pretty senior (though not decision-critical) politicians were calling for NATO to establish a no-fly zone over Ukraine, preventing nuclear catastrophe seemed pretty “near-termist” to me?
I also suspect many if not most AIXR people in the Bay think that AI Alignment is a pretty “near-term” concern for them. Similarly, concerns about Shrimp Welfare have focused only on here-and-now effects, not the ‘long-run digital shrimp’ strawshrimp I sometimes see on social media.
Is near-term/long-term thinking actually capturing a clean cause area distinction? Or is it just a ‘vibe’ distinction? I think your clearest definition is:
Many people are drawn to the clear and more palatable idea that we should devote our lives to doing the most good to humans and animals alive right now
But to me that’s a philosophical debate, right, or a matter of perspective? Looking at 80K’s top 5 list, I could easily see individuals in each area arguing that theirs is a near-term cause.
To be clear, I actually think I agree with a lot of what you say so I don’t want to come off as arguing the opposite case. But when I see these arguments about near v long termism or old v new EA or bednets v scifi EA, it just doesn’t seem to “carve nature at its joints” as the saying goes, and often leads to confusion as people argue about different things while using the same words.
Thanks, those are some fair points. I think I am just using language that others use, so there is some shared understanding, even though it carries a lot of fuzziness with it, like you say.
Maybe we can think of better language than the “near term”/“long term” framing, or just be more precise.
Open Phil had this issue; they now use ‘Global Health & Wellbeing’ and ‘Global Catastrophic Risks’, which I think capture the substantive focus of each area.