From an altruistic cause prioritization perspective, existential risk seems to require longtermism, including potentially fanatical views (see Christian Tarsney, Rethink Priorities). It seems like we should give some weight to causes that are non-fanatical.
This seems clearly incorrect to me, and I’m surprised to see the claim fronted prominently in a highly upvoted comment. It also strikes me as uncharitable to invoke the “fanatical” frame.
Prioritizing x-risk merely requires thinking that the risk of existential catastrophe is close enough in time. As I argued in “The far future is not just the far future” (https://forum.effectivealtruism.org/posts/X5aJKx3f6z5sX2Ji4/the-far-future-is-not-just-the-far-future):
It’s a widely held belief in the existential risk reduction community that we are likely to see a great technological transformation in the next 50 years. A technological transformation will either cause flourishing, existential catastrophe, or other forms of large change for humanity. The next 50 years will matter directly for most currently living people. Existential risk reduction and handling the technological transformation are therefore not just questions of the ‘far future’ or the ‘long-term’; it is also a ‘near-term’ concern.
Note that I wrote this short piece in 2020, before ChatGPT. Even then, I used “50 years” to work with a conservative time frame. Personally, I might have said 20-30 years back in 2020; now, in 2025, I might say 10 years.