In general it doesn’t seem logical to me to bucket cause areas as either “longtermist” or “neartermist”.
I think this bucketing can paint an overly simplistic image of EA cause prioritization that is something like:
Are you longtermist?
If so, prioritize AI safety, maybe other x-risks, and maybe global catastrophic risks
If not, prioritize global health or factory farming, depending on your view of how much non-human animals matter compared to humans
But really the situation is way more complicated than this, and I don’t think the simplification is accurate enough to be worth spreading.
There was a time when I thought ending factory farming was the highest priority, motivated by a longtermist worldview.
There was also a time when I thought bio-risk reduction was the highest priority, motivated by a neartermist worldview.
(Now I think AI-risk reduction is the highest priority regardless of what I think about longtermism.)
When thinking through cause prioritization, I think most EAs (including me) over-emphasize the importance of philosophical considerations like longtermism or speciesism, and under-emphasize the importance of empirical considerations like AI timelines, how much effort it would take to make bio-weapons obsolete, or which diseases cause the most intense suffering.
Agreed! And we should hardly be surprised to see such a founder effect, given that EA was started by philosophers and philosophy fans.