The statement below is anecdotal; I think it’s hard to have a fact-based argument without clearer or more up-to-date survey data.
The EA movement includes an increasing number of extreme long-termists (i.e., people who hold that we should care about the trillions of humans who come after us rather than the roughly 7 billion alive now). If AI development happens even in the next 200 years (and not 20), then we would still want to prioritize that work under a long-termist framework.
I also find the above logic unsettling; there’s a long philosophical argument to be had regarding what to prioritize and when, and even whether we should prioritize broad topic areas like these at all.
A general critique of utilitarianism is that without bounds, it results in some extreme recommendations.
There are also weighing arguments: “we can’t have a long term without a short term” and, more concretely, “people whose lives are saved now, discounted to the present, have significant power because they can help us steer the most important century in a positive direction.”