The statement below is anecdotal; it's hard to have a fact-based argument without clearer, more up-to-date survey data.
The EA movement includes a growing number of extreme long-termists (i.e., people who hold that we should care about the trillions of humans who could come after us rather than only the 7 billion alive now). Even if AI development happens in the next 200 years rather than the next 20, a long-termist framework would still say we should prioritize that work.
I also find the above logic unsettling; there's a long philosophical argument to be had about what to prioritize and when, and even about whether we should prioritize broad topic areas like these at all.
A general critique of utilitarianism is that, without bounds, it yields extreme recommendations.
There are also weighing arguments: "we can't have a long term without a short term," and, more concretely, "people whose lives are saved now, discounted to the present, have significant power because they can help us steer the most important century in a positive direction."
Of course!
Unsolicited advice:
If this post convinces you to explore this path, I would prepare a resume and start messaging hiring managers on LinkedIn, because companies are struggling to hire; it's very much a workers' market right now.
I'd say that 80k's guide to consulting, along with other materials on this forum about professional correspondence, would be valuable in terms of approach, even though they aren't geared toward federal work specifically.