On the margin, should donors prioritize AI safety above other existential risks and broad longtermist interventions?
To the extent that this question overlaps with Mauricio's question 1.2 (roughly: a bunch of people argue that "AI stuff is important" but believe or act as if "AI stuff is overwhelmingly important"; what are the arguments for the latter view?), you might find his answer helpful.
Other x-risks and longtermist areas, like s-risks, seem relatively unexplored and neglected.
This is only a partial answer, but it's worth noting that I think the most plausible source of s-risk is messing up on AI stuff.
Equally important for longtermists, I think, is the new requirement that the Commission consider updating the definition of AI, and the list of high-risk systems, every year. If you buy that adaptive/flexible/future-proof governance will be important for regulating AGI, then this looks good.
(The basic argument for this instance of adaptive governance is something like: AI progress is fast and will only get faster, so having the relevant sections of regulation come up for mandatory review periodically is a good idea, especially since policymakers are busy and such reviews don't tend to happen by default.)
Relevant part of the doc: