On the margin, should donors prioritize AI safety above other existential risks and broad longtermist interventions?
To the extent that this question overlaps with Mauricio’s question 1.2 (i.e., “A bunch of people seem to argue for ‘AI stuff is important’ but believe / act as if ‘AI stuff is overwhelmingly important’—what are arguments for the latter view?”), you might find his answer helpful.
Other x-risks and longtermist areas, such as s-risks, seem relatively unexplored and neglected.
Only a partial answer, but it's worth noting that I think the most plausible source of s-risk is messing up on AI stuff.