“no other literal X-risk” seems too strong. There are certainly some potential ways that nuclear war or a bioweapon could cause human extinction. They’re not just catastrophic risks.
In addition, catastrophic risks don’t just involve massive immediate suffering. They drastically change global circumstances in ways that will have knock-on effects on whether, when, and how we build AGI.
All that said, I directionally agree with you. I think all longtermists should probably have a model of how their work affects the prospects for aligned AGI, and that they should seriously consider switching to working more directly on AI, even if their competencies appear to lie elsewhere. I just think your post takes this point too far.
I think this is a bit too strong of a claim. It is true that the overwhelming majority of value in the future is determined by whether, when, and how we build AGI. I think it is also true that a longtermist trying to maximize impact should, in some sense, be doing something that affects whether, when, or how we build AGI.
However, I think your post is too dismissive of working on other existential risks. Reducing the chance that we all die before building AGI increases the chance that we build AGI at all. While a nuclear war before AGI is unlikely, it is quite possible that a person very well-suited to reducing nuclear risk could reduce total x-risk more by doing that than by working more directly on AI.