This advice seems to be targeted particularly at college students / undergraduates / people early in their careers (based on Section 2), and I expect many undergraduates will read this post.
Your post links to two articles written from Eliezer Yudkowsky’s / MIRI’s perspective on AI alignment, which is a particularly dire perspective on alignment research (and, importantly, not the only one). Also, several people working on alignment do in fact have plans (link to Vanessa Kosoy), even if they are skeptical those plans will work.
The way these articles are linked treats them as an accepted view, or presents them in a fairly unnuanced way, which seems concerning, especially when coupled with the framing of “we have to save the world” (which Benjamin Hilton has commented on).
A comment from a friend (I’ve paraphrased a bit):