Great piece: a great prompt to rethink things and a good digest of the implications.
If you agree that mass movement building is a priority, check out PauseAI-US.org, or donate here: https://www.zeffy.com/donation-form/donate-to-help-pause-ai
One implication I strongly disagree with is that people should be getting jobs at AI labs. I don’t see you connecting that to any actual safety impact, and I sincerely doubt that working as a researcher gives you any influence on safety at this point (if it ever did). There are definite costs to working at a lab: capture and NDA-walling. So many EAs already work at Anthropic that it is shielded from scrutiny within EA, and the attachment to Anthropic as “our player” has made it hard for many EAs to do the obvious thing and support PauseAI. Put simply: I see no meaningful path to safety impact in working as an AI lab researcher, and I see serious risks to individual and community effectiveness and mission focus.
Honestly, espionage may be the best reason to work at a lab.