Counterargument: EA AI Safety is a talent program for Anthropic.
I wish it weren’t, but that’s what’s going to continue to happen if what the community has become pushes to grow. “Make AI go well” is code for their agenda. EA may be about morality, but its answer on AI Safety is stuck, and it is wrong. Anthropic’s agenda is not “up for renegotiation” at all. If you want to fix EA AI Safety, it has to break out of the mentality 80k has done so much to put it in: that the answer is to get a high-powered job working with AI companies or otherwise “play the game”.
The good EA, the one I loved so much, was about being willing to do what was right even if it was scrappy and unglamorous (especially then, because it would be more neglected!). EA AI Safety today is sneering reviews of a book that could help rally the public, because insiders all know we’re doing this wacky Superalignment thing today and something else tomorrow, but whatever the “reason”, we always support Anthropic trying to achieve world domination. And the young EAs are scared they won’t seem elite and sophisticated unless they agree, and it breaks my heart. Getting more kids into current EA would not teach them “flexible decision-making”.
EA needs to return to its roots in a way I gave up on waiting for before it needs to grow.
No, I’m just concerned that the overwhelming effect of training EAs to do safety work that’s highly dependent on where the frontier labs are is that they end up working at frontier labs. In theory there’s plenty of helpful technical work to do, but in practice working at a frontier lab is the attractor. There are also knock-on effects on EA as a culture and movement when working at frontier labs is a primary occupation for top talent.