Reasons to have hope

This is a very short post noting some recent developments that make me hopeful about the future of AI safety work. Most of them reflect increased attention to AI safety concerns, which I think is likely to be good, though you might disagree.

  1. Eliezer Yudkowsky was invited to give a TED talk and received a standing ovation.

  2. The NSF announced a $20 million request for proposals for empirical AI safety research.

  3. 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development.

  4. AI safety concerns have received increased media coverage.

  5. ~700 people applied for AGI Safety Fundamentals in January.

  6. FLI’s open letter has received 27,572 signatures to date.


Remember: The world is awful. The world is much better. The world can be much better.