Strongly agreed that working on near-term applications of AI safety is underrated by most EAs. Nearly all of the AI safety discussion focuses on advanced RL agents that are not widely deployed in the world today, and it's possible that such systems will not reach commercial viability anytime soon. Meanwhile, misaligned AI is causing real harms today, and solving those problems would be a great step towards building the technical tools and engineering culture needed to align more advanced AI.
(That's just a three-sentence sketch of a topic deserving much more detailed analysis, so I'm really looking forward to your post!)