Yes, I have seen people become more actively interested in joining or promoting projects related to AI safety. More importantly, I think it creates an AI safety culture and mentality. I’ll have a lot more to say about all of this in my (hopefully) forthcoming post on why I think promoting near-term research is valuable.
Strongly agreed that working on the near-term applications of AI safety is underrated by most EAs. Nearly all of the AI safety discussion focuses on advanced RL agents that are not widely deployed in the world today, and it's possible that these systems will not reach commercial viability soon. Misaligned AI is causing real harms today, and solving those problems would be a great step towards building the technical tools and engineering culture necessary to scale up to aligning more advanced AI.
(That's just a three-sentence explanation of a topic deserving much more detailed analysis, so I'm really looking forward to your post!)