Fantastic post! Thank you very much for writing it.
Personally, I’d add the Foundational Research Institute, which has released a few AI safety-related papers in the last year:
Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention
How Feasible is the Rapid Development of Artificial Superintelligence?
Backup utility functions as a fail-safe AI technique
As well as a number of draft blog posts that will eventually be incorporated into a strategy paper charting various possibilities for AI risk, somewhat similar to GCRI’s “A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis”, which you mentioned in your post.
Hey Kaj! This isn’t meant as criticism, but in the future, maybe add a disclosure?
Oh, sure. I figured it’d be obvious enough from the links that it wouldn’t need to be mentioned explicitly, but yeah, I work for FRI.