“Taking AI Risk Seriously” – Thoughts by Andrew Critch
Link post
I wrote this several months ago for LessWrong, but it seemed useful to have it crossposted here.
It's a writeup of several informal conversations I had with Andrew Critch (of the Berkeley Existential Risk Initiative) about which considerations are important for taking AI Risk seriously, based on his understanding of the AI landscape. (The landscape has changed slightly in the past year, but I think most of the concerns are still relevant.)