(I really should ask you some questions about AI risk and policy/strategy/governance (“Policy” from now on). I was actually thinking a lot about that just before I got sidetracked by the SJ topic.)
My understanding is that aside from formally publishing papers, Policy researchers usually communicate with each other via private Google Docs. Is that right? Would you find it useful to have a public or private forum for Policy discussion similar to the AI Alignment Forum? See also Where are people thinking and talking about global coordination for AI safety?
In the absence of a Policy Forum, I’ve been posting Policy-relevant ideas to the Alignment Forum. Do you and other Policy researchers you know follow AF?
In this comment I wrote, “Worryingly, it seems that there’s a disconnect between the kind of global coordination that AI governance researchers are thinking and talking about, and the kind that technical AI safety researchers often talk about nowadays as necessary to ensure safety.” Would you agree with this?
I’m interested in your thoughts on The Main Sources of AI Risk?, especially whether any of the sources/types of AI risk listed there are new to you, whether you disagree with any of them, and whether you can suggest any additional ones.