“Between 1 and 10%” also feels surprisingly low to me for general AI-related catastrophes. I, at least, would have thought that experts were less optimistic than that.
But pending clarification, I wouldn’t put much weight on this estimate, given that the interviews mentioned in the 80k problem area profile you link to seem to have informed the entire problem profile rather than this estimate specifically. So it’s not clear e.g. whether the interviews included a question about all-things-considered risk of AI-related catastrophe that was put to Nick Bostrom, an anonymous leading professor of computer science, Jaan Tallinn, Jan Leike, Miles Brundage, Nate Soares, and Daniel Dewey.
Good point, I’ll send a message to Robert Wiblin asking for clarification.