I took that from a Kelsey Piper writeup here, assuming she was summarizing some study:
“Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.”
The hyperlink goes to an FHI paper that appears just to summarize various risks, so it’s unclear what her source was for the “most.” I’d be curious to know as well. She does stress the greater variance of outcomes and the uncertainty surrounding AI, writing that “Our predictions about climate change are more confident, both for better and for worse,” so maybe my distillation should acknowledge that too.