I think that at least 80% of the AI safety researchers at MIRI, FHI, CHAI, OpenAI, and DeepMind would currently assign a >10% probability to this claim: “The research community will fail to solve one or more technical AI safety problems, and as a consequence there will be a permanent and drastic reduction in the amount of value in our future.”
If you’re still making this claim now, want to bet on it? (We’d first have to operationalize who counts as an “AI safety researcher”.)
I also think it wasn’t true in Sep 2017, but I’m less confident about that, and it’s not as easy to bet on.
(I'm emailing with Rohin and will report back, e.g. if we end up checking this with a survey.)
Results are in this post.