Those all seem like important risks to me, but I’d estimate the highest x-risk from agentic systems that learn to seek power or wirehead, especially after a transition to very rapid economic or scientific progress. If AI progresses slowly or is only a tool used by human operators, x-risk seems much lower to me.
A good recent post on various failure modes: https://www.lesswrong.com/posts/mSF4KTxAGRG3EHmhb/ai-x-risk-approximately-ordered-by-embarrassment