Since releasing his report on existential risk from power-seeking AI, his estimate of AI x-risk has increased from 5% to >10%. Why?