Paul Christiano argues here that AI would only need to have "pico-pseudokindness" (caring about humans one part in a trillion) to take over the universe without trashing Earth's environment to the point of uninhabitability, and that at least this amount of kindness is likely.
Doesn't Paul Christiano also have a p(doom) of around 50%? (To me, this suggests "maybe", rather than "likely".)
See the reply to the first comment on that post. Paul's probability for "most humans die from AI takeover" is 11%. There are other bad scenarios he considers, like losing control of the future, or most humans dying for other reasons, but my understanding is that the 11% most closely corresponds to doom from AI.
Fair. But the other scenarios making up the ~50% are still terrible enough for us to Pause.