Has anyone fully dug into the AI credulity risk? By that I mean the x-risk scenario where a technologist A) believes they've invented AGI and B) acts on that belief in Ozymandian fashion. Note that A does not require actually inventing AGI; it only requires that they believe it to be so, like the recent Google employee. And note that B does not require evil intent; often, in fact, evil is done out of a desire to do good. I believe this is a nontrivial risk, and one compounded by the availability of powerful technologies: the potential future upside of AGI could blind even well-intentioned people into committing horrific deeds in service of the brave new world. The logic is as simple as it is elegant and horrifying: what is a finite number of present human lives against an infinity of lifetimes in the beautiful garden of a well-aligned AGI? Wouldn't the real monster be the person who doesn't do whatever is necessary to ensure that future? What wouldn't be justified to create such a utopia?