An unsafe AGI can kill far, far more than even the worst air accident. It can kill more conscious beings than train crashes, shipwrecks, terror attacks, pandemics, and even nuclear wars combined. It can kill every sentient being on Earth and render the planet permanently uninhabitable for biological life. AI (and more specifically AGI/ASI) could also find a way to leave Earth, eventually consuming other sentient beings in different star systems, even in the absence of superluminal travel.
A lot of people say this, but I have never seen any compelling evidence to back this claim up. To be clear, I’m referring to the claim that an AI could achieve this in a short amount of time without being noticed and stopped.
As far as I know, not a single big-name AI researcher, not even those most concerned with AI safety, believes in FOOM (a nigh-unbounded intelligence explosion). I have looked extensively at molecular nanotech research, and I do not believe it can be invented in a short amount of time by a non-godlike AI.
Without molecular nanotech, I do not see a reliable path for an AGI to defeat humanity. Every other method seems to me heavily luck-based.
As someone who cares deeply about the safety and flourishing of living beings, I personally think the default position toward existential risk should be to assume something is a risk until it can be demonstrated that it isn’t.
We don’t have any direct experience of an AGI/ASI, but in theory it could increase its intelligence and effective power exponentially over a very short (by human standards) span of time. Furthermore, since an AGI is by definition more intelligent than the average human in most domains (with an ASI exceeding any human’s capabilities), I doubt we as humans can make any strong statements about what such a machine can or can’t do. In light of all this, and my view that x-risks associated with a technology should be assumed to exist until proven otherwise, it seems rational to call for a global pause on AI development and a ban on new models, at least until more research can determine the inherent safety of self-improving, agentic machine intelligence systems.
I agree that molecular nanotechnology could make AI much riskier, but an unfriendly AGI wouldn’t need nanobots to eliminate humanity. Nuclear warheads, power plant meltdowns, engineered pandemics, and so on are all means by which it could accomplish that goal.
Again, I respect that you don’t see a path for an AGI to defeat humanity. I just want to remind you of the stakes here. If you (and others in the pro-AI camp) are wrong, our entire species dies (possibly along with all complex life on Earth). This isn’t really something we can afford to gamble on.
“I doubt we as humans can make any strong statements about what such a machine can or can’t do.”
Yes, actually, we can. It can’t move faster than the speed of light. It can’t create an exact simulation of my brain with no brain scan. It can’t invent working nanotechnology without a lab and a metric shit-ton of experimentation.
Intelligence is not fucking magic. Being very smart does not give you a bypass to the laws of physics, or logistics, or computational complexity.
Nuclear warheads require humans to push the button. Engineered pandemics face a tradeoff: highly lethal diseases burn themselves out before killing everyone, and highly transmissible diseases are not as deadly. Merely killing 95% of humanity would not be enough to defeat us. The AI needs electricity; we don’t.
You will not be able to shut down AI development with such incredibly weak arguments and no supporting evidence.
I am all for safety and research. But if you want to advocate for drastic action, you need to actually make a case for it. And that means not handwaving away the obvious questions, like “how on earth could an AI kill everyone, when everyone has a strong interest in not being killed and is willing to take drastic action to prevent it?”