As someone who cares deeply about the safety and flourishing of living beings, I think the default position toward existential risk should be to assume a technology is a risk until it can be demonstrated that it isn’t.
We don’t have any direct experience of an AGI/ASI, but theoretically, it could increase its own intelligence and effective power exponentially over a timescale that is very short by human standards. Furthermore, since an AGI is by definition more intelligent than the average human in most domains (with an ASI exceeding any human’s capabilities), I doubt we as humans can make any strong statements about what such a machine can or can’t do. In light of all this, and given my view that x-risks associated with a technology should be assumed to exist until proven otherwise, it seems rational to call for a global pause on AI development and a ban on new models, at least until more research can be done to determine the inherent safety of self-improving, agentic machine intelligence systems.
I agree that molecular nanotechnology could make AI much riskier, but an unfriendly AGI wouldn’t need nanobots to eliminate humanity. Nuclear warheads, power plant meltdowns, engineered pandemics and the like are all means by which it could accomplish that goal.
Again, I respect that you don’t see a path for an AGI to defeat humanity. I just want to remind you of the stakes here. If you (and others in the pro-AI camp) are wrong, our entire species dies (possibly along with all complex life on Earth). This isn’t really something we can afford to gamble on.
“I doubt we as humans can make any strong statements about what such a machine can or can’t do.”
Yes, actually, we can. It can’t move faster than the speed of light. It can’t create an exact simulation of my brain without a brain scan. It can’t invent working nanotechnology without a lab and a metric shit-ton of experimentation.
Intelligence is not fucking magic. Being very smart does not let you bypass the laws of physics, logistics, or computational complexity.
Nuclear warheads require humans to push the button. Engineered pandemics face a tradeoff: highly lethal diseases burn themselves out before they can kill everyone, and highly transmissible diseases are not as deadly. Merely killing 95% of humanity would not be enough to defeat us. The AI needs electricity; we don’t.
You will not be able to shut down AI development with such incredibly weak arguments and no supporting evidence.
I am all for safety and research. But if you want to advocate for drastic action, you need to actually make a case for it. And that means not handwaving away the obvious questions, like “how on earth could an AI kill everyone, when everyone has a pretty strong interest in not being killed and is willing to take drastic action to prevent it?”