You’re misinterpreting what a moratorium would involve. I think you should read my post, where I outlined what I think a reasonable pathway would be: not stopping completely forever, but a negotiated agreement that restricts the more powerful, by-default dangerous systems and allows only those shown to be safe.
Edit to add: “unlike nukes a single AI escaping doesn’t end the world” ← Disagree on both fronts. A single nuclear weapon won’t destroy the world, while a single misaligned and malign superintelligent AI, if created and let loose, almost certainly will; it doesn’t need a hospitable environment.
So there is one model from nukes that might carry over. Do you know about Permissive Action Links (PALs) and the weak-link/strong-link design methodology? These are technologies for reducing rogue use of nuclear warheads, and the concept was shared with the USSR so that they could choose to make their own warheads safe from unauthorized use.
Major AI labs could design software frameworks and tooling that make AI models, even models at ASI capability levels, less likely to escape or misbehave, and then release that tooling.
Compliance would be voluntary, but, like the Linux kernel, the tooling might in practice be used by almost everyone.
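To make that concrete, here is a minimal sketch (my own illustration, not any lab’s actual framework) of what such tooling could look like: an allowlist gate that every tool call from the model must pass through, with a hash-chained audit log so tampering is evident. The `ActionGate` and `Policy` names are hypothetical.

```python
import hashlib
import json
import time

# Hypothetical sketch of an "escape-resistant" tool-call gate a lab could
# publish as open tooling. Names (ActionGate, Policy) are illustrative only.

class Policy:
    def __init__(self, allowed_tools, network_allowlist):
        self.allowed_tools = set(allowed_tools)          # tools the model may call
        self.network_allowlist = set(network_allowlist)  # hosts the model may reach

class ActionGate:
    def __init__(self, policy, audit_path="audit.log"):
        self.policy = policy
        self.audit_path = audit_path
        self.prev_hash = "0" * 64  # hash-chain the log so tampering is evident

    def _log(self, record):
        record["prev"] = self.prev_hash
        line = json.dumps(record, sort_keys=True)
        self.prev_hash = hashlib.sha256(line.encode()).hexdigest()
        with open(self.audit_path, "a") as f:
            f.write(line + "\n")

    def request(self, tool, args):
        record = {"t": time.time(), "tool": tool, "args": args}
        allowed = tool in self.policy.allowed_tools
        if tool == "http_get":
            # network access is additionally restricted to allowlisted hosts
            allowed = allowed and args.get("host") in self.policy.network_allowlist
        record["allowed"] = allowed
        self._log(record)
        if not allowed:
            raise PermissionError(f"tool call {tool!r} denied by policy")
        # ... dispatch to the real tool implementation here ...
        return f"executed {tool}"

# Usage: the model only ever sees gate.request(), never the raw tools.
gate = ActionGate(Policy(allowed_tools={"calculator"}, network_allowlist=set()))
print(gate.request("calculator", {"expr": "2+2"}))
```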
As for the second point, no. Your argument has a hidden assumption that is not supported by evidence or credible AI scientists.
The evidence is that models exhibiting human-scale abilities need human-scale compute and memory (within an order of magnitude). The physical hardware racks needed to support this are enormous and are not available outside AI labs. Were we to restrict the retail sale of certain kinds of training accelerator chips, and especially of high-bandwidth interconnects, we could limit the places where human-level-plus AI can exist to data centers at known addresses.
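A rough back-of-envelope sketch of why, assuming a model in the ~1-trillion-parameter range at 16-bit precision (an illustrative figure, not a measurement of any particular model):

```python
# Back-of-envelope: why "human-level" models currently live in data centers.
# The parameter count below is an assumed illustrative figure.

params = 1e12            # assumed ~1 trillion parameters
bytes_per_param = 2      # fp16/bf16 weights
weight_bytes = params * bytes_per_param

h100_hbm_bytes = 80e9    # one H100 has 80 GB of HBM
gpus_for_weights = weight_bytes / h100_hbm_bytes

print(f"weights alone: {weight_bytes / 1e12:.1f} TB")
print(f"H100s just to hold the weights: {gpus_for_weights:.0f}")
# In practice you need several times this for KV cache, activations, and
# serving throughput, all linked by high-bandwidth interconnect -- hence
# racks at known addresses, not a gaming PC.
```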
Your hidden assumption is that optimizations will shrink these requirements. But if you consider not just “AGI” but “ASI”, the amount of hardware needed to support superhuman-level cognition probably scales nonlinearly with capability.
If you want a model that can find an action with a better expected value than a human-level model’s choice 90 percent of the time (so the model is, loosely, 10 times smarter in utility), it probably needs more than 10 times the compute. Returns are probably logarithmic: to find a better action 90 percent of the time you have to explore a vastly larger possibility space, and you need the compute and memory to do that.
This is probably provable as a theorem, but the science isn’t there yet.
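A toy illustration of that intuition, not the missing theorem: assume the human-level agent and the stronger agent both sample candidate actions from the same value distribution, and the stronger agent simply searches more. The best of N i.i.d. samples beats one independent sample with probability N/(N+1), so each additional “nine” of win probability costs roughly another order of magnitude of search.

```python
import random

# Toy search model (an illustration, not a theorem): the "human-level" agent
# draws 1 candidate action; the stronger agent draws N candidates from the
# same value distribution and keeps the best. All values are i.i.d. uniform.

def win_rate(n_candidates, trials=20_000):
    wins = 0
    for _ in range(trials):
        strong = max(random.random() for _ in range(n_candidates))
        human = random.random()
        wins += strong > human
    return wins / trials

for n in (9, 99, 999):
    # Analytically the win rate is n / (n + 1): 0.90, 0.99, 0.999.
    print(f"N={n:4d}: simulated {win_rate(n):.3f}, analytic {n / (n + 1):.3f}")

# Going from a 90% to a 99% win rate costs ~11x more search, and each further
# "nine" costs roughly another 10x -- compute grows much faster than the
# marginal capability gain.
```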
If that’s correct, ASI is actually easy to contain. Just write down where every cluster of 10,000+ H100s is located, or find them by their infrared signature or power draw. If you suspect a rogue ASI has escaped, those are the places to check.
This is what I mean by controlling the environment. Real-time auditing of AI accelerator clusters (what model is running, who is paying for it, what their license number is, and so on) would slow progress very little while making escapes difficult.
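To give a sense of how lightweight that could be, here is a hypothetical sketch of the kind of signed record a registered cluster might periodically report; the field names and the `signed_report` helper are my own illustration, not a proposed standard.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

# Hypothetical audit record a registered accelerator cluster might report.
# Field names are illustrative, not an existing standard.

@dataclass
class ClusterAuditRecord:
    cluster_id: str        # registered data-center location
    accelerators: int      # e.g. number of H100-class chips
    model_hash: str        # hash of the weights currently loaded
    operator: str          # who is paying for the run
    license_id: str        # their registration / license number
    timestamp: float

def signed_report(record: ClusterAuditRecord, secret: str) -> dict:
    payload = json.dumps(asdict(record), sort_keys=True)
    # Stand-in for a real signature (an HSM or public-key scheme in practice).
    tag = hashlib.sha256((payload + secret).encode()).hexdigest()
    return {"payload": payload, "tag": tag}

record = ClusterAuditRecord(
    cluster_id="dc-042", accelerators=16384,
    model_hash="sha256:...", operator="ExampleLab",
    license_id="LIC-0001", timestamp=1_700_000_000.0,
)
print(signed_report(record, secret="registrar-shared-key"))
```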
If hacking and escapes turn out to be a threat, air gaps and ASIC hardware firewalls are the next level of security to add.
The difference is that major labs would not be decelerated at all. There is no pause; they just have to spend, in parallel, a trivial amount of money complying with the registration and logging requirements.