Why doesn’t this translate to AI risk?
“We should avoid building more powerful AI because it might kill us all” breaks down to:
No prior AI system has tried to kill us all.
We are not sure how powerful a system we can really build in the next 10-20 years by scaling known techniques and techniques adjacent to them. A system 20 years from now might not actually be “AGI”; we don’t know.
This sounds like someone should have the burden of proof of showing that near-future AI systems are (1) lethal and (2) powerful in a useful way, not just a trick but actually effective at real-world tasks.
And, like the absence of unicorns caught on film, someone could argue that (1) and (2) are unlikely on priors, given all the AI hype that did not pan out.
The counterargument seems to be “we should pause now; I don’t have to prove anything, because an AI system might be so smart it can defeat any obstacles, and even though I don’t know how it could do that, it will be so smart it finds a way.” Or “by the time there is proof, we will be about to die.”
So just to summarize:
No deceptive or dangerous AI has ever been built or empirically tested. (1)
Historically, AI capabilities have consistently been “underwhelming,” far below the hype. (2)
If we discuss “OK, we build a large AGI, give it persistent memory and online learning, isolate it in an air-gapped data center, and hand-carry data to the machine via hardware-locked media; what is the danger?” you are going to respond with either:
“I don’t know how the model escapes, but it’s so smart it will find a way,” or (3)
“I am confident humanity will exist very far into the future, so even a small risk now (say, a 1-10 percent pDoom) is unacceptable.”
And if I point out that this large ASI model needs thousands of H100 accelerator cards, megawatts of power, and a specialized network topology to exist, and that there is nowhere for it to escape to, you will argue “it will optimize itself to fit on consumer PCs and escape to a botnet.” (4)
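For what it’s worth, here is a rough back-of-envelope on assertion (4). The specific figures (80 GB per H100, a 5,000-card cluster, a 24 GB consumer GPU) are my own illustrative assumptions, not numbers anyone has committed to:

```python
# Back-of-envelope on assertion (4): how much would a datacenter-scale model
# have to shrink to "fit on consumer PCs"? All figures are illustrative
# assumptions for the sake of the estimate.

H100_MEMORY_GB = 80           # assumed HBM per H100 card
NUM_ACCELERATORS = 5_000      # one reading of "thousands of H100s"
CONSUMER_GPU_MEMORY_GB = 24   # assumed high-end consumer GPU

cluster_memory_gb = H100_MEMORY_GB * NUM_ACCELERATORS
shrink_factor = cluster_memory_gb / CONSUMER_GPU_MEMORY_GB

print(f"Cluster memory: {cluster_memory_gb:,} GB")                       # 400,000 GB
print(f"Shrink factor to fit one consumer GPU: ~{shrink_factor:,.0f}x")  # ~16,667x
```

Under those assumptions the model would have to compress itself by roughly four orders of magnitude while keeping the capabilities that made it dangerous in the first place, which is itself an unproven assertion.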
Have I summarized the arguments?
Like, we’re supposed to coordinate an international pause, and I see four unproven assertions above with zero direct evidence. The one about humanity existing far into the future I don’t even want to argue, because it’s not falsifiable.
Shouldn’t we wait for evidence?