Why doesn’t this translate to AI risk?
“We should avoid building more powerful AI because it might kill us all” breaks down to:
No prior AI system has tried to kill us all.
We are not sure how powerful a system we can really build in the next 10-20 years by scaling known techniques and techniques adjacent to them. A system 20 years from now might not actually be “AGI”; we don’t know.
This sounds like someone should have the burden of proof of showing that near-future AI systems are (1) lethal and (2) powerful in a practical, utility sense: not just a trick, but actually effective at real-world tasks.
And, like the absence of unicorns caught on film, someone could argue that (1) and (2) are unlikely on priors, given the AI hype that did not pan out.
The counterargument seems to be “we should pause now; I don’t have to prove anything, because an AI system might be so smart it can defeat any obstacle. Even though I don’t know how it could do that, it will be so smart it finds a way.” Or “by the time there is proof, we will be about to die.”