It’s a bit like if we’re talking about transporting livestock and someone says, “Prove that transporting dragons is different than transporting other livestock.” They’re massive, can fly, can breathe fire, and in many stories are very intelligent.
Those facts provide a reasonable basis for treating dragons differently from other livestock when transporting them. I don’t think that is really a shifting of the burden of proof, but rather an argument that dragons have met the burden of proof. Do we see that with AI yet? Perhaps. But I think so far most of the arguments for AI risk have been abstract, relying heavily on theoretical evidence rather than concrete foreseeable harms, and that type of argument is notoriously unreliable.
I’m also not saying “AI is the same”. I’m saying “We shouldn’t just assume AI is different a priori”.
AI can invent other technologies, provide strategic advice, act autonomously, self-replicate, etc. It feels like the default should very much be that it needs its own analysis.
I agree that AI will eventually be able to do those things, and so we should probably regulate it pretty heavily eventually. But a “pause” would probably include stopping a bunch of harmless AI products too. For example, a lot of people want to stop GPT-5. I’m skeptical, as a practical matter, that OpenAI should have to prove to us that GPT-5 will be safe before releasing it. I think we should probably instead wait until the concrete harms from AI become clearer before controlling it heavily.
My point regarding burden of proof is that something has gone wrong if you think dragons are in the same reference class as pigs, cows, or even lions in terms of transportation challenges. And the fact that someone needs to ask for an explicit list is indicative of a mistake somewhere.
I’m not saying that you can’t argue that they are the same. Just that a more reasonable framing would then be more along the lines of, “here’s my surprising conclusion that we can regulate it the same way”.