Scenario 3 depends on broad attention to and concern about AI. A major accident would suffice to cause this, but it is not necessary. I expect that even before a catastrophic accident occurs (if one ever does), the public and governments will pay much more attention to AI, simply because its capabilities will be greater and more legibly powerful in the future. Of course, such appreciation of AI doesn't automatically lead to sane policy responses. But neither does an accident: do you think that if one state causes a global catastrophe, the main response from other AI-relevant states will be "AI is really risky and we should slow down," rather than some combination of being angry at the responsible state and patching the particular vulnerability the accident revealed? Note also that even strong regulation catalyzed by an accident is likely to
- target AI deployments, not development, which does not directly address classic Yudkowsky-style risk; and
- be domain-specific; an accident in an unrelated domain doesn't by default lead governments to stop companies from making ever-bigger language models.