In brief, I do actually feel pretty positively.
Even if governments aren’t doing a lot of important AI research “in house,” and private actors continue to be the primary funders of AI R&D, we should expect governments to become much more active if really serious threats to security start to emerge. National governments are unlikely to be passive, for example, if safety/alignment failures become increasingly damaging—or, especially, if existentially bad safety/alignment failures ever become clearly plausible. If any important institutions, design decisions, etc., regarding AI get “locked in,” then I also expect governments to be heavily involved in shaping these institutions, making these decisions, etc. And states are, of course, the most important actors for many concerns having to do with political instability caused by AI. Finally, there are also certain potential solutions to risks—like creating binding safety regulations, forging international agreements, or plowing absolutely enormous amounts of money into research projects—that can’t be implemented by private actors alone.
Basically, in most scenarios where AI governance work turns out to be really useful from a long-termist perspective (because there are existential safety/alignment risks, because AI causes major instability, or because there are opportunities to “lock in” key features of the world), I expect governments to really matter.