What are your thoughts on AI policy careers in government? I’m somewhat skeptical, for two main reasons:
1) It’s not clear that governments will become leading actors in AI development; by default I expect this not to happen. Unlike with nuclear weapons, governments don’t need to become experts in the technology to wield AI-based weapons; they can simply purchase them from contractors. Beyond military power, competition between nations is mostly economic. Insofar as AI is an input to this, governments have an incentive to invest in domestic AI firms rather than in-house government AI capabilities, because this is the more effective way to translate AI into GDP.
2) Government careers in AI policy also look compelling if the intersection of AI and war is crucial. But as you say in the interview, it’s not clear that AI is the best lever for reducing existentially damaging war. And within the EA community, this argument seems to have been generated as an additional reason to work on AI, rather than emerging from research into the best ways to reduce war.
Do you think answering this question should be a higher priority, especially given the growing number of EAs studying things like Security Studies in D.C.?
In brief, I do actually feel pretty positively.
Even if governments aren’t doing a lot of important AI research “in house,” and private actors continue to be the primary funders of AI R&D, we should expect governments to become much more active if really serious threats to security start to emerge. National governments are unlikely to be passive, for example, if safety/alignment failures become increasingly damaging—or, especially, if existentially bad safety/alignment failures ever become clearly plausible. If any important institutions, design decisions, etc., regarding AI get “locked in,” then I also expect governments to be heavily involved in shaping these institutions, making these decisions, etc. And states are, of course, the most important actors for many concerns having to do with political instability caused by AI. Finally, there are also certain potential solutions to risks—like creating binding safety regulations, forging international agreements, or plowing absolutely enormous amounts of money into research projects—that can’t be implemented by private actors alone.
Basically, in most scenarios where AI governance work turns out to be really useful from a long-termist perspective—because there are existential safety/alignment risks, because AI causes major instability, or because there are opportunities to “lock in” key features of the world—I expect governments to really matter.