Thank you for this article. I’ve read some of the stuff you wrote in your capacity at CEA, which I quite enjoyed; your comments on slow vs. quick mistakes changed my thinking. This is the first thing I’ve read since you started at Forethought. I have some comments, which are mostly critical. I tried using ChatGPT and Claude to make my comment more even-handed, but they did a bad job, so you’re stuck with reading my overly critical writing. Some of my criticism may be misguided because I don’t have a good understanding of the motivation behind writing the article, so it might help if you explained more about that motivation. Of course you’re not obligated to explain anything to me or to respond at all; I’m just writing this because I think it’s generally useful to share criticisms.
I think this article would benefit from a more thorough discussion of the downside risks of its proposed changes—off the top of my head:
Increasing government dependency on AI systems could make policy-makers more reluctant to place restrictions on AI development because they would be hurting themselves by doing so. This is a very bad incentive.
The report specifically discusses how Microsoft Office being so deeply embedded in government lets the company get away with bad practices, but it seemingly doesn’t connect this to how AI companies might end up in the same position.
Government contracts to buy LLM services increase AI company revenue, which shortens timelines.
The government does not always work in the interests of the people (in fact it frequently works against them!) so making the government more effective/powerful is not pure upside.
The article does mention some downsides, but without any discussion of tradeoffs. It says we should focus on “win-wins”, but it doesn’t actually say how we can avoid the downsides (or, if it did, I didn’t get that out of the article).
To me the article reads like you decided the conclusion and then wrote a series of justifications. It is not clear to me how you arrived at the belief that the government needs to start using AI more, and it’s not clear to me whether that’s true.
For what it’s worth, I don’t think government competence is what’s holding us back from having good AI regulations; it’s government willingness. I don’t see how integrating AI into government workflows will improve AI safety regulations (which is ultimately the point, right?[^1]), and my guess is that on balance it would make AI regulations less likely to happen, because policy-makers will become more attached to their AI systems and won’t want to restrict them.
I also found it odd that the report did not talk about extinction risk. In its list of potential catastrophic outcomes, the final item on the list was “Human disempowerment by advanced AI”, which IMO is an overly euphemistic way of saying “AI will kill everyone”.
By my reading, this article is meant to be the sort of Very Serious Report That Serious People Take Seriously, which is why it avoids talking about x-risk. I think that:
you won’t get people to care about extinction risks by pretending they don’t exist;
the market is already saturated with AI safety people writing Very Serious Reports in which they pretend that human extinction isn’t a serious concern;
AI x-risk is mainstream enough at this point that we can probably stop pretending not to care about it.
There are some recommendations in this article that I like, and I think it should focus much more on them:
investing in cyber security, pre-deployment testing of AI in high-stakes areas, and advancing research on mitigating the risks of advanced AI
Without better compliance tools, AI companies and AI systems might start taking increasingly consequential actions without regulators’ understanding or supervision
[Without oversight], the government may be unable to verify AI companies’ claims about their testing practices or the safety of their AI models.
Steady AI adoption could backfire if it desensitizes government decision-makers to the risks of AI in government, or grows their appetite for automation past what the government can safely handle.
I also liked the section “Government adoption of AI will need to manage important risks” and I think it should have been emphasized more instead of buried in the middle.
Some line item responses
I don’t really know how to organize this so I’m just going to write a list of lines that stood out to me.
invest in AI and technical talent
What does that mean exactly? I can’t think of how you could do that without shortening timelines so I don’t know what you have in mind here.
Streamline procurement processes for AI products and related tech
I also don’t understand this. Procurement by whom, for what purpose? And again, how does this not shorten timelines? (Broadly speaking, more widespread use of AI shortens timelines at least a little bit by increasing demand.)
Gradual adoption is significantly safer than a rapid scale-up.
This sounds plausible but I am not convinced that it’s true, and the article presents no evidence, only speculation. I would like to see more rigorous arguments for and against this position instead of taking it for granted.
And in a crisis — e.g. after a conspicuous failure, or a jump in the salience of AI adoption for the administration in power — agencies might cut corners and have less time for security measures, testing, in-house development, etc.
This line seems confused. Why would a conspicuous failure make government agencies want to suddenly start using the AI system that just conspicuously failed? It seems like this line is more about regulating AI than adopting AI, whereas the rest of the article is about adopting AI.
Frontier AI development will probably concentrate, leaving the government with less bargaining power.
I don’t think that’s how that works. Government gets to make laws. Frontier AI companies don’t get to make laws. This is only true if you’re talking about an AI company that controls an AI so powerful that it can overthrow the government, and if that’s what you’re talking about, then I believe it would require thinking about things very differently from how this article presents them.
And: would adopting AI (i.e. paying frontier companies so government employees can use their products) reduce the concentration of power? Wouldn’t it do the opposite?
It’s natural to focus on the broad question of whether we should speed up or slow down government AI adoption. But this framing is both oversimplified and impractical
Up to this point, the article was primarily talking about how we should speed up government AI adoption. But now it’s saying that’s not a good framing? So why did the article use that framing? I get the sense that you didn’t intend to use that framing, but it comes across as if you’re using it.
Hire and retain technical talent, including by raising salaries
I would like to see more justification for why this is a good idea. The obvious upside is that people who better understand AI can write more useful regulations. On the other hand, empirically, it seems that people with more technical expertise (like ML engineers) are on average less in favor of regulations and more in favor of accelerating AI development (shortening timelines, although they usually don’t think “timelines” are a thing). So arguably we should have fewer such people in positions of government power. I can see the argument either way, I’m not saying you’re wrong, I’m just saying you can’t take your position as a given.
And like I said before, I think by far the bigger bottleneck to useful AI regulations is willingness, not expertise.
Explore legal or other ways to avoid extreme concentration in the frontier AI market
(this isn’t a disagreement, just a comment:)
You don’t say anything about how to do that but it seems to me the obvious answer is antitrust law.
(this is a disagreement:)
The linked article attached to this quote says “It’s very unclear whether centralizing would be good or bad”, but you’re citing it as if it definitively finds centralization to be bad.
If the US government never ramps up AI adoption, it may be unable to properly respond to existential challenges.
What does AI adoption have to do with the ability to respond to existential challenges? It seems to me that once AI is powerful enough to pose an existential threat, then it doesn’t really matter whether the US government is using AI internally.
Map out scenarios in which AI safety regulation is ineffective and explore potential strategies
I don’t think any mapping is necessary. Right now AI safety regulation is ineffective in every scenario, because there are no AI safety regulations (by safety I mean notkilleveryoneism). Trivially, regulations that don’t exist are ineffective. Which is one reason why IMO the emphasis of this article is somewhat missing the mark—right now the priority should be to get any sort of safety regulations at all.
Build emergency AI capacity outside of the government
I am moderately bullish on this idea (I’ve spoken favorably about Sentinel before), although I don’t have a good sense of when it would be useful. I’d like to see a clearer projection of exactly what sorts of scenarios “emergency capacity” would be able to prevent catastrophes in. Not that that’s within the scope of this article; I just wanted to mention it.
[^1] Making government more effective in general doesn’t seem to me to qualify as an EA cause area, although perhaps a case could be made. The thing that matters on EA grounds (with respect to AI) is making the government specifically more effective at, or more inclined to, regulate the development of powerful AI.