Thank you for this article. I’ve read some of the stuff you wrote in your capacity at CEA, which I quite enjoyed; your comments on slow vs. quick mistakes changed my thinking. This is the first thing I’ve read since you started at Forethought. I have some comments, which are mostly critical. I tried using ChatGPT and Claude to make my comment more even-handed, but they did a bad job, so you’re stuck with reading my overly critical writing. Some of my criticism may be misguided due to me not having a good understanding of the motivation behind writing the article, so it might help me if you explained more about the motivation. Of course you’re not obligated to explain anything to me or to respond at all; I’m just writing this because I think it’s generally useful to share criticisms.
I think this article would benefit from a more thorough discussion of the downside risks of its proposed changes—off the top of my head:
Increasing government dependency on AI systems could make policy-makers more reluctant to place restrictions on AI development because they would be hurting themselves by doing so. This is a very bad incentive.
The report specifically addresses how the fact that Microsoft Office is so embedded in government means the company can get away with bad practices, but seemingly doesn’t connect this to how AI companies might end up in the same position.
Government contracts to buy LLM services increase AI company revenue, which shortens timelines.
The government does not always work in the interests of the people (in fact it frequently works against them!) so making the government more effective/powerful is not pure upside.
The article does mention some downsides, but with no discussion of tradeoffs, and it says we should focus on “win-wins” but doesn’t actually say how we can avoid the downsides (or, if it did, I didn’t get that out of the article).
To me the article reads like you decided the conclusion and then wrote a series of justifications. It is not clear to me how you arrived at the belief that the government needs to start using AI more, and it’s not clear to me whether that’s true.
For what it’s worth, I don’t think government competence is what’s holding us back from having good AI regulations, it’s government willingness. I don’t see how integrating AI into government workflow will improve AI safety regulations (which is ultimately the point, right?[^1]), and my guess is on balance it would make AI regulations less likely to happen because policy-makers will become more attached to their AI systems and won’t want to restrict them.
I also found it odd that the report did not talk about extinction risk. In its list of potential catastrophic outcomes, the final item on the list was “Human disempowerment by advanced AI”, which IMO is an overly euphemistic way of saying “AI will kill everyone”.
By my reading, this article is meant to be the sort of Very Serious Report That Serious People Take Seriously, which is why it avoids talking about x-risk. I think that:
you won’t get people to care about extinction risks by pretending they don’t exist;
the market is already saturated with AI safety people writing Very Serious Reports in which they pretend that human extinction isn’t a serious concern;
AI x-risk is mainstream enough at this point that we can probably stop pretending not to care about it.
There are some recommendations in this article that I like, and I think it should focus much more on them:
> investing in cyber security, pre-deployment testing of AI in high-stakes areas, and advancing research on mitigating the risks of advanced AI
> Without better compliance tools, AI companies and AI systems might start taking increasingly consequential actions without regulators’ understanding or supervision
> [Without oversight], the government may be unable to verify AI companies’ claims about their testing practices or the safety of their AI models.
> Steady AI adoption could backfire if it desensitizes government decision-makers to the risks of AI in government, or grows their appetite for automation past what the government can safely handle.
I also liked the section “Government adoption of AI will need to manage important risks” and I think it should have been emphasized more instead of buried in the middle.
Some line item responses
I don’t really know how to organize this so I’m just going to write a list of lines that stood out to me.
> invest in AI and technical talent
What does that mean exactly? I can’t think of how you could do that without shortening timelines so I don’t know what you have in mind here.
> Streamline procurement processes for AI products and related tech
I also don’t understand this. Procurement by whom, for what purpose? And again, how does this not shorten timelines? (Broadly speaking, more widespread use of AI shortens timelines at least a little bit by increasing demand.)
> Gradual adoption is significantly safer than a rapid scale-up.
This sounds plausible but I am not convinced that it’s true, and the article presents no evidence, only speculation. I would like to see more rigorous arguments for and against this position instead of taking it for granted.
> And in a crisis — e.g. after a conspicuous failure, or a jump in the salience of AI adoption for the administration in power — agencies might cut corners and have less time for security measures, testing, in-house development, etc.
This line seems confused. Why would a conspicuous failure make government agencies want to suddenly start using the AI system that just conspicuously failed? Seems like this line is more talking about regulating AI than adopting AI, whereas the rest of the article is talking about adopting AI.
> Frontier AI development will probably concentrate, leaving the government with less bargaining power.
I don’t think that’s how that works. Government gets to make laws. Frontier AI companies don’t get to make laws. This is only true if you’re talking about an AI company that controls an AI so powerful that it can overthrow the government, and if that’s what you’re talking about then I believe that would require thinking about things in a very different way than how this article presents them.
And: would adopting AI (i.e. paying frontier companies so government employees can use their products) reduce the concentration of power? Wouldn’t it do the opposite?
> It’s natural to focus on the broad question of whether we should speed up or slow down government AI adoption. But this framing is both oversimplified and impractical
Up to this point, the article was primarily talking about how we should speed up government AI adoption. But now it’s saying that’s not a good framing? So why did the article use that framing? I get the sense that you didn’t intend to use that framing, but it comes across as if you’re using it.
> Hire and retain technical talent, including by raising salaries
I would like to see more justification for why this is a good idea. The obvious upside is that people who better understand AI can write more useful regulations. On the other hand, empirically, it seems that people with more technical expertise (like ML engineers) are on average less in favor of regulations and more in favor of accelerating AI development (shortening timelines, although they usually don’t think “timelines” are a thing). So arguably we should have fewer such people in positions of government power. I can see the argument either way, I’m not saying you’re wrong, I’m just saying you can’t take your position as a given.
And like I said before, I think by far the bigger bottleneck to useful AI regulations is willingness, not expertise.
> Explore legal or other ways to avoid extreme concentration in the frontier AI market
(this isn’t a disagreement, just a comment:)
You don’t say anything about how to do that but it seems to me the obvious answer is antitrust law.
(this is a disagreement:)
The linked article attached to this quote says “It’s very unclear whether centralizing would be good or bad”, but you’re citing it as if it definitively finds centralization to be bad.
> If the US government never ramps up AI adoption, it may be unable to properly respond to existential challenges.
What does AI adoption have to do with the ability to respond to existential challenges? It seems to me that once AI is powerful enough to pose an existential threat, then it doesn’t really matter whether the US government is using AI internally.
> Map out scenarios in which AI safety regulation is ineffective and explore potential strategies
I don’t think any mapping is necessary. Right now AI safety regulation is ineffective in every scenario, because there are no AI safety regulations (by safety I mean notkilleveryoneism). Trivially, regulations that don’t exist are ineffective. Which is one reason why IMO the emphasis of this article is somewhat missing the mark—right now the priority should be to get any sort of safety regulations at all.
> Build emergency AI capacity outside of the government
I am moderately bullish on this idea (I’ve spoken favorably about Sentinel before), although I don’t actually have a good sense of when it would be useful. I’d like to see a clearer projection of exactly what sorts of scenarios “emergency capacity” would be able to prevent catastrophes in. Not that that’s within the scope of this article, I just wanted to mention it.
[^1] Making government more effective in general doesn’t seem to me to qualify as an EA cause area, although perhaps a case could be made. The thing that matters on EA grounds (with respect to AI) is making the government specifically more effective at, or more inclined to, regulate the development of powerful AI.
Thanks for this comment! I don’t view it as “overly critical.”
Quickly responding (just my POV, not Forethought’s!) to some of what you brought up:
(This ended up very long, sorry! TLDR: I agree with some of what you wrote, disagree with some of the other stuff / think maybe we’re talking past each other. No need to respond to everything here!)
A. Motivation behind writing the piece / target audience/ vibe / etc.
Re:
> …it might help me if you explained more about the motivation [behind writing the article] [...] the article reads like you decided the conclusion and then wrote a series of justifications
I’m personally glad I posted this piece, but not very satisfied with it for a bunch of reasons, one of which is that I don’t think I ever really figured out what the scope/target audience should be (who I was writing for/what the piece was trying to do).
So I agree it might help to quickly write out the rough ~history of the piece:
I’d started looking into stuff related to “differential AI development” (DAID), and generally exploring how the timing of different [AI things] relative to each other could matter.
My main focus quickly became exploring ~safety-increasing AI applications/tools — Owen and I recently posted about this (see the link).
But I also kept coming back to a frame of “oh crap, who is using AI how much/how significantly is gonna matter an increasing amount as time goes on. I expect adoption will be quite uneven — e.g. AI companies will be leading the way — and some groups (whose actions/ability to make reasonable decisions we care about a lot) will be left behind.”
At the time I was thinking about this in terms of “differential AI development and diffusion”.
IIRC I soon started thinking about governments here; I had the sense that government decision-makers were generally slow on tech use, and I was also using “which types of AI applications will not be properly incentivized by the market” as a way to think about which AI applications might be easier to speed up. (I think we mentioned this here.)
This ended up taking me on a mini deep dive on government adoption of AI, which in turn increasingly left me with the impression that (e.g.) the US federal government would either (1) become increasingly overtaken from within by an unusually AI-capable group (or e.g. the DOD), (2) be rendered increasingly irrelevant, leaving (US) AI companies to regulate themselves and likely worsening its ability to deal with other issues, or (3) somehow in fact adopt AI, but likely in a chaotic way that would be especially dangerous (because things would go slowly until a crisis forced a ~war-like undertaking).
I ended up poking around in this for a while, mostly as an aside to my main DAID work, feeling like I should probably scope this out and move on. (The ~original DAID memos I’d shared with people discussed government AI adoption.)
After a couple of rounds of drafts+feedback I got into a “I should really publish some version of this that I believe and seems useful and then get back to other stuff; I don’t think I’m the right person to work a lot more on this but I’m hoping other people in the space will pick up whatever is correct here and push it forward” mode—and ended up sharing this piece.
In particular I don’t expect (and wasn’t expecting) that ~policymakers will read this, but hope it’s useful for people at relevant think tanks or similar who have more government experience/knowledge but might not be paying attention to one “side” of this issue or the other. (For instance, I think a decent fraction of people worried about existential risks from advanced AI don’t really think about how using AI might be important for navigating those risks, partly because all of AI kinda gets lumped together).
Quick responses to some other things in your comment that seem kinda related to what I’m responding to in this “motivation/vibe/…” cluster:
> I also found it odd that the report did not talk about extinction risk. In its list of potential catastrophic outcomes, the final item on the list was “Human disempowerment by advanced AI”, which IMO is an overly euphemistic way of saying “AI will kill everyone”.
We might have notably different worldviews here (to be clear mine is pretty fuzzy!). For one thing, in my view many of the scary “AI disempowerment” outcomes might not in fact look immediately like “AI kills everyone” (although to be clear that is in fact an outcome I’m very worried about), and unpacking what I mean by “disempowerment” in the piece (or trying to find the ideal way to say it) didn’t seem productive—IIRC I wrote something and moved on. I also want to be clear that rogue AI [disempowering] humans is not the only danger I’m worried about, i.e. it doesn’t dominate everything else for me—the list you’re quoting from wasn’t an attempt to mask AI takeover, but rather a sketch of the kind of thing I’m thinking about. (Note: I do remember moving that item down the list at some point when I was working on a draft, but IIRC this was because I wanted to start with something narrower to communicate the main point, not because I wanted to de-emphasize ~AI takeover.)
> By my reading, this article is meant to be the sort of Very Serious Report That Serious People Take Seriously, which is why it avoids talking about x-risk.
I might be failing to notice my bias, but I basically disagree here—although I do feel a different version of what you’re maybe pointing to here (see next para). I was expecting that basically anyone who reads the piece will already have engaged at least a bit with “AI might kill all humans”, and likely most of the relevant audience will have thought very deeply about this and in fact has this as a major concern. I also don’t personally feel shy about saying that I think this might happen — although again I definitely don’t want to imply that I think this is overwhelmingly likely to happen or the only thing that matters, because that’s just not what I believe.
However I did occasionally feel like I was ~LARPing research writing when I was trying to articulate my thoughts, and suspect some of that never got resolved! (And I think I floundered a bit on where to go with the piece when getting contradicting feedback from different people—although ultimately the feedback was very useful.) In my view this mostly shows up in other ways, though. (Related—I really appreciated Joe Carlsmith’s recent post on fake thinking and real thinking when trying to untangle myself here.)
B. Downside risks of the proposed changes
Making policymakers “more reluctant to place restrictions on AI development...”
I did try to discuss this a bit in the “Government adoption of AI will need to manage important risks” section (and sort of in the “3. Defusing the time bomb of rushed automation” section), and indeed it’s a thing I’m worried about.
I think ultimately my view is that without use of AI in government settings, stuff like AI governance will just be ineffective or fall to private actors anyway, and also that the willingness-to-regulate / undue-influence dynamics will be much worse if the government has no in-house capacity or is working with only one AI company as a provider.
Shortening timelines by increasing AI company revenue
I think this isn’t a major factor here—the govt is a big customer in some areas, but the private sector dominates (as does investment in the form of grants, IIRC).
“The government does not always work in the interests of the people (in fact it frequently works against them!) so making the government more effective/powerful is not pure upside.”
I agree with this, and somewhat worry about it. IIRC I have a footnote on this somewhere—I decided to scope this out. Ultimately my view right now is that the alternative (~no governance at least in the US, etc.) is worse. (Sort of relatedly, I find the “narrow corridor” a useful frame here—see e.g. here.)
C. Is gov competence actually a bottleneck?
> I don’t think government competence is what’s holding us back from having good AI regulations, it’s government willingness. I don’t see how integrating AI into government workflow will improve AI safety regulations (which is ultimately the point, right?[^1]), and my guess is on balance it would make AI regulations less likely to happen because policy-makers will become more attached to their AI systems and won’t want to restrict them.
My view is that you need both, we’re not on track for competence, and we should be pretty uncertain about what happens on the willingness side.
D. Michael’s line item responses
1.
> invest in AI and technical talent
> What does that mean exactly? I can’t think of how you could do that without shortening timelines so I don’t know what you have in mind here.
I’m realizing this can be read as “invest in AI and in technical talent” — I meant “invest in AI talent and (broader) technical talent (in govt).” I’m not sure if that alone addresses the comment; my guess is that doing this might have a tiny shortening effect on timelines (though this is somewhat unclear, partly because in some cases e.g. raising salaries for AI roles in govt might draw people away from frontier AI companies), but this is unlikely to be the decisive factor. (Maybe related: my view is that generally this kind of thing should be weighed instead of treated as a reason to entirely discard certain kinds of interventions.)
2.
> Streamline procurement processes for AI products and related tech
> I also don’t understand this. Procurement by whom, for what purpose? And again, how does this not shorten timelines? (Broadly speaking, more widespread use of AI shortens timelines at least a little bit by increasing demand.)
I was specifically talking about agencies’ procurement of AI products — e.g. say the DOE wants a system that makes forecasting demand easier or whatever; making it easier for them to actually get such a system faster. I think the effect on timelines will likely be fairly small here (but am not sure), and currently think it would be outweighed by the benefits.
3.
> Gradual adoption is significantly safer than a rapid scale-up.
> This sounds plausible but I am not convinced that it’s true, and the article presents no evidence, only speculation. I would like to see more rigorous arguments for and against this position instead of taking it for granted.
I’d be excited to see more analysis on this, but it’s one of the points I personally am more confident about (and I will probably not dive in right now).
4.
> And in a crisis — e.g. after a conspicuous failure, or a jump in the salience of AI adoption for the administration in power — agencies might cut corners and have less time for security measures, testing, in-house development, etc.
> This line seems confused. Why would a conspicuous failure make government agencies want to suddenly start using the AI system that just conspicuously failed? Seems like this line is more talking about regulating AI than adopting AI, whereas the rest of the article is talking about adopting AI.
Sorry, again my writing here was probably unclear; the scenarios I was picturing were more like:
There’s a serious breach—US govt systems get hacked (again) by [foreign nation, maybe using AI], revealing that they’re even weaker than is currently understood, or publicly embarrassing the admin. The admin pushes for fast modernization on this front.
A flashy project isn’t proceeding as desired (especially as things are ramping up); the admin in power is ~upset with the lack of progress and pushes to move faster.
There’s a successful violent attack (e.g. terrorism); turns out [agency] was acting too slowly...
Etc.
Not sure if that answers the question/confusion?
5.
> Frontier AI development will probably concentrate, leaving the government with less bargaining power.
> I don’t think that’s how that works. Government gets to make laws. Frontier AI companies don’t get to make laws. This is only true if you’re talking about an AI company that controls an AI so powerful that it can overthrow the government, and if that’s what you’re talking about then I believe that would require thinking about things in a very different way than how this article presents them.
This section is trying to argue that AI adoption will be riskier later on, so the “bargaining power” I was talking about here is the bargaining power of the US federal govt (or of federal agencies) as a customer; the companies it’s buying from will have more leverage if they’re effectively monopolies. My understanding is that there are already situations where the US govt has limited negotiation power and maybe even makes policy concessions to specific companies because of its relationship with them — e.g. in defense (Lockheed Martin, etc., although this is also kinda complicated) and again maybe Microsoft.
> And: would adopting AI (i.e. paying frontier companies so government employees can use their products) reduce the concentration of power? Wouldn’t it do the opposite?
Again, the section was specifically trying to argue that later adoption is scarier than earlier adoption (in this case because there are (still) several frontier AI companies). But I do think that building up internal AI capacity, especially talent, would reduce the leverage any specific AI company has over the US federal government.
6.
> It’s natural to focus on the broad question of whether we should speed up or slow down government AI adoption. But this framing is both oversimplified and impractical
> Up to this point, the article was primarily talking about how we should speed up government AI adoption. But now it’s saying that’s not a good framing? So why did the article use that framing? I get the sense that you didn’t intend to use that framing, but it comes across as if you’re using it.
Yeah, I don’t think I navigated this well! (And I think I was partly talking to myself here.) But maybe my “motivation” notes above give some context?
In terms of the specific “position” I in practice leaned into: Part of why I led with the benefits of AI adoption was the sense that the ~existential risk community (which is most of my audience) generally focuses on risks of AI adoption/use/products, and that’s where my view diverges more. There’s also been more discussion, from an existential risk POV, of the risks of adoption than there has been of the benefits, so I didn’t feel that elaborating too much on the risks would be as useful.
7.
> Hire and retain technical talent, including by raising salaries
> I would like to see more justification for why this is a good idea. The obvious upside is that people who better understand AI can write more useful regulations. On the other hand, empirically, it seems that people with more technical expertise (like ML engineers) are on average less in favor of regulations and more in favor of accelerating AI development (shortening timelines, although they usually don’t think “timelines” are a thing). So arguably we should have fewer such people in positions of government power.
The TLDR of my view here is something like “without more internal AI/technical talent (most of) the government will be slower on using AI to improve its work & stay relevant, which I think is bad, and also it will be increasingly reliant on external people/groups/capacity for technical expertise—e.g. relying on external evals, or trusting external advice on what policy options make sense, etc. and this is bad.”
8.
> Explore legal or other ways to avoid extreme concentration in the frontier AI market
> [...]
> The linked article attached to this quote says “It’s very unclear whether centralizing would be good or bad”, but you’re citing it as if it definitively finds centralization to be bad.
(The linked article is this one: https://www.forethought.org/research/should-there-be-just-one-western-agi-project )
I was linking to this to point to relevant discussion, not as a justification for a strong claim like “centralization is definitively bad”—sorry for being unclear!
9.
> If the US government never ramps up AI adoption, it may be unable to properly respond to existential challenges.
> What does AI adoption have to do with the ability to respond to existential challenges? It seems to me that once AI is powerful enough to pose an existential threat, then it doesn’t really matter whether the US government is using AI internally.
I suspect we may have fairly different underlying worldviews here, but maybe a core underlying belief on my end is that there are things that it’s helpful for the government to do before we get to ~ASI, and also there will be AI tools pre ~ASI that are very helpful for doing those things. (Or an alt framing: the world will get ~fast/complicated/weird due to AI before there’s nothing the US gov could in theory do to make things go better.)
10.
> Map out scenarios in which AI safety regulation is ineffective and explore potential strategies
> I don’t think any mapping is necessary. Right now AI safety regulation is ineffective in every scenario, because there are no AI safety regulations (by safety I mean notkilleveryoneism). Trivially, regulations that don’t exist are ineffective. Which is one reason why IMO the emphasis of this article is somewhat missing the mark—right now the priority should be to get any sort of safety regulations at all.
I fairly strongly disagree here (with “the priority should be to get any sort of safety regulations at all”) but don’t have time to get into it, really sorry!
---
Finally, thanks a bunch for saying that you enjoyed some of my earlier writing & I changed your thinking on slow vs quick mistakes! That kind of thing is always lovely to hear.
(Posted on my phone— sorry for typos and similar!)
Thanks, this comment gives me a much better sense of where you’re coming from. I agree and disagree with various specific points, but I won’t get into that since I don’t think we will resolve any disagreements without an extended discussion.
What I will say is that I found this comment to be much more enlightening than your original post. And whereas I said before that the original article didn’t feel like the output of a reasoning process, this comment did feel like that. At least for me personally, I think whatever mental process you used to write this comment is the one you should use to write these sorts of articles, because it clearly worked.
I don’t know what’s going on inside your head, but if I were to guess, perhaps you didn’t want to write an article in the style of this comment because it’s too informal or personal or un-authoritative. Those qualities do make it harder to (say) get a paper published in an academic journal, but I prefer to read articles that have those qualities. If your audience is the EA Forum or similar, then I think you should lean into them.
> However I did occasionally feel like I was ~LARPing research writing when I was trying to articulate my thoughts, and suspect some of that never got resolved!
I don’t think you were LARPing research; your comment shows a real thought process behind it. After reading all your line item responses, I feel like I understand what you were trying to say. On the few parts I quoted as seeming contradictory, I can now see why they weren’t actually contradictory but were part of a coherent stance.
I think you (Michael Dickens) are probably one of my favorite authors on your side of this, and I’m happy to see this discussion—though I myself am more on the other side.
Some quick responses
> I don’t think government competence is what’s holding us back from having good AI regulations, it’s government willingness.
I assume it can clearly be a mix of both. Right now we’re in a situation where many people barely trust the US government to do anything. A major argument for why the US government shouldn’t regulate AI is that they often mess up things they try to regulate. This is a massive deal in a lot of the back-and-forth I’ve seen on the issue on Twitter.
I’d expect that if the US government were far more competent, people would trust it to take care of many more things, including high-touch AI oversight.
> Increasing government dependency on AI systems could make policy-makers more reluctant to place restrictions on AI development because they would be hurting themselves by doing so. This is a very bad incentive.
This doesn’t seem like a major deal to me. Like, the US government uses software a lot, but I don’t see them “funding/helping software development”, even though I really think they should. If I were them, I would have invested far more in open-source systems, for instance.
My quick impression is that competent oversight and guidance of AI systems, carefully working through the risks and benefits, would be incredibly challenging, and I’d expect any human-led government to make gigantic errors in it. Even attempts to “slow down AI” could easily backfire if not done well. For example, I think that Democratic attempts to increase migration in the last few years might have massively backfired.
I agree with a good portion of your comment but I still don’t think increasing government competence (on AI) is worth prioritizing:
SB-1047 was adequately competently written (AFAICT). If we get more regulations at a similar level of competence, that would be reasonable.
Good AI regulations will make things harder on AI companies. AI leaders / tech accelerationists will be unhappy about regulations regardless of how competently written they are. On the other hand, the general population mostly supports AI regulations (according to AIPI polls). Getting regulators on board with what people want seems to me to be the best path to getting regulations in place.
> Like, the US government uses software a lot, but I don’t see them “funding/helping software development”
Suppose it turned out Microsoft Office was dangerous. Surely the fact that Office is so embedded in government procedures would make it less likely to get banned?
IIRC you see similar phenomena (although I can’t recall any examples off hand) where some government-mandated software has massive security flaws but nobody does anything about it because the software is too entrenched.
Thanks for the responses!
> SB-1047 was adequately competently written (AFAICT). If we get more regulations at a similar level of competence, that would be reasonable.
Agreed
> Getting regulators on board with what people want seems to me to be the best path to getting regulations in place.
I don’t see it as either/or. I agree that pushing for regulations is a bigger priority than AI in government. Right now the former is getting dramatically more EA resources and I’d expect that to continue. But the latter is getting almost none, and that doesn’t seem right to me.
> Suppose it turned out Microsoft Office was dangerous. Surely the fact that Office is so embedded in government procedures would make it less likely to get banned?
I worry we’re getting into a distant hypothetical. I’d equate this to, “Given the Government is using Microsoft Office, are they likely to try to make sure that future versions of Microsoft Office are better? Especially in a reckless way?”
Naively I’d expect a government that uses Microsoft Office to be one with a better understanding of the upsides and downsides of Microsoft Office.
I’d expect that most AI systems the Government would use would be fairly harmless (in terms of the main risks we care about). Like, things a few years old (and thus tested a lot in industry), with less computing power than would be ideal, etc.
Related, I think that the US military has done good work to make high-reliability software, due to their need for it. (Though this is a complex discussion, as they obviously do a mix of things.)
> IIRC you see similar phenomena (although I can’t recall any examples off hand) where some government-mandated software has massive security flaws but nobody does anything about it because the software is too entrenched.
Tyler Technologies.
But this is local government, not federal.
> I’d expect that if the US government were far more competent, people would trust it to take care of many more things, including high-touch AI oversight.
This is probably true, but improving competence throughout the government would be a massive undertaking, would take a long time and also have a long lag before public opinion would update. Seems like an extremely circuitous route to impact.
I mainly agree.
I previously was addressing Michael’s more limited point, “I don’t think government competence is what’s holding us back from having good AI regulations, it’s government willingness.”
All that said, separately, I think that “increasing government competence” is often a good bet, as it just comes with a long list of benefits.
But if one believes that AI will happen soon, then a strategy built around “getting the broad public to trust the US government more, with the purpose of then encouraging AI reform” seems dubious.