Overall I agree we should prefer the USG to be better AI-integrated. I don't think this is a particularly controversial or surprising conclusion, though, so the main question is how high a priority it is, and I am somewhat skeptical it is on the ITN Pareto frontier. E.g. I would assume plenty of people already care about government efficiency and state capacity generally, and most of these interventions are about making the USG more capable in general rather than being targeted specifically at longtermist priorities.
So this felt like neither the sort of piece targeted at mainstream US policy folks, nor one that was especially convincing about why this should be an EA/longtermist focus area. Still, I hadn't thought much about this before, so doing this level of medium-depth investigation feels potentially valuable; but I'm unconvinced that e.g. OP should spin up a grantmaker focused on this (not that you were necessarily recommending that).
Also, a few reasons governments may have an easier time adopting AI come to mind:
- Access to large amounts of internal private data
- Large institutions can better afford the one-time upfront costs of training or fine-tuning specialised models, compared to small businesses
But I agree the opposing reasons you give are probably stronger.
> we should do what we normally do when juggling different priorities: evaluate the merits and costs of specific interventions, looking for "win-win" opportunities
If only this were how USG juggled its priorities!
Of course, criticism only partially overlaps with advice, but this post reminded me a bit of this take on giving and receiving criticism.