I’ve been thinking a lot about this broad topic and am very sympathetic. Happy to see it getting more discussion.
I think this post correctly flags how difficult it is to get the government to change.
At the same time, I imagine there might be some very clever strategies to get a lot of the benefits of AI without many of the normal costs of integration.
For example:
1. The federal government makes heavy use of private contractors, and these contractors are typically faster to adopt innovations like AI.
2. Some subsets of the government clearly matter far more than others, and some are much easier to improve than others.
3. If AI strategy/intelligence is cheap enough, much of the critical work could be paid for by donors. For example, imagine a think tank that uses AI to work out the best strategies/plans for much of the government, which government officials can then choose to consult.
Basically, I think some level of optimism is warranted, and I'd suggest more research into this area.
(This is all very similar to previous thinking on how forecasting can be useful to the government.)
I imagine there might be some very clever strategies to get a lot of the benefits of AI without many of the normal costs of integration.
For example:
1. The federal government makes heavy use of private contractors, and these contractors are typically faster to adopt innovations like AI.
2. Some subsets of the government clearly matter far more than others, and some are much easier to improve than others.
3. If AI strategy/intelligence is cheap enough, much of the critical work could be paid for by donors. For example, imagine a think tank that uses AI to work out the best strategies/plans for much of the government, which government officials can then choose to consult.
I’d be excited to see more work in this direction!
Quick notes: (1) is maybe the default way I expect things to go fine (though I have some worries about worlds where almost all US federal government AI capacity runs through private contractors). (2) seems right, and I'd want someone who has (or can develop) a deeper understanding of this area than me to explore it. Stuff like (3) seems quite useful, although I'm worried about things like ensuring access to the right kinds of data and decision-makers (but partnerships, or a mix of (2) and (3), could help with that).
(A lot of this probably falls loosely under "build capacity outside the US federal government" in my framework, but the lines are very blurry: a lot of the same interventions help both with appropriate use/adoption of AI in government and with external capacity.)
all very similar to previous thinking on how forecasting can be useful to the government
I hadn’t thought about this — makes sense, and a useful flag, thank you! (I might dig into this a bit more.)