The main question is how high a priority this is, and I'm somewhat skeptical it's on the ITN Pareto frontier. E.g. I would assume plenty of people care about government efficiency and state capacity generally, and a lot of these interventions are about making the USG broadly more capable rather than being specifically targeted at longtermist priorities.
Agree that “how high-priority should this be” is a key question, and I’m definitely not sure it’s on the ITN Pareto frontier! (Nice phrase, btw.)
Quick notes on some things that raise the importance for me, though:
I agree lots of people care about government efficiency/state capacity, but I suspect few of them are seriously considering the possibility of transformative AI in the near future, and I think what you do to ~boost capacity looks pretty different in that world.
Also/relatedly, my worldview gives me extra reasons to care about state capacity, and since my worldview is unusual, I should expect the world to be underinvesting in it (just as most people would love to see a world with fewer respiratory infections, but tracking the possibility of a bioengineered pandemic means I see things like far-UVC, better PPE, etc. as higher value).
More generally, I like the “how much more do I care about X” frame; see this piece from 2014.
(It could also be a kind of public good.)
In particular, it seems like a *lot* of the theory of change for AI governance+ relies on competent/skillful action by the US federal government during periods when AI is starting to radically transform the world (e.g. to mandate testing, and to be able to tell if that mandate isn’t being followed!). My sense is that this assumption is fragile: the government may very well not actually be sufficiently competent. So we had better be working on getting there, or investing more in plans that don’t rely on this assumption.
And I’m pretty worried that a decent amount of work aimed at mitigating the risks of AI could end up net-negative (by its own goals) by not tracking this issue, and thus not focusing enough on the interventions that are actually worth pursuing, further harming government AI adoption and competence/capacity in the process (e.g. some of the OMB/EO guidance from last year looked positive to me before I dug into this, and now looks negative). So I’d like to nudge some people who work on issues related to existential risk (and government) away from a view like: “all AI is scary/bad; anything ‘pro-AI’ increases existential risk; if this bundle of policies/barriers inhibits a bunch of different AI things, that’s probably great, even if only a tiny fraction of them is truly (existentially) risky.”
--
This felt like neither the sort of piece targeted at mainstream US policy folks, nor all that convincing as a case for why this should be an EA/longtermist focus area.
Totally reasonable reaction IMO. To a large extent I see this as a straightforward flaw of the piece & how I approached it (partly due to lack of time—see my reply to Michael above), although I’ll flag that my main hope was to surface this to people who are in fact kind of in between—e.g. folks at think tanks that do research on existential security and have government experience/expertise.
--
I’m unconvinced that e.g. OP should spin up a grantmaker focused on this (not that you were necessarily recommending this).
I am in fact not recommending this! (There could be specific interventions in the area that I’d see as worth funding, though, and it’s also related to other clusters where something like the above is reasonable IMO.)
--
Also, a few reasons govts may have a better time adopting AI come to mind:
Access to large amounts of internal private data
Large institutions can better afford the one-time upfront costs of training or fine-tuning specialised models, compared to small businesses
But I agree the opposing reasons you give are probably stronger.
The data has to be accessible, though, and this is a pretty big problem. See e.g. footnote 17.
I agree that a major advantage could be that the federal government can in fact move a lot of money when ~it wants to, and could make some (cross-agency/...) investments in secure models or similar, although my sense is that right now that kind of thing is the exception/aspiration, not the rule/standard practice. (Another advantage is that companies do want to maintain good relationships with the government/administration, and might thus invest more in being useful. Also, there are probably a lot of skilled people who are willing to help with this kind of work for less personal gain.)
--
🙃 (some decision-makers do, though!)