Current: US government relations (energy & tech, mostly)
Former: doctoral candidate (law @ Oxford) / lecturer (humanitarian aid & human rights practice) / global operations advisor (nonprofits) / NSF research fellow (civil conflict management & peace science)
I saw this and agree with your main points. I will be offline for a bit due to travel, but I am happy to have a longer, more nuanced conversation once I am back.
Policy teams at private companies are better resourced, though, as you mentioned, they work on issues ranging from antitrust to privacy and child protection. I may be wrong, but the teams focused specifically on frontier AI (excluding infrastructure work) seem more balanced than the provided numbers suggest. That observation may be outdated, especially since SB-1047. You likely have a better sense of the current landscape than I do, and I’ll defer to your assessment.
Regarding “conflict framing”: I should have phrased this differently. I did not mean the policy conflicts that arise when a new or potentially consequential industry faces government intervention. I meant a situation in which groups and individuals become entrenched in direct conflict on almost every issue, regardless of the consequences. A recent non-AI example is philanthropically funded anti-fossil-fuel advocates fighting carbon capture projects despite IRA funding and general support from climate-focused groups. That conflict has moved beyond specific policy proposals, or even climate goals, and has become a purity test that seems impossible to resolve through negotiation. That is a situation I would not want to see, and I am glad it is not the case here.