So the 299 corporate lobbyists figure is less about measuring total influence and more about how many pairs of eyes there are on the playing field who are potentially able to notice a bill moving forward—the odds that all 299 of them miss the bill for months on end are essentially zero.
You’re right to be skeptical of “number of lobbyists” as a measure of influence; a better metric would be the total amount of money spent on in-house government relations experts, outside lobbyists, advertising, PR firms, social media experts, and campaign donations. I don’t have access to those figures for tech companies, but I still feel confident that the total industry budget for DC influence is much higher than the total AI safety budget for DC influence, especially if we discount money that’s going to academic AI governance research that’s too abstract to buy much political influence.
I’m not sure why you’re saying that a conflict framing doesn’t represent the facts on the ground—it’s true that many lobbyists are friendly and are willing to work for a variety of different causes, but if they’re currently being employed to work against AI safety, then I would think we’re in conflict with them. Do you see it differently? What kinds of conflicts (if any) do you see in the political arena, and how do you think about them?
I saw this and I agree with your main points. I will be offline for a bit due to travel, but I am happy to have a longer conversation with more nuanced responses.
Policy teams at private companies are better resourced while, as you mentioned, working on issues ranging from antitrust to privacy and child protection. I may be wrong, but the teams focused specifically on frontier AI (excluding infrastructure work) seem more balanced than the numbers you provided suggest. This observation may be outdated, especially since SB-1047. You likely have a better idea of the current landscape than I do, and I’ll defer to your assessment.
Regarding “conflict framing”—I should have phrased this differently. I did not mean the policy conflicts that come up when a new or potentially consequential industry faces government intervention. I meant a situation where groups and individuals become entrenched in direct conflict on almost all issues, regardless of the consequences. A recent non-AI example is the philanthropically funded anti-fossil-fuel advocates fighting carbon capture projects despite the IRA funding and general support from climate change-focused groups. That conflict has moved beyond specific policy proposals, or even climate goals, and has become a purity test that seems impossible to overcome through negotiation. This is a situation I would not want to see, and I am glad it is not the case here.
OK, let me know when you’re back, and I’ll be happy to chat more! You can also email me at jason@aipolicy.us if you like.