In 2024, several lobbyists from the same firm represented both OpenAI and the Center for AI Safety Action Fund at the same time. I am not suggesting any conflict of interest on their part. However, I don’t think the “299 corporate lobbyists vs. scrappy AI safety groups” framing is effective, given that at least some of the money is flowing to similar places.
I wouldn’t compare external lobbyists to full-time advocacy staff because (1) external lobbyists and lawyers typically cover many clients and are unlikely to be particularly committed to a single issue like AI safety, and (2) firms often register anyone who does outreach on a project, regardless of whether they meet federal lobbying disclosure thresholds. The field is also pretty congenial in general; Amazon lobbyists have helped me with EA-related side projects without payment, simply because they were being nice.
This isn’t to say that advocacy couldn’t absorb more funding. But the conflict framing doesn’t seem to represent the facts on the ground, at least if organizations want funding to hire mainstream government relations people, who rarely see it that way. Upskilling people already committed to AI safety would be different.
You run an AI safety org full time and have a better idea of the field. I’m just throwing in my two cents re: representation disparities.
So the 299 corporate lobbyists figure is less about measuring total influence and more about how many pairs of eyes there are on the playing field who could potentially notice a bill moving forward; the odds that all 299 of them miss the bill for months on end are essentially zero.
You’re right to be skeptical of “number of lobbyists” as a measure of influence; a better metric would be the total amount of money spent on in-house government relations experts, outside lobbyists, advertising, PR firms, social media experts, and campaign donations. I don’t have access to those figures for tech companies, but I still feel confident that the total industry budget for DC influence is much higher than the total AI safety budget for DC influence, especially if we discount money that’s going to academic AI governance research that’s too abstract to buy much political influence.
I’m not sure why you’re saying that a conflict framing doesn’t represent the facts on the ground—it’s true that many lobbyists are friendly and are willing to work for a variety of different causes, but if they’re currently being employed to work against AI safety, then I would think we’re in conflict with them. Do you see it differently? What kinds of conflicts (if any) do you see in the political arena, and how do you think about them?
I saw this and I agree with your main points. I will be offline for a bit due to travel, but I am happy to have a longer, more nuanced conversation afterwards.
Policy teams at private companies are better resourced but, as you mentioned, work on issues ranging from antitrust to privacy and child protection. I may be wrong, but the teams focused specifically on frontier AI (excluding infrastructure work) seem more balanced than the provided numbers suggest. This observation may be outdated, especially since SB-1047. You likely have a better idea of the current landscape than I do, and I’ll defer to your assessment.
Regarding “conflict framing”: I should have phrased this differently. I did not mean the policy conflicts that come up when a new or potentially consequential industry faces government intervention. I meant a situation in which groups and individuals become entrenched in direct conflict on almost all issues, regardless of the consequences. A recent non-AI example would be philanthropically funded anti-fossil-fuel advocates fighting carbon capture projects despite IRA funding and general support from climate-focused groups. That conflict has moved beyond specific policy proposals or even climate goals and has become a purity test that seems impossible to overcome through negotiation. This is a situation I would not want to see, and I am glad it is not the case here.
OK, let me know when you’re back, and I’ll be happy to chat more! You can also email me at jason@aipolicy.us if you like.