What’s your explanation for why they attack EAs rather than, say, the AI ethics crowd?
Why was SB 1047 so controversial, while other much more onerous AI bills (esp for “little tech”) were barely discussed?
If you think their goal is just to win, why attack the movement that has power and can coordinate funding to counter their actions? What exactly are they trying to win, and why would EA stop them from achieving that (if EA were not seeking power and influence)?
(I am not claiming that their target-selection rubric is calibrated to who is actually bad or good and idk why you would think that. I feel like you are committing some kind of fallacy where in any conflict there is a “good” side and a “bad” side and this is causing you to read implications into my comments that I don’t intend.)
I think there are (at least) two possible interpretations of
You present the parenthetical as a meliorating factor, but I expect that these enemies exist due to previous undemocratic power-seeking actions by the AI safety community.
The more natural interpretation is that “previous undemocratic power-seeking actions by the AI safety community” are causally upstream of these enemies existing and their agendas. I think this is implausible.
The more correct framing, to me, is that “previous undemocratic power-seeking actions by the AI safety community” made EAs a good target for attack ads, in a way that, say, a counterfactual version of EA that clearly and legibly never took actions that upset the power balance (e.g. a version of EA where all it does is openly advocate that people give 1% of their money to GiveDirectly) wouldn’t be. The best lies/propaganda have some grain of truth to them, and usually more than just a grain.
Similarly, if you’re advising a politician,
your scandals are why the opposing party is attacking you over your scandals, why your allies are leaving you, and why you seem to have so many enemies
is in some sense literally true (manufacturing fake scandals is less effective). It’s even useful (it’s good for politicians and would-be politicians to have fewer scandals, rather than to whine about the media or opposing attack ads being unfair)! But it’s better to model your political enemies as out to seek their objectives regardless, and your scandals as reducing the costs/increasing the benefits of one specific way to reach those objectives, rather than as causally upstream of the objectives themselves.
I am in fact claiming it is causally upstream. Idk why you think it’s implausible.
The main reason I’m not persuaded by your politician analogy is that the analogy bakes in the assumption that there is a zero-sum conflict going on. But the whole question here is why there is a conflict in the first place.
What’s your explanation for why they attack EAs rather than, say, the AI ethics crowd?
I think there are a few plausible reasons that don’t require “undemocratic power-seeking” as the primary explanation:
EAs have been more motivated and more competent at gaining influence through legitimate/standard means
EA policy ideas enjoyed a better reception from policymakers and were more aligned with their interests. Opponents might disagree more strongly with AI ethics ideas, but see them as much less likely to gain influence.
EAs compete more directly on their territory. The AI ethics crowd has sufficiently different values and assumptions that they’re less of a threat locally. (Kind of a ‘narcissism of small differences’.)
I expect that if your ideas are resonating with policymakers and people are getting appointed to relevant roles because they’re competent, bad-faith opponents will target you roughly the same as if you’d been pulling strings behind the scenes in dubious ways.
Why was SB 1047 so controversial, while other much more onerous AI bills (esp for “little tech”) were barely discussed?
Maybe I’m missing something. SB 1047 seemed like a relatively transparent action that followed the democratic process. Is your point that undemocratic power-seeking actions prior to, or unrelated to, SB 1047 likely explain the stronger opposition to it?
I don’t really care whether it’s “democratic” or “undemocratic” and wish I hadn’t used the word in my original comment (I was mostly just mirroring the original language).
My main claim is that AI safety / EA likely created their own enemies due to an intense focus on gaining influence and power.
I am not claiming it is inherently bad to gain influence and power. I do it myself. I just think AI safety / EA is pretty naive in how it goes about it.
Tbc some of your explanations would still go against my claim if they were true. I don’t think they’re true, but I agree I haven’t justified that here.
Precisely because a movement that is powerful + coordinated is more threatening than one that is not.
But there wasn’t originally any conflict! I don’t understand why everyone is presupposing conflict!
My whole point involves reasoning about what created the conflict in the first place; I’m not going to be persuaded by arguments that presume a conflict.
OK, let’s reason about what “created the conflict in the first place.” Blaming AI safety for instigating this conflict necessarily presupposes that unfettered industry is the natural order of things, and that trying to regulate, govern, or exercise any oversight or balancing of considerations is unnatural and amounted to some kind of preemptive strike that justifies a response, no matter how bad-faith that response is.
It’s fine to question whether influence-seeking was strategically costly, but I think you’ve gone beyond false equivalence into a weird kind of epistemic laundering that totally shrugs at the question of whether companies should be trusted to self-regulate and imagines that AI safety power-seeking created, out of whole cloth, enemies who had no prior beef (to mix metaphors).
I would argue that any power-seeking merely activated and organized opposition that was always latently there, because naked commercial interests and ideological aversion to any kind of responsibility regime have always existed in an industry that was already using its structural advantages to drive society toward a cliff.
I think the AI ethics crowd is the subject of attacks (though arguably this is because they tried to seek power and influence).
I don’t think that’s an attack on the AI ethics crowd. I think that’s an attack on wokeness, which maybe deals a glancing blow to AI ethics as an incidental side effect.
Like, if you look at the purpose:
One of the most pervasive and destructive of these ideologies is so-called “diversity, equity, and inclusion” (DEI). In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.
This has very little to do with what the “AI ethics” crowd wants in my experience. The topics I hear about are more like algorithmic discrimination, misinformation, the right to an explanation, child safety, job loss, copyright, accessibility, etc.
You can also skim through the papers at FAccT 2025; I think this also suggests that it’s not an attack on that crowd except incidentally.