What’s your explanation for why they attack EAs rather than, say, the AI ethics crowd?
I think there are a few plausible reasons that don’t require “undemocratic power-seeking” as the primary explanation:
EAs have been more motivated to, and more competent at, gaining influence through legitimate/standard means
EA policy ideas enjoyed a better reception from policymakers and were more aligned with their interests. The opponents might disagree more strongly with AI ethics ideas, but see them as much less likely to gain influence.
EAs compete more directly on their territory. The AI ethics crowd has sufficiently different values and assumptions that they’re less of a threat locally. (Kind of a ‘narcissism of small differences’.)
I expect that if your ideas are resonating with policymakers and people are getting appointed to relevant roles because they’re competent, bad faith opponents will target you roughly the same as if you’d been pulling strings behind the scenes in dubious ways.
Why was SB 1047 so controversial, while other much more onerous AI bills (esp for “little tech”) were barely discussed?
Maybe I’m missing something. SB 1047 seemed like a relatively transparent action that followed the democratic process. Is your point that undemocratic power-seeking actions prior/unrelated to SB 1047 likely explain the stronger opposition to it?
I don’t really care whether it’s “democratic” or “undemocratic” and wish I hadn’t used the word in my original comment (I was mostly just mirroring the original language).
My main claim is that AI safety / EA likely created their own enemies due to an intense focus on gaining influence and power.
I am not claiming it is inherently bad to gain influence and power. I do it myself. I just think AI safety / EA is pretty naive in how it goes about it.
Tbc, some of your explanations would still go against my claim if they were true. I don’t think they’re true, but I agree I haven’t justified that here.