I think there are (at least) two possible interpretations of:

"You present the parenthetical as a meliorating factor, but I expect that these enemies exist due to previous undemocratic power-seeking actions by the AI safety community."
The more natural interpretation is that “previous undemocratic power-seeking actions by the AI safety community” are causally upstream of these enemies existing and their agendas. I think this is implausible.
The more correct framing, to me, is that “previous undemocratic power-seeking actions by the AI safety community” made EAs a good target for attack ads, in a way that, say, a counterfactual version of EA that clearly and legibly never took actions that upset the power balance (e.g., a version of EA where all it does is openly advocate that people give 1% of their money to GiveDirectly) wouldn’t be. The best lies/propaganda have some grain of truth to them, and usually more than just a grain.
Similarly, if you’re advising a politician,

"your scandals are why the opposing party is attacking you, why your allies are leaving you, and why you seem to have so many enemies"

is in some sense literally true (manufacturing fake scandals is less effective). It’s even useful advice (it’s good for politicians and would-be politicians to have fewer scandals, rather than to whine about the media or opposing attack ads being unfair)! But it’s better to model your political enemies as pursuing their objectives regardless, with your scandals reducing the costs/increasing the benefits of one specific path to those objectives, rather than as causally upstream of the objectives themselves.
I am in fact claiming it is causally upstream. I don’t see why you think that’s implausible.
The main reason I’m not persuaded by your politician analogy is that it bakes in the assumption that there is a zero-sum conflict going on. But the whole question here is why there is a conflict in the first place.