But there wasn’t originally any conflict! I don’t understand why everyone is presupposing conflict!
My whole point involves reasoning about what created the conflict in the first place; I’m not going to be persuaded by arguments that presume a conflict.
OK, let’s reason about what “created the conflict in the first place.” Blaming AI safety for instigating this conflict necessarily presupposes that unfettered industry is the natural order of things, and that trying to regulate, govern, or exercise any oversight, or to balance competing considerations, is unnatural and amounted to some kind of preemptive strike that justifies a response, no matter how bad-faith that response is.
It’s fine to question whether influence-seeking was strategically costly, but I think you’ve gone beyond false equivalence into a weird kind of epistemic laundering: one that totally shrugs at the question of whether companies should be trusted to self-regulate, and imagines that AI safety power-seeking created, out of whole cloth, enemies who had no prior beef (to mix metaphors).
I would argue that any power-seeking merely activated and organized opposition that was always latently there, because naked commercial interests and ideological aversion to any kind of responsibility regime have always existed in an industry that was already using its structural advantages to drive society toward a cliff.
Precisely because a movement that is powerful + coordinated is more threatening than one that is not.