This is a valuable post, but I don't think it engages with much of the actual concern about PauseAI advocacy. I have two main reasons why I broadly disagree:
1. Pausing AI development could be the wrong move, even if you don't care about benefits and only care about risks.
AI safety is an area with a lot of uncertainty. Importantly, this uncertainty isn’t merely about the nature of the risks but about the impact of potential interventions.
Of all interventions, pausing AI development is, some think, a particularly risky one. Its dangers include:
Falling behind China
Creating a compute overhang with subsequent rapid catch-up development
Polarizing the AI discourse before risks are clearer (and discrediting concerned AI experts), turning AI into a politically intractable problem, and
Causing AI lab regulatory flight to countries with lower state capacity, less robust democracies, fewer safety guardrails, and a lesser ability to mandate security standards to prevent model exfiltration
People at PauseAI are probably less concerned about the above (or more concerned about model autonomy, catastrophic risks, and short timelines).
Although you may have felt that you did your “scouting” work and arrived at a position worth defending as a warrior, others’ comparably thorough scouting work has led them to a different position. Their opposition to your warrior-like advocacy, then, may not come (as your post suggests) from a purist notion that we should preserve elite epistemics at the cost of impact, but from a fundamental disagreement about the desirability of the consequences of a pause (or other policies), or of advocacy for a pause.
If our shared goal is the clichéd securing-benefits-and-minimizing-risks, or even just minimizing risks, one should be open to thoughtful colleagues’ input that one’s actions may be counterproductive to that end-goal.
2. Fighting does not necessarily get one closer to winning.
Although the analogy of war is compelling and lends itself well to your post's argument, in politics fighting often does not get one closer to winning. Putting up a bad fight may be worse than putting up no fight at all. If the goal is winning (instead of just putting up a fight), then taking criticism of your fighting style seriously should be paramount.
I still concede that a lot of people dismiss PauseAI merely because they see it as cringe. But I don’t think this is the core of most thoughtful people’s criticism.
To be very clear, I'm not saying that PauseAI people are wrong, or that a pause will always be undesirable, or that they are using the wrong methods. I am responding to
(1) the feeling that this post dismissed criticism of PauseAI without engaging with its object-level arguments, and the feeling that it wrongly ascribed outside criticism to epistemic purism and a reluctance to "do the dirty work," and
(2) the idea that the scouting work is "done" already and an AI pause is currently desirable. (I'm not sure I'm right here at all, but I have reasons [above] to think that PauseAI shouldn't be so sure either.)
Sorry for not editing this better; I wanted to write it quickly. I welcome people's responses, though I may not be able to reply to them!