I'll push against this post a little bit, despite agreeing with a lot of the ideas.
Firstly, I think we can avoid the moral discomfort of "hoping for warning shots" by reframing it as "hoping for windows of opportunity". We should hope and prepare for moments where, for whatever reason, policymakers and the public are unusually attentive to what we're saying.
Secondly, while you're mainly arguing against the hand-wavy "warning shot as cavalry" claims, there seems to be another claim here: that we should act in much the same way regardless of whether the "warning shot" model is correct, i.e. whether we expect the policy and discourse battle to take the form of a gradual grind of persuasion or a very lumpy, unpredictable pattern shaped around distinct windows of opportunity.
Our strategy might look similar most of the time, and I agree that a lot of the hard persuasion work in the trenches needs to go on regardless. But I suspect there are a few ways you might act differently if the "warning shot"/"windows of opportunity" model is correct. For example:
Strategic preparedness: keep some things in reserve, and have ready-to-go policy proposal binders or communication strategies prepared deliberately for when a window opens.
Take a slightly more cautious approach to preserving credibility capital. There are ways of talking about risks now that might cost you influence today but would look appropriate once the right window opens.
Build relationships in anticipation of a window of opportunity opening, rather than pushing directly for change.
I agree with all your suggestions and don't see them as being in tension with the post.
I'm not trying to say reality will never be lumpy, but I am claiming that we can't make use of that lumpiness without a contingent of the overall AI Safety movement being prepared to take a grindy strategy. Sometimes it'll be pure grind and sometimes it'll have more momentum behind it. But if you have no groundwork laid when something big happens, you can't just jump in and expect people to interpret it as supporting your account.