But I also wish you’d say what exactly your alternative course of action is, and why it’s better. E.g. the worry of “algorithmic progress gets you to the threshold” also applies to unconditional pauses. Right now your comments feel to me like a search for anything negative about a conditional pause, without checking whether that negative applies to other courses of action.
The way I see it, the main difference between conditional vs unconditional pause is that the unconditional pause comes with a bigger safety margin (as big as we can muster). So, given that I’m more worried about surprising takeoffs, that position seems prima facie more appealing to me.
In addition, as I say in my other comment, I’m open to (edit: or, more strongly, I’d ideally prefer this!) some especially safety-conscious research continuing through the pause. I gather that this is one of your primary concerns? I agree that an outcome where that’s possible requires nuanced discourse, which we may not get if public reaction to AI swings too far in one direction. So, I agree that there’s a tradeoff around public advocacy.