I wrote some criticism in this comment. Mainly, I argue that

(1) A pause could be undesirable. A pause could be net-negative in expectation (with high variance depending on implementation specifics), and that PauseAI should take this concern more seriously.

(2) Fighting doesn’t necessarily bring you closer to winning. PauseAI’s approach *could* be counterproductive even for the aim of achieving a pause, whether or not it’s desirable. From my comment:
Although the analogy of war is compelling and lends itself well to your post’s argument, in politics fighting often does not get one closer to winning. Putting up a bad fight may be worse than putting up no fight at all. If the goal is winning (instead of just putting up a fight), then taking criticism of your fighting style seriously should be paramount.
What is the ultimate counterfactual here? I’d argue it’s extinction from AGI/ASI in the next 5-10 years with high probability. Better to fight this and lose than just roll over and die.
To be clear—I’m open to more scouting being done concurrently (and open to changing my mind), but imo none of these answers are convincing or reassuring.
This is missing the point of my 2nd argument. It sure sounds better to “fight and lose than roll over and die.” But I’m saying that “fighting” in the way that PauseAI is “fighting” could make it more likely that you lose. I’m not saying that “fighting” in general will have this effect, or that this won’t ever change, or that I’m confident about this. I’m just saying: take criticism seriously, acknowledge the uncertainty, and don’t rush into action just because you want to do something.
Unrelated to my argument: Not sure what you mean by “high probability”, but I’d take a combination of these views as a reasonable prior: XPT.
Who else is pushing for a global Pause/Stop/Moratorium/Non-Proliferation Treaty? Who else is doing that in a way such that PauseAI might be counterfactually harming their efforts? Again, taking no action on this and waiting for others to do something “better” are both terrible choices when the consequences of insufficient global action are that we all die in the relatively near future.
Do you think it’s possible for you to be convinced that building ASI is a suicide race, short of an actual AI-mediated global catastrophe? What would it take?
~50%. I think XPT is a terrible prior. Much better to look at the most recent AI Impacts Survey or the CAIS Statement on AI Risk.