This is missing the point of my 2nd argument. It sure sounds better to “fight and lose than roll over and die.” But I’m saying that “fighting” in the way that PauseAI is “fighting” could make it more likely that you lose. Not saying “fighting” in general will have this effect. Or that this won’t ever change. Or that I’m confident about this. Just saying: take criticism seriously, acknowledge the uncertainty, don’t rush into action just because you want to do something.
Unrelated to my argument: Not sure what you mean by “high probability” but I’d take a combination of these views as a reasonable prior: XPT.
Who else is pushing for a global Pause/Stop/Moratorium/Non-Proliferation Treaty? Who else is doing that in a way such that PauseAI might be counterfactually harming their efforts? Again, taking no action on this, or waiting for others to do something “better”, is a terrible choice when the consequences of insufficient global action are that we all die in the relatively near future.
Do you think it’s possible for you to be convinced that building ASI is a suicide race, short of an actual AI-mediated global catastrophe? What would it take?
~50%. I think XPT is a terrible prior. Much better to look at the most recent AI Impacts Survey or the CAIS Statement on AI Risk.