I’m not even sure what it would mean to do a pause without an RSP-like resumption condition.
Have the resumption condition be a global consensus on an x-safety solution or a global democratic mandate for restarting (and remember there are more components of x-safety than just alignment—also misuse and multi-agent coordination).
…much more plausibly things that you could actually get a government to agree to.
I think if governments actually properly appreciated the risks, they could agree to an unconditional pause.
This seems extremely disingenuous and bad faith. That’s obviously not the tradeoff and it confuses me why you would even claim that. Surely you know that I am not Sam Altman or Dario Amodei or whatever.
Sorry. I’m looking at it at the company level. Please don’t take my critiques as being directed at you personally. What is in it for Anthropic and OpenAI and DeepMind to keep going with scaling? Money and power, right? I think it’s pushing it a bit at this stage to say that they, as companies, are primarily concerned with reducing x-risk. If they were they would’ve stopped scaling already. Forget the (suicide) race. Set an example to everyone and just stop!
Have the resumption condition be a global consensus on an x-safety solution or a global democratic mandate for restarting (and remember there are more components of x-safety than just alignment—also misuse and multi-agent coordination).
This seems basically unachievable, and even if it were achievable, it doesn’t seem like the right thing to do—I don’t actually trust the global median voter to judge whether additional scaling is safe or not. I’d much rather have rigorous technical standards than nebulous democratic standards.
I think it’s pushing it a bit at this stage to say that they, as companies, are primarily concerned with reducing x-risk.
That’s why we should be pushing them to have good RSPs! I just think you should be pushing on the RSP angle rather than the pause angle.
I’d much rather have rigorous technical standards than nebulous democratic standards.
Fair. And where I say “global consensus on an x-safety solution”, I mean expert opinion (as I say in the OP). I expect the public to remain generally a lot more conservative than the technical experts, though, in terms of the risk they are willing to tolerate.
I just think you should be pushing on the RSP angle rather than the pause angle.
The RSP angle is part of the corporate “big AI” “business as usual” agenda. To those of us playing the outside game it seems very close to safetywashing.
The RSP angle is part of the corporate “big AI” “business as usual” agenda. To those of us playing the outside game it seems very close to safetywashing.
I’ve written up more about why I think this is not true here.
Thanks. I’m not convinced.