I tend to put P(doom) around 80%, so I think I’m on the pessimistic side, and I tend to think short timelines are at least a real and serious possibility that we should be planning for. Nevertheless, I disagree with a global stop or a pause being the “only reasonable hope”—global stops and pauses seem basically unworkable to me. I’m much more excited about governmentally enforced Responsible Scaling Policies, which seem like the “better option” that you’re missing here.
@evhub can you say more about what you envision a governmentally-enforced RSP world would look like? Is it similar to licensing? What happens when a dangerous capability eval goes off— does the government have the ability to implement a national pause?
Aside: IMO it’s pretty clear that the voluntary-commitment RSP regime is insufficient, since some companies simply won’t develop RSPs, and even if lots of folks adopted RSPs, the competitive pressures in favor of racing seem like they’d make it hard for anyone to pause for more than a few months. I was surprised/disappointed that neither ARC nor Anthropic mentioned this. ARC says some stuff about how maybe, one day in the future, RSPs might inform government standards, but (in my opinion) their discussion of government involvement was quite weak, perhaps even to the point of being misleading (by making it seem like the voluntary commitments will be sufficient).
I think some of the negative reaction to responsible scaling, at least among some people I know, is that it seems like an attempt by companies to say “trust us—we can scale responsibly, so we don’t need actual government regulation.” If the narrative is “hey, we agree that the government should force everyone to scale responsibly, and this means that the government would have the ability to tell people that they have to stop scaling if the government decides it’s too risky”, then I’d still probably prefer stopping right now, but I’d be much more sympathetic to the RSP position.
> What happens when a dangerous capability eval goes off—does the government have the ability to implement a national pause?
I think presumably the pause would just be for that company’s scaling—presumably other organizations that were still in compliance would still be fine.
> If the narrative is “hey, we agree that the government should force everyone to scale responsibly, and this means that the government would have the ability to tell people that they have to stop scaling if the government decides it’s too risky”, then I’d still probably prefer stopping right now, but I’d be much more sympathetic to the RSP position.
That’s definitely my position, yeah—and I think it’s also ARC’s and Anthropic’s position. I think the key thing with the current advocacy around companies doing this is that one of the best ways to get a governmentally-enforced RSP regime is for companies to first voluntarily commit to the sort of RSPs that you want the government to later enforce.
Thanks! A few quick responses/questions:

> I think presumably the pause would just be for that company’s scaling—presumably other organizations that were still in compliance would still be fine.
I think this makes sense for certain types of dangerous capabilities (e.g., a company develops a system that has strong cyberoffensive capabilities. That company has to stop but other companies can keep going).
But what about dangerous capabilities that have more to do with AI takeover (e.g., a company develops a system that shows signs of autonomous replication, manipulation, power-seeking, deception) or scientific capabilities (e.g., the ability to develop better AI systems)?
Supposing that 3-10 other companies are within a few months of these systems, do you think at this point we need a coordinated pause, or would it be fine to just force company 1 to pause?
> That’s definitely my position, yeah—and I think it’s also ARC’s and Anthropic’s position.
Do you know if ARC or Anthropic have publicly endorsed this position anywhere? (And if not, I’d be curious for your take on why, although that’s more speculative so feel free to pass).
> But what about dangerous capabilities that have more to do with AI takeover (e.g., a company develops a system that shows signs of autonomous replication, manipulation, power-seeking, deception) or scientific capabilities (e.g., the ability to develop better AI systems)?

> Supposing that 3-10 other companies are within a few months of these systems, do you think at this point we need a coordinated pause, or would it be fine to just force company 1 to pause?

What should happen there is that the leading lab is forced to stop and try to demonstrate that e.g. they understand their model sufficiently such that they can keep scaling. Then:

If they can’t do that, then the other labs catch up and they’re all blocked on the same spot, which, if you’ve put your capabilities bars at the right spots, shouldn’t be dangerous.

If they can do that, then they get to keep going, ahead of other labs, until they hit another blocker and need to demonstrate safety/understanding/alignment to an even greater degree.

> Do you know if ARC or Anthropic have publicly endorsed this position anywhere?

I wrote up a bunch of my thoughts on this in more detail here.
Hi Evan,

What is your median time from now until human extinction? If it is only a few years, I would be happy to set up a bet like this one.

I mention Responsible Scaling!

EDIT to add: I’m interested in a response from evhub (or anyone else) to the points raised against Responsible Scaling (see links for more details).

I guess I’m not really sure what your objection is to Responsible Scaling Policies? I see that there’s a bunch of links, but I don’t really see a consistent position being staked out by the various sources you’ve linked to. Do you want to describe what your objection is?
I guess the closest thing there is “the danger is already apparent enough”, which, while true, doesn’t really seem like an objection. I agree that the danger is apparent, but I don’t think that advocating for a pause is a very good way to address that danger.
The consistent position is that further scaling is reckless at this stage; it can’t be done in a “responsible” way, unless you think subjecting the world to a 10-25% risk of extinction is a responsible thing to be doing!
What is a better way of addressing the danger? Waiting for it to get more intense and more apparent by scaling further!? Waiting until a disaster actually happens? Actually pausing, or stopping (and setting an example), rather than just advocating for a pause?
Perhaps the crux is related to how dangerous you think current models are? I’m quite confident that we have at least a couple additional orders of magnitude of scaling before the world ends, so I’m not too worried about stopping training of current models, or even next-generation models. But I do start to get worried with next-next-generation models.
So, in my view, the key is to make sure that we have a well-enforced Responsible Scaling Policy (RSP) regime that is capable of preventing scaling unless hard safety metrics are met (I favor understanding-based evals for this) before the next two scaling generations. That means we need to get good RSPs into law with solid enforcement behind them and—at least in very short timeline worlds—that needs to happen in the next few years. By far the best way to make that happen, in my opinion, is to pressure labs to put out good RSPs now that governments can build on.
I don’t think the current models are dangerous, but perhaps they could be if used for long enough on improving AI. A couple of orders of magnitude (or a couple of generations) is only a couple of years! This is soon enough to be pushing as hard as we can for a pause right now!
Why try and take it right down to the wire with RSPs? Why over-complicate things? The stakes couldn’t be bigger (extinction). It’s super reckless to not just be saying “It seems quite likely we’re getting to world-ending models in 2-5 years. Let’s not keep going any longer. Let’s just stop now.” The tradeoff [edit: for Anthropic] for a few tens of $Bs of extra profit really doesn’t seem worth it!
> This is soon enough to be pushing as hard as we can for a pause right now!
I mean, yes, obviously we should be doing everything we can right now. I just think that an RSP-gated pause is the right way to do a pause. I’m not even sure what it would mean to do a pause without an RSP-like resumption condition.
> Why try and take it right down to the wire with RSPs?
Because it’s more likely to succeed. RSPs provide very clear and legible risk-based criteria that are much more plausibly things that you could actually get a government to agree to.
> The tradeoff for a few tens of $Bs of extra profit really doesn’t seem worth it!
This seems extremely disingenuous and bad faith. That’s obviously not the tradeoff and it confuses me why you would even claim that. Surely you know that I am not Sam Altman or Dario Amodei or whatever.
The actual tradeoff is the probability of success. If I thought e.g. just advocating for a six month pause right now was more effective at reducing existential risk, I would do it.
> I’m not even sure what it would mean to do a pause without an RSP-like resumption condition.
Have the resumption condition be a global consensus on an x-safety solution or a global democratic mandate for restarting (and remember there are more components of x-safety than just alignment—also misuse and multi-agent coordination).
> much more plausibly things that you could actually get a government to agree to.
I think if governments actually properly appreciated the risks, they could agree to an unconditional pause.
> This seems extremely disingenuous and bad faith. That’s obviously not the tradeoff and it confuses me why you would even claim that. Surely you know that I am not Sam Altman or Dario Amodei or whatever.
Sorry. I’m looking at it at the company level. Please don’t take my critiques as being directed at you personally. What is in it for Anthropic and OpenAI and DeepMind to keep going with scaling? Money and power, right? I think it’s pushing it a bit at this stage to say that they, as companies, are primarily concerned with reducing x-risk. If they were they would’ve stopped scaling already. Forget the (suicide) race. Set an example to everyone and just stop!
> Have the resumption condition be a global consensus on an x-safety solution or a global democratic mandate for restarting (and remember there are more components of x-safety than just alignment—also misuse and multi-agent coordination).
This seems basically unachievable, and even if it were achievable it doesn’t even seem like the right thing to do—I don’t actually trust the global median voter to judge whether additional scaling is safe or not. I’d much rather have rigorous technical standards than nebulous democratic standards.
> I think it’s pushing it a bit at this stage to say that they, as companies, are primarily concerned with reducing x-risk.
That’s why we should be pushing them to have good RSPs! I just think you should be pushing on the RSP angle rather than the pause angle.
> I’d much rather have rigorous technical standards than nebulous democratic standards.
Fair. And where I say “global consensus on an x-safety solution”, I mean expert opinion (as I say in the OP). I expect the public to remain generally a lot more conservative than the technical experts, though, in terms of the risk they are willing to tolerate.
> I just think you should be pushing on the RSP angle rather than the pause angle.
The RSP angle is part of the corporate “big AI” “business as usual” agenda. To those of us playing the outside game it seems very close to safetywashing.
> The RSP angle is part of the corporate “big AI” “business as usual” agenda. To those of us playing the outside game it seems very close to safetywashing.
I’ve written up more about why I think this is not true here.
Thanks. I’m not convinced.

Why are people downvoting my reply without comment, and upvoting evhub’s comment? It’s the most upvoted comment, even though he clearly didn’t even ctrl-F for “Responsible Scaling” / notice that I’d addressed it in the OP!