One reason I’m critical of the Anthropic RSP is that it does not make it clear under what conditions it would actually pause, or for how long, or under what safeguards it would determine it’s OK to keep going.
It’s hard to take anything else you’re saying seriously when you say things like this; it seems clear that you just haven’t read Anthropic’s RSP. I think that the current conditions and resulting safeguards are insufficient to prevent AI existential risk, but to say that it doesn’t make them clear is just patently false.
The conditions under which Anthropic commits to pausing in the RSP are very clear. In big bold font on the second page it says:
Anthropic’s commitment to follow the ASL scheme thus implies that we commit to pause the
scaling and/or delay the deployment of new models whenever our scaling ability outstrips our
ability to comply with the safety procedures for the corresponding ASL.
And then it lays out a series of safety procedures that Anthropic commits to meeting for ASL-3 models or else pausing, with some of the most serious commitments here being:
Model weight and code security: We commit to ensuring that ASL-3 models are stored in
such a manner to minimize risk of theft by a malicious actor that might use the model to cause a
catastrophe. Specifically, we will implement measures designed to harden our security so that
non-state attackers are unlikely to be able to steal model weights, and advanced threat actors
(e.g. states) cannot steal them without significant expense. The full set of security measures
that we commit to (and have already started implementing) are described in this appendix, and
were developed in consultation with the authors of a forthcoming RAND report on securing AI
weights.
Successfully pass red-teaming: World-class experts collaborating with prompt engineers
should red-team the deployment thoroughly and fail to elicit information at a level of
sophistication, accuracy, usefulness, detail, and frequency which significantly enables
catastrophic misuse. Misuse domains should at a minimum include causes of extreme CBRN
risks, and cybersecurity.
Note that in contrast to the ASL-3 capability threshold, this red-teaming is about whether
the model can cause harm under realistic circumstances (i.e. with harmlessness training
and misuse detection in place), not just whether it has the internal knowledge that would
enable it in principle to do so.
We will refine this methodology, but we expect it to require at least many dozens of
hours of deliberate red-teaming per topic area, by world class experts specifically
focused on these threats (rather than students or people with general expertise in a
broad domain). Additionally, this may involve controlled experiments, where people with
similar levels of expertise to real threat actors are divided into groups with and without
model access, and we measure the delta of success between them.
And a clear evaluation-based definition of ASL-3:
We define an ASL-3 model as one that can either immediately, or with additional post-training
techniques corresponding to less than 1% of the total training cost, do at least one of the following two
things. (By post-training techniques we mean the best capabilities elicitation techniques we are aware
of at the time, including but not limited to fine-tuning, scaffolding, tool use, and prompt engineering.)
Capabilities that significantly increase risk of misuse catastrophe: Access to the model
would substantially increase the risk of deliberately-caused catastrophic harm, either by
proliferating capabilities, lowering costs, or enabling new methods of attack. This increase in risk
is measured relative to today’s baseline level of risk that comes from e.g. access to search
engines and textbooks. We expect that AI systems would first elevate this risk from use by non-state attackers.
Our first area of effort is in evaluating bioweapons risks where we will determine threat models
and capabilities in consultation with a number of world-class biosecurity experts. We are now
developing evaluations for these risks in collaboration with external experts to meet ASL-3
commitments, which will be a more systematized version of our recent work on frontier
red-teaming. In the near future, we anticipate working with CBRN, cyber, and related experts to
develop threat models and evaluations in those areas before they present substantial risks.
However, we acknowledge that these evaluations are fundamentally difficult, and there remain
disagreements about threat models.
Autonomous replication in the lab: The model shows early signs of autonomous
self-replication ability, as defined by 50% aggregate success rate on the tasks listed in
[Appendix on Autonomy Evaluations]. The appendix includes an overview of our threat model
for autonomous capabilities and a list of the basic capabilities necessary for accumulation of
resources and surviving in the real world, along with conditions under which we would judge the
model to have succeeded. Note that the referenced appendix describes the ability to act
autonomously specifically in the absence of any human intervention to stop the model, which
limits the risk significantly. Our evaluations were developed in consultation with Paul Christiano
and ARC Evals, which specializes in evaluations of autonomous replication.
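For concreteness, here is a minimal sketch of how the two quantitative criteria quoted above (the delta of success between red-teaming groups with and without model access, and the 50% aggregate success rate on the autonomy tasks) might be computed. The numbers and task names below are entirely hypothetical illustrations, not Anthropic's actual evaluations or data.

```python
# Hypothetical illustration only -- not Anthropic's actual data or methodology.

# Red-teaming uplift: compare success rates of matched expert groups
# with and without model access, as described in the quoted controlled-experiment setup.
with_model_successes = 7      # hypothetical: 7 of 20 trials succeed with model access
without_model_successes = 3   # hypothetical: 3 of 20 trials succeed without it
trials_per_group = 20

uplift_delta = (with_model_successes - without_model_successes) / trials_per_group
print(f"Delta of success from model access: {uplift_delta:.0%}")  # 20%

# Autonomy threshold: 50% aggregate success rate across the appendix's task list.
# Task names here are made up; the real list is in the RSP's autonomy appendix.
task_results = {
    "acquire_compute": True,
    "copy_weights_to_new_server": False,
    "earn_money_online": True,
    "evade_simple_monitoring": False,
}
aggregate_success_rate = sum(task_results.values()) / len(task_results)
meets_asl3_autonomy_bar = aggregate_success_rate >= 0.5
print(f"Aggregate success rate: {aggregate_success_rate:.0%}, "
      f"meets 50% threshold: {meets_asl3_autonomy_bar}")
```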
This is the basic substance of the RSP; I don’t understand how you could have possibly read it and missed this. I don’t want to be mean, but I am really disappointed in this sort of exceedingly lazy take.
I think calling a take “lazy”, which could indeed be considered “mean”, is not a very helpful approach; you could have made your point without that kind of derision. There are going to be a lot of misunderstandings and hot takes around RSPs, and I think AI company employees especially should err heavily on the side of patience and kind understanding if they want to avoid people becoming more adversarial towards them.
Live by the sword, die by the sword.
Akash said...
“…that it does not make it clear under what conditions it would actually pause, or for how long, or under what safeguards it would determine it’s OK to keep going.”
I agree the conditions from the RSP you quoted are clearer than I would have expected from reading Akash’s comment above. But to be fair to Akash, of the paragraphs you posted, only the last one states a clear and specific condition for pausing; the others mostly say “refer to experts”, which could be considered unclear, if we give Akash the benefit of the doubt.
And they don’t say how long the pause would last or what the conditions for restarting would be, either.