Very excited to see this section (from the condensed report). Are you able to say more about the kind of work you would find useful in this space or the organizations/individuals that you think are doing some exemplary work in this space?
We recommend interventions that plan for worst-case scenarios, that is, interventions that remain effective even when preventative measures fail and AI threats emerge. For concreteness, we outline some potential interventions that boost resilience against AI risks.
Developing contingency plans: Ensure there are clear plans and protocols in the event that an AI system poses an unacceptably high level of risk.13 Such planning could be analogous to planning in other fields, such as pandemic preparedness or nuclear wargaming.
Robust shutdown mechanisms: Invest in infrastructure and planning to make it easier to close down AI systems in scenarios where they pose unacceptably high levels of risk.
Also, very minor, but I think there's a formatting issue with footnote 23.