What do people think the implications are for the AI safety field if internal deployment is where frontier AI labs are heading? It would seem less tractable to me, especially for people who aren’t already deeply networked into the labs, which I expect describes most people currently trying to pivot into AI safety.
For starters, you’d want to lobby for external safety testing of internal models, ensure that external safety researchers get access to those models, require certain reporting, and so on.