To my understanding, Google has better infosec than OpenAI and Anthropic. They have much more experience protecting assets.
I share this impression. Unfortunately it’s hard to capture the quality of labs’ security with objective criteria based on public information. (I have disclaimers about this in 4-6 different places, including the homepage.) I’m extremely interested in suggestions for criteria that would capture the ways Google’s security is good.
I mean, Google does basic things like using YubiKeys, where other places don't even reliably do that. It's unclear what a good checklist would look like, but maybe one could be created; a rough sketch of what that might look like is below.
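For concreteness, here is a minimal sketch of what such a checklist could look like as structured data. The specific criteria, weights, and scoring function are hypothetical illustrations, not taken from AI Lab Watch or any published scorecard.

```python
# Illustrative only: hypothetical, publicly checkable security criteria.
# Items and weights are guesses, not drawn from any real scorecard.
SECURITY_CHECKLIST = [
    # (criterion, weight)
    ("Phishing-resistant MFA (hardware security keys) required for all staff", 3),
    ("Published access-control / insider-threat program for model weights", 3),
    ("External security audits or certifications (e.g. SOC 2, ISO 27001)", 2),
    ("Public bug bounty or vulnerability disclosure program", 1),
    ("Dedicated security team with a track record against sophisticated attackers", 3),
]

def score(lab_answers: dict) -> float:
    """Return the fraction of weighted criteria a lab publicly satisfies."""
    total = sum(w for _, w in SECURITY_CHECKLIST)
    earned = sum(w for c, w in SECURITY_CHECKLIST if lab_answers.get(c, False))
    return earned / total

if __name__ == "__main__":
    # Hypothetical answers for a made-up lab, just to show usage.
    example = {
        "Phishing-resistant MFA (hardware security keys) required for all staff": True,
    }
    print(f"Example score: {score(example):.0%}")
```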
The broader question I'm confused about is how much to update on the local, object-level evidence of whether the labs are doing "kind of reasonable" stuff, versus what their overall incentives and positions in the ecosystem point them toward doing.
E.g., your site puts OpenAI and Anthropic as the least-bad options based on their activities, but from an incentives/organizational perspective, their place in the ecosystem is just really bad for safety. Contrast with, e.g., being situated within a large tech company[1] where having an AI scaling lab is just one revenue source among many, or Meta's alleged "scorched earth" strategy, where they are trying very hard to commoditize LLMs.
E.g., GDM employees hold Google/Alphabet stock, so most of the variance in their earnings isn't going to come from AI, at least in the short term.