Dan H
AISN #38: Supreme Court Decision Could Limit Federal Ability to Regulate AI Plus, “Circuit Breakers” for AI systems, and updates on China’s AI industry
AI Safety Newsletter #37: US Launches Antitrust Investigations Plus, recent criticisms of OpenAI and Anthropic, and a summary of Situational Awareness
AISN #36: Voluntary Commitments are Insufficient Plus, a Senate AI Policy Roadmap, and Chapter 1: An Overview of Catastrophic Risks
AISN #35: Lobbying on AI Regulation Plus, New Models from OpenAI and Google, and Legal Regimes for Training on Copyrighted Data
OpenAI has made a hard commitment to safety by allocating 20% of its compute (~20% of its budget) to the superalignment team. That is a huge commitment, which isn’t reflected in this.
I mean, Google does basic things like use YubiKeys, where other places don’t even do that reliably. It’s unclear what a good checklist would look like, but maybe one could be created.
AISN #34: New Military AI Systems Plus, AI Labs Fail to Uphold Voluntary Commitments to UK AI Safety Institute, and New AI Policy Proposals in the US Senate
To my understanding, Google has better infosec than OpenAI and Anthropic. They have much more experience protecting assets.
I’ve heard OpenAI employees talk about the relatively high amount of compute superalignment has (complaining that superalignment has too much while they, the employees outside superalignment, don’t have enough). In conversations with superalignment people, I noticed they talk about it as a real strategic asset (“make sure we’re ready to use our compute on automated AI R&D for safety”) rather than just an example of safety washing. This was something Ilya pushed for back when he was there.