You don’t appear to be majorly used for safety-washing
You don’t appear to be under the same amount of crazy NDAs as I’ve seen from OpenAI and Anthropic
You don’t seem to have made major capabilities advances
You generally seem to take the hard part of the problem more seriously, and you don’t seem institutionally committed, in the way Anthropic and OpenAI seem to me to be, to only looking at approaches that are compatible with scaling as quickly as possible (this isn’t a statement about what Google, or DeepMind at large, is doing; it’s just saying that the safety team in particular doesn’t seem committed this way)
To be clear, I am concerned many of these things will get worse with the DeepMind/Brain merger, and a lot of my datapoints are from before then, but I think the track record overall is still quite good.