Do you think this is the case for Deepmind / OpenAI’s safety teams as well, or does this only apply to non-safety roles within these organisations?
I don’t think this is true for the safety teams at Deepmind, but I think it was true for some of the safety team at OpenAI, though not all of it (I don’t know what the current safety team at OpenAI is like, since most of it left for Anthropic).
Thanks for sharing. It seems like the most informed people in AI Safety have strongly changed their views on the impact of OpenAI and Deepmind compared to only a few years ago. Most notably, I was surprised to see ~all of the OpenAI safety team leave for Anthropic. This shift and the reasoning behind it have been fairly opaque to me, although I try to keep up to date. Clearly there are risks with publicly criticizing these important organizations, but I’d be really interested to hear more about this update from anybody who understands it.