Hooray for stepping out of the limelight
From maybe 2013 to 2016, DeepMind was at the forefront of hype around AGI. Since then, they've done much less to stoke that hype. For example, AlphaStar received far less fanfare than I think it could have.
I think there's a very solid chance that this was an intentional move on DeepMind's part: that they've been deliberately avoiding making AGI capabilities seem sexy.
In the wake of big public releases like ChatGPT and Sydney and GPT-4, I think it’s worth appreciating this move on DeepMind’s part. It’s not a very visible move. It’s easy to fail to notice. It probably hurts their own position in the arms race. I think it’s a prosocial move.
If you are the sort of person who is going to do AGI capabilities research—and I recommend against it—then I’d recommend doing it at places that are more likely to be able to keep their research private, rather than letting it contribute to an arms race that I expect would kill literally everyone.
I suspect that DeepMind has not only been avoiding hype, but also avoiding publishing a variety of their research. Various other labs have also been avoiding both, and I applaud them too. And perhaps DeepMind has been out of the limelight because they focus less on large language models, and the results they do have are harder to hype. But insofar as DeepMind was in the limelight and intentionally stepped back from it, avoiding drawing far more attention and investment to AGI capabilities (given that Earth is not well-positioned to deploy AGI capabilities in ways that make the world better), I think that's worth noticing and applauding.
(To be clear: I think DeepMind could do significantly better on the related axis of avoiding publishing research that advances capabilities, and for instance I was sad to see Chinchilla published. And they could do better at avoiding hype themselves, as noted in the comments. At this stage, I would recommend that DeepMind cease further capabilities research until our understanding of alignment is much further along, and my applause for the specific act of avoiding hype does not constitute a general endorsement of their operations. Nevertheless, my primary guess is that DeepMind has made at least some explicit attempts to avoid hype, and insofar as that’s true, I applaud the decision.)