Well, from an AI safety viewpoint, the very worst teams to be leading the AGI rush would be those that (1) are very competent, well-funded, well-run, and full of idealistic talent, and (2) don't actually care about reducing extinction risk—however much lip service they pay to AI safety.
From that perspective, OpenAI is the worst team, and they’re in the lead.
I think that's quite a pessimistic take. I take Altman seriously on caring about x-risk, though I'm not sure he takes it quite seriously enough; this is based on public comments he made to that effect around 2013, before he started running OpenAI. And Sutskever definitely seems properly concerned.