Seth—you mentioned that ‘we currently have three teams in the lead who all appear to honestly take the risks very seriously, and changing that might be a very bad idea.’
I assume you’re referring to OpenAI, DeepMind, and Anthropic.
Yes, they all pay lip service to AI safety, and they hire safety researchers, and they safety-wash their capabilities development.
But I see no evidence that they would actually stop their AGI development under any circumstances, no matter how risky it started to seem.
Maybe you trust their leadership. I do not. And I don’t think the 8 billion people in the world should have their fates left in the hands of a tiny set of AI industry leaders—no matter how benevolent they seem, or how many times they talk about AI safety in interviews.
I agree that those teams aren’t completely trustworthy, and in an ideal world, we should be making this decision by including everyone on earth. But with a partial pause, do you expect to have better or worse teams in the lead for achieving AGI? That was my point.
Well, from an AI safety viewpoint, the very worst teams to be leading the AGI rush would be those that (1) are very competent, well-funded, well-run, and full of idealistic talent, and (2) don’t actually care about reducing extinction risk—however much lip service they pay to AI safety.
From that perspective, OpenAI is the worst team, and they’re in the lead.
I think that’s quite a pessimistic take. I take Altman seriously when he says he cares about x-risk, although I’m not sure he takes it quite seriously enough. This is based on public comments to that effect around 2013, before he started running OpenAI. And Sutskever definitely seems properly concerned.