I’ve just thought of a counter-argument to my point. If OpenAI isn’t safe, it may be worth trying to ensure that a safer AI lab (say, Anthropic) wins the race to AGI. So it might be worth suggesting that talented people go to Anthropic rather than OpenAI, even if they join product or capabilities teams.
That sounds like the way OpenAI got started.
What are you suggesting? That if we direct safety-conscious people to Anthropic, it will make it more likely that Anthropic starts to cut corners? Not sure what your point is.
Yes: if we send people to Anthropic with the aim of “winning an AI arms race,” that will make it more likely that Anthropic starts to cut corners. Indeed, that is very close to the reasoning that caused OpenAI to exist, and it seems to be what caused OpenAI to cut lots of corners.
Hmm, I don’t see why ensuring the best people go to Anthropic necessarily means they will take safety less seriously. I can actually imagine the opposite effect: if Anthropic catches up to or even overtakes OpenAI, their incentive to cut corners should decrease, because it becomes more likely that they can win the race without cutting corners. Right now, their only hope of winning the race is to cut corners.
Ultimately, what matters most is the leadership’s views. I suspect Sam Altman never really cared that much about safety, but my sense is that the Amodeis do.
Yeah, I don’t think this is a crazy take. I disagree with it, having thought about it for many years, but I agree that it could make things better (though I expect it would instead make things worse).
I’m skeptical this is true, particularly as AI companies grow massively and require vast amounts of investment.
It does seem important, but it’s unclear that it matters most.