Hmm, I don’t see why ensuring the best people go to Anthropic necessarily means they will take safety less seriously. I can actually imagine the opposite effect: if Anthropic catches up to or even overtakes OpenAI, their incentive to cut corners should actually decrease, because it becomes more likely that they can win the race without cutting corners. Right now, their only hope of winning the race is to cut corners.
Ultimately what matters most is what the leadership’s views are. I suspect that Sam Altman never really cared that much about safety, but my sense is that the Amodeis do.
Yeah, I don’t think this is a crazy take. I disagree with it based on having thought about it for many years, but yeah, I agree that it could make things better (though I don’t expect it would and would instead make things worse).
I’m skeptical this is true, particularly as AI companies grow massively and require vast amounts of investment.
It does seem important, but it’s unclear that it matters most.