This raises the concern of whether 80,000 Hours should still recommend that people join OpenAI.
Even if OpenAI has gone somewhat off the rails, should we want more or fewer safety-conscious people at OpenAI? I would imagine more.
I expect this was very much taken into account by the people that have quit, which makes their decision to quit anyway quite alarming.
Does this not imply that all the people who quit recently shouldn’t have?
From an EA perspective—yes, maybe.
But it’s also a personal decision. If you’re burnt out and fed up, or you can’t bear to support an organization you disagree with, then you may be better off quitting.
Also, quitting in protest can be a way to convince an organization to change course. It’s not always effective, but it does send a strong message to leadership that you disapprove of what they’re doing, which may at the very least get them thinking.
I’ve just thought of a counter-argument to my point. If OpenAI isn’t safe, it may be worth trying to ensure that a safer AI lab (say, Anthropic) wins the race to AGI. So it might be worth suggesting that talented people go to Anthropic rather than OpenAI, even as part of product or capabilities teams.
That sounds like the way OpenAI got started.
What are you suggesting? That directing safety-conscious people to Anthropic will make it more likely that Anthropic starts to cut corners? I’m not sure what your point is.
Yes: that if we send people to Anthropic with the aim of “winning an AI arms race”, this will make it more likely that Anthropic starts to cut corners. Indeed, that is very close to the reasoning that led to OpenAI existing in the first place, and that seems to have caused it to cut lots of corners.
Hmm, I don’t see why ensuring the best people go to Anthropic necessarily means it will take safety less seriously. I can actually imagine the opposite effect: if Anthropic catches up with or even overtakes OpenAI, its incentive to cut corners should decrease, because it becomes more likely that it can win the race without cutting corners. Right now, its only hope of winning the race is to cut corners.
Ultimately, what matters most is the leadership’s views. I suspect that Sam Altman never really cared that much about safety, but my sense is that the Amodeis do.
Yeah, I don’t think this is a crazy take. I disagree with it, based on having thought about it for many years, but I agree that it could make things better (though I expect it would instead make things worse).
I’m skeptical this is true, particularly as AI companies grow massively and require vast amounts of investment.
It does seem important, but it’s unclear that it matters most.