Should we keep making excuses for OpenAI, Anthropic, and DeepMind pursuing AGI at recklessly high speed, despite the fact that AI capabilities research is far outpacing AI safety and alignment research?
I don’t at all follow your jump from “OpenAI is wracked by scandals” to “other AGI labs bad”—Anthropic and GDM had nothing to do with Sam’s behaviour, and Anthropic’s co-founders actively chose to leave OpenAI. I know you already held this position, but it feels like you’re arguing that Sam’s scandals should change other people’s position here. I don’t see how they give much evidence either way for how the EA community should engage with Anthropic or DeepMind.
I definitely agree that this gives meaningful evidence on whether e.g. 80K should still recommend working at OpenAI (or even working on alignment at OpenAI, though that’s far less clear-cut IMO).
Neel—am I incorrect that Anthropic and DeepMind are still pursuing AGI, despite AI safety and alignment research still lagging far behind AI capabilities research? If they are still pursuing AGI, rather than pausing AGI research, they are no more ethical than OpenAI, in my opinion.
The OpenAI debacles and scandals help illuminate some of the commercial incentives, personal egos, and systemic hubris that sacrifice safety for speed in the AI industry. But there’s no reason to think those issues are unique to OpenAI.
If Anthropic came out tomorrow and said, ‘OK, everyone, this AGI stuff is way too dangerous to pursue at the moment; we’re shutting down capabilities research for a decade until AI safety can start to catch up’, then they would have my respect.