For transparency: I’d personally encourage 80k to be more opinionated here — I think you’re well positioned, with the relevant expertise, respect, and a critical mass of engineers and orgs paying attention. Or at least, as a fallback (if you’re not confident enough to be opinionated), I think you’re well positioned to host a high-quality discussion about it, but that’s a long story and maybe off topic.
I don’t currently have a confident view on this beyond “We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs.”
But I agree that if we could reach a confident position here (or even just a confident list of considerations), that would be useful for people — so thanks, this is a helpful suggestion!