Don’t work in certain positions unless you feel awesome about the lab being a force for good.
First of all I agree, thumbs up from me! 🙌
But you also wrote:
Recommended organisations
We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs.
I assume you don’t recommend that people go work for whichever lab “currently [seems like they’re] taking existential risk more seriously than other labs”?
Do you have further recommendations on how to pick a lab?
(Do you agree this is a really important part of an AI-safety career plan, or does it seem sort of secondary to you?)
I’m asking in the context of an engineer considering working on capabilities (and if they’re building a skill, they might ask themselves “what am I going to use this skill for?”, which I think is a good question). Also, I noticed you wrote “broadly advancing AI capabilities should be regarded overall as probably harmful”, which I agree with, and which seems to make this question even more important.
For transparency: I’d personally encourage 80k to be more opinionated here; I think you’re well positioned, with the relevant expertise, the respect, and a critical mass of engineers and orgs. Or at least, as a fallback (if you’re not confident enough to be opinionated), I think you’re well positioned to host a high-quality discussion about it, but that’s a long story and maybe off topic.
TL;DR: “which lab” seems important, no?
I don’t currently have a confident view on this beyond “We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs.”
But I agree that if we could reach a confident position here (or even just a confident list of considerations), that would be useful for people — so thanks, this is a helpful suggestion!