Hi there, I’d like to share some updates from the last month.
Text as of the last update (July 5):
OpenAI is a leading AI research and product company, with teams working on alignment, policy, and security. We recommend specific positions at OpenAI that we think may be high impact. We do not necessarily recommend working at other jobs at OpenAI. You can read more about considerations around working at a leading AI company in our career review on the topic.
Text as of today:
OpenAI is a frontier AI research and product company, with teams working on alignment, policy, and security. We post specific opportunities at OpenAI that we think may be high impact. We do not necessarily recommend working at other positions at OpenAI. You can read concerns about doing harm by working at a frontier AI company in our career review on the topic. Note that there have also been concerns around OpenAI’s HR practices.
The thinking behind these updates has been:
We continue to get negative updates concerning OpenAI, so it’s good for us to update our guidance accordingly.
While it’s unclear exactly what’s going on with the NDAs (have they been cancelled or not?), it’s pretty clear that it’s in users’ interest to know there’s something they should look into regarding OpenAI’s HR practices.
We’ve changed the language from “considerations” to “concerns about doing harm” for all three frontier labs, to signal more strongly that these are potentially negative factors to weigh before applying.
We don’t go into much detail, both to keep it short and so that readers don’t glaze over it. My guess is that the current text is the right length for people to notice it and then look into it further via our newly updated AI company article and the Washington Post link.
This is thanks to discussions within 80k and to some of the comments here. While I suspect, @Raemon, that we still don’t align on important things, I nonetheless appreciate the prompt to think this through more, and I believe it has led to improvements!
Yeah, this does seem like an improvement. I appreciate you thinking about it and making some updates.
This is a great update, thanks!
Re the “concerns around HR practices” link, I don’t think that Washington Post article is the best thing to link. It focuses on clauses stopping people from talking to regulators, which, while very bad, seems less “holy shit WTF” to me than threatening people’s previously paid compensation over non-disparagement agreements. I think the best article on that is Kelsey Piper’s (though OpenAI has seemingly mostly released people from those agreements and the corresponding threats, and Kelsey’s article doesn’t link to follow-ups discussing that).
My metric here is roughly “if a friend of mine wanted to join OpenAI, what would I warn them about” rather than “which is objectively worse for the world”, and I think ‘they are willing to threaten millions of dollars of stock you have already been paid’ is much more important to warn about.
In the context of an EA jobs list, it seems like both are pretty bad. (There’s the “job list” part, and the “EA” part.)
I’m pro including both, but was just commenting on which I would choose if only including one for space reasons.