Hi there, I'd like to share some updates from the last month.
Text as of the last update (July 5):
OpenAI is a leading AI research and product company, with teams working on alignment, policy, and security. We recommend specific positions at OpenAI that we think may be high impact. We do not necessarily recommend working at other jobs at OpenAI. You can read more about considerations around working at a leading AI company in our career review on the topic.
Text as of today:
OpenAI is a frontier AI research and product company, with teams working on alignment, policy, and security. We post specific opportunities at OpenAI that we think may be high impact. We do not necessarily recommend working at other positions at OpenAI. You can read concerns about doing harm by working at a frontier AI company in our career review on the topic. Note that there have also been concerns around OpenAI's HR practices.
The thinking behind these updates has been:
We continue to get negative updates concerning OpenAI, so it's good for us to update our guidance accordingly.
While it's unclear exactly what's going on with the NDAs (are they cancelled or are they not?), it's pretty clear that it's in the interest of users to know there's something they should look into with regard to HR practices.
We've tweaked the language to "concerns about doing harm" instead of "considerations" for all three frontier labs to indicate more strongly that these are potentially negative considerations to weigh before applying.
We don't go into much detail, for the sake of length and of people not glazing over them. My guess is that the current text is the right length to have people notice it and then look into it more with our newly updated AI company article and the Washington Post link.
This is thanks to discussions within 80k and to some of the comments here. While I suspect, @Raemon, that we still don't align on important things, I nonetheless appreciate the prompt to think this through more, and I believe it has led to improvements!
Yeah, this does seem like an improvement. I appreciate you thinking about it and making some updates.
This is a great update, thanks!
Re the "concerns around HR practices" link: I don't think that Washington Post article is the best thing to link. It focuses on clauses stopping people from talking to regulators, which, while very bad, seems less "holy shit WTF" to me than threatening people's previously paid compensation over non-disparagement agreements. I think the best article on that is Kelsey Piper's (though OpenAI have seemingly mostly released people from those agreements and the corresponding threats, and Kelsey's article doesn't link to follow-ups discussing that).
My metric here is roughly "if a friend of mine wanted to join OpenAI, what would I warn them about?" rather than "which is objectively worse for the world?", and I think "they are willing to threaten millions of dollars of stock you have already been paid" is much more important to warn about.
In the context of an EA jobs list, it seems like both are pretty bad. (There's the "job list" part, and the "EA" part.)
I'm pro including both, but was just commenting on which I would choose if only including one for space reasons.