I don't think it's accurate to say that "EAs have quickly pivoted to viewing AI companies as adversaries, after a long period of uneasily viewing them as necessary allies."
My understanding is that no matter how you define "EAs," many people have always been supportive of working with/at AI companies, and many others sceptical of that approach.
I think Kelsey Piper's article marks a huge turning point. In 2022, there were lots of people saying in an abstract sense "we shouldn't work with AI companies", but I can't imagine that article being written in 2022. And the call for attorneys for ex-OpenAI employees is another step so adversarial I can't imagine it being taken in 2022. Both of these have been pretty positively received, so I think they reflect a real shift in attitudes.
To be concrete, I imagine if Kelsey wrote an article in 2022 about the non-disparagement clause (assume it existed then), a lot of people's response would be "this clause is bad, but we shouldn't alienate the most safety-conscious AI company or else we might increase risk". I don't see anyone saying that today. The obvious reason is that people have quickly updated on evidence that OpenAI is not actually safety-conscious. My fear was that they would not update this way, hence my positive reaction.