I don't think it's accurate to say that "EAs have quickly pivoted to viewing AI companies as adversaries, after a long period of uneasily viewing them as necessary allies."
My understanding is that no matter how you define "EAs," many people have always been supportive of working with/at AI companies, and many others sceptical of that approach.
I think Kelsey Piper's article marks a huge turning point. In 2022, there were lots of people saying in an abstract sense "we shouldn't work with AI companies", but I can't imagine that article being written in 2022. And the call for attorneys for ex-OpenAI employees is another step so adversarial I can't imagine it being taken in 2022. Both of these have been pretty positively received, so I think they reflect a real shift in attitudes.
To be concrete, I imagine if Kelsey wrote an article in 2022 about the non-disparagement clause (assume it existed then), a lot of people's response would be "this clause is bad, but we shouldn't alienate the most safety-conscious AI company or else we might increase risk". I don't see anyone saying that today. The obvious reason is that people have quickly updated on evidence that OpenAI is not actually safety-conscious. My fear was that they would not update this way, hence my positive reaction.