I find it encouraging that EAs have quickly pivoted to viewing AI companies as adversaries, after a long period of uneasily viewing them as necessary allies (cf. Why Not Slow AI Progress?). Previously, I worried that social/professional entanglements and image concerns would lead EAs to align with AI companies even after receiving clear signals that AI companies are not interested in safety. I’m glad to have been wrong about that.
Caveat: we’ve only seen this kind of scrutiny applied to OpenAI, and it remains to be seen whether Anthropic and DeepMind will face the same.
I don’t think it’s accurate to say that “EAs have quickly pivoted to viewing AI companies as adversaries, after a long period of uneasily viewing them as necessary allies.”
My understanding is that no matter how you define “EAs,” many people have always been supportive of working with/at AI companies, and many others sceptical of that approach.
I think Kelsey Piper’s article marks a huge turning point. In 2022, plenty of people were saying, in the abstract, “we shouldn’t work with AI companies”, but I can’t imagine that article being written back then. And the call for attorneys for ex-OpenAI employees is another step so adversarial that I can’t imagine it being taken in 2022. Both of these have been received pretty positively, so I think they reflect a real shift in attitudes.
To be concrete, I imagine that if Kelsey had written an article in 2022 about the non-disparagement clause (assuming it had existed then), a lot of people’s response would have been “this clause is bad, but we shouldn’t alienate the most safety-conscious AI company, or else we might increase risk”. I don’t see anyone saying that today. The obvious explanation is that people have quickly updated on evidence that OpenAI is not actually safety-conscious. My fear was that they would not update this way, hence my positive reaction.