The Stop AI response posted here seems maybe fine in isolation. This might have largely happened due to the Stop AI co-founder having a mental breakdown. But I would hope for Stop AI to deeply consider their role in this as well. The response of Remmelt Ellen (who is a frequent EA Forum contributor and advisor to Stop AI) doesn’t make me hopeful, especially the bolded parts:
An early activist at Stop AI had a mental health crisis and went missing. He hit the leader and said stuff he’d never condone anyone in the group to say, and apologized for it after. Two takeaways:
- Act with care. Find Sam.
- Stop the ‘AGI may kill us by 2027’ shit please.
[...]
I advised Stop AI organisers to change up the statement before they put it out. But they didn’t. How to see this: it is a mental health crisis. Treat the person going through it with care, so they don’t go over the edge (meaning: don’t commit suicide). 2/
The organisers checked in with Sam every day. They did everything they could. Then he went missing. From what I know about Sam, he must have felt guilt-stricken about lashing out as he did. He left both his laptop and phone behind and the door unlocked. I hope he’s alive. 3/
Sam panicked often in the months before. A few co-organisers had a stern chat with him, and after that people agreed he needed to move out of his early role of influence. Sam himself was adamant about being democratic at Stop AI, where people could be voted in or out. 4/
You may wonder whether that panic came from hooking onto some ungrounded thinking from Yudkowsky. Put roughly: that an ML model in the next few years could reach a threshold where it internally recursively improves itself and then plan to take over the world in one go. 5/
That’s a valid concern, because Sam really was worried about his sister dying out from AI in the next 1-3 years. We should be deeply concerned about corporate-AI scaling putting the sixth mass extinction into overdrive. But not in the way Yudkowsky speculates about it. 6/
Stop AI also had a “fuck-transhumanism” channel at some point. We really don’t like the grand utopian ideologies of people who think they can take over society with ‘aligned’ technology. I’ve been clear on my stance on Yudkowsky, and so have others. 7/
Transhumanist takeover ideology is convenient for wannabe system dictators like Elon Musk and Sam Altman. The way to look at this: They want to make people expendable. 8/
The first tweet starts out promising: lower the temperature, lay off the apocalyptic rhetoric. But by the end of the thread he slips into his own heightened, dramatic, apocalyptic rhetoric. So this is not a good attempt to de-escalate things.