The Stop AI response posted here seems more or less fine in isolation. This might have largely happened because the Stop AI co-founder had a mental breakdown. But I would hope for Stop AI to deeply consider their own role in this as well. The response of Remmelt Ellen (a frequent EA Forum contributor and an advisor to Stop AI) doesn’t make me hopeful, especially the bolded parts:
An early activist at Stop AI had a mental health crisis and went missing. He hit the leader and said stuff he’d never condone anyone in the group to say, and apologized for it after. Two takeaways:
- Act with care. Find Sam.
- Stop the ‘AGI may kill us by 2027’ shit please. [...]
I advised Stop AI organisers to change up the statement before they put it out. But they didn’t. How to see this: it is a mental health crisis. Treat the person going through it with care, so they don’t go over the edge (meaning: don’t commit suicide). 2/
The organisers checked in with Sam every day. They did everything they could. Then he went missing. From what I know about Sam, he must have felt guilt-stricken about lashing out as he did. He left both his laptop and phone behind and the door unlocked. I hope he’s alive. 3/
Sam panicked often in the months before. A few co-organisers had a stern chat with him, and after that people agreed he needed to move out of his early role of influence. Sam himself was adamant about being democratic at Stop AI, where people could be voted in or out. 4/
You may wonder whether that panic came from hooking onto some ungrounded thinking from Yudkowsky. Put roughly: that an ML model in the next few years could reach a threshold where it internally recursively improves itself and then plans to take over the world in one go. 5/
That’s a valid concern, because Sam really was worried about his sister dying from AI in the next 1-3 years. We should be deeply concerned about corporate-AI scaling putting the sixth mass extinction into overdrive. But not in the way Yudkowsky speculates about it. 6/
Stop AI also had a “fuck-transhumanism” channel at some point. We really don’t like the grand utopian ideologies of people who think they can take over society with ‘aligned’ technology. I’ve been clear on my stance on Yudkowsky, and so have others. 7/
Transhumanist takeover ideology is convenient for wannabe system dictators like Elon Musk and Sam Altman. The way to look at this: They want to make people expendable. 8/
[...]
I’m somewhat surprised by the lack of information about Anthropic employees’ donation plans.
Potential reasons:
They are all working full-time (probably more) and it’s really hard to get clarity on your own donation plans in such a situation. And communicating about them is even harder.
They might have specific plans, but talking about them publicly is tricky. It might imply information about Anthropic’s plans (e.g. regarding an IPO) or about internal sentiment on the prospect of Anthropic gaining/losing value in the future. Or just plain old ‘what happens to your inbox once you imply that you’re going to be donating >$10M soon?’.
They might not see much benefit in communicating publicly about this. Maybe they are chatting with Coefficient Giving about their plans. Maybe they are planning their own foundation.
There might just not be that many people with significant wealth at Anthropic who are planning on donating effectively anytime soon. This could be because of value drift, because they expect their assets to increase in value and want to donate later, or because they don’t see great donation opportunities yet.
Interested to hear whether I’ve missed a major consideration, and whether people have takes on which of these reasons is most likely/explanatory.