Call to demand answers from Anthropic about joining the AI race

Link post

I’m surprised that once again it starts with “let’s work on safety together! Let’s share ideas and work on a good thing,” and then some entity grows bigger and bigger and starts making controversial unilateral decisions, while still using the community’s name and support.

I see the same pattern as with SBF: first, the community is used to build The Thing. Then The Thing forgets about anything ethical or safe and just turns into an “effective profit/PR maximizer”. I feel kinda conned and used. Even though I mostly talk to AI ethicists now, I still regarded Anthropic as something not evil. Even I was shocked.

I ask people to demand answers from them. I feel there is a no-confidence case against trusting Anthropic to do well at what they are doing.

I used to be more in favor of “let’s bring safety and ethics people together” (which ethicists didn’t like), but less and less over time. Now I don’t know anymore. I want answers.

I feel traumatized in general by the safety community and EA. I did research internships at Google and CHAI Berkeley, and later ran an ethics nonprofit. All of those were somewhat EA-aligned (not 100%, but not outside EA either). I don’t know how I can trust people who say “safety” anymore.

What is going on?

See my thread for more questions. I feel traumatized by EA, by this duplicity (which I have seen “rising up” before this; see my other threads). I’m searching for a job and I’m scared of people, because this is not the first time, not at all. Somehow tech people are “number one” at this. And EA/tech people seem to be “number 0”, even better at Machiavellianism and duplicity than Peter Thiel or Musk. At least Musk openly says he’s “red-pilled” and talks to Putin. What EA/safety is doing is kinda similar, but hidden under the veil of “safety”.

Not all people are like this. Let’s not be like this.

I expect downvotes—I don’t care. I want answers.

https://twitter.com/sergia_ch/status/1631338866840948737?s=20