Your central question strikes me as interesting and important: Has Anthropic joined the arms race for advanced AI? If so, why?
(And taking a conflict-theoretic stance toward new AI startups by default is perhaps good, given the evidence one has received via DeepMind/OpenAI.)
So I’d join in the call to ask e.g. Anthropic (but also other startups like Conjecture, Adept AI and Aligned AI) about their plans to avoid race dynamics, and how they are currently implementing them. However, I believe it’s not very likely that Anthropic in particular will comment on this.
However, your post mostly does not flesh out the question. Instead, it is not-quite-attacking-but-also-not-not-attacking Anthropic (“Even when I’m mostly talking to AI ethicists now, I still regarded Anthropic as something not evil”), does not fully flesh out the reasons why you’re asking the question (“I feel there’s a no-confidence case for us trusting Anthropic to do what they are doing well”), and instead talks a lot about your emotional state. (I don’t think that talking about your emotional state is necessarily bad, but I’d like accusations, questions and statements about emotion to be separated if possible.)
See my thread for more questions. I feel traumatized by EA, by this duplicity (which I have seen “rising up” before this; see my other threads). I’m searching for a job and I’m scared of people. Because this is not the first time, not at all. Somehow tech people are “number one” at this. And EA/tech people seem to be “number 0”, even better at Machiavellianism and duplicity than Peter Thiel or Musk. At least Musk openly says he’s “red-pilled” and talks to Putin. What EA/safety is doing is kinda similar, but hidden under the veil of “safety”.
I don’t understand this paragraph, for example. Why do you believe that EA/tech people are better at Machiavellianism than those two? And who exactly are the “EA/tech people” here? That would be good to know.
My emotional state is relevant here. I’m one of the people who was excited about safety. Then I slowly saw how the plan is shaky and the decisions are controversial (advertising OpenAI jobs, the “first we get a lot of capabilities skills, then do safety” approach, which usually means a capabilities person with an EA t-shirt and not much safety).
My emotional state summarises the history that happened to me. It is relevant to my case: I am showing you how you would feel if you went through my experience, if you choose to believe it.
It’s not a “side note”; it’s the evidence I’m showing to say “I have concerns and this feels off, a pattern rather than a one-off case”. Emotions are good for holistic reasoning.
I don’t have the energy to write a “full-fledged EA post with the dots over the i’s and all that”. I mean, I feel I’m one of the “plaintiffs” in this case. I believed EA, I trusted all those forum posts. Now I see something is wrong. I am simply asking other people to look into this and say how they feel and think about it.
So that we can figure something out together.
I feel I need support. No, I don’t want to go to the FB mental health EA support group, because this is not about mental health specifically; it’s about how the field of AI safety is. It’s not that “I feel bad because of a chemical imbalance in my mind”. I feel bad because I see bad things :)
I have written at length on Twitter about my experiences. If you’re still interested, I can link it a bit later.
For a traumatized person, it’s painful to go through all this again and again.
(Note: did not downvote)
Thank you.