I’ll explain my downvote.
I think the thing you’re expressing is fine, and reasonable to be worried about. I think Anthropic should be clear about their strategy. The Google investment does give me pause, and my biggest worry about Anthropic (as with many people, I think) has always been that their strategy could ultimately lead to accelerating capabilities more than alignment.
I just don’t think this post expressed that thing particularly well, or in a way I’d expect or want Anthropic to feel compelled to respond to. My preferred version of this would engage with reasons in favor of Anthropic’s actions, and how recent actions have concretely differed from what they’ve stated in the past.
My understanding of (part of) their strategy has always been that they want to work with the largest models, and sometimes release products with the possibility of profiting off of them (hence the PBC structure rather than a nonprofit). These ideas also sound reasonable (but not bulletproof) to me, so I consequently didn’t see the Google deal as a sudden change of direction or backstab—it’s easily explainable (although possibly concerning) in my preexisting model of what Anthropic’s doing.
So my objection is jumping to a “demand answers” framing, FTX comparisons, and accusations of Machiavellian scheming, rather than an “I’d really like Anthropic to comment on why they think this is good, and I’m worried they’re not adequately considering the downsides” framing. The former, to me, requires significantly more evidence of wrongdoing than I’m aware of or you’ve provided.
I acknowledge and agree with your criticism.
I have questioned these assumptions (“we do capabilities to increase career capital, and somehow stay in this phase almost forever,” and such) since 2020, inside the field, talking to people directly. The reactions and the disregard I got are the reason I feel the way I do about all this.

I was thinking, “yes, I am probably just not getting it; I will ask politely.” The replies I got are what led to me feeling this way.

I am traumatized, and I don’t want to engage fully logically here, because I feel pain when I do. I wrote a lot of logical texts and said a lot of logical things, only to be dismissed with something like “you’re not getting it, we are going to the top of this, maybe you need to be more comfortable with power.”
Needless to say, I have pre-existing trauma around a similar theme from childhood, family, and so on.
I do not pretend to be an objective EA doing objective things. After all, we don’t have much objective evidence here except for news articles about Anthropic 🤷‍♀️
So, what I’m doing here is simply expressing how I feel, saying that I feel a bit powerless about this problem, and asking for help with solving it, inquiring into it, and making sure something gets done.
I can delete my post if there is a better post and the community thinks my post is not helpful.
I want to start a discussion, but all I have is a traumatized mind that is tired of talking about it and has already tried every measure I could think of.

I leave it up to you, the community, the people here, to decide what to do: post a new post, ignore this one, keep both, keep only the new one, write to people at Anthropic directly, go to the news, ask them on Twitter, or anything else you can think of. I do not have the mental capacity to do it myself.

All I can do is write that I feel bad about it, that I’m tired, that I don’t feel my CS skills would be used for good if I joined AIS research today, that I’m disillusioned, and ask the community, the people who feel the same, to do something if they want to.
I do not claim factual accuracy or any standard of rationality here. This is just raw experience, offered as a starting point for your own actions on this, if you are interested.

Right now my mind can manage talking about feelings, so I talk about feelings. I think feelings are a good way to express what I want to say, so that is what I went with.
That is all. That is all I do here. Thank you.