Are you saying that because it’s not “surprising” it should be allowed? I always find this rhetorical move bizarre: shaming your opponent for not having already gotten used to, and therefore tolerating, someone doing bad things.
Thank you for writing this. The comment section is all the evidence you need that EAs need to hear this and not be allowed to excuse Anthropic. It’s worse than I thought, honestly, seeing the pages of apologetics here over whether this is technically dishonest.
I’m sorry to say this because I like you and your comments, but you have a history of apologetics for Anthropic, Lorenzo. This is “do I have to believe it?” rather than “does this evidence suggest it?” thinking. They have simply not been forthright about the depth of the relationship between Anthropic and EA this whole time. They want people to think the connection is less than it was or is.
I think almost nobody had the info needed to predict FTX besides the perpetrators. I think we already know all we need to oppose Anthropic.
Not when the issue is “knowledge of and identification with something by name”
Well it’s not really an assumption, is it? We have very good reason to think she’s downplaying her knowledge.
I did basically say that in this post lol https://forum.effectivealtruism.org/posts/tuSQBGgnoxvsXwXJ3/criticism-is-sanctified-in-ea-but-like-any-intervention
EA Forum April Fools post not complete without incorrect gotcha nitpick comment!
lol “great post, but it fails to engage what I think about when I think of PauseAI”
While Anthropic’s plan is a terrible one, so is PauseAI’s. We have no good plans. And we mustn’t fight amongst ourselves.
Who’s “ourselves”? Anthropic doesn’t have “a terrible plan” for AI Safety—they are the AI danger.
What happens because of these papers? Do they influence Anthropic to stop developing powerful AI? Evidently not.
I agree with this descriptively, but at this moment in time the way EA evolved to basically require all these things makes me sad, because it isolates the idea from the broader world and keeps EAs from pursuing interventions outside their norm, like big tent mass movement building (which I believe is the way forward with AI Safety, but which EAs seem to consider anti-Scout-mindset or something).
Agree. One of the things I most appreciate about old school EA is that it took things that used to feel like above-and-beyond altruism in my personal life and made me see that I actually enjoyed those things selfishly. Local charitable giving or going out of my way to help a friend of a friend became less of a burden once I was “off the hook” because of giving money more effectively, and I realized that the reason I didn’t want to give that stuff up was that it made me feel good and improved my life.
PauseAI US thanks you for your donation, Bruce! Anyone else who wants to make us the beneficiary of a wager is highly encouraged :)
Yeah, because then it would be a clear conversation. The tradeoffs that are currently obscured would be out in the open, and the speculation would be unmasked.
All the disagreements on worldview can be phrased correctly. Currently people use the word “marginal” to sneak in specific values and assumptions about what is effective.
No, it’s literally about what the word marginal means
I think people like the “labs” language because it makes it easier to work with them and all the reasons you state, which is why I generally say “AI companies”. I do find it hard, however, to make myself understood sometimes in an EA context when I don’t use it.
I do feel called to address the big story (that’s also usually what makes me sad and worn out), but, like you, what really brings me back is little stuff like a beautiful flower or seeing a hummingbird.
Convincing such people that Anthropic is doing corpspeak and not just being perfectly reasonable or justified by 3D chess (with ultimate EA goals) would be a lot of progress...
It’s a huge problem in EA that people don’t take CoIs seriously as something that affects their thinking. They think they can solve every problem explicitly and intellectually, so corruption by money won’t happen to them.