I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
I think the confusion might stem from interpreting EA as “self-identifying with a specific social community” (which they claim they don’t, at least not anymore) vs EA as “wanting to do good and caring about others” (which they claim they do, and always did)
Going point by point:
Dario, Anthropic’s CEO, was the 43rd signatory of the Giving What We Can pledge and wrote a guest post for the GiveWell blog. He also lived in a group house with Holden Karnofsky and Paul Christiano at a time when Paul and Dario were technical advisors to Open Philanthropy.
This was more than 10 years ago. EA was a very different concept / community at the time, and this is consistent with Daniela Amodei saying that she considers it an “outdated term”
Amanda Askell was the 67th signatory of the GWWC pledge.
This was also more than 10 years ago, and giving to charity is not unique to EA. Many early pledgers don’t consider themselves EA (e.g. signatory #46 claims it got too stupid for him years ago)
Many early and senior employees identify as effective altruists and/or previously worked for EA organisations
Amanda Askell explicitly says “I definitely have met people here who are effective altruists” in the article you quote, so I don’t think this contradicts it in any way
Anthropic has hired a “model welfare lead” and seems to be the company most concerned about AI sentience, an issue that’s discussed little outside of EA circles.
On the Future of Life podcast, Daniela said, “I think since we [Dario and her] were very, very small, we’ve always had this special bond around really wanting to make the world better or wanting to help people” and “he [Dario] was actually a very early GiveWell fan I think in 2007 or 2008.” The Anthropic co-founders have apparently made a pledge to donate 80% of their Anthropic equity (mentioned in passing during a conversation between them here and discussed more here)
Their first company value states, “We strive to make decisions that maximize positive outcomes for humanity in the long run.”
Wanting to make the world better, wanting to help people, and giving significantly to charity are not exclusive to the EA community.
It’s perfectly fine if Daniela and Dario choose not to personally identify with EA (despite having lots of associations) and I’m not suggesting that Anthropic needs to brand itself as an EA organisation
I think that’s exactly what they are doing in the quotes in the article: “I don’t identify with that terminology” and “it’s not a theme of the organization or anything”
But I think it’s dishonest to suggest there aren’t strong ties between Anthropic and the EA community.
I don’t think they suggest that, depending on your definition of “strong”. Just above the screenshotted quote, the article mentions that many early investors were at the time linked to EA.
I think it’s a bad look to be so evasive about things that can be easily verified (as evidenced by the twitter response).
I don’t think X responses are a good metric of honesty, and those seem to be mostly from people in the EA community.
In general, I think it’s bad for the EA community that everyone who interacts with it has to worry about being held liable for life for anything the EA community might do in the future.
I don’t see why it can’t let people decide if they want to consider themselves part of it or not.
As an example, imagine if I were Catholic, founded a company to do good, raised funding from some Catholic investors, and some of the people I hired were Catholic. If 10 years later I weren’t Catholic anymore, it wouldn’t be dishonest for me to say “I don’t identify with the term, and this is not a Catholic company, although some of our employees are Catholic”. And giving to charity or wanting to do good wouldn’t be gotchas proving that I’m secretly still Catholic and hiding the truth for PR reasons. And this is not even about being part of a specific social community.
The point I was trying to make is, separate from whether these statements are literally false, they give a misleading impression to the reader. If I didn’t know anything about Anthropic and I read the words “I definitely have met people here who are effective altruists, but it’s not a theme of the organization or anything”, I might think Anthropic is like Google, where you may occasionally meet people in the cafeteria who happen to be effective altruists but EA really has nothing to do with the organisation. I would not get the impression that many of the employees are EAs who work at Anthropic or work on AI safety for EA reasons, or that the three members of the trust to which they’ve given veto power over the company have been heavily involved in EA.
I also think being weird and evasive about this isn’t a good communication strategy (for reasons @Mjreard discusses above).
As a side point, I’m confused when you say:
I don’t think they suggest that, depending on your definition of “strong”. Just above the screenshotted quote, the article mentions that many early investors were at the time linked to EA.
That was said by the author of the article who was trying to make the point that there is a link between Anthropic and EA. So I don’t see this as evidence of Anthropic being forthcoming.
I think in the context of the article, their quotes (44 words in total) make more sense:
In that context, the quotes clarify that Anthropic is not an “EA company”, and give a more accurate understanding of the relationship to the reader.
A more in-depth analysis of the historical affiliations, separations, agreements, and disagreements of Anthropic’s funders, founders, and employees with various parts of EA over the past 15 years would take far more than two paragraphs.
If I didn’t know anything about Anthropic and I read the words “I definitely have met people here who are effective altruists, but it’s not a theme of the organization or anything”, I might think Anthropic is like Google where you may occasionally meet people in the cafeteria who happen to be effective altruists but EA really has nothing to do with the organisation.
You wouldn’t think that in the context of the article, though.
I would not get the impression that many of the employees are EAs who work at Anthropic or work on AI safety for EA reasons, or that the three members of the trust to which they’ve given veto power over the company have been heavily involved in EA.
I don’t know what percentage of Anthropic employees consider themselves part of the EA community. Also, I don’t agree that it’s clear that Evidence Action’s CEO is part of the effective altruism community just because Evidence Action received money from GiveWell.
https://www.linkedin.com/in/kanika-bahl-091a936/details/experience/ She had been working in global health since before effective altruism was a thing, and many/most people funded by Open Philanthropy don’t consider themselves part of the community, in the same way that charities funded by Catholic donors are not necessarily Catholic. It does seem that Open Philanthropy was their main source of funding for many years, though, which makes the link stronger than I originally thought.
I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
I never interpreted that to be the crux/problem here. (I know I’m late replying to this.)
People can change what they identify as. For me, what looks shady in their responses is the clumsy attempts at downplaying their past association with EA.
I don’t care about this because I still identify with EA; rather, I care because it falls under “not being consistently candid.” (I quite like that expression despite its unfortunate history.) I’d be equally annoyed if they downplayed some other significant thing unrelated to EA.
Sure, you might say it’s fine not being consistently candid with journalists. They may quote you out of context. Pretty common advice for talking to journalists is to keep your statements as short and general as possible, esp. when they ask you things that aren’t “on message.” Probably they were just trying to avoid actually-unfair bad press here? Still, it’s clumsy and ineffective. It backfired. Being candid would probably have been better here even from the perspective of preventing journalists from spinning this against them. Also, they could just decide not to talk to untrusted journalists?
More generally, I feel like we really need leaders who can build trust and talk openly about difficult tradeoffs and realities.
Just as a side point, I do not think Amanda’s past relationship with EA can accurately be characterized as being much like Jonathan Blow’s, unless he was far more involved than just being an early GWWC pledge signatory, which I think is unlikely. It’s not just that Amanda was, as the article says, once married to Will. She wrote her doctoral thesis on an EA topic, how to deal with infinities in ethics: https://askell.io/files/Askell-PhD-Thesis.pdf Then she went to work in AI for what I think is overwhelmingly likely to be EA reasons (though I admit I don’t have any direct evidence to that effect), given that it was in 2018, before the current excitement about generative AI, and relatively few philosophy PhDs, especially those who could fairly easily have gotten good philosophy jobs, made that transition. She wasn’t a public figure back then, but I’d be genuinely shocked to find out she didn’t have an at least mildly significant behind-the-scenes effect, through conversation (not just with Will), on the early development of EA ideas.
Not that I’m accusing her of dishonesty here or anything: she didn’t say that she wasn’t EA or that she had never been EA, just that Anthropic wasn’t an EA org. Indeed, given that I just checked and she still mentions being a GWWC member prominently on her website, and she works on AI alignment and wrote a thesis on a weird, longtermism-coded topic, I am somewhat skeptical that she is trying to personally distance herself from EA: https://askell.io/
Amanda Askell has responded to this on X: https://x.com/AmandaAskell/status/1905995851547148659
And the claim that AI sentience is discussed little outside of EA circles is false: https://en.wikipedia.org/wiki/Artificial_consciousness
You seem to have ignored a central part of what Daniela Amodei said: “I’m not the expert on effective altruism,” which seems hard to defend.