I'm sympathetic to wanting to keep your identity small, particularly if you think the person asking about your identity is a journalist writing a hit piece. But if everyone takes funding, staff, etc. from the EA commons and doesn't share that they got value from that commons, the commons will predictably be under-supported in the future.
I hope Anthropic leadership can find a way to share what they do and don't get out of EA (e.g. in comments here).
I understand why people shy away from or hide their identities when speaking with journalists, but I think this is a mistake, largely for reasons covered in this post. A large part of the deterioration of EA's name brand is not just FTX but individuals' risk-averse reaction to FTX (again, for understandable reasons), which harms the movement in a way where the costs are externalized.
When PG refers to keeping your identity small, he means don't defend it or its characteristics for their own sake. There's nothing wrong with being a C/C++ programmer while recognizing that C/C++ isn't the best fit for rapid development or memory safety. In the same way, you can own being an EA, or your affiliation with EA, without needing to justify everything about the community.
We had a bit of a tragedy-of-the-commons problem: a lot of people are risk-averse and don't want to be associated with EA in case something bad happens to them, but this causes the brand to lose a lot of good people you'd be happy to be associated with.
I'm a proud EA.
FWIW, I appreciated reading this :) Thank you for sharing it!
I so agree! I think there is something virtuous and collaborative for those of us who have benefited from EA and its ideas/community to just… be willing to stand up and simply say that. I think these ideas are worth fighting for.
<3
On this note, I'm happy that in CEA's new post, they talk about building the brand of effective altruism.
Note that much of the strongest opposition to Anthropic is also associated with EA, so it's not obvious that the EA community has been an uncomplicated good for the company, though I think it likely has been fairly helpful on net (especially if one measures EA's contribution to Anthropic's mission of making transformative AI go well for the world rather than its contribution to the company's bottom line). I do think it would be better if Anthropic comms were less evasive about the degree of their entanglement with EA.
(I work at Anthropic, though I don't claim any particular insight into the views of the cofounders. For my part I'll say that I identify as an EA, know many other employees who do, get enormous amounts of value from the EA community, and think Anthropic is vastly more EA-flavored than almost any other large company, though it is vastly less EA-flavored than, like, actual EA orgs. I think the quotes in the paragraph of the Wired article give a pretty misleading picture of Anthropic when taken in isolation and I wouldn't personally have said them, but I think "a journalist goes through your public statements looking for the most damning or hypocritical things you've ever said out of context" is an incredibly tricky situation to come out of looking good, and many of the comments here seem a bit uncharitable given that.)
My guess is that the people quoted in this article would be sad if e.g. 80k started telling people not to work at Anthropic. But maybe I'm wrong; would be good to know if so!
(And also yes, "people having unreasonably high expectations for epistemics in published work" is definitely a cost of dealing with EAs!)
Oh, definitely agreed: I think effects like "EA counterfactually causes a person to work at Anthropic" are straightforwardly good for Anthropic. Almost all of the bad-for-Anthropic effects from EA that I expect come from people who have never worked there.
(Though again, I think even the all-things-considered effect of EA has been substantially positive for the company, and I agree that it would probably be virtue-ethically better for Anthropic to express more of the value they've gotten from that commons.)
Edit: the comment above has been edited; the below was a reply to a previous version and makes less sense now. Leaving it for posterity.
You know much more than I do, but I'm surprised by this take. My sense is that Anthropic is giving a lot back:
"funding"
My understanding is that all early investors in Anthropic made a ton of money; it's plausible that Moskovitz made as much money by investing in Anthropic as by founding Asana. (Of course, this is all paper money for now, but I think they could sell it for billions.)
As mentioned in this post, the co-founders also pledged to donate 80% of their equity, which seems to imply they'll give much more funding than they got. (Of course, in EV it could still go to zero.)
"staff"
I don't see why hiring people is more "taking" than "giving", especially if the hires get to work on things that they believe are better for the world than anything else they could work on.
"and doesn't contribute anything back"
My sense is that (even ignoring the funding mentioned above) they are giving a ton back in terms of research on alignment, interpretability, model welfare, and general AI safety work.
To be clear, I don't know if Anthropic is net-positive for the world, but it seems to me that its trades with EA institutions have been largely mutually beneficial. You could make an argument that Anthropic could be "giving back" even more to EA, but I'm skeptical that it would be the most cost-effective use of their resources (including time and brand value).
Great points. I don't want to imply that they contribute nothing back; I will think about how to reword my comment.
I do think 1) community goods are undersupplied relative to some optimum, 2) this is in part because people aren't aware how useful those goods are to orgs like Anthropic, and 3) that in turn is partially downstream of messaging like what OP is critiquing.
I want to flag that the EA-aligned equity from Anthropic might well be worth $5-$30B+, and their power in Anthropic could be worth more (in terms of shaping AI and AI safety).
So on the whole, I'm mostly hopeful that they do good things with those two factors. It seems quite possible to me that they have more power and ability now than the rest of EA combined.
That's not to say I'm particularly optimistic, just that I'm really not focused on their PR/comms related to EA right now. I'd ideally just keep focused on those two things, meaning I'd encourage them to focus on those, and, to the extent that other EAs could apply support/pressure, I'd encourage other EAs to focus on these two as well.
"EA-aligned equity from Anthropic might well be worth $5-$30B+"
Now that you mention this, I think it's worth flagging the conflict of interest between EA and Anthropic that it poses. Although it's a little awkward to ascribe conflicts of interest to movements, I think a belief that ideological allies hold vast amounts of wealth in a specific company, especially combined with a hope that such allies will use said wealth to further the movement's objectives, qualifies.
There are a couple of layers to that. First, there's a concern that the financial entanglement with Anthropic could influence EA actors, such as by pulling punches on Anthropic, punching extra-hard on OpenAI, or shading policy proposals in Anthropic's favor. Relatedly, people may hesitate to criticize Anthropic (or make policy proposals hostile to it) because their actual or potential funders have Anthropic entanglements, whether or not the funders would actually act in a conflicted manner.
By analogy, I don't see EA as a credible source on the virtues and drawbacks of crypto or Asana. The difference is that neither crypto nor management software is an EA cause area, so those conflicts are less likely to impinge on core EA work than the conflict regarding Anthropic.
The next layer is that a reasonable observer would discount some EA actions and proposals based on the COI. To a somewhat informed member of the general public or a policymaker, I think establishing the financial COI creates a burden shift, under which EA bears an affirmative burden of establishing that its actions and proposals are free of taint. Thatās a hard burden to meet in a highly technical and fast-developing field. And some powerful entities (e.g., OpenAI) would be incentivized to hammer on the COI if people start listening to EA more.
I'm not sure how to mitigate this COI, although some sort of firewall between funders with Anthropic entanglements and grantmakers might help some.
(In this particular case, how Anthropic communicates about EA is more of a meta concern, so I don't feel the COI in the same way I would if the concern about Anthropic were at the object level. Also, being made up of social animals, EA cares about its reputation for more than instrumental reasons, so to the extent that there is a pro-Anthropic COI, it may largely counteract that effect. However, I still think it's generally worth explicitly raising and considering the COI where Anthropic-related conduct is being considered.)