“EA-aligned equity from Anthropic might well be worth $5-$30B+ ...”
Now that you mention this, I think it’s worth flagging the conflict of interest this creates between EA and Anthropic. Although it’s a little awkward to ascribe conflicts of interest to movements, I think a belief that ideological allies hold vast amounts of wealth in a specific company, especially when combined with a hope that those allies will use that wealth to further the movement’s objectives, qualifies.
There are a couple of layers to this. First, there’s a concern that the financial entanglement with Anthropic could influence how EA actors behave: pulling punches on Anthropic, punching extra hard on OpenAI, or shading policy proposals in Anthropic’s favor. Relatedly, people may hesitate to criticize Anthropic (or to make policy proposals hostile to it) because their actual or potential funders have Anthropic entanglements, whether or not those funders would actually act in a conflicted manner.
By analogy, I don’t see EA as a credible source on the virtues and drawbacks of crypto or Asana. The difference is that neither crypto nor management software is an EA cause area, so those conflicts are less likely than the Anthropic conflict to impinge on core EA work.
The next layer is that a reasonable observer would discount some EA actions and proposals based on the COI. For a somewhat informed member of the general public, or for a policymaker, I think establishing the financial COI shifts the burden: EA then bears an affirmative burden of showing that its actions and proposals are free of taint. That’s a hard burden to meet in a highly technical and fast-developing field, and some powerful entities (e.g., OpenAI) would be incentivized to hammer on the COI if people started listening to EA more.
I’m not sure how to mitigate this COI, although some sort of firewall between grantmakers and funders with Anthropic entanglements might help somewhat.
(In this particular case, how Anthropic communicates about EA is more of a meta-level concern, so I don’t feel the COI the way I would if the concern about Anthropic were at the object level. Also, being composed of social animals, EA cares about its reputation for more than instrumental reasons, so to the extent that there is a pro-Anthropic COI, that reputational concern may largely counteract it. However, I still think it’s generally worth explicitly raising and considering the COI whenever Anthropic-related conduct is being considered.)