In general, what do you think of the level of conflict of interests within EA grantmaking?
My best guess, based on public information, is that CoIs within longtermism grantmaking are being handled with less-than-ideal strictness. For example, if a project related to anthropogenic x-risks would not get funding without the vote of a grantmaker who is a close friend of the applicant, it generally seems better not to fund the project.
(For example, Anthropic raised a big Series A from grantmakers closely related to their president Daniela Amodei’s husband, Holden Karnofsky!)
My understanding is that Anthropic is not a nonprofit, and it received funding from investors rather than grantmakers. That said, Anthropic can still raise CoI issues related to Holden’s decision-making about longtermism funding. Holden said in an interview:
Anthropic is a new AI lab, and I am excited about it, but I have to temper that or not mislead people because Daniela, my wife, is the president of Anthropic. And that means that we have equity, and so [...] I’m as conflict-of-interest-y as I can be with this organization.
Do you think CoIs pose a significant threat to EA’s epistemic standards?
I think CoIs can easily influence decision-making (in general, not specifically in EA). In the realm of anthropogenic x-risks, judging whether a high-impact intervention is net-positive or net-negative is often very hard due to complex cluelessness. Therefore, CoI-driven biases and self-deception can easily influence decision-making and cause harm.
How should grantmakers navigate potential CoIs? How should this be publicly communicated?
I think grantmakers should not be placed in a position where they need to decide how to navigate potential CoIs. Rather, the way grantmakers handle CoIs should be dictated by a detailed CoI policy (that should probably be made public).