I started off with a policy of recusing myself from even small CoIs. But these days, I mostly accord with (what I think is) the equilibrium: a) definite recusal for romantic relationships, b) very likely recusal for employment or housing relationships, c) probable recusal for close friends, d) disclosure but no self-recusal by default for other relationships.
Hi Linch, thank you for writing this!

In January, Jonas Vollmer published a beta version of the EA Funds’ internal Conflict of Interest policy. Here are some excerpts from it:
Any relationship that could cause significantly biased judgment (or the perception of that) constitutes a potential conflict of interest, e.g. romantic/sexual relationships, close work relationships, close friendships, or living together.
[...]
The default suggestion is that you recuse yourself from discussing the grant and voting on it.
[...]
If the above means we can’t evaluate a grant, we will consider forwarding the application to another high-quality grantmaker if possible. If delegating to such a grantmaker is difficult, and this policy would hamper the EA community’s ability to make a good decision, we prefer an evaluation with conflict of interest over none (or one that’s significantly worse). However, the chair and the EA Funds ED should carefully discuss such a case and consider taking additional measures before moving ahead.
Is this consistent with the current CoI policy of the EA Funds?
In general, what do you think of the level of conflicts of interest within EA grantmaking? I’m a bit of an outsider to the meta / AI safety folks located in Berkeley, but I’ve been surprised by the frequency of close relationships between grantmakers and grant recipients. (For example, Anthropic raised a big Series A from grantmakers closely related to their president Daniela Amodei’s husband, Holden Karnofsky!)
Do you think COIs pose a significant threat to EA’s epistemic standards? How should grantmakers navigate potential COIs? How should this be publicly communicated?

(Responses from Linch or anybody else welcome)
In general, what do you think of the level of conflicts of interest within EA grantmaking?
My best guess, based on public information, is that CoIs within longtermism grantmaking are being handled with less-than-ideal strictness. For example, generally speaking, if a project related to anthropogenic x-risks would not get funding without the vote of a grantmaker who is a close friend of the applicant, it seems better to not fund the project.
(For example, Anthropic raised a big Series A from grantmakers closely related to their president Daniela Amodei’s husband, Holden Karnofsky!)
My understanding is that Anthropic is not a nonprofit, and that it received funding from investors rather than grantmakers. That said, Anthropic can still raise CoI issues related to Holden’s decision-making about longtermism funding. Holden said in an interview:
Anthropic is a new AI lab, and I am excited about it, but I have to temper that or not mislead people because Daniela, my wife, is the president of Anthropic. And that means that we have equity, and so [...] I’m as conflict-of-interest-y as I can be with this organization.
Do you think COIs pose a significant threat to EA’s epistemic standards?
I think CoIs can easily influence decision making (in general, not specifically in EA). In the realm of anthropogenic x-risks, judging whether a high-impact intervention is net-positive or net-negative is often very hard due to complex cluelessness. Therefore, CoI-driven biases and self-deception can easily influence decision making and cause harm.
How should grantmakers navigate potential COIs? How should this be publicly communicated?
I think grantmakers should not be placed in a position where they need to decide how to navigate potential CoIs. Rather, the way grantmakers handle CoIs should be dictated by a detailed CoI policy (that should probably be made public).
Here’s my general stance on integrity, which I think is a superset of issues with CoI.

As noted by ofer, I also think investments are structurally different from grants.

This is a great set of guidelines for integrity. Hopefully more grantmakers and other key individuals will take this point of view.
I’d still be interested in hearing how the existing level of COIs affects your judgement of EA epistemics. I think your motivated reasoning critique of EA is the strongest argument that current EA priorities do not accurately represent the most impactful causes available. I still think EA is the best bet available for maximizing my expected impact, but I have baseline uncertainty that many EA beliefs might be incorrect because they’re the result of imperfect processes with plenty of biases and failure modes. It’s a very hard topic to discuss, but I think it’s worth exploring (a) how to limit our epistemic risks and (b) how to discount our reasoning in light of those risks.
I’d still be interested in hearing how the existing level of COIs affects your judgement of EA epistemics.
I’m confused by this. My inside-view guess is that this is just pretty small relative to other factors that can distort epistemics. And for this particular problem, I don’t have a strong, coherent outside view, because it’s hard to construct a reasonable reference class for what communities like ours, with similar levels of CoIs, might look like.
My impression is that Linch’s description of their actions above is consistent with our current COI policy. The fund chairs and I have some visibility into COI matters, and fund managers often flag cases when they are unsure what the policy should be; I or the fund chairs can then weigh in with our suggestion.
Often we suggest proceeding as usual, or a partial rather than full recusal (e.g. the fund manager participates in the discussion but does not vote on the grant themselves).
Thank you for the info!

I understand that you recently replaced Jonas as the head of the EA Funds. In January, Jonas indicated that the EA Funds intended to publish a polished CoI policy. Is that still the intention?
The policy that you referenced is the most up-to-date policy that we have, but I do intend to publish a polished version of the COI policy on our site at some point. I am not sure right now when I will have the capacity for this, but thank you for the nudge.