This policy seems too lax to me. In particular, I’m fairly surprised at the very narrow range of circumstances in which individual fund members would recuse themselves. It seems fairly obvious to me that being in a close friendship or active collaboration with someone should require recusal, and that being personal friends with someone should require disclosure.
In general I feel CoI policies should err fairly strongly on the side of caution, whereas this one does the opposite. I’d appreciate some discussion on why this is the case.
Using the information available to you and not excluding a person’s judgment in situations where they could reasonably be called ‘biased’ is the standard practice in places like Y Combinator and Open Phil. Open Phil writes about this in the classic post Hits-based Giving. A relevant quote (emphasis in original):
We don’t: put extremely high weight on avoiding conflicts of interest, intellectual “bubbles” or “echo chambers.”
...In some cases, this risk may be compounded by social connections. When hiring specialists in specific causes, we’ve explicitly sought people with deep experience and strong connections in a field. Sometimes, that means our program officers are friends with many of the people who are best suited to be our advisors and grantees.
...it sometimes happens that it’s difficult to disentangle the case for a grant from the relationships around it.[2] When these situations occur, there’s a greatly elevated risk that we aren’t being objective, and aren’t weighing the available evidence and arguments reasonably. If our goal were to find the giving opportunities most strongly supported by evidence, this would be a major problem. But the drawbacks for a “hits-based” approach are less clear, and the drawbacks of too strongly avoiding these situations would, in my view, be unacceptable.
It seems fairly obvious to me that being in a [...] active collaboration with someone should require recusal
This seems plausibly right to me, though my model is that this should depend a bit on the size and nature of the collaboration.
As a concrete example, my model is that Open Phil has many people who were actively collaborating with projects that eventually grew into CSET, and that that involvement was necessary to make the project feasible, and some of those then went on to work at CSET. Those people were also the most informed about the decisions about the grants they eventually made to CSET, and so I don’t expect them to have been recused from the relevant decisions. So I would be hesitant to commit to nobody on the LTFF ever being involved in a project in the same way that a bunch of Open Phil staff were involved in CSET.
My broad model here is that recusal is a pretty bad tool for solving this problem, and that it should instead be solved by the fund members putting more effort into grants that are subject to COIs, and being more likely to internally veto grants if they seem to be the result of COIs. Obviously that has less external accountability, but it is how I expect organizations like GiveWell and Open Phil to manage cases like this. Disclosure feels like the right default in this case, since it allows us to be open about how we adjusted our votes and decisions based on the COIs present.
In general I feel CoI policies should err fairly strongly on the side of caution
I don’t think I understand what this means, written in this very general language. Most places don’t have strong COI policies at all, and both GiveWell and Open Phil have much laxer COI policies than the above, from what I can tell, and they seem like two of the most relevant reference points.
Open Phil has also written a bunch about how they no longer disclose most COIs because the cost was quite large, so overall it seems like a bad idea to just blindly err on the side of caution (since one of the most competent organizations in our direct orbit has decided that that strategy was a mistake).
The above COI policy is more restrictive than the policy for any other fund (since it’s supplementary and in addition to the official CEA COI policy), so it’s also not particularly lax in a general sense.
It seems fairly obvious to me that being in a close friendship [...] should require recusal
I am pretty uncertain about this case. My current plan is to have a policy of disclosing these things for a while, and then allow donors and other stakeholders to give us feedback on whether they think some of the grants were bad as a result of those conflicts.
Again, CSET is a pretty concrete example here, with many people at Open Phil being close friends with people at CSET. Or many people at GiveWell being friends with people at GiveDirectly or AMF. I don’t know their internal COI policies, but I don’t expect those GiveWell or Open Phil employees to completely recuse themselves from the decisions related to those organizations.
There is a more general heuristic here, where at this stage I prefer our policies to end up disclosing a lot of information, so that others can be well-informed about the tradeoffs we are making. If you err on the side of recusal, you will just prevent a lot of grants from being made, the opportunity cost of which is really hard to communicate to potential donors and stakeholders, and it’s hard for people to get a sense of the tradeoffs. So I prefer starting relatively lax, and then over time figuring out ways in which we can reduce bad incentives while still preserving the value of many of the grants that are very context-heavy.
I wonder why you think recusal is a bad way to address COIs. The downsides seem minimal to me: The other fund managers can still vote in favor of a grant, and the recused fund manager can still provide information about the potential grantee. This will also automatically mean that other fund managers have to invest more time into investigating the grant, which is something you seemed to favor. I’d be keen to hear your thoughts.
In comparison, using internal veto power seems like a more brittle solution that relies more on attention from other fund managers and might not work in all instances.
In comparison, disclosure often seems more complicated to me because it interferes with the privacy of fund managers and potential grantees.
I think Open Phil’s situation is substantially different because they are accountable to a very different type of donor, have fewer grant evaluators per grant, and most of their grants fall outside the EA community such that COIs are less common. (That said, I wonder about the COI policy for their EA grants committee.) GiveWell is also in a landscape where COIs are much less likely to arise.
I think there should be a fairly restrictive COI policy for all of the funds, not just for the LTFF.
The usual thing that I’ve seen happen in the case of recusals is that the recused person can no longer bring their expertise to the table, and de-facto when a fund member is recused from a grant, without someone else having the expertise to evaluate the grant, it is much less likely for that grant to happen. This means three things:
1. Projects are now punished for establishing relationships with grantmakers and working together with grantmakers
2. Grantmakers are punished for establishing relationships with organizations and projects they are excited about
3. Funds can no longer leverage the expertise of the people with the most relevant context
In general, when someone is recused they seem to no longer argue for why a grant is important, and on a hits-based view, a lot of the time the people who have positive models for why a grant is important are also the most likely to have a social network that is strongly connected to the grant in question.
I don’t expect a loosely connected committee like the LTFF or other EA Funds to successfully extract that information from the relevant fund member, and so a conservative COI policy will reliably fail to make the most valuable grants. Maybe an organization in which people had the time to spend hundreds of hours talking to each other could afford to have someone with expertise recuse themselves, and then try to download their models of why a grant is promising and evaluate it independently, but the LTFF (and I expect other EA Funds) do not have that luxury. I have not seen a group of people navigate this successfully, and de-facto I am very confident that a process that relies heavily on recusals will tend to fail to make grants whenever the fund member with the most relevant expertise is recused.
have fewer grant evaluators per grant
Having fewer grant evaluators per grant is a choice that Open Phil made, and one that the EA Funds can also make; I don’t see how that is an external constraint. It is at least partially a result of trusting the hits-based giving view that generates a lot of my intuitions around recusals. Nothing is stopping the EA Funds from having fewer grant evaluators per grant (and de-facto most grants are only investigated by a single person on a fund team, with the rest just providing basic oversight, which is why recusals are so costly: frequently only a single fund member even has the requisite skills and expertise necessary to investigate a grant in a reasonable amount of time).
and most of their grants fall outside the EA community such that COIs are less common.
While most grants fall outside of the EA community, many if not most of the grant investigators will still have COIs with the organizations they are evaluating, because that is where they will extend their social network. So the people who work at GiveWell tend to have closer social ties to organizations working in that space (often having been hired from that space), the people working on biorisk will have social ties to the existing pandemic prevention space, etc. I do think that overall Open Phil’s work is somewhat less likely to hit on COIs, but not that much less. I also overall trust Open Phil’s judgement a lot more in domains where they are socially embedded in the relevant network, and I think Open Phil also thinks that, and puts a lot of emphasis on understanding the specific social constraints and hierarchies in the fields they are making grants in. Again, a recusal-heavy COI policy would create really bad incentives for grantmakers here, and isolate the fund from many of the most important sources of expertise.
I’ve also outlined my reasoning quite a bit in other comments; here is one that goes into a bunch of detail: https://forum.effectivealtruism.org/posts/Hrd73RGuCoHvwpQBC/request-for-feedback-draft-of-a-coi-policy-for-the-long-term?commentId=mjJEK8y4e7WycgosN
I think that comment highlights some of the reasons why I am hesitant to err on the side of disclosure for personal friendships.
I’m sympathetic to this consideration, but I think it applies much more strongly to romantic/sexual relationships than friendships.