I think most EA organizations should have a very high standard for outright rejecting donors' gifts on the basis of the donors' wrongdoings. In particular, even if Ben Delo is guilty of everything he's charged with doing, I don't think EA charities should reject his donations on that basis. (I know that CEA has not yet said it would do so or recommend doing so; just proactively voicing my opinion in case that's under consideration.)
For one thing, it seems pretty antithetical to EA (and the stakes of the work we purport to engage in) to reject money on such a basis. We care about doing good effectively, and more money is useful for doing that. We should be very reluctant to give up free money, especially if the counterfactual is that the money is spent less effectively or selfishly. If, as we often suppose, it costs about $3,000 to save a life through GiveWell, it seems very implausible to me that whatever good comes from rejecting a donation of that size outweighs the good that the donation could accomplish. Indeed, it's not clear to me that EA even has a legitimate inherent interest in caring about a donor's identity at all, without anything more.
I also think a standard of rejecting people on the basis of their wrongdoings is morally suspect. I really think we should avoid judging people on the basis of the worst thing they've done, even if that worst thing is criminal. Furthermore, as most people would probably agree, legality and criminality are imperfect indicators of morality, and so making donor-relations decisions on that basis seems slippery without more. This is especially true for malum prohibitum crimes.
Furthermore, even criminals or overall-bad people should be allowed (and encouraged!) to engage in morally good actions, including donating to effective charities. It's very unclear to me what good can come of a blanket policy of rejecting donations from such individuals, and it risks depriving them of the opportunity to do good things, including as acts of penance or out of a genuine desire for moral growth.
Relatedly, EAs are often appropriately more risk-tolerant than others. Legal risk is one important type of risk; I don't see it as categorically different from other risks. I recall Tara Mac Aulay's 80K interview, where she noted:
When we've tried to outsource some of these [administrative] things, people from a more traditional non-EA background will be overly focused on professionalism or making sure that you fully comply with all of the legal restrictions, whereas the approach that we tend to take is more to try and look at what are the costs of non-compliance, and what bad things actually happen if we don't tick every box on some government checklist, and then figure out how much time and effort we should spend on these things versus all of the other priorities in the organization. And I think that's something that's really hard to do without this kind of context in EA culture.
I don't think we want to encourage a culture in which EAs stridently minimize all legal risks, because that can impede effectiveness! While we (and major EA organizations) may want to refrain from encouraging such risky behavior, I don't think assumption of such risks in pursuit of the greater good should be grounds for total rejection of donations (just as it wouldn't be for other forms of risk). We don't want to shun would-be Robin Hoods.
I do see a few reasons why a charity might reasonably worry about things like this:
1. PR risks.
2. Worrying about effects on the charity's incentives or culture.
3. Worrying about whitewashing the donor's past deeds.
4. Worrying about accepting "blood money."
5. Worrying about the transaction being voided as a fraudulent transfer/voidable transaction.
(1) is of course a legitimate worry, but I think EAs rightfully don't weigh PR as heavily as other charities/movements do, especially when the PR concerns are not founded in good moral reasoning. If we think that accepting donations from people convicted of crimes is better for the world, I would prefer us to stand by our convictions. Regardless, anonymization can obviously mitigate some (most?) of these risks.
(2) is also a very legitimate worry, but it seems like it should be solvable through either anonymizing donations or having an ethical firewall between those who know major donors' identities and those who set organizational priorities. For people who have engaged in violent or otherwise repugnant behavior (such that employees/volunteers/others have good reason not to want to be around them), simply excluding the donor (and even their name and likeness) from those spaces should be sufficient.
(3) is a bit more complex, but would also be solved with anonymization. I'm actually not sure that the worry is a well-founded one, since it seems like a person's good acts are morally relevant to assessing that person's overall goodness in light of bad acts. But if we don't want to be complicit in that process, then precluding attribution should solve the problem.
I'm not sure I have a good working theory of how to deal with (4), other than to say I think it would be perfectly reasonable for any charity to assume that any anonymous donation was not the fruits of some heinous crime (as is almost always true). Furthermore, even if it was, it's not clear to me what good for the world is accomplished by precluding a person who is by supposition bad from getting rid of her money: bad people should probably have less money, not more.
I'm not an expert on (5), so maybe that's what's driving this.
Thus, in all cases except (5), anonymization would seem to do a good job of protecting charities' legitimate interests while also allowing for more donations to go through. CEA's efforts to deanonymize donor identity are therefore a bit puzzling to me. Unless (4) and (5) are doing a lot of lifting, I'm not sure I see the reasons for CEA's worries in this particular case or any cases in the same ballpark.
Justifying potentially bad stuff with "the stakes of the work EA does" feels like a slippery slope and a bit fanatical. There should be principled reasons that hold true for all charities; the cost-benefit approach you use in the second part of your comment is better. Related: this thread on whether it's okay to work in the tobacco industry.
Some other reasons I am uncomfortable with rejecting donations on such a basis:
It seems inconsistent with the less-punitive approach to crime that EA orgs like Open Phil are supporting
I don't think what Delo did, even assuming he is guilty, is morally worse than many behaviors that are (properly) tolerated among EAs, like a lifetime of eating factory-farmed meat
As an addendum to 2) and 4): FWIW, on the object level I'm not particularly convinced that Ben Delo has acted especially immorally (though I have not looked at the allegations in detail).
If we were to conflate morality with legality, we would also believe that e.g. anti-animal-agriculture activists are evil terrorists, and that open science is similarly evil. Moreover, since there is not a particularly strong principled reason to privilege US/UK law as moral guidance over the laws of other countries, we should take seriously the possibility that we should revise our views based on the legal doctrines of other countries, which may have some counter-intuitive results.
Strongly agree. It's what I was getting at with the malum prohibitum thing.
(3) is a bit more complex, but would also be solved with anonymization.
Anonymization would probably solve (3), but would, unfortunately, likely create PR risks of its own. Lawrence Lessig made a similar argument a while ago:
Everyone seems to treat it as if the anonymity and secrecy around Epstein's gift are a measure of some kind of moral failing. I see it as exactly the opposite. IF you are going to take type 3 money, then you should only take it anonymously.
Unfortunately, from what I can remember, the public response to this argument was overwhelmingly negative, and The New York Times (yes, that newspaper) published a story whose headline portrayed Lessig in a very bad light (Lessig subsequently filed a defamation lawsuit against the Times, which he withdrew after the headline was amended four months later). I personally would not have anticipated such a response, since the argument seems pretty reasonable to me, and I wonder if EAs as a whole may be apt to underestimate certain PR risks simply because they rely on their own subjective sense of the merits of the relevant arguments to predict how the broader public and the media will react to them.
Yeah, this is a good point. But this is why I limited my position to setting "a very high standard" for rejecting donations, rather than "never" (and to not rejecting donations from people "in the same ballpark" as Delo, even assuming, as we should not, that he is guilty).
Also, I think there are some salient differences with the Epstein case, beyond the enormous gulf in moral turpitude implicated by the cases. Ito knew about Epstein's identity, and IIRC Epstein had toured the Media Lab. A truly anonymized system should allow for neither of these.
(I also thought the Lessig article was perfectly reasonable.)
Following up on this: I had a conversation that updated me to believe that CEA is doing the right thing here. Unfortunately I can't disclose much about that conversation, but I am posting this here for accountability.
I think you make a couple of good points, and overall I updated a fair bit in the direction of "accepting functionally anonymous donations is ~always OK, even if you know the money has a morally questionable origin".
I'm still not fully convinced, and suspect there are realistic cases where, at least initially, I'd be fairly strongly opposed to taking such donations.
I'm not sure if I can fully justify my intuition / if the things I'm going to say are actually its main drivers, but at first glance I see two reasons to be hesitant:
I'd guess that in practice it can be very hard to implement the level of anonymity you suggest. E.g., relationships with large donors are in practice often handled by senior staff who do have influence over the org's strategic direction.
This is partly due to common (actual or perceived) donor preferences.
But it also makes sense from the org's perspective: e.g., knowing the details of the relationships with large donors is fairly relevant when doing risk management. But holistic risk management also requires looking at other information that's quite dispersed throughout the org; certainly org leadership needs to be involved. So the org has an incentive that might preclude setting up the kind of "firewalls" you advocate. And when they are in place, there will be incentives to subvert them, which seems like a bad/risky setup.
I think that outside perceptions are a quite significant obstacle, for reasons that go beyond "PR risks" in a narrow sense. My sense is that the stakes in the arena of "moral/political signaling" are quite high for many actors, in particular if you rely a lot on informal cooperation based on perceptions of hard-to-verify shared interests. And whom you take money from will often be quite significant in that arena.
One issue here is that the level of anonymization / protection from adverse incentives you advocate will often be hard to verify from the outside.
If I know that charity X has received a substantial donation from Y, my prior will be that Y has significant influence over X. In typical cases, it would be quite costly (in terms of time, but potentially also inside/sensitive information that would need to be shared) to convince me that this is not the case.
Another significant issue is a kind of "contagion" due to higher-order social reasoning: Suppose I know that charity X has received a substantial donation from Y. Suppose further that I know that X's relationship to Z is relevant for X's ability to achieve its mission (think e.g. X = MyAISafetyOrg, Z = DeepMind). Even if I'm personally not that concerned about accepting donations from Y, I might still be concerned that X made a bad move if I think that Z would disapprove of X getting funded by Y. "Bad move" here might refer to competence in a narrow sense, but also, again, to "moral"/influence issues: if it seems to me that X is willing to accept the cost that most others are going to worry that Y has influence over X, this makes it seem more likely that Y in fact has influence over X.
Consider also that if it would be costly for X to publicly acknowledge it accepted a donation from Y, then in virtue of this very fact accepting an anonymous donation from Y gives Y influence/leverage over X (because Y can threaten to disclose their donation).
My impression is that related concepts like "virtue signalling" are often discussed in a derogatory fashion in the EA sphere. I'd therefore like to add that when I said "moral/political signalling" I'm not thinking of arguably excessive/pathological cases from, say, highly public party politics as central examples. I'm thinking more of the reasons why "integrity" (and, for philosophers, Strawsonian "reactive attitudes") is a thing / an everyday concept, and of credibility / "improving one's bargaining position".
(Similarly, note that "follow the money" is a common heuristic.)
Someone pointed out to me that anti-money-laundering due diligence could be another reason, especially for conditional grants or for organizations like CEA that regrant.
[Conflict disclosure: A charity on whose board I sit, the Legal Priorities Project, has been funded by Ben Delo. I write solely in my personal capacity.]