I think a much larger portion of donation matching than people in EA seem to believe is more like EA Giving Tuesday on Facebook than completely illusory: the funds would go to charity otherwise, just probably to somewhat less effective charity.
Moreover, through whose eyes do we assess this?
Suppose Open Phil decides to match $1MM in new-donor, small/medium donations to effective animal-welfare charities. It announces that any unused portion of the match will go to an AI safety organization. It might, for example, think the AI safety org is marginally more effective, yet still prefer $1MM going to the effective animal charities plus influencing $1MM that would otherwise go to dog/cat shelters. That does not strike me as manipulative or uncooperative; it would be an honest reflection of Open Phil's values and judgment.
If Joe EA views both the animal charities and the AI safety org as roughly equal in desirability, the match may not be counterfactual. But through the eyes of Tom Animal Lover (likely along with the vast majority of the US population), this would be almost completely counterfactual. Tom values animal welfare strongly enough (and/or is indifferent enough to AI) that the magnitude of difference between the animal charity and the AI charity dwarfs the magnitude of difference between the AI charity and setting the money on fire.
All that is to say that if our focus is on respecting donors, I submit that we should avoid rejecting good matching opportunities merely because they are not counterfactual based on our own judgment about the relative merits of the charities involved. Doing so would go beyond affording donors respect and honesty and into the realm of infantilizing them.