Thanks for summarizing this, Ben!

“First, the adversarial framing here seems unnecessary. If the other player hasn’t started defecting in the iterated prisoner’s dilemma, why start?”
I might be getting this wrong, but my understanding is that a bunch of donors immediately started ‘defecting’ (= pulling out of funding the kinds of work GV is excited about) once they learned of GV’s excitement for GW/OPP causes, on the assumption that GV would at some future point adopt a general policy of (unconditionally?) ‘cooperating’ (= fully funding everything to the extent it cares about those things).
I think GW/GV/OPP arrived at their decision in an environment where they saw a non-trivial number of donors preemptively ‘defecting’, either based on a misunderstanding of whether GW/GV/OPP was already ‘cooperating’ (= they didn’t realize that GW/GV/OPP was funding less than the full amount it wanted funded), or based on the assumption that GW/GV/OPP intended to do so later (and perhaps could even be induced to do so if others withdrew their funding). If my understanding of this is right, then it both made the cooperative equilibrium seem less likely, and made it seem extra important for GW/GV/OPP to very loudly and clearly communicate their non-CooperateBot policy lest the misapprehension spread even further.
I think the difficulty of actually communicating en masse with smaller GW donors, much less having a real back-and-forth negotiation with them, played a very large role in GW/GV/OPP’s decisions here, including their decision to choose an ‘obviously arbitrary’ split number like 50% rather than something more subtle.
“It also assumes that people are taking the cost-per-life-saved numbers at face value, and if so, then GiveWell already thinks they’ve been misled.”
I’m not sure I understand this point. Is this saying that if people are already misled to some extent, or in some respect, then it doesn’t matter what related ways one’s actions might confuse them?
(Disclaimer: I work for MIRI, which has received an Open Phil grant. As usual, the above is me speaking on my own behalf, not on MIRI’s.)
Cross-posted from Ben’s blog:

“if Good Ventures committed to fully funding the GiveWell top charities, other donors might withdraw funding to fund the next-best thing by their values, confident that they’d be offset. A commitment to ‘splitting’ would prevent this...”

“I have two main objections to this. First, the adversarial framing here seems unnecessary. If the other player hasn’t started defecting in the iterated prisoner’s dilemma, why start?”

If GV fully funded the top charities, and others also funded them, then they would be overfunded by GV’s lights. If A and B both like X (and have the same desired funding level for it), but have different second choices of Y and Z, the fully cooperative solution would not involve either A or B funding X alone.

[CoI notice: I consult for OpenPhil.]
“If A and B both like X (and have the same desired funding level for it), but have different second choices of Y and Z, the fully cooperative solution would not involve either A or B funding X alone.”
I’m not sure this is right. What if A and B both commit to fully funding their top charities as soon as they find such opportunities (i.e., without taking other people’s reactions into consideration)? That seems like a fully cooperative solution that in expectation would work as well as A and B trying to negotiate a “fair division” of funding for X. Also, I’m not sure this analogy applies to the situation where A is a single big donor and B is a bunch of small donors: in that case A and B can’t actually negotiate, so A unilaterally deciding on a split would seem to lead to some deadweight loss (e.g., missed funding opportunities).
BTW, are you aware of a fully thought-out analysis of Good Ventures’ “splitting” policy (whether such a policy is a good idea, and what the optimal split is)? For such an important question, I’m surprised how little apparent deliberation and empirical investigation has been done on it. Even if the value of information here is just 1% of the total funding, that would amount to about $100,000,000. (Not to mention that the analysis could be applied to other analogous situations with large and small donors.)
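To make the A/B/X disagreement above concrete, here is a toy numerical sketch. Everything in it (the budgets, X’s funding gap, and the per-dollar values) is an illustrative assumption of mine, not anyone’s real figures; it just shows how overfunding, the “both commit to full funding” proposal, and a negotiated split compare in the simplest version of the model:

```python
# Toy model of the A/B/X/Y/Z example above. All figures are illustrative
# assumptions; nothing here is any donor's real budget or valuation.

X_GAP = 100.0                     # X's room for more funding ($M)
A_BUDGET, B_BUDGET = 100.0, 60.0  # each donor's total budget ($M)

# Marginal value per dollar by A's lights: A ranks X first, then its second
# choice Y. (For simplicity, assume A places no value on B's second choice Z.)
A_VAL_X, A_VAL_Y = 1.0, 0.8

def outcome(a_to_x, b_to_x):
    """Where the money lands, given how much each donor gives to X.
    Dollars to X beyond its funding gap accomplish nothing (overfunding)."""
    useful_x = min(a_to_x + b_to_x, X_GAP)
    wasted = a_to_x + b_to_x - useful_x
    a_to_y = A_BUDGET - a_to_x            # A's leftover goes to Y
    b_to_z = B_BUDGET - b_to_x            # B's leftover goes to Z
    value_by_a = useful_x * A_VAL_X + a_to_y * A_VAL_Y
    return dict(X=useful_x, wasted=wasted, Y=a_to_y, Z=b_to_z,
                value_by_A=value_by_a)

# The overfunding worry: A fully funds X and B gives to X anyway.
print(outcome(a_to_x=100, b_to_x=60))  # 60 wasted past the gap

# The "both commit to full funding" proposal: A fills X's gap first, so B,
# seeing no room for more funding left, gives to Z instead.
print(outcome(a_to_x=100, b_to_x=0))   # nothing wasted; Z gets 60

# A negotiated split of the gap also avoids waste, and frees A's leftover for Y.
print(outcome(a_to_x=50, b_to_x=50))
```

On these made-up numbers, both coordinated outcomes avoid waste; they differ only in whose second choice gets the leftover money, which is exactly the distributional question that “splitting” tries to settle.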
That’s true! Fortunately, there are a few important mitigating factors:
This game proceeds in continuous time, so there’s plenty of opportunity for donors to inform each other of their actions. For the GiveWell top charities, this often happens by reporting the donation to—or making it through—GiveWell.
As you’ve pointed out, excess donations—if they in fact turn out to be excess—can simply be funged against implicitly via lower room for more funding estimates in the following year.
A commitment to full funding doesn’t have to take the form of initially giving them the whole amount—for instance, if the estimated funding gap is X, and GV would expect other donors to contribute amount X−Y if it weren’t around, it can give Y, monitor other donations, and fill in gaps as they occur. It could even wait until after “giving season” to get more info. (A minimal sketch of this policy follows below.)
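Here is that sketch, assuming GV can observe other donors’ gifts as they arrive (say, via reports to GiveWell or donations made through it); the function name and the numbers in the example are hypothetical:

```python
# Sketch of committing to full funding without fronting the whole amount.
# How other donations get observed (e.g. reports to GiveWell, or gifts made
# through it) is assumed here; the comment above doesn't pin that down.

def gap_filling_pledge(funding_gap, expected_other_donations, observed_gifts):
    """Give the expected shortfall up front, then top up whatever remains.

    funding_gap: X, the charity's estimated room for more funding.
    expected_other_donations: X - Y, what others would give if GV weren't around.
    observed_gifts: other donors' gifts as they arrive during giving season.
    Returns GV's total outlay; the charity ends the season fully funded.
    """
    initial = funding_gap - expected_other_donations   # this is Y
    others_total = sum(observed_gifts)                 # wait out "giving season"
    shortfall = funding_gap - initial - others_total
    return initial + max(shortfall, 0)                 # fill any remaining gap

# Example: gap of 100, others expected to give 70 but actually give only 55,
# so GV gives 30 up front and tops up 15 afterwards, for 45 total.
print(gap_filling_pledge(100, 70, [20, 25, 10]))       # -> 45
```

The point of the design is that the commitment binds GV’s total, not its timing: the pledge is to end the season at full funding, while the up-front amount only reflects its best guess about what others will give.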
On the cost-per-life-saved numbers, I’m saying that “defend the state where these numbers are true” is a silly goal for an organization that doesn’t think anyone should have taken them literally in the first place. One of the many complicating factors for the cost-per-life-saved numbers is that there are other inputs, some of which are complements to your donation and others of which are substitutes.
If a substantial share of other donors were already observed defecting, that seems like it would be the single most important consideration to mention in the 2015 post explaining splitting, and I am baffled as to why it was left out.
It seems like genuinely unfriendly behavior on the part of other donors and it would have been a public service at that time to call them out on this.
“that seems like it would be the single most important consideration to mention in the 2015 post explaining splitting, and I am baffled as to why it was left out.”
Ben, you have advocated simply giving to the best thing at the margin. Doing that while taking room for more funding into account automatically results in what you are calling ‘defecting’ here in this post (which I object to, since the game-theoretic analogy is dubious, and you’re using it in a highly morally charged way to criticize a general practice with respect to a single actor). That’s a normal way of assessing donations in effective altruism, and common among strategic philanthropists.
The ‘driving away donors’ bit was repeatedly discussed, as was the routine occurrence of such issues in large-scale philanthropy (where foundations bargain with each other over shares of funding in areas of common interest).
I don’t actually think it’s defecting to take into account room for more funding. I do think it’s defecting to try to control the behavior of other donors, who have more info about their opportunity cost than you do. Defecting is not always unjustified, but it’s nice when we can find and maintain cooperate-cooperate equilibria.
I don’t think it’s unreasonable to describe major foundations as engaged in an iterated game where they display a combination of cooperative and uncooperative behavior to test each other’s boundaries and guard their own in a moderately low-trust equilibrium. If you think there’s something especially good about the EA way, it shouldn’t be that surprising that large established charities sometimes engage in uncooperative behavior. I’m holding the Open Philanthropy Project and Good Ventures to a higher standard because they say they want to do better and I believe them.
My understanding is that GiveWell has mostly counted “leveraged” donations as costs towards their cost-per-life-saved figures, rather than counting them as free money, and I think it’s been right to do so. (As I understand it: if a grant moves other donors to give alongside it, those dollars get added to the total cost over which lives saved are averaged, rather than treated as a free multiplier on the grant.) This seems like basically the same thing.
The prospect of driving away donors was discussed. Direct evidence of a reduction in donations wasn’t, unless I missed something big. My impression is that donations from other sources were growing at the time and have continued to grow substantially from year to year.
Given that, I could maybe see the case for committing not to give more than the anticipated remainder (assuming growth in other donations continued apace), as a credible threat against shirking. But 50-50 “splitting” massively undershoots that mark.