Thanks for starting a discussion on this topic. I’ve been worrying about it too, and summarized my worries in this comment last month.
My worry is that as soon as we try to attribute impact to individual agents – something that strikes me as at least somewhat artificial, as you, Michael, and others in this thread have laid out – it will be very hard to come up with an attribution system that does not create perverse incentives.
This is aggravated by many people’s inclination toward competitiveness, EA’s focus on prioritization, dependence on donors, and perhaps an interest in selling one’s impact to fund further operations (at least when funding is scarce).
“Prioritization is centrally about comparison, so charities that depend on funding from donors who give only to the best charity are especially incentivized to think in terms of comparison. If a single charity thinks in terms of comparison (defects), then it would be self-destructive for the other charities not to defect as well. This does not hold for repeated prisoner’s dilemmas, but I can’t see any way to turn this one-shot situation into a repeated one. Here prioritization and attribution combine to produce perverse incentives unless the flow-through effects of cooperation are made an explicit part of the prioritization (I wrote about this here).”
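To make the game-theoretic structure concrete, here is a minimal sketch in Python of the one-shot dilemma, with hypothetical payoff numbers of my own choosing (illustrative assumptions, not estimates from anywhere): whatever the other charity does, defecting pays more, so both end up competing even though mutual cooperation would leave both better off.

```python
# Minimal sketch of the one-shot prisoner's dilemma among charities.
# The payoff numbers are illustrative assumptions, not real estimates.
# "cooperate" = prioritize joint impact; "defect" = compete for attributable impact.

# payoffs[(my_move, their_move)] = my payoff
payoffs = {
    ("cooperate", "cooperate"): 3,  # credit and flow-through gains are shared
    ("cooperate", "defect"):    0,  # the cooperator loses donors to the defector
    ("defect",    "cooperate"): 5,  # the defector captures the attributable impact
    ("defect",    "defect"):    1,  # both compete and joint impact shrinks
}

def best_response(their_move):
    """The move that maximizes my payoff, given the other charity's move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: payoffs[(my_move, their_move)])

for their_move in ("cooperate", "defect"):
    print(f"If they {their_move}, my best response is to {best_response(their_move)}.")
# If they cooperate, my best response is to defect.
# If they defect, my best response is to defect.
```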
Nonprofit with Balls has written about one manifestation of this problem and warns of the donor hoarding and shadow missions it leads to. He also points out that evolutionary pressures among charities favor those that succumb to these perverse incentives.
Does anyone have an idea of how to estimate how bad this problem is?