I really don’t think my argument is about risk aversion at all. I think it’s about risk-neutral expected (counterfactual) value. The fact that it is extraordinarily difficult to imagine my donations to a multiplier charity having any counterfactual impact informs my belief about the likely probability of my donations to such an organization having a counterfactual impact, which is an input in my expected value calculus.
If, as Jon suggests, the average impact scales well (even if historically not smoothly), then unless you can confirm that your donation won’t make a difference, the threshold argument applies: most donations make little difference, but in the unlikely event that yours does (because it pushes them past such a threshold), it can make a huge difference, enough to make up for all the donations that didn’t, so the expected value looks good even if you don’t know whether you’ll push them past that threshold. It’s similar to this argument for veg*nism.
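To see how this threshold logic cashes out, here’s a toy simulation (every number is invented for illustration; nothing here is an estimate of any real charity’s step size or impact):

```python
# Toy model of the "threshold" argument: an organization's impact is a step
# function of total donations (it only acts once a whole step is funded),
# yet the *expected* impact of a marginal donation still equals the average
# impact per dollar, because rare threshold-crossings are huge.

import random

STEP_SIZE = 10_000      # dollars needed to fund one "impact step" (hypothetical)
IMPACT_PER_STEP = 50.0  # units of good per completed step (hypothetical)

def realized_impact(total_donations):
    """Impact is zero until a full step is funded, then jumps."""
    return (total_donations // STEP_SIZE) * IMPACT_PER_STEP

def expected_marginal_impact(donation, trials=100_000, seed=0):
    """Average extra impact from adding `donation`, when you don't know
    where the organization currently sits relative to the next step."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        current = rng.uniform(0, STEP_SIZE)  # unknown position within a step
        total += realized_impact(current + donation) - realized_impact(current)
    return total / trials

# Most trials contribute nothing; the rare crossing trials contribute
# IMPACT_PER_STEP each. The average comes out to roughly
# donation * (IMPACT_PER_STEP / STEP_SIZE) = 100 * 0.005 = 0.5.
print(f"expected marginal impact of $100: {expected_marginal_impact(100):.3f}")
```

The point of the sketch is just that the lumpiness of the step function drops out of the expectation, which is why not knowing where the threshold sits doesn’t by itself hurt the expected-value case.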
It seems odd to suggest that being counterfactually responsible for an operational expansion into a new region is among the most plausible ways that a small-dollar gift to the AMF, for instance, has an impact. Clearly, such a gift allows the AMF to distribute more nets where they are currently operating, even if no such expansion into a new region is presently on the table.
Have you confirmed this about the AMF? In the case of GiveDirectly, they give to whole villages at a time, so maybe the beneficiaries in the “marginal” village will get more or less, but I imagine there’s a cutoff below which they won’t bother and will just wait for more donations instead. Similarly, the school-based deworming charities might wait until they have enough to deworm another whole school. Of course, these villages and schools might be really small, so it might not matter much unless you’re making very small donations.
Yes, that argument for veg*nism is a big part of why I’m a vegetarian, but it does not on its own entail that one should prefer giving to multiplier charities rather than to the GiveWell Maximum Impact Fund. That depends on the empirical question of how the relative expected values weigh out. My argument is that there are sound reasons to believe that in the multiplier charity case specifically, the best-guess expected values do not favor giving to multiplier charities. “Your donation to a multiplier charity might have a big positive impact if it pushes them up an impact step” doesn’t really respond to these reasons. Obviously, I agree that my donation might have a really big positive impact. I am just skeptical that we have sufficient reason to believe that, at the end of the day, the expected value is higher. The main reasons why I, at least prior to conversing with Jon, was strongly inclined to think the expected value calculus favored GiveWell were:
1) TLYCS’s multiplier likely doesn’t exceed 4x. [I have updated against this view on account of Jon’s comments.]

2) There is a much higher likelihood that TLYCS sees diminishing marginal returns on “charitable investments” than organizations directly fighting, say, the global malaria burden. [I have updated somewhat against this view on account of Jon’s comments.]

3) If a particularly promising opportunity for TLYCS to go up an impact step were to present itself, it would most likely be filled by a large donor, who would fully fund the opportunity irrespective of my donation. (In the case of the AMF—to continue with our earlier example—I imagine most in the donor community assume that such opportunities get funded with grants from GiveWell’s Maximum Impact Fund; it has proven to be an effective coordination mechanism in that sense.)

4) There are good reasons to be at least somewhat suspicious of the impact estimates that multiplier charities put out about themselves, particularly given how little scrutiny or oversight exists of their activities. There’s even a reasonable argument, I think, that such organizations, in the status quo, face strong incentives (due to potential conflicts of interest) to optimize for aims unrelated to having a positive impact. For instance, I think Peter Singer’s work is likely highly effective at persuading people to give more to effective charities, but imagine for a moment that TLYCS were to discover that, in fact, owing to the many controversies surrounding Singer, his association with the movement on net turned people away. Based on Jon’s remarks during this forum discussion, he seems like a great person, but I don’t think we have any general reason to believe that TLYCS would respond to that discovery in a positive-impact-maximizing way. Singer is such a large part of the organization that it seems plausible to me that he would be able—if he wished—to push it to continue raising his profile, as it does today, even if doing so were likely net negative for the EA project. Furthermore, in reality, if something like this were to occur, it would probably happen through a slow trickle of individually inconclusive pieces of evidence, not through a single decisive revelation, so subconscious bias in interpreting that evidence within TLYCS could lead to this sort of suboptimal outcome even without anyone doing anything they believed might be harmful. Obviously, this is a deliberately somewhat outlandish hypothetical, but hopefully it gets the point across.
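The comparison I have in mind can be made concrete with a back-of-the-envelope sketch. Every input below is a hypothetical placeholder for the considerations listed above, not an estimate anyone in this thread has endorsed:

```python
# Back-of-the-envelope comparison of donating $1 directly vs. to a
# "multiplier" charity. All inputs are illustrative placeholders.

def multiplier_ev(raw_multiplier, counterfactual_discount, estimate_discount):
    """Expected good per dollar given to a multiplier charity,
    relative to 1.0 for direct giving.

    raw_multiplier: dollars the charity claims to raise per dollar spent
    counterfactual_discount: chance your dollar wasn't redundant
      (e.g. a large donor would have filled the gap anyway)
    estimate_discount: haircut applied to self-reported impact figures
    """
    return raw_multiplier * counterfactual_discount * estimate_discount

direct = 1.0
mult = multiplier_ev(raw_multiplier=4.0,
                     counterfactual_discount=0.4,
                     estimate_discount=0.5)
print(mult)  # 4.0 * 0.4 * 0.5 = 0.8 < 1.0, so direct giving wins here
```

Under these made-up discounts, even a 4x headline multiplier comes out behind direct giving; of course, different inputs flip the conclusion, which is exactly the empirical question at issue.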
Regarding your final point, I basically agree with your reasoning here. I have not confirmed my mental model with the AMF, and it’s fair to say I should. However, I also think that 1) you’re right that beneficiaries in “marginal” villages may get more (or there may be more of them) on account of my donations, and 2) deworming is so cheap (as are mosquito nets) that my donations to deworming charities probably do cover entire schools, etc.