I really don’t think my argument is about risk aversion at all. I think it’s about risk-neutral expected (counterfactual) value. The fact that it is extraordinarily difficult to imagine my donations to a multiplier charity having any counterfactual impact informs my belief about the likely probability of my donations to such an organization having a counterfactual impact, which is an input in my expected value calculus. You’re right that under some circumstances, a risk-neutral expected value calculus will favor small donors donating to “step-functional” charities that can’t scale their operations smoothly per marginal dollar donated, but my argument was that in the specific case of multiplier charities, the odds of a small-dollar donation being counterfactually responsible for moving the organization up an impact step are vanishingly small (or at least that this is the most reasonable thing for small-dollar donors without inside information to believe). The fact that impact in this context is step-functional is part of the explanation for the argument, not its conclusion.
With respect to the question of “relative step-functionality,” though, it’s also not clear to me why, compared to a multiplier charity, one would think that giving to GiveWell’s Maximum Impact Fund would be any more step-functional on the margin. It seems odd to suggest that being counterfactually responsible for an operational expansion into a new region is among the most plausible ways that a small-dollar gift to the AMF, for instance, has an impact. Clearly, such a gift allows the AMF to distribute more nets where they are currently operating, even if no such expansion into a new region is presently on the table. Moreover, I find this particularly confusing in the case of the Maximum Impact Fund, which allocates grants to target specific funding gaps, often corresponding to very well-defined organizational initiatives (e.g. expansions into new regions) whose individual cost-effectiveness GiveWell has modeled. It’s obviously true that regardless of whether one gives to a multiplier charity or to the Maximum Impact Fund, there is some chance that one’s donations either A) languish unused in a bank account, or B) counterfactually cause something hugely impactful to happen. But in the GiveWell case, we know the end recipients have a specific, highly cost-effective use already planned out for this particular chunk of money (and if they have extra, they can just put it toward… more nets), whereas in the multiplier charity case, we don’t have any reason to believe they could use these specific funds at all (let alone productively). Doesn’t it seem like the balance of expected values here favors going with GiveWell?
Finally, while it is obviously true that most nets don’t save lives, I fail to see how that bears on the question at hand. We both agree that this is reflected in GiveWell’s cost-effectiveness analysis, which we (presumably) both agree we have strong reason to trust. We have no such independent cost-effectiveness analysis of any multiplier charity. And the fact that most nets don’t save lives certainly isn’t a reason why the impact of donations to the AMF would not rise as a roughly smooth function of dollars donated. The only premise that argument depends on is that if they don’t have anything else good to do with my money (which, presumably, they do, having earned a grant from the Maximum Impact Fund), they can always just buy more nets. Given the current scale of global net distribution relative to the total malaria burden, it seems wildly unlikely that a much larger percentage of those nets would fail to save lives than was the case during previous distributions.
I really don’t think my argument is about risk aversion at all. I think it’s about risk-neutral expected (counterfactual) value. The fact that it is extraordinarily difficult to imagine my donations to a multiplier charity having any counterfactual impact informs my belief about the likely probability of my donations to such an organization having a counterfactual impact, which is an input in my expected value calculus.
If, as Jon suggests, the average impact scales well (even if historically not smoothly), then unless you can confirm that your particular donation won’t make a difference, the expected value can still look good: most donations make little difference, but in the unlikely event that yours does (because it pushes them past such a threshold), it makes a huge difference, enough to make up for all the ones that didn’t. This holds even if you don’t know whether you’ll be the one to push them past the threshold. It’s similar to this argument for veg*nism.
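To spell that out, here’s a minimal numerical sketch of the threshold argument; every figure in it (the step cost, the per-step impact, the donation size) is an assumption I’m inventing purely for illustration, not anything Jon or GiveWell has reported.

```python
# Minimal sketch of the threshold argument (every number here is made up
# purely for illustration).
#
# Suppose a charity can only act in discrete "steps": each step costs
# step_cost dollars and, once fully funded, produces step_impact units of
# good. A small donation almost never completes a step, but if we assume
# every dollar is equally likely to be the one that tips the charity over
# the threshold, a small gift still has a nonzero chance of doing so.

step_cost = 200_000    # hypothetical cost of one "impact step", in dollars
step_impact = 50.0     # hypothetical good done per step (arbitrary units)
donation = 100         # a small-dollar donation

# Probability that this particular $100 completes a step, under the
# uniform assumption above.
p_tip = donation / step_cost

# Expected impact of the donation: a tiny probability times a large payoff.
expected_impact = p_tip * step_impact

# Compare with a perfectly "smooth" charity that converts dollars to impact
# at the same average rate (step_impact units per step_cost dollars).
smooth_impact = donation * (step_impact / step_cost)

print(f"P(tipping a step)          = {p_tip:.4%}")
print(f"Expected impact (steppy)   = {expected_impact:.4f}")
print(f"Impact (smooth equivalent) = {smooth_impact:.4f}")
# Both come out to 0.025 units: in expectation, the steppy charity matches
# the smooth one as long as the average dollars-to-impact rate is the same.
```

The point is only that step-functionality by itself doesn’t depress the expected value of a small donation; what matters is the average rate at which dollars convert into impact.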
It seems odd to suggest that being counterfactually responsible for an operational expansion into a new region is among the most plausible ways that a small-dollar gift to the AMF, for instance, has an impact. Clearly, such a gift allows the AMF to distribute more nets where they are currently operating, even if no such expansion into a new region is presently on the table.
Have you confirmed this about AMF? In the case of GiveDirectly, they give to whole villages at a time, so maybe the beneficiaries in the “marginal” village will get more or less, but I imagine there’s a cutoff below which they won’t bother and will just wait for more donations instead. Similarly, the school-based deworming charities might wait until they have enough to deworm another whole school. Of course, these villages and schools might be really small, so it might not matter too much unless you’re making very small donations.
Yes, that argument for veg*nism is a big part of why I’m a vegetarian, but it does not on its own entail that one should prefer giving to multiplier charities rather than to the GiveWell Maximum Impact Fund. That depends on the empirical question of how the relative expected values shake out. My argument is that there are sound reasons to believe that in the multiplier charity case specifically, the best-guess expected values do not favor giving to multiplier charities. “Your donation to a multiplier charity might have a big positive impact if it pushes them up an impact step” doesn’t really respond to these reasons. Obviously, I agree that my donation might have a really big positive impact. I am just skeptical that we have sufficient reason to believe that, at the end of the day, the expected value is higher. The main reasons why I, at least prior to conversing with Jon, was strongly inclined to think the expected value calculus favored GiveWell were the following (a rough sketch of how they combine follows the list):
1) TLYCS’s multiplier likely doesn’t exceed 4x. [I have updated against this view on account of Jon’s comments.]
2) There is a much higher likelihood that TLYCS sees diminishing marginal returns on “charitable investments” than do organizations directly fighting, say, the global malaria burden. [I have updated somewhat against this view on account of Jon’s comments.]
3) If a particularly promising opportunity for TLYCS to go up an impact step were to present itself, it would most likely be filled by a large donor, who would fully fund it irrespective of my donation. (In the case of the AMF, to continue with our earlier example, I imagine most in the donor community assume that such opportunities get funded with grants from GiveWell’s Maximum Impact Fund; it has proven to be an effective coordination mechanism in that sense.)
4) There are good reasons to be at least somewhat suspicious of the impact estimates that multiplier charities put out about themselves, particularly given how little scrutiny or oversight exists of their activities. There’s even a reasonable argument, I think, that such organizations currently face strong incentives (due to potential conflicts of interest) to optimize for aims unrelated to having a positive impact. For instance, I think Peter Singer’s work likely is highly effective at persuading people to give more to effective charities, but imagine for a moment that TLYCS were to discover that, in fact, owing to the many controversies surrounding Singer, his association with the movement on net turned people away. Based on Jon’s remarks during this forum discussion, he seems like a great person, but I don’t think we have any general reason to believe that TLYCS would respond to that discovery in a positive-impact-maximizing way. Singer is such a large part of the organization that it seems plausible to me that he would be able, if he wished, to push it to continue to raise his profile, as it does today, even if doing so were likely net negative for the EA project. Furthermore, in reality, if something like this were to occur, it would probably happen through a slow trickle of individually inconclusive pieces of evidence, not through a single decisive revelation. Subconscious bias in interpreting that evidence inside TLYCS could therefore lead to this sort of suboptimal outcome even without anyone doing anything they believed might be harmful. Obviously, this is a deliberately somewhat outlandish hypothetical, but hopefully it gets the point across.
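To make the shape of that comparison concrete, here is a rough back-of-the-envelope sketch of the calculation I have in mind; every number in it (the headline multiplier, the haircut applied to self-reported estimates, the probability of counterfactual use) is a hypothetical assumption chosen for illustration, not a figure from GiveWell, TLYCS, or Jon.

```python
# Back-of-the-envelope comparison of the two options. Every number below is a
# hypothetical assumption for illustration only.

donation = 1_000  # size of the gift, in dollars

# Option A: the GiveWell Maximum Impact Fund. Treat the value of $1 granted
# there as the baseline unit of good.
ev_givewell = donation * 1.0

# Option B: a multiplier charity. Its headline multiplier claims each dollar
# it spends moves several dollars to effective charities, but I discount that
# by (i) how much of the self-reported multiplier survives outside scrutiny,
# and (ii) the chance that the marginal donation is actually put to use rather
# than displaced by a large donor or left idle.
claimed_multiplier = 4.0    # hypothetical headline multiplier
multiplier_discount = 0.5   # hypothetical haircut on the self-reported figure
p_counterfactual_use = 0.4  # hypothetical chance the marginal $1,000 matters

ev_multiplier = donation * claimed_multiplier * multiplier_discount * p_counterfactual_use

print(f"EV, Maximum Impact Fund: {ev_givewell:,.0f} units")
print(f"EV, multiplier charity : {ev_multiplier:,.0f} units")
```

With these particular made-up inputs the multiplier option comes out behind (800 vs. 1,000 units), but modest changes to any of the discounts flip the ordering, which is exactly why I keep framing this as an empirical disagreement about inputs rather than a disagreement about risk attitudes.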
Regarding your final point, I basically agree with your reasoning here. I have not confirmed my mental model with the AMF, and it’s fair to say I should. However, I also think that 1) you’re right that beneficiaries in “marginal” villages may get more (or there may be more of them) on account of my donations, and 2) deworming is so cheap (as are mosquito nets) that my donations to deworming charities probably do cover entire schools, etc.