Yes, that argument for veg*anism is a big part of why I’m a vegetarian, but it does not on its own entail that one should prefer giving to multiplier charities rather than to the GiveWell Maximum Impact Fund. That depends on the empirical question of how the relative expected values compare. My argument is that there are sound reasons to believe that, in the multiplier charity case specifically, the best-guess expected values do not favor giving to multiplier charities. Saying “your donation to a multiplier charity might have a big positive impact if it pushes them up an impact step” doesn’t really respond to these reasons. Obviously, I agree that my donation might have a really big positive impact; I’m just skeptical that we have sufficient reason to believe that, at the end of the day, the expected value is higher. The main reasons why, at least prior to conversing with Jon, I was strongly inclined to think the expected-value calculus favored GiveWell were the following (I sketch how they might combine into a rough expected value after the list):
1) TLYCS’s multiplier likely doesn’t exceed 4x. [I have updated against this view on account of Jon’s comments.]
2) TLYCS is much more likely to see diminishing marginal returns on “charitable investments” than organizations directly fighting, say, the global malaria burden are. [I have updated somewhat against this view on account of Jon’s comments.]
3) If a particularly promising opportunity for TLYCS to go up an impact step were to present itself, it most likely would get filled by a large donor, who would fully fund the opportunity irrespective of my donation. (In the case of the AMF—to continue with our earlier example—I imagine most in the donor community assume that such opportunities get funded with grants from GiveWell’s Maximum Impact Fund; it has proven to be an effective coordination mechanism in that sense.)
4) There are good reasons to be at least somewhat suspicious of the impact estimates that multiplier charities put out about themselves, particularly given how little scrutiny or oversight exists of their activities. There’s even a reasonable argument, I think, that such organizations, in the status quo, face strong incentives (due to potential conflicts of interest) to optimize for aims unrelated to having a positive impact. For instance, I think Peter Singer’s work is likely highly effective at persuading people to give more to effective charities, but imagine for a moment that TLYCS were to discover that, in fact, owing to the many controversies surrounding Singer, his association with the movement on net turned people away. Based on Jon’s remarks during this forum discussion, he seems like a great person, but I don’t think we have any general reason to believe that TLYCS would respond to that discovery in a positive-impact-maximizing way. Singer is such a large part of the organization that it seems plausible to me that he would be able—if he wished—to push it to continue to raise his profile, as it does today, even if doing so were likely net negative for the EA project. Furthermore, in reality, if something like this were to occur, it would probably happen through a slow trickle of individually inconclusive pieces of evidence, not through a single decisive revelation, so subconscious bias in interpreting that evidence inside of TLYCS could lead to this sort of suboptimal outcome even without anyone doing anything they believed might be harmful. Obviously, this is a deliberately somewhat outlandish hypothetical, but hopefully, it gets the point across.
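To make the shape of that comparison concrete, here is a minimal back-of-the-envelope sketch in Python. Every number and name in it is a placeholder I’ve invented for illustration; none of the figures come from TLYCS, GiveWell, or anyone’s actual cost-effectiveness estimates. It only shows how a claimed multiplier, the probability that a marginal donation is genuinely counterfactual, and a discount on self-reported estimates multiply together into an expected value.

```python
# A rough, illustrative expected-value comparison, not a real cost-effectiveness model.
# Every number below is a made-up placeholder; none of these figures come from TLYCS or GiveWell.

def expected_value_multiplier(donation, multiplier, p_counterfactual, estimate_discount):
    """Direct-charity-equivalent dollars moved by a donation to a multiplier charity.

    multiplier        -- claimed dollars raised for effective charities per dollar donated
    p_counterfactual  -- probability the marginal donation actually buys that multiplier
                         (i.e. is not funged by a large donor or eaten by diminishing returns)
    estimate_discount -- haircut applied to the charity's self-reported multiplier
    """
    return donation * multiplier * p_counterfactual * estimate_discount


def expected_value_direct(donation):
    """Giving straight to a GiveWell top charity is the baseline, so it is 1x by definition."""
    return donation * 1.0


if __name__ == "__main__":
    donation = 1_000  # dollars

    ev_multiplier = expected_value_multiplier(
        donation,
        multiplier=4.0,         # reason 1: the multiplier probably doesn't exceed ~4x
        p_counterfactual=0.3,   # reasons 2-3: diminishing returns and large-donor funging
        estimate_discount=0.7,  # reason 4: discount on self-reported impact estimates
    )
    ev_direct = expected_value_direct(donation)

    print(f"Multiplier charity: ~${ev_multiplier:,.0f} direct-charity-equivalent")
    print(f"Direct donation:    ~${ev_direct:,.0f} direct-charity-equivalent")
```

With those placeholder guesses, the multiplier donation comes out at roughly $840 of direct-charity-equivalent value against $1,000 for giving directly. The whole disagreement obviously lives in the parameter guesses; the point is only that a headline multiplier above 1x doesn’t by itself settle the comparison.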
Regarding your final point, I basically agree with your reasoning here. I have not confirmed my mental model with the AMF, and it’s fair to say I should. However, I also think that 1) you’re right that beneficiaries in “marginal” villages may get more (or there may be more of them) on account of my donations, and 2) deworming is so cheap (as are mosquito nets) that my donations to deworming charities probably do cover entire schools, etc.