While it likely is true of some EAs, it's a simplistic straw man to assume that those of us who favor donating to AMF (though in practice I prefer donating to research and meta-charity more) do so out of risk aversion. Saying that would require knowing, with confidence, the expected value of a donation to MIRI.
I certainly would prefer to donate to a 0.01% chance of saving 11K lives over a 100% chance of saving one life. But I don't actually know that MIRI represents a superior expected-value bet.
(See some discussion about MIRI’s chance of success here and here).
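To make the arithmetic behind that preference explicit, here is a minimal sketch in Python. The 0.01% and 11K figures are the hypothetical numbers from above, not real estimates of MIRI or any other charity.

```python
# Toy expected-value comparison; all numbers are illustrative only.

def expected_lives_saved(probability: float, lives_if_success: float) -> float:
    """Expected value of a donation modeled as a simple lottery."""
    return probability * lives_if_success

risky = expected_lives_saved(0.0001, 11_000)  # 0.01% chance of 11K lives
safe = expected_lives_saved(1.0, 1)           # certainty of saving one life

print(f"risky bet: {risky:.2f} expected lives")  # 1.10
print(f"safe bet:  {safe:.2f} expected lives")   # 1.00
```

A risk-neutral donor takes the long shot whenever its expected value is higher; the disagreement is over whether an input like that 0.01% is knowable with any confidence.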
Obviously different people have different motivations for their donations. I disagree that it's a straw man, though, because I wasn't trying to misrepresent any views, and I think risk aversion actually is one of the main reasons people tend to support causes such as AMF that help people "one at a time" over causes that are larger in scale but less likely to succeed. MIRI's chance of success wasn't central to my argument: if you think its net value is basically zero, then substitute whatever cause you think actually is positive (in-vitro meat research, CRISPR research, politics, etc.). Perhaps you've already done that and think that AMF still has higher expected value, in which case I would say you're not risk averse (per se), but then I'd also think you're in the minority.
For a third perspective, I think most EAs who donate to AMF do so neither because of an EV calculation they've done themselves, nor because of risk aversion, but because they've largely or entirely outsourced their donation decision to GiveWell. GiveWell has also written about this in some depth, back in 2011 and probably more recently as well:
http://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/
Key quote:
“This view of ours illustrates why – while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible – every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good – a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably).”
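For concreteness, here is a minimal sketch of the kind of Bayesian adjustment the quote alludes to, in the spirit of the linked post: with a normal prior and normally distributed estimate error, the adjusted estimate is a precision-weighted average of the two. All numbers below are made up for illustration.

```python
# Minimal normal-normal Bayesian adjustment of a cost-effectiveness
# estimate, per the linked GiveWell post. Numbers are hypothetical.

def bayesian_adjusted_estimate(prior_mean: float, prior_var: float,
                               estimate: float, estimate_var: float) -> float:
    """Shrink a noisy estimate toward a prior, weighting by precision."""
    prior_precision = 1.0 / prior_var
    estimate_precision = 1.0 / estimate_var
    return (prior_precision * prior_mean + estimate_precision * estimate) / \
           (prior_precision + estimate_precision)

# Hypothetical: prior says ~1 unit of good per dollar (variance 1); an
# explicit EV formula claims 100x that, but with huge error bars.
print(bayesian_adjusted_estimate(prior_mean=1.0, prior_var=1.0,
                                 estimate=100.0, estimate_var=10_000.0))
# ~1.01
```

The post's point is visible in the output: an estimate claiming enormous impact but carrying enormous error bars barely moves the adjusted figure away from the prior.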
An added reason not to take expected value estimates literally (which applies to some/many casual donors, but probably not to AGB or GiveWell) is if you believe that you are not capable of making reasonable expected value estimates under high uncertainty yourself, and you're leery of long causal chains because you've developed a defense mechanism against your values being Eulered or Dutch-Booked.
Apologies for the weird terminology; see http://slatestarcodex.com/2014/08/10/getting-eulered/ and https://en.wikipedia.org/wiki/Dutch_book
I think it's true that many outsource their thinking to GW, but I think there could still be risk aversion in the thought process. Many of these people have also been exposed to arguments for higher-risk, higher-reward charities, such as x-risk organizations or in-vitro meat research, and I think a common thought process is "I'd prefer to go with the safer and more established causes that GW recommends." Even if they haven't explicitly done the EV calculation themselves, qualitatively similar thought processes may still occur, as the sketch below illustrates.
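One standard way to formalize that thought process is risk aversion via a concave utility function over lives saved. The square-root utility below is an arbitrary illustrative choice, not a claim about anyone's actual values.

```python
# How risk aversion (concave utility over lives saved) can flip the
# preference even when raw EV favors the long shot. Numbers reuse the
# hypothetical 0.01% / 11K example from earlier in the thread.

import math

def expected_utility(probability: float, lives: float,
                     utility=math.sqrt) -> float:
    """Expected utility of a lottery paying `lives` with `probability`."""
    return probability * utility(lives) + (1 - probability) * utility(0)

risky = expected_utility(0.0001, 11_000)  # raw EV = 1.1 lives
safe = expected_utility(1.0, 1)           # raw EV = 1.0 lives

print(f"risky: {risky:.4f}")  # ~0.0105
print(f"safe:  {safe:.4f}")   # 1.0000
```

Under any sufficiently concave utility the safe bet wins even though its raw EV is lower, which is one way of cashing out a preference for safer, more established causes.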