For a third perspective, I think most EAs who donate to AMF do so neither because of an EV calculation they’ve done themselves, nor because of risk aversion, but rather because they’ve largely or entirely outsourced their donation decision to GiveWell. GiveWell has also written about this in some depth, back in 2011 and probably more recently as well.
http://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/
Key quote:
“This view of ours illustrates why – while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible – every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good – a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably).”
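The "Bayesian adjustments" the quote refers to can be illustrated with a toy calculation. This is only a minimal sketch under simplifying assumptions (a normal prior over cost-effectiveness and normally distributed estimate noise, with all numbers invented for illustration), not GiveWell's actual methodology:

```python
# Toy Bayesian adjustment: shrink a noisy expected-value estimate toward a
# prior. Units are an arbitrary "good done per dollar"; all numbers are
# hypothetical.

def bayesian_adjust(prior_mean, prior_sd, estimate, estimate_sd):
    """Posterior mean given a normal prior and a normal-noise estimate
    (precision-weighted average of prior mean and estimate)."""
    prior_prec = 1 / prior_sd**2
    est_prec = 1 / estimate_sd**2
    return (prior_mean * prior_prec + estimate * est_prec) / (prior_prec + est_prec)

# Strong evidence: an estimate of 2 with a tight error bar moves the
# posterior most of the way from the prior of 1.
print(round(bayesian_adjust(1, 1, 2, 0.5), 3))    # → 1.8

# Weak evidence: a claimed 10x-better intervention with huge error bars
# barely shifts the posterior away from the prior at all.
print(round(bayesian_adjust(1, 1, 10, 10), 3))    # → 1.089
```

The second case is the quote's point: under a Bayesian adjustment, "weak evidence that donations can do far more good" can end up recommending less than "strong evidence that donations can do a lot of good."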
An added reason to not take expected value estimates literally (which applies to some/many casual donors, but probably not to AGB or GiveWell) is if you believe that you are not capable of making reasonable expected value estimates under high uncertainty yourself, and you’re leery of long causal chains because you’ve developed a defense mechanism against your values being Eulered or Dutch-Booked.
Apologies for the weird terminology, see: http://slatestarcodex.com/2014/08/10/getting-eulered/ and: https://en.wikipedia.org/wiki/Dutch_book
I think it’s true that many outsource their thinking to GW, but I think there could still be risk aversion in the thought process. Many of these people have also been exposed to arguments for higher-risk, higher-reward causes such as existential risk reduction or funding in-vitro meat research, and I think a common thought process is “I’d prefer to go with the safer and more established causes that GW recommends.” Even if they haven’t explicitly done the EV calculation themselves, qualitatively similar thought processes may still occur.