I think that highly unusual claims require a lot of evidence. I believe I can save the life of a child in the developing world for ~£3000, and I can see a lot of evidence for this. I am keen to support more research in this area, but if I had to decide today between donating to, say, technical AI safety research (e.g. MIRI) or an effective developing-world charity (e.g. AMF), I would give to the latter.
I think longtermists sometimes object that shorttermists (?) often ignore the long-term (or indirect) effects of their actions, as if those effects neatly cancel out, but longtermists, too, face deep uncertainty and complex cluelessness. For example, what are the long-term effects of the Against Malaria Foundation, via population sizes, economic growth, technological development, and future values? Could those effects be more important than the short-term effects, and have opposing signs (or opposing signs relative to a different intervention)?
Furthermore, if you have deep uncertainty about the effects of an intervention or the future generally, this can infect all of your choices, because you’re comparing all interventions, not just each intervention against some benchmark “do nothing” intervention*. You might think that donating to AMF is robustly better ex ante than doing nothing, while a particular longtermist intervention is not, but that doesn’t mean donating to AMF is robustly better ex ante than the longtermist intervention. So why choose AMF over the longtermist intervention?
As another example, if you were to try to take into account all effects of these charities, are we justified in our confidence that AMF is better than the Make-A-Wish Foundation (EA Forum post), or even better than doing nothing? What are the population effects? Are we sure they aren't negative? What are the effects on nonhuman animals, both farmed and wild? How much weight should we give the experiences of nonhuman animals?
Maybe we should instead have a general skepticism of causal effects that aren't sufficiently evidence-backed and quantified, though, so that unless you have a reliable effect size estimate, or estimates of bounds on an effect size, we should assume the effect is ~0.
Some discussion here and in the comments. I defend shorttermism there.
*although I want to suggest that the following rule is reasonable, and for now endorse it:
If I think doing X is robustly better than doing nothing in expectation, and no other action is robustly better than doing X in expectation, then X is permissible.
I think a lot of us do this anyway (whenever we discuss the sign of some intervention or effect, we're usually comparing it to "doing nothing"), without considering that it might be unjustified for a consequentialist, who shouldn't distinguish between acts and omissions, and I suspect this might explain your preference for AMF over MIRI.
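To make the footnoted rule concrete, here is a minimal sketch in Python. Everything in it is my own illustrative assumption rather than anything from the post: deep uncertainty is represented as a finite set of hypothetical "plausible models" (a crude credal set), each assigning an expected value to each action, and "robustly better in expectation" is read as "better under every plausible model". The numbers are arbitrary.

```python
# Illustrative numbers only: three hypothetical "plausible models", each
# assigning an expected value to each action (deep uncertainty modelled
# crudely as a finite credal set).
expected_values = {
    "do_nothing":  [0.0,   0.0,   0.0],
    "AMF":         [5.0,   3.0,   1.0],   # positive under every model
    "longtermist": [100.0, 0.5, -40.0],   # sign and size vary across models
}

def robustly_better(a, b, evs):
    """a is robustly better than b if its expected value is higher
    under every plausible model (unanimity across the credal set)."""
    return all(ea > eb for ea, eb in zip(evs[a], evs[b]))

def rule_permits(x, evs):
    """Check the rule's (sufficient) conditions: x is robustly better
    than doing nothing, and no other action is robustly better than x."""
    if not robustly_better(x, "do_nothing", evs):
        return False
    return not any(robustly_better(y, x, evs) for y in evs if y != x)

# AMF beats doing nothing under every model, and nothing robustly beats it:
print(rule_permits("AMF", expected_values))                    # → True
# But AMF is NOT robustly better than the longtermist intervention:
print(robustly_better("AMF", "longtermist", expected_values))  # → False
```

Note that with these numbers the sketch also illustrates the earlier point: an intervention can be robustly better than doing nothing (and so permitted by the rule) without being robustly better than a rival whose effects we're deeply uncertain about. The rule only states a sufficient condition for permissibility, so `rule_permits` returning `False` doesn't by itself establish impermissibility.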
I illustrate these points in the context of intervention portfolio diversification for deep and moral uncertainty in this post.