This is a fun paper. But it rests a lot on an unsupported intuition about what's required in order to "take the depth of our uncertainty seriously" (i.e., that this requires imprecise credences with a very wide range of imprecision). Since this intuition leads to the (surely false) conclusion that a rational beneficent agent might just as well support the For Malaria Foundation as the Against Malaria Foundation, it seems to me that we have very good reason to reject that theoretical intuition.
I'm a bit surprised that this is getting downvoted, rather than just disagree-voted. It's fine to reach a different verdict and all, but y'all really think the methodological point I'm making here shouldn't even be said? Weird.
I didn't downvote, but if I had, it would be because I don't think it's "surely false" that "a rational beneficent agent might just as well support the For Malaria Foundation as the Against Malaria Foundation", and that claim seems overconfident. (Or, rather, AMF could be no better than burning money or the Make-A-Wish Foundation, even if all are better than FMF, in case there is an asymmetry between AMF and FMF.)
I specifically worry that AMF could be bad if and because it hurts farmed animals more than it helps people, considering also that descendants of beneficiaries will likely consume more factory farmed animal products, with increasing animal product consumption and intensification with economic development. Wild animal (invertebrate) effects could again go either way. If you're an expectational total utilitarian or otherwise very risk-neutral wrt aggregate welfare, then you may as well ignore the near term benefits and harms and focus on the indirect effects on the far future, e.g. through how it affects the EA community and x-risks. (Probably FMF would have very bad community effects, worse than AMF's are good relative to more direct near term effects, unless FMF quietly acts to convince people to stop donating to AMF.)
And I say this as a recurring small donor to malaria charities including AMF. I think AMF can still be a worthwhile part of a portfolio of interventions, even if it turns out to not look robustly good on its own (it could be that few things do). See my post Hedging against deep and moral uncertainty for illustration.
Since this intuition leads to the (surely false) conclusion that a rational beneficent agent might just as well support the For Malaria Foundation as the Against Malaria Foundation, it seems to me that we have very good reason to reject that theoretical intuition.
Is this a fair comparison? For readers' context, Andreas compares the Against Malaria Foundation (AMF) with the Make-A-Wish Foundation:
In comparing Make-A-Wish Foundation unfavourably to Against Malaria Foundation, Singer (2015) observes that "saving a life is better than making a wish come true." (6) Arguably, there is a qualifier missing from this statement: "all else being equal." Saving a child's life need not be better than fulfilling a child's wish if the indirect effects of saving the child's life are worse than those of fulfilling the wish. We have already touched on some of the potential negative indirect effects associated with the mass distribution of insecticide-treated anti-malarial bed-nets in section 2.2, but they are worth revisiting in order to make clear the depth of our uncertainty.
Firstly, there are potential effects on population. When people survive childhood in greater numbers, it is natural to expect the population to grow. The explosion in global population observed since the 17th century is arguably attributable principally to declining mortality (McKeown 1976). However, we must also account for the impact of reduced childhood mortality on family planning. When childhood mortality declines, parents in developing countries need not have as many children in order to ensure that they can be supported in old age. As a result, averting child deaths may cause the rate of population growth to decline (Heer and Smith 1968). It is the position of the Gates Foundation that averting child deaths at the current margin will reduce population size (Gates and Gates 2014). Many studies confirm that the effect of reduced childhood mortality on population size is offset by reduced fertility (Schultz 1997; Conley, McCord, and Sachs 2007; Lorentzen, McMillan, and Wacziarg 2008; Murtin 2013). Others find that the reduction in births is less than one-to-one with respect to averted child deaths (Bhalotra and van Soest 2008; Herzer, Strulik, and Vollmer 2012; Bhalotra, Hollywood, and Venkataramani 2012). Unfortunately, the studies just noted are of different kinds (cross-country comparisons, panel studies, quasi-experiments, large-sample micro-studies), with different strengths and weaknesses, making it difficult to draw firm conclusions.
I agree increasing malaria is surely worse than decreasing malaria, but I would not say Make-A-Wish Foundation is surely worse than AMF. Given this distinction, I (lightly) downvoted your comment.
It is a fair comparison. Andreas' relevant claim is that it isn't clear what the sign of the effect from AMF is. If AMF is negative, then its opposite (FMF) would presumably be positive.
If AMF is negative, then its opposite (FMF) would presumably be positive.
I am not sure about this. I think Andreas' claim is that AMF may be negative due to indirect effects. So, conditional on AMF being negative, one should expect the indirect effects to dominate the direct ones. This means a good candidate for "Minus AMF", an organisation whose value is symmetric to that of AMF, would have both direct and indirect effects symmetric to those of AMF.
The name For Malaria Foundation (FMF) suggested to me an organisation whose interventions have direct effects with similar magnitude to, but opposite sign from, those of AMF. However, the negative indirect effects of intentionally increasing malaria deaths seem worse than the negative of the positive indirect effects of decreasing malaria deaths[1]. So, AMF being negative would imply FMF having positive direct effects, but in this case I would expect FMF's indirect effects to be sufficiently negative for it to be overall net negative.
[1] I am utilitarian, but recognise that saving a life and abstaining from saving a life can have different indirect consequences.
If you're worried that a real-life FMF would not be truly symmetrical to AMF in its effects, just mentally replace it with "Minus AMF" in my original comment. (Or imagine stipulating away any such differences.) It doesn't affect the essential point.
Thanks, Richard! In some sense, I think I agree; as I say in the conclusion, I'm most inclined to think this is one of those cases where we've got a philosophical argument we don't immediately know how to refute for a conclusion that we should nonetheless reject, and so we ought to infer that one of the premises must be false.
On the other hand, I'm most inclined to say that the problem lies in the fact that standard models using imprecise credences and their associated decision rules have or exploit too little structure in how they model our epistemic predicament. At the same time, I think our evidence nonetheless fails to rule out probability functions that put sufficient probability mass on potential bad downstream effects to make AMF come out worse in terms of maximizing expected value relative to that kind of probability function. I'm more inclined to identify the problem as being that the maximality rule gives probability functions of that kind too much of a say when it comes to determining permissibility. Other standard decision rules for imprecise credences arguably suffer from similar issues. David Thorstad and I look a bit more in depth at decision rules that draw inspiration from voting theory and rely on some kind of measure on the set of admissible probability functions in our paper "Tough enough? Robust satisficing as a decision norm for long-term policy analysis", but we weren't especially sold on them.
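[For readers unfamiliar with the maximality rule being discussed: under maximality, an option is permissible iff no alternative has higher expected value under every admissible probability function. A minimal sketch, with entirely made-up numbers and function names that are illustrative assumptions, not taken from the paper:]

```python
# Toy sketch of the maximality rule for imprecise credences.
# All numbers are illustrative; nothing here is from the paper.

def expected_value(p_bad, u_good=1.0, u_bad=-1.0):
    """EV of donating to AMF under one admissible probability function,
    where p_bad is that function's probability of bad indirect effects."""
    return (1 - p_bad) * u_good + p_bad * u_bad

# Wide imprecision: the admissible functions disagree a lot about P(bad).
admissible = [0.1, 0.3, 0.6]

options = {
    "AMF": [expected_value(p) for p in admissible],
    "do nothing": [0.0 for _ in admissible],
}

def maximal(options):
    """An option is maximal (permissible) iff no rival has strictly
    higher EV under EVERY admissible probability function."""
    permissible = []
    for name, evs in options.items():
        dominated = any(
            all(rival_ev > ev for rival_ev, ev in zip(rival_evs, evs))
            for rival, rival_evs in options.items() if rival != name
        )
        if not dominated:
            permissible.append(name)
    return permissible

print(maximal(options))  # ['AMF', 'do nothing']
```

Because one admissible function makes AMF's EV negative and another makes it positive, neither option dominates the other, so maximality deems both permissible. This is the "too much of a say" worry: a single pessimistic admissible function suffices to make doing nothing permissible.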
Thanks, yeah, I remember liking that paper. Though I'm inclined to think you should assign (precise) higher-order probabilities to the various "admissible probability functions", from which you can derive a kind of higher-order expected value verdict, which helpfully seems to avoid the problems afaict?
General lesson: if we don't have any good way of dealing with imprecise credences, we probably shouldn't regard them as rationally mandatory. Especially since the case for thinking that we must have imprecise credences (i.e., that any kind of precision is necessarily irrational) seems kind of weak.
General lesson: if we don't have any good way of dealing with imprecise credences, we probably shouldn't regard them as rationally mandatory.
I worry that this is motivated reasoning. Should what we can justifiably believe will happen as a consequence of our actions depend on whether it results in satisfactory moral consequences (e.g. avoiding paralysis)?
I'm more inclined to identify the problem as being that the maximality rule gives probability functions of that kind too much of a say when it comes to determining permissibility.
Another response could be to just look for more structure in our credences that we've failed to capture. Say we have a bunch of probability functions according to which AMF is bad and a bunch according to which AMF is good, but we nonetheless think AMF is good. Why would we think AMF is good anyway? If we're epistemically rational, it would presumably be because we doubt the functions according to which it is bad more than we do the ones according to which it is good. So, we've actually failed to adequately capture our credences and their structure with these probability functions as they are.
One way to represent this is to have another probability function mix all of those probability functions ("(precise) higher-order probabilities to the various 'admissible probability functions'"), reducing to precise credences, in such a way that AMF turns out to look good, like @Richard Y Chappell suggests in reply here. Another, still permitting imprecise credences, is to have multiple such mixing functions over probability functions, but such that AMF still looks good on each mixing function. If you're sympathetic to imprecise credences in the first place (like I am), the latter seems like a pretty good solution.
Of course, an alternative explanation could be that we aren't actually justified in thinking AMF is good. We should be careful in how we pick these higher-order probabilities to avoid motivated reasoning, and remain open to the possibility that AMF is not actually robustly good.
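[The two mixing moves above can be sketched as follows. All expected values and higher-order weights here are stipulated toy numbers, not derived from anything in the thread:]

```python
# Toy sketch of mixing admissible probability functions with
# higher-order credences. All numbers are illustrative assumptions.

# Stipulated EV of AMF under each of three admissible first-order
# probability functions (one pessimistic, two more optimistic):
evs = [0.8, 0.4, -0.2]

# Move 1: one PRECISE higher-order weighting over the functions,
# reflecting that we doubt the pessimistic function most:
weights = [0.5, 0.4, 0.1]
mixed_ev = sum(w * ev for w, ev in zip(weights, evs))
print(round(mixed_ev, 2))  # 0.54

# Move 2: keep imprecision at the higher order, with SEVERAL candidate
# weightings; AMF counts as robustly good if the mixed EV is positive
# on every weighting.
weightings = [[0.5, 0.4, 0.1], [0.4, 0.4, 0.2], [0.6, 0.3, 0.1]]
robustly_good = all(
    sum(w * ev for w, ev in zip(ws, evs)) > 0 for ws in weightings
)
print(robustly_good)  # True
```

On these stipulated numbers the pessimistic function still gets a say, but its say is proportional to our higher-order doubt in it, rather than acting as a veto the way it does under maximality.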