(Edit: I no longer endorse suffering-focused ethics.)
Regardless of your stance on population ethics, I think it generally makes sense to take DALYs as a heuristic for how much good you can do with your money. Clearly, all population-ethical views consider improving the quality of existing lives (decreasing YLDs, years lived with disability) a good thing. Preventing deaths, expressed as reduced YLLs (years of life lost), is probably good overall as well, although different views will assign more or less value to it. I agree with Michael Dickens that if the value of longer lives comes from adding life-years (reducing YLLs) alone, this would indeed amount to something like total utilitarianism.
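The YLD/YLL decomposition discussed above can be made concrete with a toy sketch (all numbers below are hypothetical, chosen only to illustrate the arithmetic, not taken from any actual burden-of-disease estimate):

```python
# Toy sketch of the standard DALY decomposition: DALY = YLL + YLD.
# All numbers are hypothetical and purely illustrative.

def ylls(deaths, standard_life_expectancy, age_at_death):
    """Years of Life Lost: years by which deaths fall short of a reference life expectancy."""
    return deaths * max(standard_life_expectancy - age_at_death, 0)

def ylds(cases, disability_weight, avg_duration_years):
    """Years Lived with Disability: case-years weighted by disability severity (0..1)."""
    return cases * disability_weight * avg_duration_years

# Hypothetical example: one death at age 5 against an 80-year reference,
# plus 100 cases of an illness with disability weight 0.2 lasting 1 year each.
yll = ylls(deaths=1, standard_life_expectancy=80, age_at_death=5)
yld = ylds(cases=100, disability_weight=0.2, avg_duration_years=1)
dalys = yll + yld
```

Under these made-up inputs, the YLL term (75) dominates the YLD term (20), which is exactly why the population-ethical status of YLLs matters so much for the overall DALY figure.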
I think a steelman of GiveWell’s view would be that the YLL component of DALYs can in fact be motivated by other considerations, such as preference frustration or reducing the suffering of the children’s parents. I believe that, for reasons of cooperation between agents, it always makes sense to consider the preferences of other beings at least to some degree. Fulfilling already existing preferences seems like something most people would agree to, whether or not they would also like to bring additional fulfilled preferences into existence. Therefore, death is intrinsically bad according to most reasonable views, since it severely violates the preferences of existing beings. In that sense, decreasing YLLs should always be good, even for non-classical utilitarians.
Unlike Michael, I personally would be less reluctant to accept a ranking of world-states that can’t be boiled down to a simple mathematical function of aggregated wellbeing, i.e. I’d be less put off by more “complex” moral views. And I’d be less willing to bite bullets like the repugnant conclusion, or the “very repugnant conclusion,” where a world with fewer but very happy individuals can be worse than a world containing any finite amount of extreme torture, provided that torture is outweighed by an even greater number of beings living lives just barely worth living. Accepting this conclusion is quite a controversial stance in my eyes. Given anti-realism, it is entirely unclear to me why GiveWell would have to adhere to a total utilitarian view. They could very well accept all the inconsistencies Michael mentions and still just maximize EV according to their own (complex) values. I agree that they should probably specify their view more explicitly, and it remains unclear what they are really optimizing for (see also http://blog.givewell.org/2008/08/22/dalys-and-disagreement/).
A candidate I favour that could plausibly match many people’s intuitions would be something like negative idealized preference utilitarianism, or more generally any form of suffering-focused ethics (e.g. trying to reduce extreme involuntary suffering without doing anything crazy or anything that would be considered really bad by other agents).
I believe this is the most plausible attempt at a resolution I’ve heard so far. Thanks, Johannes.
As with some other responses I’ve heard: even if we accept your proposed view on population ethics, we’d still have to substantially update the common view on the value of AMF. Remember, I’m not saying that YLLs don’t have value; I’m saying that it’s controversial and probably incoherent to claim that the value of AMF’s lives saved equals the (time-discounted) number of additional life-years lived.
If the importance of YLLs comes from the suffering of parents, as you suggest, YLLs will look very different from just one DALY per year of life lost. If we adopt more of a preference-utilitarian view, we end up producing contradictory conclusions in the same scenarios that I discussed in my original essay: you can’t claim that AMF saves 35 DALYs without knowing AMF’s population effects.
“They could very well just accept all the inconsistencies Dickens mentions and still just maximize EV according to their own (complex) values.”
If you’re inconsistent, you cannot coherently maximize EV. You can only maximize EV if you can apply a unique real-valued EV function over states or actions, and such a function only exists in a consistent system.
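The point about EV maximization can be sketched concretely: maximizing expected value presupposes a single real-valued utility function over outcomes, and an inconsistent (e.g. intransitive) ranking admits no such function. A minimal sketch, with invented world-states, probabilities, and utilities:

```python
# Minimal sketch: EV maximization requires one real-valued utility
# function u over outcomes. All names and numbers are invented.

def expected_value(lottery, u):
    """lottery: list of (probability, outcome) pairs whose probabilities sum to 1."""
    return sum(p * u(outcome) for p, outcome in lottery)

# A consistent view assigns one utility number per world-state.
u = {"A": 0.0, "B": 1.0, "C": 2.0}.get

act1 = [(0.5, "A"), (0.5, "C")]  # a gamble between A and C
act2 = [(1.0, "B")]              # B for sure

ev1 = expected_value(act1, u)
ev2 = expected_value(act2, u)

# An intransitive ranking (A worse than B, B worse than C, C worse than A)
# would require u(A) < u(B) < u(C) < u(A), which no real-valued u can
# satisfy -- so "maximize EV" is simply undefined for such a view.
```

This is the sense in which consistency is a precondition for EV maximization, not an optional extra.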
(Edit: I no longer endorse negative utilitarianism or suffering-focused ethics.)
Thank you! Cross-posting my reply as well:
“If we adopt more of a preference-utilitarian view, we end up producing contradictory conclusions in the same scenarios that I discussed in my original essay—you can’t claim that AMF saves 35 DALYs without knowing AMF’s population effects.”
Shouldn’t this be fixed by negative preference utilitarianism? There could be value in not violating the “preference-equivalent” of dying one year earlier, but no value in creating additional “life-year” preferences. A YLL would then be equivalent to a violated life-preference. You could also avert YLLs by not having children, of course, which seems plausible to me (if no one is born, whose preference is violated by dying from malaria?). Being born and dying from malaria would be worse than non-existence, so, referring to your “Bigger Problem” scenarios, A < B < C and C = D.
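The asymmetry described here can be sketched as a toy scoring rule: only frustrated preferences count as disvalue, and creating new satisfied preferences adds nothing. The scenario labels and weights below are placeholders, not the contents of the actual “Bigger Problem” scenarios:

```python
# Toy negative-preference-utilitarian scoring. Only frustrated
# preferences contribute (negative) value; bringing new satisfied
# preferences into existence contributes nothing. All weights are
# placeholder numbers for illustration.

def npu_score(world):
    """Score a world: the negated sum of frustrated-preference weights (0 is best)."""
    return -sum(world["frustrated"])

# Placeholder worlds: 'frustrated' lists the weights of violated preferences.
worlds = {
    "A": {"frustrated": [3.0]},  # e.g. a strong life-preference severely violated
    "B": {"frustrated": [1.0]},  # a milder frustration
    "C": {"frustrated": []},     # no one is born: nothing to frustrate
    "D": {"frustrated": []},     # likewise free of frustration
}

scores = {name: npu_score(w) for name, w in worlds.items()}
# Under this rule the ordering comes out as A < B < C and C == D.
```

The key design choice is that an empty world scores 0, the maximum, which is exactly what makes C and D tie regardless of how many satisfied preferences a populated alternative would contain.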
Regarding EV: I agree, there has to be a single ranking mapping world-states onto real numbers (or onto R^n if you drop the continuity axiom). So you’re right in the sense that the supposed GiveWell ranking of world-states that you assume doesn’t work out. I still think there might be a way to construct a creative mapping in the real world so that GiveWell’s focus on DALYs, disregarding population size, can somehow be translated into a utility function. Anyway, I would largely agree that AMF turns out to be less effective than previously thought, both from a suffering-focused and a classical view.
One thing that seems noteworthy is that the population effect actually brings the different views closer together than they were before: ignoring population effects, AMF has high impact from a classical-utilitarian perspective but low impact from a suffering-focused perspective; accounting for population effects, the difference almost vanishes. Another way of looking at it: in situations where the population remains constant, population ethics becomes irrelevant.
So accounting for population effects mainly gives us these two updates:
1. Population-ethical views become less relevant for prioritisation between various GiveWell charities (not more relevant, as some seemed to suggest; the negative preference view may be an exception).
2. AMF might be less effective than deworming charities according to most population-ethical views (but still more effective than cash transfers, due to the developmental effects of malaria prevention).
If we consider wild-animal suffering, I think AMF looks better than charities that don’t create as many human lives. This could once again make AMF more cost-effective according to many population-ethical views (unless you consider wild insects to have good lives on average).
(cross-posted here: https://www.facebook.com/groups/effective.altruists/permalink/1071588459564177/)
Excuse me, what does EV stand for?
EV stands for expected value. (Though I think I actually meant expected utility, more precisely.)