(Edit: I no longer endorse negative utilitarianism or suffering-focused ethics.)
Thank you! Cross-posting my reply as well:
If we adopt more of a preference-utilitarian view, we end up producing contradictory conclusions in the same scenarios that I discussed in my original essay—you can’t claim that AMF saves 35 DALYs without knowing AMF’s population effects.
Shouldn’t this be fixed by negative preference utilitarianism? There could be value in not violating the “preference-equivalent” of dying one year earlier, but no value in creating additional “life-year” preferences. A YLL would then be equivalent to a violated life-preference. You could also avert YLLs by not having children, which seems plausible to me (if no one is born, whose preference is violated by dying from malaria?). Being born and dying from malaria would be worse than non-existence, so, referring to your “Bigger Problem” scenarios, A < B < C and C = D.
Regarding EV: I agree, there has to be a single ranking mapping world-states onto real numbers (or onto R^n if you drop the continuity axiom). So you’re right in the sense that the supposed GiveWell ranking of world-states that you assume doesn’t work out. I still think there might be a creative way to translate GiveWell’s focus on DALYs, disregarding population size, into a utility function over real-world states. In any case, I would broadly agree that AMF turns out to be less effective than previously thought, both from an SFE and a classical view :)
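As a side note on the R^n point: the textbook example is a lexical ranking, which violates the continuity axiom and so admits no single-real-number utility representation, yet still yields a complete ordering of world-states. A minimal sketch (the two components, aggregate suffering and aggregate wellbeing, are my own illustrative choice, not anything GiveWell actually uses):

```python
# Sketch of a lexical "R^2" ranking: world-states are compared first on one
# component (aggregate suffering) and only tie-broken on the second
# (aggregate wellbeing). Such a ranking is complete but discontinuous, so it
# cannot be collapsed into a single real-valued utility function.

def better(a, b):
    """Return True if world-state a is strictly better than b.
    Each state is a (suffering, wellbeing) pair; less suffering dominates."""
    if a[0] != b[0]:
        return a[0] < b[0]   # less suffering always wins, regardless of wellbeing
    return a[1] > b[1]       # equal suffering: tie-break on wellbeing

# No finite amount of extra wellbeing compensates for more suffering:
print(better((1, 0), (2, 10**9)))  # True
```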
(Edit: I no longer endorse suffering-focused ethics.)
Regardless of your stance on population ethics, I think it generally makes sense to take DALYs as a heuristic for how much good you can do with your money. Clearly, all population-ethical views consider improving the quality of existing lives (decreasing YLDs, Years Lived with Disability) a good thing. Preventing deaths, expressed as reducing YLLs (Years of Life Lost), is probably good overall as well, although different views will assign more or less value to it. I agree with Michael Dickens that if the value of longer lives comes from adding life-years (reducing YLLs) alone, this would indeed amount to something like total utilitarianism.
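For concreteness, the standard decomposition mentioned above is DALY = YLL + YLD, where YLL is deaths times remaining life expectancy and YLD is cases times disability weight times duration. A quick arithmetic sketch; the numbers below are invented for illustration, not real GBD figures:

```python
# Standard DALY decomposition: DALY = YLL + YLD.
# All input numbers below are made up for illustration.

def yll(deaths, life_expectancy_at_death):
    """Years of Life Lost: deaths times remaining life expectancy at death."""
    return deaths * life_expectancy_at_death

def yld(cases, disability_weight, avg_duration_years):
    """Years Lived with Disability: cases times disability weight times duration."""
    return cases * disability_weight * avg_duration_years

def dalys(deaths, life_expectancy, cases, weight, duration):
    return yll(deaths, life_expectancy) + yld(cases, weight, duration)

# Hypothetical burden: 10 deaths at 60 years of remaining life expectancy,
# plus 200 non-fatal cases with disability weight 0.05 lasting half a year.
print(dalys(10, 60, 200, 0.05, 0.5))  # 605.0
```

Different population-ethical views then amount to weighting the two terms differently: a total utilitarian counts the YLL term at full weight, while the views discussed above discount it or re-derive it from frustrated preferences.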
I think a steelman of GiveWell’s view would be that the YLL component of DALYs can in fact be motivated by other considerations, such as preference frustration or the suffering of bereaved parents. I believe that for reasons of cooperation between agents, it always makes sense to consider the preferences of other beings at least to some degree. Fulfilling already existing preferences seems like something most people would agree to, whether or not they would also like to bring additional fulfilled preferences into existence. Therefore, death is intrinsically bad according to most reasonable views, since it severely violates the preferences of existing beings. In that sense, decreasing YLLs should always be good, even for non-classical utilitarians.
Unlike Michael, I personally would be less reluctant to accept a ranking of world states that can’t be boiled down to a simple mathematical function of aggregated wellbeing, i.e. I’d be less put off by more “complex” moral views. And I’d be less willing to bite bullets like the repugnant conclusion, or the “very repugnant conclusion,” where a world with fewer but very happy individuals can be worse than a world containing any finite amount of extreme torture, as long as that torture is outweighed by an even greater number of beings living lives just barely worth living. Accepting this conclusion is quite a controversial stance in my eyes. Given anti-realism, it is absolutely unclear to me why GiveWell would have to adhere to a total utilitarian view. They could very well accept all the inconsistencies Michael mentions and still just maximize EV according to their own (complex) values. I agree that they should probably specify their view more explicitly, and it remains unclear what they are really optimizing for (see also http://blog.givewell.org/2008/08/22/dalys-and-disagreement/).
A candidate I favour that could match a lot of people’s intuitions would be something like negative idealized preference utilitarianism, or more generally any form of suffering-focused ethics (e.g. trying to reduce extreme involuntary suffering without doing anything crazy, or anything that would be considered really bad by other agents).
(cross-posted here: https://www.facebook.com/groups/effective.altruists/permalink/1071588459564177/)