Prioritarianism seems at least sorta reasonable to me.* But even if all other utilitarians agreed with me (see Larks’s comment), I think any kind of weighting along these lines opens up such an absolutely gigantic can of worms, for relatively little benefit, that it isn’t a high priority to try to work explicitly prioritarian weights into our evaluations. The slippery-slope argument isn’t just a minor complaint. It’s a major problem, since weighting some QALYs more than others (even in small, sensible ways) would seem to break the Schelling point of egalitarian moral concern and create an inherently political battleground in its place.
So, I see the appeal of prioritarianism—it definitely seems at least “reasonable” to me. But the WHO’s age-based weighting system also seems reasonable enough—accounting for societal productivity by weighting QALYs is a bit of a kludge, but it might be a convenient way to capture secondary effects and externalities of an intervention. Plus, it fits with a commonly-held human intuition that some deaths (of young people in their prime) are more tragic than others (e.g., of the elderly), although most of this is surely already captured by normal QALYs. More speculatively, animal welfare activists often compare animals to humans by factors like neuron count, as a proxy for estimating animals’ richness of conscious experience. Should EAs take this logic further, and weight people by intelligence (or perhaps by their meditative/spiritual attainments?? or by whether they happen to have an optimistic or pessimistic personality??) as a proxy for variation in the quality of conscious experience among different humans? This too seems at least a “reasonable” idea to me, except for the obvious fact that this would open up an incredibly toxic political battleground that could potentially destroy the EA movement, all for what would ultimately be a very minor weighting adjustment that would probably barely nudge our estimates of which causes are most promising. (Or flip it around… if we extend prioritarianism to animals, does it totally obliterate all human welfare concerns in favor of reducing insect suffering, Tomasik-style? What distinguishes prioritarianism as you are thinking of it from suffering-focused ethical systems?)
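To make the “very minor weighting adjustment” point concrete, here’s a toy sketch of what a mild prioritarian multiplier on QALYs might look like. This is entirely my own illustration (the weighting function, the 0–1 well-being scale, and every number are invented, not anyone’s actual methodology):

```python
# Toy sketch: a mild prioritarian weight on QALY gains.
# All functions and numbers here are invented for illustration.

def prioritarian_weight(baseline_wellbeing, strength=0.2):
    """Weight QALY gains more heavily for worse-off recipients.

    baseline_wellbeing: 0.0 (worst off) to 1.0 (best off), a made-up scale.
    strength: 0 recovers plain utilitarianism ("all QALYs are equal").
    """
    return 1.0 + strength * (1.0 - baseline_wellbeing)

def weighted_qalys(qaly_gain, baseline_wellbeing, strength=0.2):
    return qaly_gain * prioritarian_weight(baseline_wellbeing, strength)

# Two hypothetical interventions with the same raw QALY gain:
print(weighted_qalys(10.0, baseline_wellbeing=0.8))  # 10.4 (well-off recipients)
print(weighted_qalys(10.0, baseline_wellbeing=0.2))  # 11.6 (badly-off recipients)
```

With a modest weight like this, the two interventions differ by about 12%: real, but small compared to the order-of-magnitude cost-effectiveness gaps that usually drive cause prioritization, which is why I’d expect it to barely move the rankings.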
The problem here seems similar to saying, “We should weight people’s votes, so that parents get more votes than the childless (to represent the future interests of their children), or so that people living in a particular state get more of a vote on policies that will especially impact their state.” Reasonable! In fact, in some situations I kinda wish we could do that! But if not gone about carefully, this could destroy the Schelling point of one-person-one-vote and create an instant political battlefield of zero-sum conflict, where everyone feels that their voting power is up for grabs and they have to fight to protect their interests.
What I am arguing is that there is something inherent to the situation with QALYs or votes (something about game theory, or Rawls’s veil of ignorance and societal contracts… but I can’t figure out how to define it succinctly), which gives the slippery-slope argument much more bite than in other policy contexts where the landscape might be more inherently “thermostatic”. (Like arguing that if we do reasonable government intervention X, pretty soon we will be doing crazy socialist program Y.)
On the other hand, of course, weighting and prioritizing things is often important—the whole EA movement has done an incredible amount of good thanks to the realization that you can and should be willing to do the math and prioritize some causes over others! In retrospect, that’s obviously worth ruffling some feathers among charitable causes deemed lower-priority (like funding the arts, supporting animal shelters, or helping homeless people in rich-world nations). Personally I am a big fan (probably too big a fan) of making clever little adjustments here and there based on esoteric philosophical considerations, and it’s one of the things that I find fun about Effective Altruism. But some weighting ideas are just intrinsically much more political than others, and breaking the powerful simplicity and symmetry of “all QALYs are equal” would be a big step.
Rather than advocate that we adopt prioritarianism as a fundamental moral consideration right away, I would want to take any changes very cautiously and do a lot of research ahead of time—I would be happy to see more research done on what exactly a prioritarian weighting scheme would look like, how big the weights would be for different categories, etc. I’d also like to see some attempts to mitigate the slippery-slope problems by finding a framing for the argument where certain key adjustments seem obvious to add in, but there isn’t a natural path left open for adding endless special cases. If we were thinking of breaking “one person one vote” in favor of some cool system of liquid democracy with amorphous overlapping jurisdictions or something, we’d want to really work out our theory ahead of time and make very clear what kinds of vote-weighting are acceptable and what kinds are verboten.
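As a purely hypothetical illustration of that last idea (a framing with no natural path for endless special cases), a weighting scheme could pre-register a closed whitelist of adjustments with hard caps, so that adding a new special case means reopening the whole framework rather than quietly appending one more weight. All categories and numbers below are invented:

```python
# Hypothetical: a weighting scheme that whitelists its adjustments up front.
# Every category, multiplier, and cap below is invented for illustration.

ALLOWED_ADJUSTMENTS = {
    # name: (multiplier, rationale)
    "severe_baseline_deprivation": (1.15, "mild prioritarian bump"),
}
MAX_TOTAL_MULTIPLIER = 1.25  # hard cap: no stacking toward extreme weights

def apply_adjustments(qaly_gain, tags):
    """Apply only pre-registered adjustments; unknown tags are an error."""
    multiplier = 1.0
    for tag in tags:
        if tag not in ALLOWED_ADJUSTMENTS:
            raise ValueError(f"No such adjustment: {tag!r}")  # no ad-hoc cases
        multiplier *= ALLOWED_ADJUSTMENTS[tag][0]
    return qaly_gain * min(multiplier, MAX_TOTAL_MULTIPLIER)

print(apply_adjustments(10.0, ["severe_baseline_deprivation"]))  # 11.5
```

The point of the hard cap and the closed dictionary is structural: the scheme can say yes to a few key adjustments while making every further special case an explicit, visible amendment rather than a slippery-slope default.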
*[Aside: I feel the appeal of prioritarianism, but I’m also suspicious that my intuition—including my whole sympathy towards social equality and helping the unfortunate—comes from the empirical fact that it is often in practice much easier to help the less well-off than to help those who already have great lives. If it were actually almost always harder to help the less well-off, and this had been true for hundreds of years, maybe my cultural/moral intuitions about compassion and who to help would be totally different?? Hard to really imagine what that world would look like, but interesting to contemplate.]