I think that even if you subscribe to a wholly utilitarian value system, there are a significant number of people, including funders and policymakers, who might hold a prioritarian value set, so this is an important area to consider.
I think you are significantly understating how many views contradict this form of prioritisation.
It is not merely the case that utilitarians would disagree with this view; there are a wide range of ethical and political systems that would do so. This includes communitarian views, which hold that people have stronger moral obligations towards communities they are a member of. If we consider your example:
If you were to ask a room of people to pick between two interventions, which both cost the same amount of money, assuming all else is equal:
A gain of 9 QALYs for someone living in Nigeria, which has an average life expectancy of 55.
A gain of 10 QALYs for someone living in the UK, which has an average life expectancy of 81.
I would posit that most people would pick the former. …
...my guess is that if we consistently applied this sort of popular-intuition adjustment to our cost-effectiveness evaluations, we would actually end up massively less likely to fund interventions in Nigeria. If you look at British people’s donations, or the evaluations made by HMG, there is a clear nationalist bias: the NHS is willing to spend far more to save a Brit than the FCDO is to save a foreigner, and ‘cut the NHS to increase foreign aid’ is not a politically popular position.
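To put rough numbers on the scale of that asymmetry: NICE conventionally uses a threshold of around £20,000–£30,000 per QALY for NHS treatments, while the most cost-effective global health interventions (GiveWell-style top charities) are commonly estimated at something like $100–$150 per DALY. Treating QALYs and DALYs as roughly comparable for this purpose, and with all figures below being rough assumptions rather than official statistics:

```python
# Back-of-the-envelope sketch; every figure here is a rough assumption.
nhs_threshold_gbp_per_qaly = 25_000  # mid-range NICE threshold
aid_usd_per_daly = 125               # rough figure for top global health charities
gbp_to_usd = 1.25                    # illustrative exchange rate

# Implied ratio of willingness-to-pay for a domestic vs foreign healthy year
print(nhs_threshold_gbp_per_qaly * gbp_to_usd / aid_usd_per_daly)  # ~250x
```

Even if these figures are off by a factor of a few, the revealed preference points firmly in the nationalist direction, not the prioritarian one.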
Even if it were the case that most non-utilitarians agreed with this particular adjustment, it’s not clear what conclusion utilitarians should draw from this. If everyone else is (from their perspective) biased in one direction, perhaps utilitarians should focus their efforts in the opposite direction, because it will be more neglected!
I also think there are strong strategic reasons to avoid making such an adjustment. Cost-effectiveness estimates aspire to a position of relative neutrality: that they are in some sense ‘the view of the universe’. Trying to adjust them based on ‘equity’ is inherently controversial because there are many different views of what this consists in. In the 1980s people used equity concerns to suggest that homosexuals were ‘at fault’ for HIV/AIDS, and hence that research and treatment should not be a priority. More recently equity concerns have been used to justify a number of explicitly racist policies around medical access (though some of these, like Utah’s, have been repealed after legal challenge), and by the CDC to support de-prioritising age in vaccine prioritisation, even though this would increase deaths in all groups. Such adjustments generate conflict rather than focusing attention on solving the problem for everyone.
EAs use cost-effectiveness evaluations because we want to treat everyone equally; making adjustments to them based on controversial political stances seems somewhat undesirable. These adjustments also seem deeply theoretically undermotivated: for example, the Asaria post you linked suggests treating sex differences in life expectancy as fair but ethnicity differences as unfair:
We might make the value judgment that differences in health due to sex are fair, whereas differences in health due to IMD and ethnicity are unfair
There doesn’t seem to be any justification for this at all… except perhaps that one is politically popular and the other is politically unpopular. Accepting these adjustments seems to invite moral gerrymandering, where people attempt to re-define the morally salient group in order to bring themselves some advantage, as we have seen with US racial categorisations.
Your example formula, adjusting by life expectancy, also suggests this would be a big issue:
DCEA = cost effectiveness * life expectancy/100
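To fix ideas, here is a minimal sketch of how I’m reading that formula, assuming ‘cost effectiveness’ means cost per QALY (so a lower adjusted figure looks more attractive); the £9,000 price tag is purely illustrative, chosen to match your Nigeria/UK example:

```python
def dcea(cost_per_qaly: float, life_expectancy: float) -> float:
    # Your quoted formula: discounts the effective cost for low-life-expectancy
    # groups, so a lower adjusted figure looks more attractive.
    return cost_per_qaly * life_expectancy / 100

COST = 9_000  # hypothetical shared price of both interventions in your example
print(dcea(COST / 9, 55))   # Nigeria: 1000 * 0.55 = 550.0 -> prioritised
print(dcea(COST / 10, 81))  # UK:       900 * 0.81 = 729.0 -> de-prioritised
```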
Traditionally we apply this analysis at the individual level, but doing so here seems to give ridiculous results. Saving the life of a baby born in severe distress, with perhaps only minutes to live, would be hundreds of thousands of times more important than saving that of an adult. Intuitively it seems plausible that babies are unusually morally valuable, but perhaps not quite hundreds of thousands of times more so. Similarly, an elderly person with a very low life expectancy, being kept alive for a very short period at great cost, could look like an attractive intervention (or at least not an unattractive one) precisely because their situation was so dire.
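Putting illustrative numbers on this, under the same reading of the formula (a baby with perhaps thirty minutes left versus an adult with forty years remaining):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

def dcea(cost_per_qaly: float, life_expectancy: float) -> float:
    return cost_per_qaly * life_expectancy / 100

baby = dcea(1_000, 30 / MINUTES_PER_YEAR)  # ~30 minutes to live: ~0.00057
adult = dcea(1_000, 40)                    # 40 years remaining:   400.0
print(adult / baby)  # ~700,000: the baby's case looks ~700,000x more attractive
```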
So I’m guessing you’d want to apply this at the group level: adjusting for the life expectancy of the group/region before analysing the benefits for the individual. But then would moving someone from one group to another change how valuable it is to help them? For example, we could take a very sick person from an area with low life expectancy and move them into a richer, healthier area. Even if they remain just as sick, and it costs the same amount to heal them, this metric would suggest we should de-prioritise the intervention. Conversely, if I were refused medicine, I could qualify by moving from the West to Bangladesh, at which point I would benefit from the ambient levels of poverty around me. This also seems quite perverse: what ultimately matters is the person and their wellbeing, not the backdrop. I should not be able to morally gerrymander myself into a higher position of moral desert based on group definitions or physical proximity.
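Again with made-up numbers, the group-level version scores the same patient and the same £5,000-per-QALY treatment differently purely on the basis of where they happen to live:

```python
def dcea(cost_per_qaly: float, group_life_expectancy: float) -> float:
    return cost_per_qaly * group_life_expectancy / 100

# Identical patient, identical illness, identical treatment cost:
print(dcea(5_000, 55))  # while living in the low-LE region:  2750.0 -> fund
print(dcea(5_000, 81))  # after moving to the high-LE region: 4050.0 -> de-prioritise
```

Nothing about the patient or the treatment has changed; only the group average they are bucketed under.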