Thank you for your post! I am an IDinsight researcher who was heavily involved in this project and I will share some of my perspectives (if I’m misrepresenting GiveWell, feel free to let me know!):
My understanding is that GiveWell wanted multiple perspectives to inform their moral weights, including a utilitarian perspective of respecting beneficiaries’/recipients’ preferences, as well as others (examples here). Even though beneficiary preferences may not be the only factor, they are an important one, and one where empirical evidence was lacking before the study, which is why GiveWell and IDinsight decided to do it.
Also, the overall approach reflects the fact that, because it’s unrealistic to understand every beneficiary’s preferences and target aid at the personal level, we and GiveWell had to come up with aggregate numbers to be used across all GiveWell top charities. (In the future, it may be possible to break these down further, e.g. by geography, as new evidence emerges. Also, note that we focus on preferences over outcomes (saving lives vs. increasing income) rather than over interventions, and I explain here why we and GiveWell think that’s the better approach given our purposes.)
My understanding is that ideally GiveWell would like to know children’s own preferences (e.g. their value of statistical life, VSL) if those were valid (e.g. rational) and could be measured, but in practice that could not be done, so we tried to use other things as proxies for them (a stylized numerical sketch of both proxies follows below), e.g.:
- Measuring a “child VSL” as parents’/caretakers’ willingness to pay (WTP) to reduce their children’s mortality risk (rather than their own, which is how standard VSL is defined)
- Taking adults’ VSL and adjusting it by the relative values adults place on individuals of different ages (there were others).
(Something else one could do here is estimate own VSL, i.e. WTP to reduce one’s own mortality risk, as a function of age. We did not have a large enough sample to do this. If I remember correctly, studies that have looked at this found conflicting evidence on the relationship between VSL and age.)
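For concreteness, here is a minimal sketch of the arithmetic behind the two proxies. All numbers are illustrative assumptions I made up for this comment, not our survey estimates:

```python
# Stylized sketch of two proxies for a "child VSL" (all numbers illustrative).

# Proxy 1: parents'/caretakers' WTP to reduce their child's mortality risk.
# Standard VSL arithmetic: VSL = WTP / (reduction in mortality probability).
wtp_for_child = 40.0       # hypothetical WTP ($) for a risk reduction to the child
risk_reduction = 2 / 1000  # hypothetical 2-in-1,000 reduction in mortality risk
child_vsl_from_wtp = wtp_for_child / risk_reduction
print(f"Proxy 1 child VSL: ${child_vsl_from_wtp:,.0f}")  # $20,000

# Proxy 2: scale adults' own VSL by the relative value adults place on
# individuals of different ages (hypothetical age weights).
adult_vsl = 15_000.0  # hypothetical adult VSL from own-risk WTP questions
relative_value = {"child": 1.4, "adult": 1.0}  # hypothetical age weights
child_vsl_from_weights = adult_vsl * relative_value["child"] / relative_value["adult"]
print(f"Proxy 2 child VSL: ${child_vsl_from_weights:,.0f}")  # $21,000
```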
Obviously neither of these is perfect: we have little idea how close our proxies are to the true object of interest, children’s WTP to reduce their own mortality (if that is even a valid object, and it’s unclear what to do if not, which gets into tricky philosophical issues). But both approaches we tried gave a higher value for children’s lives than for adults’ lives, so we concluded it would be reasonable to place a higher value on children’s lives if donors’/GiveWell’s moral weights are largely or solely informed by beneficiaries’ preferences. But you are right that the philosophical foundation isn’t solid. (Within the scope of the project we had to optimize for informing practical decisions, and we are not professional philosophers, but I agree that more discussion of this by philosophers would be helpful.)
Finally, another tricky issue that came up, as you mentioned as well, was what to do with “extreme” preferences (e.g. always choosing to save lives). Two related, more fundamental questions are:
1. If we want to put some weight on beneficiaries’ views, should we use “preferences” (in the sense of what they prefer to happen to themselves, e.g. VSL for self) or “moral views” (what they think should happen to their community)? For instance, people seem to value lives a lot more highly in the latter case, although one nontrivial driver of the difference is that the moral-views questions were framed without uncertainty. That was a practicality we couldn’t get around: including uncertainty in an already complex hypothetical scenario trading off lives and cash transfers seemed extremely confusing to respondents. (A stylized illustration of the two framings follows after this list.)
2. If one does want to put some weight on their moral views (I don’t think that would be consistent with utilitarianism; I’m not sure what philosophical view it is, but it certainly seems not unreasonable to put some weight on it), what do you do if you disagree with their view? E.g. I probably wouldn’t put weight on views that were sexist or racist; but what about views holding that you should value saving lives above increasing income no matter the tradeoff?
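To make the framing difference in (1) concrete, here is a minimal sketch of how each question type yields an implied value of life. The amounts, probabilities, and switch point are illustrative assumptions, not our actual survey instrument:

```python
# Illustrative contrast between the two elicitation framings (numbers made up).

# "Preference" framing: WTP for a reduction in one's own mortality risk.
# The risk reduction is explicit, so uncertainty is built into the question.
own_wtp = 30.0  # hypothetical WTP ($) for the risk reduction below
own_risk_reduction = 2 / 1000
implied_value_preference = own_wtp / own_risk_reduction
print(f"Preference framing implied value: ${implied_value_preference:,.0f}")  # $15,000

# "Moral view" framing: choose between a program that saves one life for
# certain and a cash-transfer program of a given size. The respondent's
# switch point is read directly as the implied value, with no probability
# weighting, which is one structural reason the two framings can diverge.
switch_point = 100_000.0  # hypothetical transfer size at which respondent switches
print(f"Moral-view framing implied value: ${switch_point:,.0f}")  # $100,000
```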
I don’t have a good answer, and I’m really curious to see philosophical arguments here. My guess is that respecting recipient communities’ moral views would be appealing to some in the development sector, and I wonder what should be done when that comes into conflict with other goals, e.g. maximizing their utility / satisfying their preferences.