I think it may be important to draw a theory/practice distinction here. It seems completely undeniable in theory (or in terms of what is fundamentally preferable) that instrumental value matters, and so we should prefer that more productive lives be saved (otherwise you are implicitly saying to those who would be helped downstream that they don't matter). But we may not trust real-life agents to exercise good judgment here, or we may worry that the attempts would reinforce harmful biases, and so the mere attempt to optimize here could be expected to do more harm than good.

As explained on utilitarianism.net:

"[T]here are many cases in which instrumental favoritism would seem less appropriate. We do not want emergency room doctors to pass judgment on the social value of their patients before deciding who to save, for example. And there are good utilitarian reasons for this: such judgments are apt to be unreliable, distorted by all sorts of biases regarding privilege and social status, and institutionalizing them could send a harmful stigmatizing message that undermines social solidarity. Realistically, it seems unlikely that the minor instrumental benefits to be gained from such a policy would outweigh these significant harms. So utilitarians may endorse standard rules of medical ethics that disallow medical providers from considering social value in triage or when making medical allocation decisions."
But these instrumental reasons to be cautious of over-optimization don't imply that we should completely ignore the fact that saving people has instrumental benefits that saving animals doesn't.
So I disagree that accepting capacity-based arguments for GHD over AW forces one to also optimize for saving productive over unproductive people, in a fine-grained way that many would find offensive. The latter decision-procedure risks extra harms that the former does not. (I think recognition of this fact is precisely why many find the idea offensive.)