Contractualism doesn’t allow aggregation across individuals. If each person has a 0.3% chance of averting death with a net, then any one of those individuals’ claims is still weaker than the claim of the person who will die with probability ~1. Scanlon’s theory then says to save the one person.
Yeah, Scanlon’s theory doesn’t differentiate even between one strong claim and many only slightly weaker claims. The authors of this post try to rescue the theory with a small relaxation: a high enough probability, or a large enough number, of morally almost-as-bad things can be treated as worse than one very bad and certain thing.
We find it plausible that if Scanlon’s argument for saving the greater number succeeds, then, for some n, you ought to save the n. Here’s our thinking: First, imagine a version of Death/Paraplegia in which n = 1. In this case, you ought to save Nora outright. Now imagine a version in which n = 2. In this case, if you were to save Nora, then, plausibly, the additional person on the side of the many has a complaint, for you would thereby treat the case as though the additional person weren’t even there. (Recall that we’re supposing that Scanlon’s argument described above succeeds.) So, you ought to do something appropriately responsive to the additional person’s presence. Perhaps this will take the form of flipping a coin to determine whether to save Nora or the many; perhaps it will take the form of running a lottery heavily weighted on Nora’s side—the details won’t matter here. What matters is that whatever such an act would be, it would presumably be “closer” to saving the n than what it was permissible to do when n = 1 (namely, saving Nora outright).
But now imagine iterating this process over and over, increasing the size of n by 1 each time. Eventually, we think, you’ll get to a point where outright saving the n is the only acceptable thing to do. This suggests that Scanlonian contractualism can accommodate some aggregation of lesser bads, at least if Scanlon’s argument for saving the greater number is successful. [emphasis mine]
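To make the shape of that iteration concrete, here’s a toy model (my construction, not the authors’): suppose the permissible act at each n is a lottery over whom to save, with the weight on Nora shrinking as n grows, until the lottery is so lopsided that saving the n outright is the only acceptable act. The weight function and the outright cutoff below are illustrative assumptions, not anything the quoted argument commits to.

```python
# Toy model of the iteration argument (illustrative assumptions only).
# At each n, the permissible act is modeled as a lottery whose weight
# on Nora shrinks as the many grow.

def nora_weight(n: int, strength_ratio: float = 0.9) -> float:
    """Chance the lottery saves Nora with n people on the other side.

    strength_ratio is a made-up parameter: how strong each of the n
    claims (paraplegia) is relative to Nora's claim (death).
    """
    if n <= 1:
        return 1.0  # per the argument: at n = 1, save Nora outright
    return 1.0 / (1.0 + strength_ratio * n)

OUTRIGHT_CUTOFF = 0.05  # assumed: below this, just save the n

for n in [1, 2, 5, 20, 100]:
    w = nora_weight(n)
    if w == 1.0:
        verdict = "save Nora outright"
    elif w < OUTRIGHT_CUTOFF:
        verdict = "save the n outright"
    else:
        verdict = f"weighted lottery, P(save Nora) = {w:.2f}"
    print(f"n = {n:3d}: {verdict}")
```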
But while I could imagine it going through for preventing 2 people from dying with 80% probability vs. 1 person with 100%, I don’t think it goes through for ice cream, or for AMF. A system that doesn’t natively do aggregation has a lot of trouble explaining why a large number of people, each with a 0.3% chance of counterfactually dying, have as much or more moral claim to your resources as a single identified person with a ~100% chance of counterfactually dying.
(As a side note, I try to ground my hypotheticals in questions that readers are likely to have first-hand familiarity with, or can easily visualize themselves facing. Few if any people on this forum have experience with obscenely high numbers of dust specks, or with missile high command. Many people in this conversation have experience with donating to AMF and/or eating ice cream.)
One way you could recover some aggregation is by defining which kinds of claims are “relevant” to one another and hence aggregatable. If X is relevant to Y, then enough instances of X (or of any other claims relevant to Y) can outweigh Y. Deaths are relevant to other deaths, and we could (though need not) say that this holds no matter the probability. So multiple 0.3-percentage-point differences in the probability of death can be aggregated to outweigh a single 100-percentage-point difference.
Some serious debilitating conditions could be relevant to death too, even if less severe.
On the other hand, ice cream is never relevant to death, so there’s no trade-off between them. Headaches (a common example) wouldn’t be relevant to death either.

I think this is the idea behind one approach to limited aggregation, specifically Voorhoeve, 2014 (https://doi.org/10.1086/677022).
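As a mechanical sketch of how such a relevance rule could work (my own toy formalization, not Voorhoeve’s model): give each claim a severity, count a lesser claim as relevant to a greater one only when it falls within some severity ratio of it, and let only relevant claims aggregate. Treating a claim’s strength as severity times probability is a further assumption on my part.

```python
# Toy formalization of relevance-limited aggregation (mine, not
# Voorhoeve's). Strength = severity * probability; lesser claims
# aggregate against a greater claim only if individually relevant.

from dataclasses import dataclass

@dataclass
class Claim:
    severity: float     # how bad the outcome is (death = 1.0)
    probability: float  # chance the outcome turns on our choice
    count: int = 1      # number of people holding this claim

    @property
    def strength(self) -> float:
        return self.severity * self.probability

RELEVANCE_RATIO = 0.1  # assumed: relevant iff within 10x in severity

def outweighs(many: Claim, one: Claim) -> bool:
    """Do `many` lesser claims jointly outweigh `one` greater claim?"""
    if many.severity < RELEVANCE_RATIO * one.severity:
        return False  # irrelevant: no count can outweigh `one`
    return many.count * many.strength > one.strength

certain_death = Claim(severity=1.0, probability=1.0)

# ~334 bednet recipients at 0.3% each: 334 * 0.003 > 1, so in
# aggregate they outweigh one certain death.
bednets = Claim(severity=1.0, probability=0.003, count=334)
print(outweighs(bednets, certain_death))    # True

# Ice cream's severity sits below the relevance threshold, so no
# count, however astronomical, lets it outweigh a death.
ice_cream = Claim(severity=1e-6, probability=1.0, count=10**12)
print(outweighs(ice_cream, certain_death))  # False
```

The design choice doing the work is that relevance is judged on severity alone, so a low probability weakens a death-claim’s strength but never makes it irrelevant to other deaths; that implements the “no matter the probability” option above.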
But this seems kind of wrong as stated, or at least it needs more nuance.
There’s a kind of sequence argument to worry about here, over increasingly strong claims. Is ice cream relevant to 1 extra second of life lost for an individual? Yes. And if ice cream is relevant to n extra seconds of life lost for an individual, it seems unlikely that 1 more second on top will make a difference to its relevance. So by induction, ice cream should be relevant to any number of extra seconds of life lost to an individual.
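In symbols, with R(n) standing for “ice cream is relevant to n extra seconds of life lost for an individual”, the argument has the standard sorites form:

```latex
% Sorites form of the sequence argument:
% base case, inductive step, universal conclusion.
\[
  R(1), \qquad
  \forall n \,\bigl( R(n) \rightarrow R(n+1) \bigr)
  \;\;\therefore\;\;
  \forall n \, R(n)
\]
```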
However, the inductive step could fail somewhere (with high probability). Any particular place for it to fail seems kind of arbitrary, but we could just have moral uncertainty about where the cutoff lies.
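One way to picture that (a toy numerical sketch with a made-up prior, not a claim about the true numbers): even if there is a sharp cutoff N past which ice cream stops being relevant, uncertainty about where N sits makes our credence in R(n) fall off smoothly rather than jump.

```python
# Toy sketch: moral uncertainty over a sharp relevance cutoff.
# Suppose R(n) holds iff n <= N for some unknown cutoff N, with a
# (made-up) geometric prior on N. Each hypothesis about N is sharp,
# but the credence that R(n) holds declines smoothly in n.

P_STOP = 1e-6  # assumed per-second prior probability the cutoff is here

def credence_relevant(n_seconds: int) -> float:
    """P(R(n)) = P(cutoff N >= n) under the geometric prior."""
    return (1 - P_STOP) ** n_seconds

for n in [1, 3600, 86_400, 31_536_000]:  # 1 s, 1 hour, 1 day, 1 year
    print(f"{n:>11,d} seconds: P(relevant) = {credence_relevant(n):.4f}")
```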
Also, there are nonarbitrary (but uncertain) places where it could fail for this specific sequence. Some people have important life goals that are basically binary, e.g. getting married. Losing enough years of life will prevent those goals from being fulfilled. So rather than a cutoff defined directly on seconds of life lost, or on death itself, it could be such preferences that give us the cutoffs.
Still, preference strength plausibly comes in many different degrees, and many preferences are themselves satisfiable to many different degrees, so we could run another sequence argument over preference strengths or degrees of satisfaction.
Yeah, I feel that sometimes theories get really convoluted and ad hoc in an attempt to avoid unpalatable conclusions. This seems to be one of those times.
I can give Scanlon a free pass when he says that, under his theory, we should save two people from certain death rather than one person from certain death, because the ‘additional’ person would otherwise have some sort of complaint. However, when the authors of this post say, for a similar reason, that the theory implies it’s better to do an intervention that will save two people with probability 90% rather than one person with probability 100%, I just think they’re undermining the theory.
The logic is that the ‘additional’ person in the pair has a complaint because you’re acting as if they aren’t there. But you aren’t acting as if they aren’t there—you’re noticing they have a lesser claim than the single individual and so are (perhaps quite reluctantly) accommodating the single individual’s larger claim. Which is kind of the whole point of the theory!