The problem (often called the “statistical lives problem”) is even more severe: ex ante contractualism does not only prioritize identified people when the alternative is to potentially save very many people, or to save many people in expectation; it does the same when the alternative is to save many people for certain, so long as it is unclear which members of a sufficiently large population will be saved. For each individual, it then remains unlikely that they will be saved, resulting in diminished ex ante claims that are outweighed by the undiminished ex ante claim of the identified person. And that, I agree, is absurd indeed.
Here is a thought experiment for illustration: There are two missiles circling the Earth. If not stopped, one missile is certain to kill Bob (who is alone on a large field) and nobody else. The other missile is going to kill 1000 people, but it could be any 1000 of the X people living in large cities. We can only shoot down one of the two missiles. Which one should we shoot down?
Ex ante contractualism implies that we should shoot down the missile that would kill Bob, since he has an undiscounted claim while the X people in large cities all have strongly diminished claims, owing to the small probability that any given one of them would be killed by the missile. But obviously (I’d say) we should shoot down the missile that would kill 1000 people. (Note that we could change the case so that not 1000 but e.g. 1 billion people would be killed by the one missile.)
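To make the claim-discounting arithmetic concrete, here is a minimal numerical sketch of the missile case. It assumes a simple model in which each person's ex ante claim is weighted by their probability of being killed, and it plugs in an illustrative value for X (roughly the world population); both the model and the number are my own assumptions for illustration, not part of the original argument.

```python
# Illustrative ex ante claim arithmetic for the two-missile case.
# Model assumption: a person's ex ante claim strength equals their
# probability of being killed if we do nothing about "their" missile.

X = 8_000_000_000   # assumed number of people in large cities (illustrative)
killed = 1000       # number of people the second missile will kill

bob_claim = 1.0                  # Bob is killed with certainty
city_dweller_claim = killed / X  # each city dweller's chance of being killed

print(f"Bob's ex ante claim strength:   {bob_claim}")
print(f"Each city dweller's claim:      {city_dweller_claim:.2e}")

# Ex ante contractualism compares individual claims rather than summing
# them, so Bob's undiscounted claim beats every diminished claim,
# even though the other missile kills 1000 people for sure.
print("Strongest individual claim is Bob's:", bob_claim > city_dweller_claim)
```

The point of the sketch is just that the diminished claims stay tiny no matter how many people the second missile kills, so long as the population X grows in proportion.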
Or many people in expectation; the same goes when the alternative is to save many people for sure as long as it is unclear who of a sufficiently large population will be saved
Yep, you’re right. And importantly, this isn’t a far-off hypothetical: as Jaime alludes to, under most reasonable statistical assumptions AMF will save a great number of lives with probability close to 1, not just in expectation. The only problem is that you don’t know, ex ante, who those people are.
Yes indeed! When it comes to assessing the plausibility of moral theories, I generally prefer to hold “all else equal” to avoid potentially distorting factors, but the AMF example comes close to being a perfect real-world example of (what I consider to be) the more severe version of the problem.