Note that the AMF example does not quite work, because if each net has a 0.3% chance of preventing death, and all are independent, then with 330M nets you are >99% sure of saving at least ~988k people.
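To make that figure easy to check, here is a quick back-of-the-envelope using the normal approximation to the binomial (the 0.3%-per-net and 330M-net figures come from this comment; independence is the comment's own assumption):

```python
from math import sqrt

# Sanity check of the claim above (normal approximation to the binomial).
n, p = 330_000_000, 0.003
mean = n * p                 # expected lives saved: 990,000
sd = sqrt(n * p * (1 - p))   # binomial standard deviation: ~993

# The number of lives you are ~99% sure to exceed is mean - z * sd,
# with z ~ 2.326 for the 99th percentile of a standard normal.
threshold_99 = mean - 2.326 * sd
print(f"expected = {mean:,.0f}, sd = {sd:,.0f}, 99% lower bound ~ {threshold_99:,.0f}")
```

With independent nets the distribution is extremely tight around the mean, which is why the lower bound lands so close to the ~988k cited.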
Contractualism doesn’t allow aggregation across individuals. If each person has a 0.3% chance of averting death with a net, then any one of those individuals’ claims is still less strong than the claim of the person who will die with probability ≈1. Scanlon’s theory then says save the one person.
Yeah, Scanlon’s theory doesn’t allow for differentiation even between a strong claim and many only slightly weaker claims. The authors of this post try to rescue the theory with a small relaxation: you can treat sufficiently high probabilities and numbers of morally almost-as-bad things as worse than one very bad and certain thing.
We find it plausible that if Scanlon’s argument for saving the greater number succeeds, then, for some n, you ought to save the n. Here’s our thinking: First, imagine a version of Death/Paraplegia in which n = 1. In this case, you ought to save Nora outright. Now imagine a version in which n = 2. In this case, if you were to save Nora, then, plausibly, the additional person on the side of the many has a complaint, for you would thereby treat the case as though the additional person weren’t even there. (Recall that we’re supposing that Scanlon’s argument described above succeeds.) So, you ought to do something appropriately responsive to the additional person’s presence. Perhaps this will take the form of flipping a coin to determine whether to save Nora or the many; perhaps it will take the form of running a lottery heavily weighted on Nora’s side—the details won’t matter here. What matters is that whatever such an act would be, it would presumably be “closer” to saving the n than what it was permissible to do when n = 1 (namely, saving Nora outright).
But now imagine iterating this process over and over, increasing the size of n by 1 each time. Eventually, we think, you’ll get to a point where outright saving the n is the only acceptable thing to do. This suggests that Scanlonian contractualism can accommodate some aggregation of lesser bads, at least if Scanlon’s argument for saving the greater number is successful.[emphasis mine]
But while I could imagine it going through for preventing 2 people from dying with 80% probability vs 1 person with 100%, I don’t think it goes through for ice cream, or AMF. A system that doesn’t natively do aggregation has a lot of trouble explaining why many people, each with a 0.3% chance of counterfactually dying, have as much or more moral claim to your resources as a single identified person with ~100% chance of counterfactually dying.
(As a side note, I try to ground my hypotheticals in questions that readers are likely to have first-hand familiarity with, or can easily visualize themselves in that position. Either very few or literally no one in this forum has experience with obscenely high numbers of dust specks, or missile high command. Many people in this conversation have experience with donating to AMF, and/or eating ice cream).
One way you could do this is by defining what kinds of claims would be “relevant” to one another and aggregatable. If X is relevant to Y, then enough instances of X (or any other relevant claims) can outweigh Y. Deaths are relevant to other deaths, and we could (although need not) say that should hold no matter the probability. So multiple 0.3 percentage point differences in the probability of death can be aggregated and outweigh a 100 percentage point difference.
Some serious debilitating conditions could be relevant to death, too, even if less severe.
On the other hand, ice cream is never relevant to death, so there’s no trade off between them. Headaches (a common example) wouldn’t be relevant to death, either.
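One way to picture how such a relevance rule could work, as a toy model (the severity scores and the relevance threshold below are entirely invented for illustration; nothing here is from Voorhoeve or the post’s authors):

```python
# Toy sketch of a "relevance" rule. A claim is a (severity, probability)
# pair; a claim counts toward outweighing a rival claim only if its
# severity is within some fraction of the rival's severity.

RELEVANCE_RATIO = 0.5  # hypothetical: claims below half the rival's severity are ignored

def outweighs(claims, rival):
    """Do the aggregated *relevant* claims outweigh the single rival claim?"""
    rival_weight = rival[0] * rival[1]
    relevant = [s * p for s, p in claims if s >= RELEVANCE_RATIO * rival[0]]
    return sum(relevant) > rival_weight

DEATH, HEADACHE = 100.0, 0.01  # invented severity scores

# Many small probability differences of the same harm (death) can aggregate:
# 1000 claims of 0.3pp each outweigh one claim of 100pp.
print(outweighs([(DEATH, 0.003)] * 1000, (DEATH, 1.0)))    # True
# ...but no number of irrelevant claims (headaches, ice cream) ever can.
print(outweighs([(HEADACHE, 1.0)] * 10**6, (DEATH, 1.0)))  # False
```

The design choice doing the work is that the relevance filter runs on severity alone, before any probabilities are multiplied in, which matches the “no matter the probability” option above.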
But this seems kind of wrong as stated, or at least it needs more nuance.
There’s a kind of sequence argument to worry about here, of increasingly strong claims. Is ice cream relevant to 1 extra second of life lost for an individual? Yes. If ice cream is relevant to n extra seconds of life lost for an individual, it seems unlikely 1 more second on top for the individual will make a difference to its relevance. So by induction, ice cream should be relevant to any number of extra seconds of life lost to an individual.
However, the inductive step could fail (with high probability). Where it could fail seems kind of arbitrary, but we could just have moral uncertainty about that.
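One way to make the parenthetical “with high probability” precise (my gloss, not the commenter’s): even if each inductive step is individually very likely to hold, the chance that every step in a long chain holds can still be vanishingly small.

```latex
% Let R(n) abbreviate "ice cream is relevant to n extra seconds of life lost".
% Suppose R(1) holds, and each step R(n) => R(n+1) holds only with probability
% 1 - epsilon, independently. Then the chance the whole chain of N steps
% survives is
\Pr\bigl[R(1) \wedge R(2) \wedge \cdots \wedge R(N)\bigr]
  \le (1 - \varepsilon)^{N-1} \xrightarrow[N \to \infty]{} 0
% so for large N the induction almost surely fails somewhere, even though no
% individual step looks like the culprit.
```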
Also, there are nonarbitrary (but uncertain) places it could fail for this specific sequence. Some people have important life goals that are basically binary, e.g. getting married. Losing enough years of life will prevent those goals from being fulfilled. So, rather than some cutoff on seconds of life lost or death itself, it could be such preferences that give us cutoffs.
Still, preference strength plausibly comes in many different degrees, and many preferences are themselves satisfiable to many different degrees, so we could make another sequence argument over preference strengths or differences in degree of satisfaction.
Yeah I feel that sometimes theories get really convoluted and ad hoc in an attempt to avoid unpalatable conclusions. This seems to be one of those times.
I can give Scanlon a free pass when he says under his theory we should save two people from certain death rather than one person from certain death because the ‘additional’ person would have some sort of complaint. However when the authors of this post say, for a similar reason, that the theory implies it’s better to do an intervention that will save two people with probability 90% rather than one person with probability 100%, I just think they’re undermining the theory.
The logic is that the ‘additional’ person in the pair has a complaint because you’re acting as if they aren’t there. But you aren’t acting as if they aren’t there—you’re noticing they have a lesser claim than the single individual and so are (perhaps quite reluctantly) accommodating the single individual’s larger claim. Which is kind of the whole point of the theory!
As a fairly unimportant side note, I was imagining that some nets have a 0.3% chance of saving some (unusually vulnerable) people, but the average probability (and certainly the marginal probability) is a lot lower. Otherwise $1B to AMF can save ~1M lives, which is significantly more optimistic than the best GiveWell estimates.
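For concreteness, the rough arithmetic behind that aside, with an assumed ~$3 per net delivered (my placeholder figure, not AMF’s or GiveWell’s):

```python
# Rough arithmetic for the aside above. The $1B budget and 0.3%-per-net
# figures come from this thread; the per-net cost is an assumed placeholder.
budget = 1_000_000_000   # $1B donation
cost_per_net = 3.0       # assumed ~$3 per net delivered (hypothetical)
p_save_per_net = 0.003   # 0.3% chance a given net averts a death

nets = budget / cost_per_net
expected_lives = nets * p_save_per_net
cost_per_life = budget / expected_lives  # = cost_per_net / p_save_per_net

print(f"{nets:,.0f} nets, ~{expected_lives:,.0f} lives, ${cost_per_life:,.0f}/life")
```

An implied ~$1,000 per life saved is several times cheaper than GiveWell-style estimates of a few thousand dollars per life, which is exactly why treating 0.3% as the average per-net probability is too optimistic.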
Thanks for all the productive discussion, everyone. A few thoughts.
First, the point of this post is to make a case for the conditional, not for contractualism. So, I’m more worried about “contractualism won’t get you AMF” than I am about “contractualism is false.” I assumed that most readers would be skeptical of this particular moral theory. The goal here isn’t to say, “If contractualism, then AMF—so 100% of resources should go to AMF.” Instead, it’s to say, “If contractualism, then AMF—so if you put any credence behind views of this kind at all, then it probably isn’t the case that 100% of resources should go to x-risk.”
Second, on “contractualism won’t get you AMF,” thanks to Michael for making the move I’d have suggested re: relevance. Another option is to think in terms of either nonideal theory or moral uncertainty, depending on your preferences. Instead of asking, “Of all possible actions, which does contractualism favor?” we can ask: “Of the actual options that a philanthropist takes seriously, which does contractualism favor?” It may turn out that, for whatever reason, only high-EV options are in the set of actual options that the philanthropist takes seriously, in which case it doesn’t matter whether a given version of contractualism wouldn’t select all those options to begin with. Then, the question is whether they’re uncertain enough to allow other moral considerations to affect their choice from among the pre-set alternatives.
Finally, on the statistical lives problem for contractualism, I’m mostly inclined to shrug off this issue as bad but not a dealbreaker. This is basically for a meta-theoretic reason. I think of moral theories as attempts to systematize our considered judgments in ways that make them seem principled. Unfortunately, our considered judgments conflict quite deeply. Some people’s response to this is to lean into the process of reflective equilibrium, giving up either principles or judgments in the quest for perfect consistency. My own experience of doing this is that the push for *more* consistency is usually good, whereas the push for *perfect* consistency almost always means that people endorse theories with implications that I find horrifying *that they come to believe are not horrifying,* as they follow from a beautifully consistent theory. I just can’t get myself to believe moral theories that are that revisionary. (I’m reporting here, not arguing.) So, I prefer relying on a range of moral theories, acknowledging the problems with each one, and doing my best to find courses of action that are robustly supported across them. In my view, EAC is based on the compelling thought that we ought to protect the known-to-be-most vulnerable, even at the cost of harm to the group. In light of this, what makes identified lives special is just that we can tell who the vulnerable are. So sure, I feel the force of the thought experiments that people offer to motivate the statistical lives problem; sure, I’m strongly inclined to want to save more lives in those cases. But I’m not so confident as to rule out EAC entirely. So, EAC stays in the toolbox as one more resource for moral deliberation.
I think the “relevance” idea described above is the idea behind one approach to limited aggregation, specifically Voorhoeve, 2014 (https://doi.org/10.1086/677022).