(My only understanding of contractualism comes from this post, The Good Place, and the SEP article. Apologies for any misunderstandings)
tl;dr: I think contractualism will lead to pretty radically different answers than AMF. So I dispute the “if contractualism, then AMF” conditional. Further, I think the results it gives are so implausible that we should be willing to reject contractualism as it applies to the impartial allocation of limited resources. I’m interested in responses to both claims, but I’m happy to see replies that just address one or the other.
Suppose there’s a rare disease that would kill Mary rather painfully with probability ~=1. Suppose further that we estimate it would take ~1 billion dollars to cure her. It seems that under contractualism, every American (population ~330 million) is obligated to chip in 3 dollars to save Mary’s life. It is, after all, implausible that a 3-dollar-per-year tax increase gives anyone a complaint nearly as strong as the complaint of someone dying painfully, even under the much more relaxed versions that you propose. [1]
Without opining on whether contractualism makes sense in its own lane[2], I personally think the above is a reductio ad absurdum of contractualism as applied to the rational impartial allocation of limited resources: it elevates a cognitive bias (the identifiable victim effect) to a core moral principle. But perhaps other people think privileging Mary’s life over every American keeping 3 dollars (the equivalent, on the margin, of 330 million used books or 330 million ice cream cones) is defensible or even morally obligatory. Well, it so happens that 3 dollars is close to the price of an antimalarial bednet. My guess is that contractualism, even under the more relaxed versions, will have trouble explaining why protecting some number of people (remember, contractualism doesn’t do aggregation!) who each face a ~50% chance of getting malaria and a ~0.3% chance of dying is morally preferable to preventing someone from dying with probability ~1. This despite the insecticidal bednets potentially saving tens or even hundreds of thousands of lives in expectation!
But I guess one person’s modus tollens is another’s modus ponens. What I consider to be a rejection of contractualism can also logically be interpreted by others as a rejection of AMF, in favor of much more expensive interventions that can save someone’s life with probability closer to 1. (And in practice, I wouldn’t be surprised if the actual price to save a life for someone operating under this theory is more like a million dollars than a billion). So I would guess that people who believe in contractualism as applied to charity will end up making pretty radically different choices from the current EA set of options.
EDIT: 2023/10/15: I see that Jakub had already made the same point before I commented, just in a more abstract and philosophical form.
A related issue with this form of contractualism is its demandingness, which, taken literally, seems to exceed that of even naive act utilitarianism. Act utilitarianism is often criticized for its demandingness, but utilitarianism at least permits people to have simple pleasures while others suffer (as long as the simple pleasures are cheap to come by).
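As you and SEP both note, contractualism is only supposed to describe a subset of morality, not all of it.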
The problem (often called the “statistical lives problem”) is even more severe: ex ante contractualism does not only prioritize identified people when the alternative is to potentially save very many people, or many people in expectation; the same goes when the alternative is to save many people for sure, as long as it is unclear which members of a sufficiently large population will be saved. For each individual, it is then still unlikely that they will be saved, resulting in diminished ex ante claims that are outweighed by the undiminished ex ante claim of the identified person. And that, I agree, is absurd indeed.
Here is a thought experiment for illustration: there are two missiles circling the earth. If not stopped, one missile is certain to kill Bob (who is alone on a large field) and nobody else. The other missile is going to kill 1000 people, but it could be any 1000 of the X people living in large cities. We can only shoot down one of the two missiles. Which one should we shoot down?
Ex ante contractualism implies that we should shoot down the missile that would kill Bob, since he has an undiscounted claim while the X people in large cities all have strongly diminished claims due to the small probability that they would be killed by the missile. But obviously (I’d say) we should shoot down the missile that would kill 1000 people. (Note that we could change the case so that not 1000 but e.g. 1 billion people would be killed by the one missile.)
Or many people in expectation; the same goes when the alternative is to save many people for sure, as long as it is unclear which members of a sufficiently large population will be saved
Yep, you’re right. And importantly, this isn’t a far-off hypothetical: as Jaime alludes to, under most reasonable statistical assumptions AMF will save a great number of lives with probability close to 1, not just many lives in expectation. The only problem is that you don’t know for sure who those people are, ex ante.
Yes indeed! When it comes to assessing the plausibility of moral theories, I generally prefer to hold “all else equal” to avoid potentially distorting factors, but the AMF example comes close to being a perfect real-world example of (what I consider to be) the more severe version of the problem.
Note that the AMF example does not quite work, because if each net has a 0.3% chance of preventing death, and all are independent, then with 330M nets you are >99% sure of saving at least ~988k people.
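For anyone who wants to check this, here is a minimal sketch of the calculation under the assumptions stated above (330M nets with an independent 0.3% chance each, both illustrative numbers from this thread rather than GiveWell estimates), using a normal approximation to the binomial:

```python
# Rough check of the claim above, under the thread's illustrative assumptions:
# 330M nets, each independently preventing a death with probability 0.3%.
from statistics import NormalDist

n = 330_000_000   # nets (illustrative round number from this thread)
p = 0.003         # assumed per-net probability of preventing a death

mean = n * p                    # expected deaths prevented, ~990,000
sd = (n * p * (1 - p)) ** 0.5   # standard deviation, ~993

# Normal approximation to the binomial: the count you are >99% sure to reach
# is roughly the 1st percentile of the distribution.
lower_bound_99 = NormalDist(mean, sd).inv_cdf(0.01)
print(f"expected ≈ {mean:,.0f}; >99% sure of preventing at least ≈ {lower_bound_99:,.0f} deaths")
# -> expected ≈ 990,000; >99% sure of preventing at least ≈ 987,689 deaths (~988k)
```

So, granting independence, saving on the order of a million lives is itself a near-certainty ex ante; the only open question is which individuals are saved.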
Contractualism doesn’t allow aggregation across individuals. If each person has a 0.3% chance of averting death with a net, then any one of those individuals’ claims is still weaker than the claim of the person who will die with probability ~=1. Scanlon’s theory then says to save the one person.
Yeah, Scanlon’s theory doesn’t let the numbers make a difference, even between one strong claim and many only slightly weaker claims. The authors of this post try to rescue the theory with a small relaxation: sufficiently high probabilities and numbers of morally almost-as-bad things can be treated as worse than one very bad and certain thing.
We find it plausible that if Scanlon’s argument for saving the greater number succeeds, then, for some n, you ought to save the n. Here’s our thinking: First, imagine a version of Death/Paraplegia in which n = 1. In this case, you ought to save Nora outright. Now imagine a version in which n = 2. In this case, if you were to save Nora, then, plausibly, the additional person on the side of the many has a complaint, for you would thereby treat the case as though the additional person weren’t even there. (Recall that we’re supposing that Scanlon’s argument described above succeeds.) So, you ought to do something appropriately responsive to the additional person’s presence. Perhaps this will take the form of flipping a coin to determine whether to save Nora or the many; perhaps it will take the form of running a lottery heavily weighted on Nora’s side—the details won’t matter here. What matters is that whatever such an act would be, it would presumably be “closer” to saving the n than what it was permissible to do when n = 1 (namely, saving Nora outright).
But now imagine iterating this process over and over, increasing the size of n by 1 each time. Eventually, we think, you’ll get to a point where outright saving the n is the only acceptable thing to do. This suggests that Scanlonian contractualism can accommodate some aggregation of lesser bads, at least if Scanlon’s argument for saving the greater number is successful.[emphasis mine]
But while I could imagine it going through for preventing 2 people from dying with 80% probability vs 1 person with 100%, I don’t think it goes through for ice cream, or for AMF. A system that doesn’t natively do aggregation has a lot of trouble explaining why a large number of people, each with a ~0.3% chance of counterfactually dying, have as much or more moral claim to your resources as a single identified person with a ~100% chance of counterfactually dying.
(As a side note, I try to ground my hypotheticals in situations that readers are likely to have first-hand familiarity with, or can easily visualize themselves in. Few if any people on this forum have experience with obscenely high numbers of dust specks, or with missile high command. Many people in this conversation have experience with donating to AMF and/or eating ice cream.)
One way you could do this is by defining what kinds of claims would be “relevant” to one another and aggregatable. If X is relevant to Y, then enough instances of X (or any other relevant claims) can outweigh Y. Deaths are relevant to other deaths, and we could (although need not) say that should hold no matter the probability. So multiple 0.3 percentage point differences in the probability of death can be aggregated and outweigh a 100 percentage point difference.
Some serious debilitating conditions could be relevant to death too, even if less severe.
On the other hand, ice cream is never relevant to death, so there’s no trade off between them. Headaches (a common example) wouldn’t be relevant to death, either.
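I think this is the idea behind one approach to limited aggregation, specifically Voorhoeve, 2014 (https://doi.org/10.1086/677022).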
But this seems kind of wrong as stated, or at least it needs more nuance.
There’s a kind of sequence argument to worry about here, of increasingly strong claims. Is ice cream relevant to 1 extra second of life lost for an individual? Yes. If ice cream is relevant to n extra seconds of life lost for an individual, it seems unlikely 1 more second on top for the individual will make a difference to its relevance. So by induction, ice cream should be relevant to any number of extra seconds of life lost to an individual.
However, the inductive step could fail (with high probability). Where it could fail seems kind of arbitrary, but we could just have moral uncertainty about that.
Also, there are nonarbitrary (but uncertain) places it could fail for this specific sequence. Some people have important life goals that are basically binary, e.g. getting married. Losing enough years of life will prevent those goals from being fulfilled. So, rather than some cutoff on seconds of life lost or death itself, it could be such preferences that give us cutoffs.
Still, preference strength plausibly comes in many different degrees, and many preferences are themselves satisfiable to many different degrees, so we could make another sequence argument over preference strengths or differences in degree of satisfaction.
Yeah I feel that sometimes theories get really convoluted and ad hoc in an attempt to avoid unpalatable conclusions. This seems to be one of those times.
I can give Scanlon a free pass when he says under his theory we should save two people from certain death rather than one person from certain death, because the ‘additional’ person would have some sort of complaint. However, when the authors of this post say, for a similar reason, that the theory implies it’s better to do an intervention that will save two people with probability 90% rather than one person with probability 100%, I just think they’re undermining the theory.
The logic is that the ‘additional’ person in the pair has a complaint because you’re acting as if they aren’t there. But you aren’t acting as if they aren’t there—you’re noticing they have a lesser claim than the single individual and so are (perhaps quite reluctantly) accommodating the single individual’s larger claim. Which is kind of the whole point of the theory!
As a fairly unimportant side note, I was imagining that some nets have a 0.3% chance of saving some (unusually vulnerable) people, but that the average probability (and certainly the marginal probability) is a lot lower. Otherwise $1B to AMF could save ~1M lives, which is significantly more optimistic than the best GiveWell estimates.
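To spell out the arithmetic behind that last sentence, here is a rough sketch; the $5,000 cost-per-life figure used to back out an implied average per-net probability is an assumed round number for illustration, not a quoted GiveWell estimate:

```python
# Back-of-envelope: why a uniform 0.3% per-net probability looks too optimistic.
# The $5,000 cost-per-life figure below is an assumed ballpark for illustration.
budget = 1_000_000_000          # $1B
cost_per_net = 3                # $ per net, as in the thread
nets = budget / cost_per_net    # ~333M nets

optimistic_lives = nets * 0.003                   # ~1,000,000 lives if every net had a 0.3% chance
assumed_cost_per_life = 5_000                     # assumption, not a GiveWell number
implied_lives = budget / assumed_cost_per_life    # ~200,000 lives
implied_avg_prob = implied_lives / nets           # ~0.06% average per-net probability

print(f"optimistic: {optimistic_lives:,.0f} lives; "
      f"implied by ${assumed_cost_per_life:,}/life: {implied_lives:,.0f} lives "
      f"(average per-net probability ≈ {implied_avg_prob:.2%})")
```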
Thanks for all the productive discussion, everyone. A few thoughts.
First, the point of this post is to make a case for the conditional, not for contractualism. So, I’m more worried about “contractualism won’t get you AMF” than I am about “contractualism is false.” I assumed that most readers would be skeptical of this particular moral theory. The goal here isn’t to say, “If contractualism, then AMF—so 100% of resources should go to AMF.” Instead, it’s to say, “If contractualism, then AMF—so if you put any credence behind views of this kind at all, then it probably isn’t the case that 100% of resources should go to x-risk.”
Second, on “contractualism won’t get you AMF,” thanks to Michael for making the move I’d have suggested re: relevance. Another option is to think in terms of either nonideal theory or moral uncertainty, depending on your preferences. Instead of asking, “Of all possible actions, which does contractualism favor?”, we can ask, “Of the actual options that a philanthropist takes seriously, which does contractualism favor?” It may turn out that, for whatever reason, only high-EV options are in the set of actual options that the philanthropist takes seriously, in which case it doesn’t matter whether a given version of contractualism wouldn’t select all those options to begin with. Then, the question is whether they’re uncertain enough to allow other moral considerations to affect their choice from among the pre-set alternatives.
Finally, on the statistical lives problem for contractualism, I’m mostly inclined to shrug off this issue as bad but not a dealbreaker. This is basically for a meta-theoretic reason. I think of moral theories as attempts to systematize our considered judgments in ways that make them seem principled. Unfortunately, our considered judgments conflict quite deeply. Some people’s response to this is to lean into the process of reflective equilibrium, giving up either principles or judgments in the quest for perfect consistency. My own experience of doing this is that the push for *more* consistency is usually good, whereas the push for *perfect* consistency almost always means that people endorse theories with implications that I find horrifying *that they come to believe are not horrifying,* as they follow from a beautifully consistent theory. I just can’t get myself to believe moral theories that are that revisionary. (I’m reporting here, not arguing.) So, I prefer relying on a range of moral theories, acknowledging the problems with each one, and doing my best to find courses of action that are robustly supported across them. In my view, EAC (ex ante contractualism) is based on the compelling thought that we ought to protect the known-to-be-most-vulnerable, even at the cost of harm to the group. In light of this, what makes identified lives special is just that we can tell who the vulnerable are. So sure, I feel the force of the thought experiments that people offer to motivate the statistical lives problem; sure, I’m strongly inclined to want to save more lives in those cases. But I’m not so confident as to rule out EAC entirely. So, EAC stays in the toolbox as one more resource for moral deliberation.