Hey Bob, I’m currently working on a paper about a similar issue, so this has been quite interesting to read! (I’m discussing the implications of limited aggregation more generally, but as you note, contractualism has distinct implications primarily because of its (partially) non-aggregative nature.) While I mostly agree with your claims about the implications of the ex ante view, I disagree with your claim that this is the most plausible version of contractualism. In fact, I think that the ex ante view is clearly wrong and that we shouldn’t be much concerned with what it implies.
First, briefly on the application part. I think you are right that, given the ex ante view, we should not focus on mitigating x-risks and should instead perform global health interventions. However, as you note, there is usually a very large group of potential beneficiaries when it comes to global health interventions, so the probability of any given individual being benefited is quite small, resulting in heavily diminished ex ante claims. I wonder, therefore, whether we shouldn’t, on the ex ante view, rather spend our resources on (relatively needy) people we know or people in small communities. Even if these people would benefit from our resources 100+ times less than the global poor, this could well be outweighed by the much higher probability that each of these individuals is actually benefited.
But again, I think the ex ante view is clearly false anyway. The easiest way to see this is that the view implies that we should prioritize one identified person over any number of “statistical” people. That is: on the ex ante view, we should save a given person for sure rather than (definitely!) save one million people if these are randomly chosen from a sufficiently large population. In fact, there are even worse implications (the identified person could merely lose a finger rather than her life if we don’t help), but I think this implication is already bad enough to confidently reject the view. I don’t know of anybody who is willing to accept that implication. The typical (if not universal?) reaction of advocates of the ex ante view is to go pluralist and claim that the verdicts of the ex ante view correspond to only one of several pro tanto reasons. As far as I know, no such view has actually been developed, and I think any such view would be highly implausible as well; but even if it succeeded, its implications would be much more moderate: all we’d learn is that there is one of several pro tanto reasons that favours acting in (presumably) some short-term way. This could well be compatible with classic long-term interventions being overall most choiceworthy / obligatory.
I’m sure that I’m not telling you much, if anything, new here, so I wonder what you think of these arguments?
I wonder if the Greater Burden Principle over ex ante interests tells you not to do broad exploratory research into interventions and causes, or even much or any research at all, because any such research is very unlikely to benefit any particular individual. Instead, you should just pick one of the interventions you already know about rather than spread the ex ante benefits more thinly by investigating more options. Any time you expand the set of interventions under consideration, those who’d benefit ex ante under the original set lose substantially ex ante under the expanded set, because they’re now less likely to be targeted at all, while those added only stand to gain a little ex ante, because whatever intervention is chosen is unlikely to help them.
To make it even more concrete, suppose you would help A with 100% probability. Now you consider the possibilities of helping B or C instead, and you’re very unsure which of A, B or C you’ll help after you investigate further, so you now assign each a 1/3 chance of being helped. A loses a ~67% chance of being helped, which is larger than the ~33% chance each of B and C gains. So, you shouldn’t even start to consider helping B and C instead of A.
However, if you did it one step at a time, i.e. first considered only B, going from 100% A to 50% A and 50% B, this would be permissible. And then going from 50% A and 50% B to ~33% each for A, B and C is also permissible (and required, because C gains ~33% compared to the loss of ~17% to each of A and B).
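To put rough numbers on this asymmetry, here is a toy sketch of my own construction (not anything from the post): it compares each move by the largest individual ex ante loss against the largest individual ex ante gain, treating equal gains and losses as permissible, as a crude stand-in for the Greater Burden Principle over ex ante interests.

```python
# Toy model (my own construction): a move between probability assignments is
# permissible iff the largest ex ante loss it imposes on anyone does not
# exceed the largest ex ante gain it confers on anyone.

def permissible(before, after):
    people = set(before) | set(after)
    losses = [before.get(p, 0.0) - after.get(p, 0.0) for p in people]
    max_loss = max(losses)               # strongest complaint against the move
    max_gain = max(-l for l in losses)   # strongest complaint against refraining
    return max_gain >= max_loss

# One-shot expansion: A's 100% chance drops to 1/3 each for A, B, C.
one_shot = permissible({"A": 1.0}, {"A": 1/3, "B": 1/3, "C": 1/3})

# Stepwise: first add B, then add C.
step1 = permissible({"A": 1.0}, {"A": 0.5, "B": 0.5})
step2 = permissible({"A": 0.5, "B": 0.5}, {"A": 1/3, "B": 1/3, "C": 1/3})

print(one_shot, step1, step2)  # False True True
```

The one-shot move is blocked (A’s ~67% loss exceeds each ~33% gain), while the same endpoint is reached permissibly in two steps, which is exactly the oddity described above.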
Is this a strawman? Or maybe contractualists make more space for deliberation by recognizing other reasons, like you suggest.
Thanks so much for this, Jakob. Really great questions. On the application part, let me first quote something I wrote to MSJ below:
I was holding the standard EA interventions fixed, but I agree that, given contractualism, there’s a case to be made for other priorities. Minimally, we’d need to evaluate our opportunities in these and similar areas. It would be a bit surprising if EA had landed on the ideal portfolio for an aim it hasn’t had in mind: namely, minimizing relevant strength-weighted complaints.
That being said, a lot depends here on the factors that influence claim strength. Averting even a relatively low probability of death can trump lots of other possible benefits. And cost matters for claim strength too: all else equal, people have weaker claims to large amounts of our resources than they do to small amounts. So, yes, it could definitely work out that, given contractualism, EA has the wrong priorities even within the global health space, but insofar as some popular interventions are focused on inexpensive ways of saving lives, we’ve got at least a few considerations that strongly support those interventions. Still, we can’t really know unless we run the numbers.
Re: the statistical lives problem for the ex ante view, I have a few things to say—which, to be clear, don’t amount to a direct reply of the form, “Here’s why the view doesn’t face the problem.” First, every view has horrible problems. When it comes to moral theory, we’re in a “pick your poison” situation. There are certainly some views I’m willing to write off as “clearly false,” but I wouldn’t say that of most versions of contractualism. In general, my approach to applied ethics is to say, “Moral theory is brutally hard and often the best we can do is try to assess whether we end up in roughly the same spot practically regardless of where we start theoretically.” Second, and in the same spirit, my main goal here is to complement Emma Curran’s work: she’s already defended the same conclusion for the ex post version of the view. So, it’s progress enough to show that, whichever way you go, you get something other than prioritizing x-risk. Third, the ex ante view doesn’t imply that we should prioritize one identified person over any number of “statistical” people unless all else is equal—and all else often isn’t equal. I grant that there are going to be lots of cases where identified lives trump statistical lives, but for the kinds of reasons I mentioned when thinking about your great application question, we still need to sort out the details re: claim strength.
Thanks for your helpful reply! I’m very sympathetic to your view on moral theory and applied ethics: most (if not all) moral theories face severe problems, and that is not generally sufficient reason not to consider them when doing applied ethics. However, I think the ex ante view is one of those views that don’t deserve more than negligible weight—which is where we seem to have different judgments. Even taking into consideration that alternative views have their own problems, the statistical lives problem seems to be as close to a “knock-down argument” as it gets. You are right that there are possible circumstances in which the ex ante view would not prioritize identified people over any number of “statistical” people, and these circumstances might even be common. But the fact remains that there are also possible circumstances in which the ex ante view does prioritize one identified person over any number of “statistical” people—and at least to me this is just “clearly wrong”. I would be less confident if I knew of advocates of the ex ante view who remain steadfast in light of this problem; but no one seems to be willing to bite this bullet.
After pushing so much for rejecting the ex ante view, I feel like I should stress that I really appreciate this type of research. I think we should consider the implications of a wide range of possible moral theories, and excluding certain moral theories from this is a risky move. In fact, I think that an ideal analysis under moral uncertainty should include ex ante contractualism; it’s just that I’m afraid people tend to give too much weight to its implications, and that this would be worse than (for now) not considering it at all.
I should also at least mention that I think that the more plausible versions of limiting aggregation under risk are well compatible with classic long-term interventions such as x-risk mitigation. (I agree that the “ex post” view that Emma Curran discusses is not well compatible with x-risk mitigation either, but I think that this view is not much better than the ex ante view and that there are other views that are more plausible than both.) Tomi Francis from GPI has an unpublished paper that reaches similar results. I guess this is not the right place to go into any detail about this, but I think it is even initially plausible that small probabilities of much better future lives ground claims that are more significant than claims that are usually considered irrelevant, such as claims based on the enjoyment of watching part of a football match or on the suffering of mild headaches.
Very interesting, Jakob! I’ll have to contact Tomi to get his draft. Thanks for the heads up about this work. And, of course, I’ll be curious to see what you’re working on when you’re able to share!
(None of this may be news to you, either, but potentially of interest to other readers.)
Furthermore, ex ante views will tend to be dynamically inconsistent. For example, suppose you run a lottery to pick one person to be sacrificed for the benefit of the many: this looks permissible to everyone ex ante, but once we find out who will be sacrificed, it’s no longer permissible—and it wouldn’t be permissible no matter who we found out would be sacrificed. This violates the Sure-Thing Principle. That being said, I’m not sure violating the STP is enough on its own to rule out a view or principle, but it should count against the view.
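A minimal numerical sketch of that lottery, with toy numbers of my own choosing (1000 people, a burden of 1.0 for whoever is sacrificed, a benefit of 0.1 for each of the rest):

```python
# Toy numbers (my own, not from the thread): n people; one person, chosen by
# a fair lottery, is sacrificed (burden 1.0) so that each of the rest gains b.
n, b = 1000, 0.1

# Ex ante, before the draw, every person faces the same prospect:
ex_ante_value = (1 / n) * (-1.0) + ((n - 1) / n) * b
print(ex_ante_value > 0)  # True: everyone expects to gain, so no one can
                          # reasonably reject the lottery ex ante.

# Ex post, once someone is drawn, that person bears a certain burden of 1.0,
# which exceeds every other person's benefit of 0.1 -- and this holds
# whichever person is drawn, so the act is impermissible in every outcome.
worst_ex_post_burden = 1.0
largest_ex_post_benefit = b
print(worst_ex_post_burden > largest_ex_post_benefit)  # True
```

The tension is visible in the two prints: the act is acceptable to all in expectation, yet unacceptable in every possible realization.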
To satisfy the STP, you’re also pretty close to maximizing expected utility, due to Savage’s theorem and its generalizations. But maximizing expected utility with a specifically unbounded utility function, like total welfare, also violates a more general version of the Sure-Thing Principle, because of St Petersburg prospects (infinite expected utility, but finite actual utility in each outcome); see, e.g., Russell and Isaacs, 2021, https://philarchive.org/rec/RUSINP-2 . It gets worse: Anteriority (weaker than ex ante Pareto, but generalized to individuals whose existence is uncertain) + Impartiality + Stochastic Dominance are jointly inconsistent due to St Petersburg-like prospects over population sizes, given a few additional modest assumptions (Goodsell, 2021, https://philpapers.org/rec/GOOASP-2 ).
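For readers unfamiliar with the construction, the standard St Petersburg prospect (not anything specific to the cited papers) pays utility 2^k with probability 2^-k for k = 1, 2, …; every outcome is finite, but the partial expected-utility sums grow without bound:

```python
# St Petersburg prospect: with probability 2**-k you receive utility 2**k.
# Each term contributes exactly (2**-k) * (2**k) = 1 to the expectation, so
# the partial sums grow without bound even though every outcome is finite.
def partial_expected_utility(n_terms: int) -> float:
    return sum((2 ** -k) * (2 ** k) for k in range(1, n_terms + 1))

print(partial_expected_utility(10))    # 10.0
print(partial_expected_utility(1000))  # 1000.0
```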
We could idealize and decide as if we had full information, looking for agreement, or just take an ex post view; see Fleurbaey and Voorhoeve, 2013, https://philarchive.org/rec/VOODAY .
Yes, that’s another problem indeed—thanks for the addition! Johann Frick (“Contractualism and Social Risk”) offers a “decomposition test” as a solution on which (roughly) every action of a procedure needs to be justifiable at the time of its performance for the procedure to be justified. But this “stage-wise ex ante contractualism” has its own additional problems.
Thanks for sharing! I think Frick’s approach looks pretty promising, although it would need to be combined either with limited/partial aggregation or, as he does, with the recognition that this isn’t the full picture and that we can have other reasons to balance, in order to appropriately handle cases with many statistical lives at stake but low individual risks. What additional problems did you have in mind?
Hmm, I can’t recall all its problems right now, but for one, I think the view is then no longer compatible with ex ante Pareto—which I find the most attractive feature of the ex ante view compared to other views that limit aggregation. If it’s necessary for justifiability that all the subsequent actions of a procedure be ex ante justifiable, then the initiating action could be in everyone’s ex ante interest and still not justified, right?
Really appreciate the very helpful engagement!
Thanks for your interest! I will let you know when my paper is ready/readable. I may also write a forum post about it.