Thanks so much for this, Jakob. Really great questions. On the application part, let me first quote something I wrote to MSJ below:
I was holding the standard EA interventions fixed, but I agree that, given contractualism, there’s a case to be made for other priorities. Minimally, we’d need to evaluate our opportunities in these and similar areas. It would be a bit surprising if EA had landed on the ideal portfolio for an aim it hasn’t had in mind: namely, minimizing relevant strength-weighted complaints.
That being said, a lot depends here on the factors that influence claim strength. Averting even a relatively low probability of death can trump lots of other possible benefits. And cost matters for claim strength too: all else equal, people have weaker claims to large amounts of our resources than they do to small amounts. So, yes, it could definitely work out that, given contractualism, EA has the wrong priorities even within the global health space, but insofar as some popular interventions are focused on inexpensive ways of saving lives, we’ve got at least a few considerations that strongly support those interventions. Still, we can’t really know unless we run the numbers.
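To make "running the numbers" concrete, here is a toy sketch of what a strength-weighted comparison might look like. Everything in it—the weighting function, the discount for cost, and all the numbers—is a hypothetical illustration I'm supplying, not part of any worked-out contractualist view:

```python
# Toy sketch of an ex ante "strength-weighted complaint" comparison.
# The weighting function and all numbers are hypothetical illustrations.

def claim_strength(benefit, probability, cost_share):
    """Ex ante claim strength: the size of the benefit to the individual,
    discounted by its probability and by the resources it demands."""
    return benefit * probability / cost_share

# Intervention A: inexpensive lifesaving at a modest probability per person
# (roughly the profile of popular global health interventions).
a = claim_strength(benefit=100, probability=0.01, cost_share=1)

# Intervention B: a far larger benefit at a much lower probability and a
# higher per-person resource cost (roughly the x-risk profile).
b = claim_strength(benefit=10_000, probability=0.000001, cost_share=10)

print(a > b)  # under these made-up numbers, the cheap lifesaving claim wins
```

The point of the sketch is only structural: whether cheap lifesaving or x-risk mitigation generates the strongest claim depends entirely on how probability and cost discount claim strength, which is exactly why the details need sorting out.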
Re: the statistical lives problem for the ex ante view, I have a few things to say—which, to be clear, don’t amount to a direct reply of the form, “Here’s why the view doesn’t face the problem.” First, every view has horrible problems. When it comes to moral theory, we’re in a “pick your poison” situation. There are certainly some views I’m willing to write off as “clearly false,” but I wouldn’t say that of most versions of contractualism. In general, my approach to applied ethics is to say, “Moral theory is brutally hard and often the best we can do is try to assess whether we end up in roughly the same spot practically regardless of where we start theoretically.” Second, and in the same spirit, my main goal here is to complement Emma Curran’s work: she’s already defended the same conclusion for the ex post version of the view. So, it’s progress enough to show that, whichever way you go, you get something other than prioritizing x-risk. Third, the ex ante view doesn’t imply that we should prioritize one identified person over any number of “statistical” people unless all else is equal—and all else often isn’t equal. I grant that there are going to be lots of cases where identified lives trump statistical lives, but for the kinds of reasons I mentioned when thinking about your great application question, we still need to sort out the details re: claim strength.
Thanks for your helpful reply! I’m very sympathetic to your view on moral theory and applied ethics: most (if not all) moral theories face severe problems, and that isn’t generally sufficient reason not to consider them when doing applied ethics. However, I think the ex ante view is one of those views that deserve no more than negligible weight—which is where we seem to have different judgments. Even taking into account that alternative views have their own problems, the statistical lives problem seems to be as close to a “knock-down argument” as it gets. You are right that there are possible circumstances in which the ex ante view would not prioritize identified people over any number of “statistical” people, and these circumstances might even be common. But the fact remains that there are also possible circumstances in which the ex ante view does prioritize one identified person over any number of “statistical” people—and at least to me this is just “clearly wrong.” I would be less confident if I knew of advocates of the ex ante view who remain steadfast in light of this problem, but no one seems willing to bite this bullet.
After pushing so hard for rejecting the ex ante view, I feel like I should stress that I really appreciate this type of research. I think we should consider the implications of a wide range of possible moral theories, and excluding certain moral theories from this is a risky move. In fact, I think an ideal analysis under moral uncertainty should include ex ante contractualism; I’m just afraid that people tend to give too much weight to its implications, and that this is worse than (for now) not considering it at all.
I should also at least mention that I think the more plausible versions of limiting aggregation under risk are quite compatible with classic long-term interventions such as x-risk mitigation. (I agree that the “ex post” view that Emma Curran discusses isn’t compatible with x-risk mitigation either, but I think that view is not much better than the ex ante view, and that there are other views more plausible than both.) Tomi Francis from GPI has an unpublished paper that reaches similar results. This probably isn’t the right place to go into detail, but I think it is at least initially plausible that small probabilities of much better future lives ground claims that are more significant than claims usually considered irrelevant, such as claims based on the enjoyment of watching part of a football match or on the suffering of a mild headache.
Very interesting, Jakob! I’ll have to contact Tomi to get his draft. Thanks for the heads up about this work. And, of course, I’ll be curious to see what you’re working on when you’re able to share!
Really appreciate the very helpful engagement!
Thanks for your interest! I’ll let you know when my paper is ready/readable. I might also write a forum post about it.