As a fairly unimportant side note, I was imagining that some nets have a 0.3% chance of saving some (unusually vulnerable) people, but that the average probability (and certainly the marginal probability) is a lot lower. Otherwise, $1B to AMF could save ~1M lives, which is significantly more optimistic than the best GiveWell estimates.
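To spell out the arithmetic behind that worry, here's a rough back-of-the-envelope sketch. The ~$3 all-in cost per net is my assumption for illustration, not a figure from the post or from GiveWell:

```python
# Back-of-the-envelope: if *every* net had a 0.3% chance of saving a life,
# a $1B donation would save roughly a million lives.
cost_per_net = 3.0       # USD per net delivered -- assumed for illustration only
p_save_per_net = 0.003   # 0.3% chance that a given net saves a life
budget = 1e9             # $1B donation

nets_bought = budget / cost_per_net            # ~333 million nets
expected_lives = nets_bought * p_save_per_net  # ~1,000,000 expected lives

print(f"{expected_lives:,.0f} expected lives saved")
# GiveWell-style estimates (thousands of dollars per life saved) imply far
# fewer lives per $1B, which is why 0.3% can't be the average probability.
```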
Thanks for all the productive discussion, everyone. A few thoughts.
First, the point of this post is to make a case for the conditional, not for contractualism. So, I’m more worried about “contractualism won’t get you AMF” than I am about “contractualism is false.” I assumed that most readers would be skeptical of this particular moral theory. The goal here isn’t to say, “If contractualism, then AMF—so 100% of resources should go to AMF.” Instead, it’s to say, “If contractualism, then AMF—so if you put any credence in views of this kind at all, then it probably isn’t the case that 100% of resources should go to x-risk.”
Second, on “contractualism won’t get you AMF,” thanks to Michael for making the move I’d have suggested re: relevance. Another option is to think in terms of either nonideal theory or moral uncertainty, depending on your preferences. Instead of asking, “Of all possible actions, which does contractualism favor?” we can ask, “Of the actual options that a philanthropist takes seriously, which does contractualism favor?” It may turn out that, for whatever reason, only high-EV options are in the set of options that the philanthropist takes seriously, in which case it doesn’t matter that a given version of contractualism wouldn’t have selected all those options to begin with. Then, the question is whether the philanthropist is uncertain enough to allow other moral considerations to affect their choice from among the pre-set alternatives.
Finally, on the statistical lives problem for contractualism, I’m mostly inclined to shrug off this issue as bad but not a dealbreaker. This is basically for a meta-theoretic reason. I think of moral theories as attempts to systematize our considered judgments in ways that make them seem principled. Unfortunately, our considered judgments conflict quite deeply. Some people’s response to this is to lean into the process of reflective equilibrium, giving up either principles or judgments in the quest for perfect consistency. My own experience of doing this is that the push for *more* consistency is usually good, whereas the push for *perfect* consistency almost always means that people endorse theories with implications that I find horrifying, *and that they come to believe are not horrifying,* since those implications follow from a beautifully consistent theory. I just can’t get myself to believe moral theories that are that revisionary. (I’m reporting here, not arguing.) So, I prefer relying on a range of moral theories, acknowledging the problems with each one, and doing my best to find courses of action that are robustly supported across them.

In my view, EAC is based on the compelling thought that we ought to protect the known-to-be-most-vulnerable, even at the cost of harm to the group. In light of this, what makes identified lives special is just that we can tell who the vulnerable are. So sure, I feel the force of the thought experiments that people offer to motivate the statistical lives problem; sure, I’m strongly inclined to want to save more lives in those cases. But I’m not confident enough to rule out EAC entirely. So, EAC stays in the toolbox as one more resource for moral deliberation.