Your argument would only establish that we shouldn’t be indifferent to (or discount, or substantially discount) future lives, not that we have reason to ensure future people are born in the first place or to create people. Multiple views that don’t find extinction much worse than almost everyone dying followed by population recovery could still recommend avoiding the extra deaths of future people, especially “wide” person-affecting views.[1]
On “wide” person-affecting views, if you have one extra person Alice in outcome A, but a different extra person Bob in outcome B, and otherwise the same people in both, then you treat Alice and Bob like the same person across the two outcomes. They’re “counterparts”. For more on this, and how to extend to different numbers of non-overlapping people between A and B, see Meacham, 2012, section 4 (or short summary in Koehler, 2021) and Thomas, 2019, section 5.3. I also discuss some different person-affecting views here.
Under wide views, with the virus that kills more people, the necessary people plus matched counterparts are worse off than with the virus that kills fewer people.
(I’d guess there are different ways to specify the intuition of neutrality; your argument might succeed against some but not others.)
Some versions of negative preference utilitarianism, or views that minimize aggregate DALYs, might recommend this too. But if the extra early deaths prevent additional births, then killing more people with the viruses could actually prevent more deaths overall, which could count as better on these views; they are pretty antinatalist views. That being said, I am fairly sympathetic to antinatalism about future people, though more because I don’t think good lives can make up for bad ones.
Thanks! Perhaps I haven’t grasped what you’re saying. In my example, if the first virus mutates, it’ll be the one that kills more people: 17 billion. If the second virus mutates, the entire human population dies at once from the virus, so only 8 billion people die in toto.
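To spell out where those totals come from, using the figures in my example:

$$\underbrace{7~\text{billion}}_{\text{killed at the outset}} \;+\; \underbrace{10~\text{million/year} \times 1{,}000~\text{years}}_{=\,10~\text{billion killed later}} \;=\; 17~\text{billion}, \qquad \text{versus } 8~\text{billion under extinction}.$$

So the first virus kills about nine billion more people overall: one billion fewer of the people alive today (seven billion rather than all eight billion), but ten billion additional people over the following millennium.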
On either a wide or a narrow person-affecting view, it seems like we have to say that the first outcome (seven billion deaths and then ten million deaths a year for the next millennium) is worse than the second (extinction). But is that plausible? Doesn’t this example undermine person-affecting views of either kind?
Actually, I guess that on a narrow person-affecting view, the first outcome would not be worse than the second, because a pandemic of this kind would plausibly affect the identities of subsequent generations. Assuming the lives of the people who died were still worth living, the first virus would be worse for people, since it would kill nine billion more of them, but it would not, for the most part, be worse for any particular people. But that seems like the wrong kind of reason to conclude that the first outcome is better than the second.