The longtermist could then argue that an analogous argument applies to “other-defence” of future generations. (In case there was any need to clarify: I am not making this argument, but I am also not making the argument that violence should be used to prevent nonhuman animals from being tortured.)
Separately, note that a similar objection also applies to many forms of non-totalist longtermism. On broad person-affecting views, for instance, the future likely contains an enormous number of future moral patients who will suffer greatly unless we do something about it. So these views could also be objected to on the grounds that they might lead people to cause serious harm in an attempt to prevent that suffering.
In general, I think it would be very helpful if critics of totalist longtermism made it clear what rival view in population ethics they themselves endorse (or what distribution of credences over rival views, if they are morally uncertain). The impression one gets from reading many of these critics is that they assume the problems they raise are unique to totalist longtermism, and that alternative views don’t have different but comparably serious problems. But this assumption can’t be taken for granted, given the known impossibility theorems and other results in population ethics. An argument is needed.
I realize now I interpreted “rights” in moral terms (e.g. deontological terms), when Halstead may have intended it to be interpreted legally. On some rights-based (or contractualist) views, some acts that violate humans’ legal rights to protect nonhuman animals or future people could be morally permissible.
The longtermist could then argue that an analogous argument applies to “other-defence” of future generations.
I agree. Rights-based (and contractualist) views are usually person-affecting, so while they could in principle endorse coercive action to prevent violations of the rights of future people, preventing someone’s birth would not violate that then non-existent person’s rights, and this is an important distinction. Involuntary extinction would plausibly violate many people’s rights, but rights-based (and contractualist) views tend to be anti-aggregative (or at least to limit aggregation), so while preventing extinction could be good on such views, it’s not clear it would deserve the kind of priority it gets in EA. See this paper, for example, which takes a contractualist approach and which I found via one of Torres’ articles. I think a rights-based approach could treat the issue similarly.
It could also be that procreation quite generally violates the rights of future people in practice, in which case causing involuntary extinction might not violate any rights at all, but I don’t get the impression that this view is common among deontologists and contractualists, or among people who adopt deontological or contractualist elements in their views. I don’t know how they would normally respond to it.
Considering “innocent threats” complicates things further, too, and it looks like there’s disagreement over the permissibility of harming innocent threats to prevent harm caused by them.
Separately, note that a similar objection also applies to many forms of non-totalist longtermism. On broad person-affecting views, for instance, the future likely contains an enormous number of future moral patients who will suffer greatly unless we do something about it. So these views could also be objected to on the grounds that they might lead people to cause serious harm in an attempt to prevent that suffering.
I agree. However, again, on some non-consequentialist views, some coercive acts could be prohibited in some contexts, and where they are not prohibited, they would not necessarily violate rights at all. The original objection raised by Halstead concerns rights violations, not merely causing serious harm to prevent another (possibly greater) harm. Maybe this is a sneaky way of dodging the objection that doesn’t really answer it, since a similar objection can be raised about serious harm in general. It also depends on what’s meant by “rights”.
Also, I think we should be clear about what kinds of serious harms could in principle be justified on a rights-based (or contractualist) view. Harming people who are innocent, or who are not threats, seems likely to violate rights and to be impermissible on such views. This plausibly applies to massive global surveillance and to bombing civilian-populated regions, unless you can argue on such views that each person being surveilled or bombed is sufficiently a threat, that harming innocent threats is permissible, or that collateral damage to innocent non-threats is permissible. I would guess that statistical arguments about the probability of a random person being a threat rest on interpretations of these views that the people holding them would reject, or that the probability of each person being a threat would be too low to justify the harm to that person.
So, what kinds of objectionable harms could be justified on such views? I don’t think most people would qualify as serious enough threats to justify harming them to protect others, especially people in the far future.
This seems like a fruitful area of research—I would like to see more exploration of this topic. I don’t think I have anything interesting to say off the top of my head.