Regarding the risk that longtermism could lead people to violate rights, it seems to me that you could make exactly the same argument about any view that prioritises between different things. For instance, as Peter Singer has pointed out, billions of animals are tortured and killed every year. By exactly analogous reasoning, one could say that other problems ‘dwindle into irrelevance’ as other values are sacrificed at the altar of the astronomical expected value of preventing factory farming. So, this reasoning would justify animal rights terrorism and other abhorrent actions.
“Don’t be fanatical about utilitarian or longtermist concerns and don’t take actions that violate common sense morality” is a message that longtermists have emphasized, frequently and prominently, from the very beginning of this social movement.
Some examples:
https://www.lesswrong.com/posts/dWTEtgBfFaz6vjwQf/ethical-injunctions (2008)
https://philpapers.org/rec/ORDHTB (2008)
https://longtermrisk.org/reasons-to-be-nice-to-other-value-systems/ (2015)
https://forum.effectivealtruism.org/tag/naive-vs-sophisticated-consequentialism (multiple articles; 2016-2021)
More generally, there’s often at least a full paragraph devoted to this topic when someone writes a longer overview article on longtermism or writes about particularly dicey implications with outsized moral stakes. I also remember this being a presentation or conversation topic at many EA conferences.
I haven’t yet read the corresponding section of the paper the OP refers to, but I skimmed the literature section and found none of the sources I linked above. If the paper criticizes longtermism on the grounds of this sort of implication and fails to mention that longtermists have been aware of it and are putting in a lot of work to make sure people don’t come away with such takes, then that seems like a major omission.
I also agree with this. There are many reasons for consequentialists to respect common sense morality.
I was just making the point that the rhetorical argument about rights can be made about pretty much any moral view. E.g., the authors seem to believe that degrowth would be a good idea, and it is a built-in feature of degrowth that it would have enormous humanitarian costs.
I don’t want to dip into discussions that don’t directly concern the issues I created this account to discuss, but your characterisation of degrowth as having “enormous humanitarian costs” “built in” is flatly untrue in a way that is obvious to anyone who has read any degrowth literature, e.g. Kallis or Hickel.
This is not the only time you have mischaracterised democratic and ecological positions on this post. Please stop.
OK, see my comment below on COVID and degrowth. It is difficult to see how we could reach a sustainable state via degrowth without shrinking the population by several billion and reducing everyone’s living standards to pre-industrial levels, i.e. most people living on <$2 per day.
It seems that you fundamentally misunderstand degrowth. For an introduction I suggest this:
https://www.annualreviews.org/doi/abs/10.1146/annurev-environ-102017-025941
If you (or someone else) wants to defend degrowth on the Forum, it would probably be more useful to actually make degrowth arguments, rather than linking to a polemic that isn’t even trying to make an objective assessment.
I’m not sure that there are any attempted-objective assessments of degrowth (at least, not that I’ve found) and the post I linked provides an overview of the topic as understood by most of its key proponents. If I wanted to introduce people to EA, would it be inappropriate to offer them a copy of Doing Good Better?
I didn’t make specific arguments because frankly I shouldn’t need to. Someone who has written about climate change should not be making unequivocally untrue statements about basic aspects of a core strand of environmental economics. My assumption was that, given Halstead’s experience, his mischaracterizations could not have been due to a lack of knowledge.
This will probably be dogpiled due to “tone” but to be honest I have rewritten this comment twice to move away from clear statements of my views towards more EA-friendly language to make it as charitable as possible. There just aren’t many nice ways of saying that, well...
you see the problem?
I agree; I don’t see anything wrong with linking to that paper.
I do think my view is quite defensible. E.g., in the discussion of degrowth below, the author says: “We could very plausibly stop or at least delay climate change by drastically reducing the use of technology right now (COVID bought us a few months just by shutting down planes, although that has ‘recovered’ now).” The experience of the massive global humanitarian and economic disaster of COVID seems like a very poor advert for the position ‘we can make degrowth work if only we try’. It has killed 15 million people, and hundreds of millions of people have been locked indoors for months.
I really don’t see the link between reducing air travel and the fact that COVID killed millions of people and necessitated lockdown measures.
I’m going to disengage now. Repeatedly mischaracterizing opposing views and deploying non-sequiturs for rhetorical reasons do not indicate to me that this will be a productive conversation.
...is a non-argument that is both condescending and unhelpful to all but a tiny fraction of the people reading your comment.
See my reply to Will above. It’s a fair point that it’s not very helpful to spectators (beyond indicating that the claim referred to should perhaps not be taken at face value), but my intention was to reply to Halstead rather than to the audience.
In my view, it would be condescending if I were referring to most people, but not in this case. My point is that someone who has written about climate issues more than once in the past, and who is considered something of an authority on climate issues within EA, can be expected to have basic background knowledge on climate topics.
If we are going to have a hierarchical culture led by “thought leaders”, I think we should at least hold them to a certain standard.
I think Halstead knows what degrowth advocates claim about degrowth (that it won’t have built-in humanitarian costs). And I think he disagrees with them, which isn’t the same as not understanding their arguments.
Imagine people arguing whether to invade Iraq in the year following the 9/11 attacks. One of them points out that invading the country will involve enormous built-in humanitarian costs. Their interlocutor replies:
“Your characterization of an Iraq invasion as having “enormous humanitarian costs” “built in” is flatly untrue in a way that is obvious to anyone who has read any Iraq invasion literature, e.g. Rumsfeld and Powell.”
The second person may genuinely see Rumsfeld and Powell as experts worth listening to. The first person may see their arguments as clearly wrong, and not even worth addressing (if they think it’s common sense that war will incur humanitarian costs).
The first person isn’t necessarily right — in 2002, there was lots of disagreement between experts on the outcome of an Iraq invasion! — but I wouldn’t conclude that their words are “flatly untrue” or that they lack “basic background knowledge”.
As a moderator: the “basic background knowledge” point is skirting the boundaries of the Forum’s norms; even if you didn’t intend to condescend, I found it condescending, for the reasons I note in my other reply.
The initial comment — which claims that Halstead is misrepresenting a position, when “he understands and disagrees” is also possible — also seems uncharitable.
I do see this charitable reading as an understandable thing to miss, given that everyone is leaving brief comments about a complex question and there isn’t much context. But I also think there are ways to say “I don’t think you’re taking position X seriously enough” without saying “you are lying about the existence of position X, please stop lying”.
But it is basic background knowledge, and that point needs to be made clear to those less familiar with the topic! This isn’t an issue of understanding and disagreeing, as demonstrated by his non-sequitur about COVID if nothing else.
If, for instance, someone who has written about AI more than once argues that the Chinese government funds AI research for solely humanitarian reasons, there are only two possibilities: they are being honest but ignorant (which is unlikely, embarrassing for them, and worrying for any community that treats them as an authority), or they are being dishonest (which is bad for everyone). There is no “charitable” position here.
I understand and agree with the discourse norms here, but if someone is demonstrably, repeatedly, unequivocally acting in bad faith then others must be able to call that out.
It is basic background knowledge that degrowth literature exists (which John knows); it is not basic background knowledge that we “know” we could implement degrowth without major humanitarian consequences, as degrowth has never been demonstrated at global scale. The opposite is not true either (so you might characterize Halstead as over-confident).
Degrowth is not a strategy we could clearly implement to tackle the climate challenge (we do not know whether it is politically or techno-economically feasible, and one can plausibly be quite skeptical), and we do not know whether it could be implemented without significant humanitarian consequences. A couple of green thinkers finding it feasible and desirable is not sufficient evidence to speak of “knowing”.
If, for instance, someone who has written about AI more than once argues that the Chinese government funds AI research for solely humanitarian reasons...
I think there are a bunch of examples we could use here, which fall along a spectrum of “believability” or something like that.
Where the unbelievable end of the spectrum is e.g. “China has never imprisoned a Uyghur who wasn’t an active terrorist”, and the believable end of the spectrum is e.g. “gravity is what makes objects fall”.
If someone argues that objects fall because of something something the luminiferous aether, it seems really unlikely that “they have a background in physics but just disagree about gravity” is the right explanation.
If someone argues that China actually imprisons many non-terrorist Uyghurs, it seems really likely that “they have a background in the Chinese government’s claims but just disagree with the Chinese government” is the right explanation.
So what about someone who argues that degrowth is very likely to lead to “enormous humanitarian costs”? How likely is it that “they have a background in the claims of Hickel et al. but disagree” is the right explanation, vs. something like “they’ve never read Hickel” or “they believe Hickel is right but are lying”?
Moreover, is it “basic background knowledge” that degrowth would not be very likely to lead to “enormous humanitarian costs”?
What you think of those questions seems to depend on how you feel about the degrowth question generally. To some people, it seems perfectly believable that we could realistically achieve degrowth without enormous humanitarian costs. To other people, this seems unbelievable.
I see Halstead as being on the “unbelievable” side and you as being on the “believable” side. Given that there are two sides to the question, with some number of reasonable scholars on each side, Halstead would ideally hedge his language (“degrowth would likely have enormous humanitarian costs” rather than “built-in feature”). And you’d ideally hedge your language (“fails to address reasonable arguments from people like Hickel” rather than “flatly untrue in a way that is obvious”).
*****
I cared more about your reply than Halstead’s comment because, while neither person is doing the ideal hedge thing, your comment was more rude/aggressive than Halstead’s.
(I could imagine someone reading his comment as insulting to the authors, but I personally read it as “he thinks the authors are deliberately making a tradeoff of one value for another” rather than “he thinks the authors support something that is clearly monstrous”.)
To me, the situation reads as one person making contentious claim X, and the other saying “X is flatly wrong in a way that is obvious to anyone who reads contentious author Y, stop mischaracterizing the positions of people like author Y” — when the first person never mentioned author Y.
Perhaps the first person should have mentioned author Y somewhere, if only to say “I disagree with them” — in this case, author Y is pretty famous for their views — but even so, a better response is “I think X is wrong because of the points made by author Y”.
*****
I’d feel the same way even if someone were making some contentious statement about EA. And I hope that I’d respond to e.g. “effective altruism neglects systemic change” with something like “I think article X shows this isn’t true, why are you saying this?”
I’d feel differently if that person were posting the same kinds of comments frequently, and never responding to anyone’s follow-up questions or counterarguments. Given your initial comment, maybe that’s how you feel about Halstead + degrowth? (Though if that’s the case, I still think the burden of proof is on the person accusing another of bad faith, and they should link to other cases of the person failing to engage.)
I agree that there is an analogy to animal suffering here, but there’s a difference in degree, I think. To longtermists, the importance of future generations is many orders of magnitude higher than the importance of animal suffering is to animal welfare advocates. Therefore, I would claim, longtermists are more likely to ignore non-longtermist considerations than animal welfare advocates would be.
Depending on the view, legitimate self-defence and “other-defence” don’t violate rights at all, and this seems close to common sense when applied to protect humans. Even deontological views could in principle endorse—but I think in practice today should condemn—coercively preventing individuals from harming nonhuman animals, including farmed animals, as argued in this paper, published in the Journal of Controversial Ideas, a journal led and edited by McMahan, Minerva and Singer. Of course, this conflicts with the views of most humans today, who don’t extend similarly weighty rights/claims to nonhuman animals.
EDIT: I realize now I interpreted “rights” in moral terms (e.g. deontological terms), when you may have intended it to be interpreted legally.
The longtermist could then argue that an analogous argument applies to “other-defence” of future generations. (In case there was any need to clarify: I am not making this argument, but I am also not making the argument that violence should be used to prevent nonhuman animals from being tortured.)
Separately, note that a similar objection also applies to many forms of non-totalist longtermism. On broad person-affecting views, for instance, the future likely contains an enormous number of future moral patients who will suffer greatly unless we do something about it. So these views could also be objected to on the grounds that they might lead people to cause serious harm in an attempt to prevent that suffering.
In general, I think it would be very helpful if critics of totalist longtermism made it clear what rival view in population ethics they themselves endorse (or what distribution of credences over rival views, if they are morally uncertain). The impression one gets from reading many of these critics is that they assume the problems they raise are unique to totalist longtermism, and that alternative views don’t have different but comparably serious problems. But this assumption can’t be taken for granted, given the known impossibility theorems and other results in population ethics. An argument is needed.
I realize now I interpreted “rights” in moral terms (e.g. deontological terms), when Halstead may have intended it to be interpreted legally. On some rights-based (or contractualist) views, some acts that violate humans’ legal rights in order to protect nonhuman animals or future people could be morally permissible.
The longtermist could then argue that an analogous argument applies to “other-defence” of future generations.
I agree. I think rights-based (and contractualist) views are usually person-affecting, so while they could in principle endorse coercive action to prevent the violation of the rights of future people, preventing someone’s birth would not violate that then non-existent person’s rights, and this is an important distinction to make. Involuntary extinction would plausibly violate many people’s rights, but rights-based (and contractualist) views tend to be anti-aggregative (or at least limit aggregation), so while preventing extinction could be good on such views, it’s not clear it would deserve the kind of priority it gets in EA. See this paper, for example, which I got from one of Torres’ articles and which takes a contractualist approach. I think a rights-based approach could treat it similarly.
It could also be the case that procreation violates the rights of future people pretty generally in practice, and then causing involuntary extinction might not violate rights at all in principle, but I don’t get the impression that this view is common among deontologists and contractualists or people who adopt some deontological or contractualist elements in their views. I don’t know how they would normally respond to this.
Considering “innocent threats” complicates things further, too, and it looks like there’s disagreement over the permissibility of harming innocent threats to prevent harm caused by them.
Separately, note that a similar objection also applies to many forms of non-totalist longtermism. On broad person-affecting views, for instance, the future likely contains an enormous number of future moral patients who will suffer greatly unless we do something about it. So these views could also be objected to on the grounds that they might lead people to cause serious harm in an attempt to prevent that suffering.
I agree. However, again, on some non-consequentialist views, some coercive acts could be prohibited in some contexts, and when they are not prohibited, they would not necessarily violate rights at all. The original objection raised by Halstead concerns rights violations, not merely causing serious harm to prevent another (possibly greater) harm. Maybe this is a sneaky way to dodge the objection and doesn’t really dodge it at all, since a similar objection still applies. Also, it depends on what’s meant by “rights”.
Also, I think we should be clear about what kinds of serious harms would in principle be justified on a rights-based (or contractualist) view. Harming people who are innocent or not threats seems likely to violate rights and be impermissible on rights-based (and contractualist) views. This seems likely to apply to massive global surveillance and bombing civilian-populated regions, unless you can argue on such views that each person being surveilled or bombed is sufficiently a threat and harming innocent threats is permissible, or that collateral damage to innocent non-threats is permissible. I would guess statistical arguments about the probability of a random person being a threat are based on interpretations of these views that the people holding them would reject, or that the probability for each person being a threat would be too low to justify the harm to that person.
So, what kinds of objectionable harms could be justified on such views? I don’t think most people would qualify as serious enough threats to justify harm to them to protect others, especially people in the far future.
This seems like a fruitful area of research—I would like to see more exploration of this topic. I don’t think I have anything interesting to say off the top of my head.