I teach international relations and political theory at the University of Nottingham. Much of my research relates to intergenerational ethics, existential risk, or both. I’ve examined cases of ongoing great power peace, and argued that nuclear war is inevitable in the long term if we try to perpetuate nuclear deterrence. I’ve also written extensively about the ethics of climate change, argued that governments should make more use of public debt to address it, and proposed solutions to the non-identity problem and the mere addition paradox.
Matthew Rendall
Thanks—that’s odd. The ‘elephant’ post isn’t showing up on mine.
Politics and the EA Forum
The elephant (and donkey) in the room
Pandemic apathy
Odd! Perhaps this one will work better.
Thanks, Vasco! That’s odd—the Clare Palmer link is working for me. It’s her paper ‘Does Nature Matter? The Place of the Nonhuman in the Ethics of Climate Change’—what looks like a page proof is posted on www.academia.edu.
One of the arguments in my paper is that we’re not morally obliged to do the expectably best thing of our own free will, even if we reliably can, when it would benefit others who will be much better off than we are whatever we do. So I think we disagree on that point. That said, I entirely endorse your argument about heuristics, and have argued elsewhere that even act utilitarians will do better if they reject extreme savings rates.
Thanks, Vasco! You are welcome to list me in the acknowledgements. I’m glad to have the reference to Tomasik’s post, which Timothy Chan also cited below, and appreciate the detailed response. That said, I doubt we should be agnostic on whether the overall effects of global heating on wild animals will be good or bad.
The main upside of global heating for animal welfare, on Tomasik’s analysis, is that it could decrease wild animal populations, and thus wild animal suffering. On balance, in his view, the destruction of forests and coral reefs is a good thing. But that relies on the assumption that most wild animal lives are worse than nothing. Tomasik and others have given some powerful reasons to think this, but there are also strong arguments on the other side. Moreover, as Clare Palmer argues, global heating might increase wild animal numbers—and even Tomasik doesn’t seem sure it would decrease them.
In contrast, the main downside, in Tomasik’s analysis, is less controversial: that global heating is going to cause a lot of suffering by destroying or changing the habitats to which wild animals are adapted. ‘An “unfavorable climate”’, notes Katie McShane, ‘is one where there isn’t enough to eat, where what kept you safe from predators and diseases in the past no longer works, where you are increasingly watching your offspring and fellow group members suffer and die, and where the scarcity of resources leads to increased conflict, destabilizing group structures and increasing violent confrontations.’ Palmer isn’t so sure: ‘Even if some animals suffer and die, climate change might result in an overall net gain in pleasure, or preference satisfaction (for instance) in the context of sentient animals. This may be unlikely, but it’s not impossible.’ True. But even if it’s only unlikely that global heating’s effects will be good, it means that its effects on existing animals are bad in expectation.
There’s another factor which Tomasik mentions in passing: there is some chance that global heating could lead to the collapse of human civilisation—perhaps in conjunction with other factors. In some respects, this would be a good thing for non-humans—notably, it would put an end to factory farming. It would also preclude the possibility of our spreading wild animal suffering to other planets. On the flipside, however, it would also eliminate the possibility of our doing anything sizable to mitigate wild animal suffering on earth.
Now, while there may be more doubt about the upsides than about the downsides of our GHG emissions, that needn’t decide the issue if the upsides are big enough. But even if Tomasik and others are right that wild animal lives are bad on net, there’s also doubt about whether global heating will reduce the number of wild animal lives. And even if both of these premises hold, I’m not sure the benefits would outweigh the suffering global heating would inflict on those wild animals who will exist.
I think you have misinterpreted what my article about discounting is recommending. In contrast to some other writers, I’m not calling for discounting at the lowest possible rate. Even at a rate of 2%, catastrophic damages evaporate in cost-benefit analysis if they occur more than a couple of centuries hence, thus giving next to no weight to the distant future. However, a traditional justification for discounting is that if we didn’t, we’d be obliged to invest nearly all our income, since the number of future people could be so great. I argue for discounting damages to those who would be much better off than we are at conventional rates, but giving sizable—even if not equal—weight to damages that would be suffered by everyone else, regardless of how far into the future they exist. My approach thus has affinities with the one advocated by Geir Asheim here.
One implication is that while we’re under no obligation to make future rich people richer, we ought to be very worried about worst-case climate change scenarios, since in those humans could be poorer. Another is that since most non-humans for the foreseeable future will be worse off than we are, we shouldn’t discount their interests away.
Vasco, I’ve read the post to which the first link leads only quickly, so please correct me if I’m wrong. However, it left me wondering about two things:
(a) It wasn’t clear to me that the estimate of global heating damages was counting global heating damages to non-humans. The references to DALYs and ‘climate change affecting more people with lower income’ lead me to suspect you’re not. But non-humans will surely be the vast majority of the victims of global heating—as well as, in some cases, its beneficiaries. While Timothy Chan is quite right to point out below that this is a complex matter, it certainly isn’t going to be a wash, and if the effects are negative, they’re likely to be very bad.
(b) It appears you were working with a study that employed a discount rate of 2%. That’s going to discount damages in 100 years to 13% of their present value, and damages in 200 years to 1.9% of their present value—and it goes downhill from there. But that seems very hard to justify. Discounting is often defended on the ground that our descendants will be richer than we are. But that rationale doesn’t apply to damages in worst-case scenarios. Because they could be so enduring, these damages are huge in expectation. Second, future non-humans won’t be richer than we are, so benefits to them don’t have diminishing marginal utility compared with benefits to us.
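The present-value factors quoted here can be checked directly with standard exponential discounting (a minimal sketch; the 2% rate and the horizons are from the study discussed above):

```python
# Present value of $1 of damages occurring t years from now,
# at a constant annual discount rate r (standard exponential discounting).

def discount_factor(r: float, t: float) -> float:
    return 1 / (1 + r) ** t

print(discount_factor(0.02, 100))  # ~0.138: roughly 13% of present value
print(discount_factor(0.02, 200))  # ~0.019: roughly 1.9% of present value
```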
The US government—including, so far as I know, the EPA—uses a discount rate higher than 2%, which makes future damages from global heating evaporate even more quickly. What’s more, I’d be surprised if it’s trying to value damages to wild animals in terms of the value they would attach to avoiding them, as opposed to the value that American human beings do. The latter approach, as Dale Jamieson has observed, is rather like valuing harm to slaves by what their masters would pay to avoid it.
So far as it goes, your argument seems correct. But you’re leaving out a significant factor here—carbon emissions. Beef cattle are extraordinarily carbon intensive even compared to other animals raised for food. If you eat them, your emissions, combined with other people’s emissions, are going to cause a huge amount of both human and non-human suffering.
There’s a complication. You could, in principle, offset the damage from your carbon emissions. But you could also, in principle, eat animals who have been raised free range, and whose lives have probably been worth living up to the time they’re killed.
Both of these will require you to spend extra money, and investigate whether you’re really getting what you pay for. Rather than going to all this trouble—and here we’ll agree—it seems a lot better simply to eat an Impossible Burger.
I think we’re talking past each other. My claim is that taking precautionary measures in case A will prevent more deaths in expectation (17 billion/1000 = 17 million) than taking precautionary measures in case B (8 billion/1000 = 8 million). We can all agree that it’s better, other things being equal, to prevent more deaths in expectation than fewer. On the Intuition of Neutrality, other things seemingly are equal, making it more important to take precautionary measures against the virus in A than against the virus in B.
But this is a reductio ad absurdum. Would it really be better for humanity to go extinct than to suffer ten million deaths from the virus per year for the next thousand years? And if not, shouldn’t we accept that the reason is that additional (good) lives have value?
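The expectation arithmetic here is simple enough to spell out (numbers taken from the example above, with the 1-in-1,000 probability implicit in the ‘/1000’ figures):

```python
# Expected deaths in each case, given a 1-in-1,000 chance the virus takes hold.

P_VIRUS = 1 / 1000

deaths_A = 17_000_000_000  # 7bn at once, then 10m/year for 1,000 years
deaths_B = 8_000_000_000   # immediate extinction

expected_A = P_VIRUS * deaths_A  # 17 million
expected_B = P_VIRUS * deaths_B  # 8 million
print(expected_A > expected_B)   # precautions against A prevent more expected deaths
```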
Thanks, Richard! I’ve just had a look at your post and see you’ve anticipated a number of the points I made here. I’m interested in the problem of model uncertainty, but most of the treatments of it I’ve found have been technical, which isn’t much help to a maths illiterate like me. Some of the literature on moral uncertainty is relevant, and there’s an interesting treatment in Toby Ord, Rafaela Hillerbrand and Anders Sandberg’s paper here. But I’d be glad to learn of other philosophical treatments if you or others can recommend any.
Thanks! Just my subjective judgement. I feel pretty confident that 0.5% would be too low. I’d be more open to the view that 5–10% isn’t high enough. If the latter is true, then that would strengthen my argument. I’d be interested in what other people think.
Thanks! I was indeed assuming total extinction in B. As you say, antinatalist views will prefer A to B. If antinatalism is correct, then my argument against the intuition of neutrality fails.
Our discussion has been helpful to me, because it’s made me realise that my argument is really directed against views that accept the intuition of neutrality, but aren’t either (a) antinatalist or (b) narrow person-affecting.
That does limit its scope. Nevertheless, common sense morality seems to accept the intuition of neutrality, but not anti-natalism. Nor does it seem to accept narrow person-affecting views (thus most laypeople’s embrace of the No Difference View when it comes to the non-identity problem). It’s that ‘moderate middle’, so to speak, at whom my argument is directed.
Extinction risk and longtermism: a broader critique of Thorstad
Thanks—that’s very helpful. On a wide person-affecting view, A would be worse, but if we limit our analysis to present/necessary people, then outcome B would be worse. That had not occurred to me, probably because I find narrow person-affecting views so implausible.
However, it doesn’t seem very damaging to my argument. If we take a hardcore narrow person-affecting view, the extra ten billion deaths shouldn’t count at all in our assessment. But surely that’s very hard to believe.
Alternatively, if we adopt what Parfit calls a ‘two-tier view’, then we’d give some weight to the deaths of the contingent people in scenario A, but less than to the deaths of present/necessary people. Even if we discounted them by a factor of five, however, scenario A would still be worse than scenario B. What is more, we can adjust the numbers:
Scenario A: Seven billion necessary people die immediately and ten million die annually for the next 10,000 years for a total of 107 billion. Most of the future people are contingent.
Scenario B: Eight billion die at once. All are necessary people.
On the two-tier view, deaths of necessary people would have to be more than a hundred times as bad as those of contingent ones for B to be worse. That is hard to believe.
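A quick way to see the threshold (a sketch using the scenario numbers above, treating the two-tier discount on contingent deaths as a weight w):

```python
# Scenario A: 7bn necessary deaths + 100bn contingent deaths (10m/year for 10,000 years).
# Scenario B: 8bn necessary deaths, no contingent deaths.
# On a two-tier view, contingent deaths count 1/w as much as necessary deaths.

def badness_A(w: float) -> float:
    """Weighted badness of scenario A, in billions of necessary-death equivalents."""
    return 7 + 100 / w

BADNESS_B = 8.0

print(badness_A(5))            # 27.0: even discounted by 5, A is far worse than B
print(badness_A(99) > BADNESS_B)   # True: A still worse just below the threshold
print(badness_A(101) < BADNESS_B)  # True: B worse only once w exceeds 100
```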
Bottom line:
Plausible person-affecting views will judge A better than B.
That A is better than B is, however, implausible.
∴ No otherwise plausible person-affecting view renders a plausible judgement about this case.
∴ Person-affecting views do not provide a convincing rationale for rejecting my argument against the Intuition of Neutrality.
Actually, I guess that on a narrow person-affecting view, the first outcome would not be worse than the second, because plausibly a pandemic of this kind would affect the identities of subsequent generations. Assuming the lives of the people who died were still worth living, while the first virus would be worse for people—because it would kill ten billion more of them—it would not, for the most part, be worse for particular people. But that seems like the wrong kind of reason to conclude that A is better than B.
Thanks! Perhaps I haven’t grasped what you’re saying. In my example, if the first virus mutates, it’ll be the one that kills more people: 17 billion. If the second virus mutates, the entire human population dies at once from the virus, so only 8 billion people die in toto.
On either wide or narrow person-affecting views, it seems like we have to say that the first outcome—seven billion deaths and then ten million deaths a year for the next millennium—is worse than the second (extinction). But is that plausible? Doesn’t this example undermine person-affecting views of either kind?
Thanks for the feedback! As a matter of fact, I agree that the second scenario is worse. My aim was to undermine the ‘intuition of neutrality’—the claim that we have no reason to create additional happy lives. Perhaps it’ll help to state the argument in the form of a syllogism:
1. Premise: If the government can either (A) save an expected 7 million human lives now and another expected 10 million over the next thousand years or (B) save humanity from destruction, it’s at least as good to do (B).
2. Intuition of Neutrality: It’s good to save or improve existing lives, but it isn’t good to create new ones. (Or to quote Jan Narveson, ‘We are in favor of making people happy, but neutral about making happy people.’)
3. If the government adopts policy A, it will save 17 million lives in expectation.
4. If the government adopts policy B, it will save 8 million lives in expectation.
5. ∴ Policy A will save more existing lives in expectation than policy B (by 3 and 4).
6. ∴ If the Intuition of Neutrality is correct, policy A must be better than policy B (by 2 and 5).
7. But policy A is not better than policy B (by 1).
8. ∴ The Intuition of Neutrality must be false (by 6 and 7).
That might be right—but then wouldn’t it be a major problem for EA if it were unable to discuss rationally one of the most important factors determining whether it achieved its goals? This election is likely to have huge implications not only for how (or whether) the world manages a number of x-risks to a minimally satisfactory extent, but also for many other core EA concerns such as international development, and probably farm animals too (a right-wing politician with a deregulatory agenda, for whom ‘animals’ is a favourite insult, is scarcely going to have their welfare at heart).