A response to Michael Plant’s review of What We Owe The Future

(Also posted to my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)

Michael Plant, philosopher and founder of the Happier Lives Institute, recently wrote an open-access review of Will MacAskill’s book What We Owe The Future.

Plant states that he found the case for longtermism put forward in the book “unpersuasive” and provides some reasons for this in the review. I didn’t find these reasons very convincing and am happy to remain a longtermist—this post explains why. My aim isn’t so much to defend What We Owe The Future as it is to defend longtermism.

Summary of Plant’s challenges to the book

Plant has four challenges to the book:

  1. The book is ambiguous on how seriously we should take longtermism (i.e. how much of our resources should go towards longtermist causes as opposed to, say, global poverty).

  2. The following premises aren’t as simple or uncontroversial as the book suggests:

    1. Future people count.

    2. We can make their lives go better.

  3. The book doesn’t do enough to demonstrate that we can actually shape the long-term future, as there are no clear examples of past successes in doing so.

  4. It is unclear whether longtermism, if true, would alter our priorities as we already have reason to worry about some ‘longtermist’ causes—for example risks from Artificial Intelligence.

#1 is a critique of the book and not of longtermism per se, so I cover it briefly at the end of the post in an addendum. In short, this may be a valid criticism of the book—it has been a while since I read it, but in podcasts I recall MacAskill saying that how much of our resources we should devote to longtermism is an open question. MacAskill does say, however, that it is clear we should be devoting far more resources than we currently do.

#4 is a critique related to #1: without specifying how seriously we should take longtermism, we can’t really say how much it would alter our current priorities. I will leave this to the addendum as well.

#2 and #3 are where I want to respond in some detail.

Plant misrepresents MacAskill’s view on the implications of the ‘intuition of neutrality’ for longtermism

The intuition of neutrality is the view in population ethics that, roughly, adding a person to the population is in itself ethically neutral. If it is ethically neutral to add people, we shouldn’t be concerned by the fact that extinction would remove the possibility of future people.

Plant states in his summary of the book:

If [the intuition of neutrality] is correct, this would present a severe challenge to longtermism: we would be indifferent about bringing about those future (happy) generations.

By including this in the summary section, Plant (incorrectly) implies that this is a view MacAskill espouses in the book. Quoting from the book itself, MacAskill says (emphasis mine):

If you endorse the intuition of neutrality, then ensuring that our future is good, while civilisation persists, might seem much more important than ensuring our future is long. You would still think that safeguarding civilisation is good because doing so reduces the risk of death for those alive today, and you might still put great weight on the loss of future artistic and scientific accomplishments that the end of civilisation would entail. But you wouldn’t regard the absence of future generations in itself as a moral loss.

MacAskill does not think that longtermism would be undermined if the intuition of neutrality is correct. He does think it would undermine the importance of ensuring we don’t go extinct, but he holds that other longtermist approaches, ones that improve the quality of the future conditional on our survival, would remain important.

As I have previously explained, these approaches might include:

  • Mitigating climate change.

  • Institutional design.

  • Ensuring aligned AI.

Even if this is Plant’s view, he is wrong to state as fact, or imply, that MacAskill believes the intuition of neutrality being correct would “present a severe challenge to longtermism”. Like MacAskill, I disagree that it would.

I am unconvinced that “many” disagree that ‘future people count’

From Plant’s review:

Far future people might little count in practice, because they are largely hypothetical, and hypothetical people may not count. The premise ‘future people count’ is not straightforward.

The intuition of neutrality is based on the notion there is a morally relevant distinction between people who do exist and those who could exist. MacAskill may not be sympathetic to the intuition of neutrality, but many philosophers think, after serious reflection, it is approximately correct.

Plant raises the “intuition of neutrality” as the justification for doubting that hypothetical future people count. Plant says that “many philosophers think, after serious reflection, it [the intuition of neutrality] is approximately correct”. He doesn’t define what “many” means, and he gives only two examples. I am left having to take Plant at his word that “many” philosophers accept the intuition of neutrality, because he certainly hasn’t provided any evidence for that claim—one cannot reasonably consider two people to be “many”.

There is in fact evidence that people generally do not endorse the intuition of neutrality, viewing it as good to create new happy people and as bad to create new unhappy people. This is very unsurprising to me. For example, I think it would be difficult to find even a small number of people who would think it morally neutral to create a life that was certain to undergo immense agony for its entire existence.

What do philosophers think? Unfortunately I am unaware of a clear survey on the matter. Off the top of my head, I am aware of Johann Frick, Jan Narveson and (after reading Plant’s review) Melinda Roberts endorsing the intuition of neutrality. I am aware of far more who reject it or on balance seem against it. These include Will MacAskill, Hilary Greaves, John Broome, Derek Parfit, Peter Singer, Torbjörn Tännsjö, Jeff McMahan, Toby Ord, and probably each of the 28 people who publicly stated they don’t find the repugnant conclusion problematic (five of whom I have already listed).

Of course, my being aware of more philosophers who are against rather than for the intuition of neutrality doesn’t prove anything. My point is that Plant certainly hasn’t changed my impression that the intuition of neutrality is a fairly fringe view amongst both philosophers and the general public. To do that, he would have to provide some evidence.

Plant presents a very one-sided analysis of the non-identity problem

In arguing against MacAskill’s premise that “we can make [future people’s lives] go better”, Plant raises a (possibly) valid point:

Our actions today will change who exists later. If we enact some policy, then Angela will never exist, but Bob will. The result is that we cannot make the far future better for anyone in particular: it is not better for Angela not to exist, and it is not good for Bob to exist (intuitively, existence is never better for a person); this is the infamous non-identity problem. A direct implication of this, however, is that (3) seems false: we cannot make the lives of future people go better: all we can do is cause someone not to exist and someone else to exist instead. In one sense then, people far away in time are just like those far away in space: we are powerless to help them.

I’m fairly agnostic on most of what Plant is saying here. While it has been argued that it might be better for a person to exist than not to, I am unclear on how many philosophers accept this, and I am not sure where I personally land on the issue. Ultimately this is all a moot point for me, because I am not really interested in making future lives go better. Instead, I am interested in improving the future.

The impression Plant gives in his review is that improving the future is not possible because of the non-identity problem. This is a very one-sided view of the non-identity problem. As the Stanford Encyclopedia of Philosophy (SEP) page on the non-identity problem points out, one can also view the non-identity problem as a problem for the person-affecting intuition that “an act can be wrong only if that act makes things worse for, or (we can say) harms, some existing or future person”. Why? Because combining the person-affecting intuition with the fact that our actions change who will live in the future can lead to some very counterintuitive conclusions.

Let me give two examples:

  1. Emitting carbon dioxide in vast quantities is fine: emitting carbon dioxide in vast quantities would speed up climate change and make Earth much hotter and much less pleasant to live on in the future. Emitting carbon dioxide, however, changes future identities, so it doesn’t actually make things worse for anyone in particular. The conclusion? Emit carbon dioxide to your heart’s content.

  2. Enacting a policy that involves placing millions of landmines, set to go off in 200 years, in random locations on Earth is fine: let’s assume future people don’t know about this policy. The landmines are going to cause a vast amount of suffering: killing people, leaving children parentless, and leaving people permanently incapacitated. Placing the landmines changes future identities, though, so it doesn’t actually make things worse for anyone in particular. Conclusion: go ahead, place the landmines, who cares?

The first example, at least, is very well known and powerful. Indeed, the SEP article states that rejecting the person-affecting intuition has been the most common response to the non-identity problem (which, incidentally, might also imply that most philosophers reject the ‘intuition of neutrality’).

Some philosophers have rejected the person-affecting intuition by adopting an ‘impersonal’ account of wrongdoing, in which the identities of affected individuals are not important. A common ‘impersonal’ account is ‘total utilitarianism’, which judges states of the world by adding up the total amount of wellbeing in each state (not caring who this wellbeing accrues to). Adopting total utilitarianism implies that both of the actions in the above examples (emitting carbon dioxide and placing landmines) are likely wrong, because they will reduce future wellbeing. This is a very intuitive way of judging these policies.
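To make this concrete, here is a minimal formalisation of total utilitarianism (the notation is mine, not the book’s or the review’s). Write P(s) for the set of people who exist in a state of the world s, and w_i(s) for the lifetime wellbeing of person i in that state. Total utilitarianism ranks states by total wellbeing:

    V(s) = \sum_{i \in P(s)} w_i(s)

Because V(s) depends only on the wellbeing levels and not on whose they are, the non-identity problem gets no grip on it: if a policy swaps Angela for Bob but lowers the sum, the policy still counts as making things worse.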

With regard to total utilitarianism, Plant says that “many (still) regard it as a non-starter” due to Parfit’s “repugnant conclusion”. Again, no evidence is provided for the claim that “many” hold this view.
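For readers unfamiliar with it, the repugnant conclusion is what this kind of summation implies once population size can vary: a world Z of very many people with lives barely worth living can be ranked above a world A of fewer people with excellent lives, simply because the totals say so. As a stylised illustration (the numbers are mine, chosen only to make the arithmetic obvious), compare 10 billion people each at wellbeing 100 with 10^15 people each at wellbeing 2:

    V(A) = 10^{10} \times 100 = 10^{12} \qquad V(Z) = 10^{15} \times 2 = 2 \times 10^{15}

so total utilitarianism prefers Z. Many find that ranking repugnant, hence the name; the 28 philosophers mentioned above are among those who have publicly said they do not regard it as a decisive objection to theories like total utilitarianism.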

Plant might say there wasn’t space to cover these nuances but, honestly, these aren’t nuances—these are extremely important points that are core to the non-identity problem and to our moral consideration of the far future. I won’t go so far as to call Plant dishonest for omitting these considerations but, given that Plant is a PhD philosopher, I find it puzzling that he could present such a one-sided analysis of the non-identity problem. Given this, I personally find the following passage in his review quite ironic:

Hence, it seems objectionable to describe the premises as simple and uncontroversial, especially when the readers are primarily non-philosophers who are liable to take MacAskill at his word. I appreciate MacAskill is trying to spare the reader the intricacies of population ethics. Yet, there is a difference between saying ‘This is incredibly complicated, but everyone ultimately agrees’ and ‘This is incredibly complicated, people disagree furiously, and my argument (may) rely on a controversial view I will only briefly defend.’

Plant dismisses being able to shape the far future much too easily

In his review, Plant expresses skepticism that ending slavery was an example of influencing the long-run future, because he doesn’t think it was “contingent”; i.e. he doubts that, had the abolitionists not done what they did, slavery might never have been abolished (or at least not for a very long time).

Plant does no more than state that he doubts the abolition of slavery was contingent. He doesn’t engage with the arguments for contingency that MacAskill puts forward in the book. I’m going to put that aside and assume that Plant is right on this point.

Even if Plant is right that the abolition of slavery wasn’t contingent, his dismissal of the idea that we can shape the future seems much too strong. First, he says:

I would like to have seen MacAskill explore the extent to which the case for longtermism depends on being able to point to past successes. We should be sceptical that we will succeed if others have failed – unless we can identify what is different now.

There are in fact a few things that are different now:

  1. We actually have a concept of longtermism now: there is a whole community of people trying to influence the far future. Before Parfit’s Reasons and Persons was published in 1984, the moral importance of influencing the far future wasn’t really on anyone’s radar. Longtermism wasn’t solidified as a concept until around four years ago. Actually having a community of people trying to influence the far future seems like a good reason to believe it is now more likely than it once was that we will succeed in doing so.

  2. We now have the technological means to destroy or nearly destroy humanity: making humanity go extinct, or damaging civilisation so badly that it would be unlikely to recover, simply wasn’t possible in the past. Now we could do the following (none of which is overly crazy to imagine happening):

    1. Destroy or nearly destroy humanity through nuclear war.

    2. Cause runaway climate change that could result in a slower long-run growth rate or permanently reduce the planet’s carrying capacity.

  3. Technological progress means some concerning developments are on the horizon: there are further (more speculative) existential risks that could wipe out our future potential:

    1. Malicious actors could engineer pandemics that destroy or nearly destroy humanity.

    2. Artificial Intelligence (AI) could pose an existential risk (as many (yes “many”) believe, including some of those who were pivotal in the development of modern AI).

Plant goes on to doubt the claim that we are living at the ‘hinge of history’, i.e. at an especially pivotal, if not the most pivotal, period that will ever occur. For what it’s worth, I have some sympathy for Plant’s view—overall I am unsure. However, it is not true that not being at the ‘hinge of history’ invalidates longtermism. As MacAskill’s previous work argues, not being at the hinge implies that we should invest for a particularly pivotal time in the future, e.g. by investing money, through movement-building, or through research. This is captured by the concept of ‘patient longtermism’, which has received some attention within the EA community.

Conclusion

Overall, I am left unmoved by Plant’s criticisms. He has omitted a lot of relevant detail. I am aware that he wrote a book review rather than an attack on longtermism, which may explain some of the omissions. Having said that, his final decision to “sit out” on longtermism is not justified by the points he includes in his review. I wouldn’t want readers of his review to take an Oxford philosopher at his word when so much that is hugely important and relevant has been left unsaid.

Addendum—does longtermism mean revolution?

The above concludes my criticism of Plant’s review, but I wanted to opine on Plant’s point that MacAskill isn’t clear enough on how seriously we should take longtermism and on whether longtermism would constitute a ‘revolution’. As I stated up front, I think it may be a fair criticism that MacAskill was too vague about how seriously we should take longtermism in terms of the amount of resources that should go towards longtermist causes (although I suspect MacAskill was intentionally vague for instrumental reasons).

My personal view is that, if we accept the premises of longtermism (as I do), we should take longtermism exceedingly seriously. After all, we have the whole future at stake.

Ideally, I think we would entirely reorient society towards making the future go well. This would mean every single person (yes, everyone) thinking about how they can best improve the far future and acting on that basis. In practice this isn’t feasible, but I’m talking about the best-case scenario.

Would that make our society into some hellscape where we let people die on the street because we don’t actually care about present people? In short—no. Such a society would not be a stable one: there would be revolutions and chaos, and we certainly wouldn’t be able to focus on improving the far future. A world where we all come together and work collaboratively to improve the future is a world in which we necessarily help as many people as possible work productively towards that goal—and that means very little poverty and suffering. Those with the ‘personal fit’ to help current people work productively towards that goal would do exactly that—not everyone can work on technical AI alignment.

Does that mean a longtermist society actually looks quite similar to the one we have now after all? No, I don’t think so. I think a longtermist society is one in which every person is taught longtermism at school. Every person chooses their career based on longtermist principles. Virtually the whole machine-learning community would be working on the technical problem of AI alignment. Governments would have departments for reducing existential risk and for future generations. 100% of philosophers would be working on the question “how can we best improve the far future?”. We would save far more than we currently do and do far more to mitigate climate change. We might even have widespread surveillance to ensure we don’t destroy ourselves (to be clear, I am personally unsure whether this would be required or desirable). We would have everyone on earth working together to improve the far future instead of what we have now—countries working against each other to come out on top.

So my answer to the question ‘would the truth of longtermism spur a revolution?’ is a resounding YES.