Response to Recent Criticisms of Longtermism

This piece is a response to two recent essays by Phil Torres that are critical of longtermism. It does not presume familiarity with longtermism, and is thus not directed towards regular Forum readers, who will likely already know this material.

Introduction

Recently, Phil Torres wrote two essays which are critical of longtermism. For the sake of brevity, this piece does not summarize them but assumes the reader has previously read them. My view is that Torres misportrays longtermism. So, in this essay I introduce longtermism, and then explain and respond to the main criticisms that Torres offers of it.

This is a long piece, so I encourage you to skip to the sections which most interest you. Here is the basic structure:

  • What is longtermism?

  • Criticisms of longtermism

  • Beware of Missing Context

  • Climate change

  • Potential

  • Non-existential catastrophes, prioritization, and difficult tradeoffs

  • Limiting Conditions

  • Does potential inherently include transhumanism, space expansionism, and total utilitarianism?

  • Technological Development

  • Conclusion

What is longtermism?

Longtermism is the view that positively influencing the long-term future is a key moral priority of our time. It’s based on the ideas that future people have moral worth, that there could be very large numbers of future people, and that what we do today can affect how well or poorly their lives go [1].

Humanity might last for a very long time. A typical species’ lifespan would mean there are hundreds of thousands of years ahead of us [2], and the Earth will remain habitable for hundreds of millions of years [3]. If history were a novel, we might be living on its very first page [4]. More than just focusing on this mind-bending scope, we can imagine — at least vaguely — the characters that might populate it: billions and billions of people who will feel the sun on their skin, fall in love, laugh at a joke, and experience all the other joys that life has to offer. Yet our society pays relatively little attention to how our actions might affect people in the future.

Concern for future generations is not a new idea. Environmentalists have advocated for the interests of future people for many decades, and such concern is enshrined in the Iroquois Nation’s constitution. John Adams, the second U.S. president, believed American institutions might last for thousands of years [5], while Ben Franklin bequeathed money to American cities under the provision that it could only be used centuries later.

That being said, there are several distinctive aspects of recent longtermist research and thinking, including the sheer timescales under consideration, the particular global problems that have been highlighted, and the consideration for the immense potential value of the future. Those engaged in longtermist research often look for events that will impact not just centuries, but potentially the whole future of civilisation — which might amount to millions or even billions of years. As for the global problems, a particular focus has been on existential risks: risks that threaten the destruction of humanity’s long-term potential [6]. Risks that have been highlighted by longtermist researchers include those from advanced artificial intelligence, engineered pathogens, nuclear war, extreme climate change, global totalitarianism, and others. If you care about the wellbeing of future generations, and take the long term seriously, then it’s of crucial importance to mitigate these or similarly threatening risks. Finally, recent longtermist thinking is distinct in its consideration of the magnitude of value that could exist, and the potential harm that could occur if we fail to protect it. For example, existential risks could bring about the extinction of humanity or all life on earth, the unrecovered collapse of civilisation, or the permanent, global establishment of a harmful ideology or some unjust institutional structure.

Criticisms of longtermism

Much of Torres’s criticism misses the mark because he does not accurately explain what longtermism is, and fails to capture the heterogeneity of longtermist thought. He does sometimes gesture at important issues that require further discussion and reflection among longtermists, but because he often misrepresents longtermist positions, he ultimately adds more heat than light to those issues.

I do not mean to deter criticism in general. I have read critical pieces which helped refine and sharpen my own understanding of what longtermism should be aiming for, but I think it is also important to respond to criticism — particularly to the elements which seem off-base.

One housekeeping note — this piece largely focuses on criticisms from the Aeon essay, as it is more comprehensive. I have tried to note when I am answering a point that is solely in the Current Affairs piece.

Beware of Missing Context

If this is what longtermism is, why does it seem otherwise in Torres’ articles? One answer is selective quotation.

For example, Torres quotes Bostrom saying that “priority number one, two, three and four should … be to reduce existential risk”. But he omits the crucial qualifier at the beginning of the sentence: “[f]or standard utilitarians.” Bostrom is exploring what follows from a particular ethical view, not endorsing that view himself. Indeed, Bostrom is not even a consequentialist [7]. Much the same can be said for Greaves and MacAskill’s paper “The Case for Strong Longtermism,” where they work through the implications of variations on “total utilitarianism” before discussing what follows if this assumption is relaxed. Torres cuts the framing assumptions around these quotations, which are critical to understanding the contexts in which these conclusions actually apply.

More generally, it should be borne in mind that Torres quotes from academic philosophy papers and then evaluates the quoted statements as if they were direct advice for everyday action or policy. It should not be surprising that this produces strange results — nor is it how we treat other philosophical works; otherwise we would spend a lot of energy worrying about letting philosophy professors get too close to trolleys.

In another instance, Torres quotes Bostrom’s paper “The Future of Humanity” to show how longtermism makes one uncaring towards non-existential catastrophes. In a section where Bostrom distinguishes catastrophes that kill all humans or permanently limit our potential from catastrophes that have no permanent effect on humanity’s development, Torres highlights the fact that Bostrom calls this latter group of events “a potentially recoverable setback: a giant massacre for man, a small misstep for mankind.” Torres does not mention the very next line, where Bostrom writes, “[a]n existential catastrophe is therefore qualitatively distinct from a ‘mere’ collapse of global civilization, although in terms of our moral and prudential attitudes perhaps we should simply view both as unimaginably bad outcomes.” Bostrom distinguishes between the concepts of an existential catastrophe and the collapse of civilization, and immediately suggests that we should regard both as unimaginably bad. The non-existential catastrophe does not shrink in importance from the perspective of longtermism. Rather, the existential catastrophe looms even larger — both outcomes remain so bad as to strain the imagination.

A particularly egregious example of selective quotation comes when Torres quotes three sentences from Nick Beckstead’s PhD thesis, in which Beckstead claims it is plausible that saving a life in a rich country is more instrumentally important — because of its impacts on future generations — than saving a life in a poor country. In his Current Affairs piece, Torres claims that these lines could be used to show that longtermism supports white supremacy. All Torres uses to support this claim are three sentences from a 198-page thesis. He states, before he offers the lines, that “[Toby] Ord enthusiastically praises [the thesis] as one of the most important contributions to the longtermist literature,” without noting that Ord might be praising any of the other 197 pages. The rest of the thesis does not deal with obligations to those in rich or poor countries; it argues that “from a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.” It is primarily a forceful moral argument for the value of the long-term future and the actions we can take to protect it.

Torres also omits relevant context about Beckstead himself. Beckstead was among the first members of Giving What We Can, a movement whose members have donated over $240 million to effective charities, primarily in lower-income countries. He joined GWWC in 2010, when it was solely focused on global poverty, founded the first GWWC group in the US, donated thousands of dollars to global poverty interventions as a graduate student making about $20,000 per year, and served on the organization’s board. All of this was happening while he wrote the dissertation.

But what about the specific quote? In it, Beckstead is considering which interventions are likely to save the lives of future people — who are, inherently, a powerless and voiceless group. When he says that he might favor saving a life in a wealthy country, he is not saying this because he believes that person is intrinsically more valuable. As a utilitarian-leaning philosopher, Beckstead holds that these lives have the same intrinsic value. He then raises a further consideration about the second-order effects of saving different lives: the person in the wealthy country might be better placed to prevent future catastrophes or invent critical technology that will improve the lives of our many descendants. Moreover, in the actual lines Torres quotes, Beckstead writes that this conclusion only holds “all else being equal,” and we know all else is not equal — donations go further in lower-income countries, and interventions there are comparatively neglected, which is why many prominent longtermists, including Beckstead, have focused on donating to causes in low-income countries. The quote is part of a philosophical exploration that raises some complex issues, but it does not have clear practical consequences. It certainly does not mean that in practice longtermists support saving the lives of those in rich countries rather than poor ones.

In general, throughout the two Torres pieces, one should be wary of taking any particularly surprising quotation about longtermism at face value, given how frequently the quotations are stripped of important context. Reading the complete pieces they come from will bear this out. It seems that Torres begins with the aim of proving longtermism dangerous and misguided, and is willing to shape the quotes he finds to this end, rather than giving a more balanced and carefully argued view of the philosophy.

Now I would like to go through the various criticisms that Torres raises about longtermism and answer them in greater depth.

Climate change

Torres is critical of longtermism’s treatment of climate change. Torres claims that longtermists do not call climate change an existential risk, and he conflates not calling climate change an existential risk with not caring about it at all. There are several questions to disentangle here:

  • A values question: do longtermists care about climate change or think it is worth working to mitigate?

  • An empirical question: will climate change increase the risk of the full extinction of humanity or an unrecoverable collapse of civilization?

  • And a terminological question: based on the answers to the two questions above, should we call climate change an existential risk?

The answer to the first question is straightforward. Longtermists do care about climate change. There are researchers at longtermist organizations who study climate change; there are active debates among longtermists over how best to use donations to mitigate it; and longtermists have helped contribute millions to climate change charities. There is active discussion about nuclear power, and about how to ensure that, if geoengineering is done, it is done safely and responsibly. These are not the hallmarks of a community that does not care about climate change. It is fair to say that longtermists direct fewer resources towards climate change than towards other causes like biosecurity or AI safety, but this has much to do with how many resources are already being directed towards climate change versus these other issues, a point discussed further below.

There is disagreement among longtermists on the empirical question about whether, and the degree to which, climate change increases the risk of the full extinction of humanity or an unrecoverable collapse of civilization. Some think climate change is unlikely to cause either outcome. Some think it is plausible that it could. Open questions include the extent to which climate change:

  • Exacerbates the risk of war between great powers [8]

  • Slows down technological progress [9]

  • Inhibits civilisational recovery after a collapse

  • Could trigger an extreme feedback effect (such as the burn-off of stratocumulus clouds, leading to 8 degrees of warming over the course of a year [10]).

Both groups agree that climate change will have horrible effects that are worth working to prevent.

Finally, on the terminological question: for longtermists who do not think climate change will cause the full extinction of humanity or an unrecoverable collapse of civilization, it makes sense that they do not call it an existential risk, given the definition of existential risk. We have terms to designate different types of events: if someone calls one horrible event a genocide and another a murder, this does not imply that they are fine with murders. Longtermists still think climate change is very bad, and are strongly in favour of climate change mitigation.

Torres repeatedly suggests that longtermists are callous not to call climate change an existential risk, but he does not even argue that climate change is one. In the Aeon piece, he at times refers to it as a “dire threat” and says that climate change will “caus[e] island nations to disappear, trigge[r] mass migrations and kil[l] millions of people.” Longtermists would agree with these descriptions — and would certainly think these are horrible outcomes worth preventing. What Torres does not argue is that climate change will cause the “premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” [11].

There are many reasons for longtermists to care about climate change. These include the near-term suffering it will cause, its genuinely long-term effects [12], and the fact that climate change will worsen other existential threats, which we will return to below. Additionally, many climate activists who have never heard the word “longtermism” are motivated by concern for future people, appealing, for example, to the effects of climate change on the lives of their children and grandchildren.

Potential

Torres never clearly defines longtermism. Instead he writes, “The initial thing to notice is that longtermism, as proposed by Bostrom and Beckstead, is not equivalent to ‘caring about the long term’ or ‘valuing the wellbeing of future generations’. It goes way beyond this.” What Torres takes issue with is that longtermism does not focus only on the suffering involved in all humans being annihilated, but holds that there is some further harm from a loss of “potential.”

Torres misses something here, and continues to miss it throughout the rest of the piece — potential is not some abstract notion; it refers to the billions of people [13] who would never get to exist. Imagine if everyone on earth discovered they were sterile. There would obviously be suffering from the fact that many people living today want children and would now realize they cannot have them, but there would also be an additional badness in the fact that no one would be around to experience the good things about living. We might be glad that no one is around to experience suffering. But there would also be no one around to witness the beauty of nature, to laugh at a joke, to listen to music, to look at a painting, or to have any of the other worthwhile experiences in life. This seems like a tragic outcome [14].

Torres keeps his discussion of potential abstract and mocks the grand language that longtermists use to describe our “extremely long and prosperous future,” but he never makes the connection that “potential” implies actual experiencing beings. Extinction forecloses the lives of billions and billions. Yes, that does seem terrible.

As with longtermism, Torres does not offer a clear definition of existential risk. Existential risks are those risks which threaten the destruction of humanity’s long-term potential. Some longtermists prefer to focus only on risks which could cause extinction, because this is a particularly crisp example of this destruction. There are fairly intuitive ways to see how other outcomes might also cause it: imagine a future where humanity does not go extinct, but instead falls into a global totalitarian regime maintained by technological surveillance so effective that its residents can never break free from it. That seems like a far worse future than one where humanity is free to govern itself in the way it likes. This is one example of “locking in” a negative future for humanity, but not the only one. Working to prevent extinction, or outcomes where humanity is left in a permanently worse position, is a sensible and valuable pursuit. Torres also quips that longtermists have coined a “scary-sounding term” for catastrophes that could bring about these outcomes: “an existential risk.” It seems deeply inappropriate to think that extinction or permanent harm to humanity should be met with anything other than a “scary-sounding” term.

By leaving “potential” abstract, Torres conceals the fact that this refers to the billions of sentient beings who will not get to experience the world if an existential catastrophe occurs. When it is clear that this is what potential refers to, then it becomes much more obvious why longtermism places the importance it does on preventing existential catastrophes.

Non-existential catastrophes, prioritization, and difficult tradeoffs

Limited time and resources force us to make painful and difficult tradeoffs in what we work on and, more crucially, in whom we help. This is an awful situation to be in, but one we cannot escape. We all know the sensation of opening the newspaper or the newsfeed, feeling overwhelmed by the amount of suffering or injustice we see in the world, and not knowing how to begin alleviating it. It certainly can seem callous to work on reducing the risk of an existential catastrophe when so many people are suffering in the present — just as some may feel it is callous to work on climate change mitigation when so many are starving, or sick, or homeless. The climate activist might respond that they work on what they do so that fewer people are starving or sick or homeless in the future as climate change worsens, just as the existential risk reduction advocate might respond that an engineered pandemic or nuclear war would similarly cause concrete suffering and is thus worth preventing. We do not mean to minimize the difficult choice of prioritizing whom to help. This is one of the hardest things that caring people have to do.

Torres is critical of how longtermism handles this difficult tradeoff. He indicates that longtermism says that if problems are not existential risks, we “ought not to worry much about them.” Longtermism does encourage people to focus attention on certain issues rather than others, but that is not at all the same as saying one should no longer “worry” about other catastrophes, and it is certainly not how longtermists actually think. The people who are working to prevent existential risks are deeply motivated by the desire for people to live good lives. They hope to prevent suffering on a massive scale, such as by preventing nuclear war or a gruesome pandemic, but that does not mean they are immune to noticing or caring about suffering on other scales. On the contrary, it is precisely this sensitivity to all kinds of everyday suffering that often motivates a special worry about the possibility of much larger-scale disasters.

So while longtermists do worry and deeply care about all kinds of suffering, many longtermist researchers do encourage people to work on existential risks, as opposed to risks that play out at smaller scales. There are several factors contributing to this recommendation.

First, longtermism works from the view that we should be impartial in our moral concern. People are equally morally valuable regardless of characteristics like race, gender, or ethnicity, and, critically, regardless of when they are born. Someone has no less moral worth because they are born ten, a hundred, or a thousand years in the future.

Next, many longtermist researchers draw on the evaluative factors of importance, neglectedness, and tractability to help choose what to work on. These come from the effective altruism movement more broadly, but also motivate longtermist project selection. (For more on the relationship between longtermism and effective altruism, see here.) Importance is about the impact of a project, usually meaning how many lives are saved or improved. Neglectedness looks at how many people and resources are already being devoted to a problem — if this number is lower than what is going to some other project, adding more may have a higher marginal impact. Finally, tractability looks at how easy it is to make progress on a project. If something is very important and neglected, but impossible to change, it would not make sense to work on it. (A toy sketch of how these factors can combine into a single estimate appears below.)
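
To make that combination concrete, here is a minimal sketch of how the three factors are sometimes multiplied into a rough “good done per additional dollar” figure, in the spirit of the multiplicative framing popularized by 80,000 Hours. All of the numbers below are hypothetical placeholders for illustration, not estimates from this essay or from any longtermist organization.

```python
# A toy version of the importance / tractability / neglectedness framework.
# Every number here is a hypothetical placeholder for illustration only.

def marginal_impact(importance, tractability, neglectedness):
    """Rough 'good done per extra dollar' under a multiplicative framing.

    importance:    how much good solving the entire problem would do
    tractability:  fraction of the problem solved per doubling of resources
    neglectedness: 1 / (dollars already committed), so that extra dollars
                   count for more when little is being spent already
    """
    return importance * tractability * neglectedness

# Two hypothetical causes, equally important and equally solvable,
# differing only in how crowded they already are:
crowded = marginal_impact(importance=1e9, tractability=0.01,
                          neglectedness=1 / 7_000_000_000)  # hypothetical ~$7B/yr already spent
neglected = marginal_impact(importance=1e9, tractability=0.01,
                            neglectedness=1 / 50_000_000)   # hypothetical ~$50M/yr already spent

print(f"crowded cause:   {crowded:.4f} units of good per extra dollar")
print(f"neglected cause: {neglected:.4f} units of good per extra dollar")
# The neglected cause scores ~140x higher at the margin, even though the
# two problems are equally important and equally tractable.
```

The point of the sketch is only that, holding importance and tractability fixed, marginal resources go much further where little is already being spent. This is the sense in which longtermists judge some risks to be better targets for additional effort than others.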

Preventing existential risks scores high on importance because it affects the entirety of humanity’s future. To see the importance of avoiding extinction, imagine a nuclear war that kills 99% of people [15]. This would be a horrific tragedy involving unspeakable suffering. And imagine the world 50 years later, then 500, then several thousand. First there might be deep and crushing grief and mourning, then back-breaking struggle to survive, regrow, and rebuild, then possibly, if these few survivors persevere through unimaginable challenges, once again a world filled with people — people living, learning, loving, laughing, playing music, and witnessing the beauty of nature. Now imagine a nuclear war that kills 100% of people. There is the same horrible suffering during the event. But 50, 500, 1000 years later? A barren earth. No one to experience its joys and pains, no one to experience anything at all, ever again.

Torres seems to resist the process of comparing the “importance” of preventing different catastrophes. If you take on a moral view that does not allow for comparisons of badness — say you are faced with preventing a murder, a genocide, or the extinction of humanity, and your moral view gives you no way to choose between these — then the way that longtermism prioritizes does not and cannot make sense to you. But this is no weakness of longtermism.

Next, longtermist work seems deeply neglected. There are very few people working on existential risk mitigation [16]. This means that each additional person causes a relatively large proportional increase in the amount of work being done. This connects back to how longtermists interact with climate change. Climate change is less neglected than risks like nuclear war or engineered pandemics. For example, there are currently no major philanthropic funders of nuclear security, whereas climate change receives $5-9 billion from philanthropists every year, and hundreds of billions from governments and the private sector. That means people are more likely to already be doing the most essential climate change work than the most essential work in these other areas.

It is important to recognize what question longtermists are answering: “how can we do the most good on the margin?”, not “where should all of society’s resources go?” Longtermists have limited resources and so prioritize things like AI and biosecurity, and that is easily confused with the view that climate change should not get any money at all. I think almost all longtermists would agree that society should spend much more on climate change than it does right now. It can make sense both to be glad that a lot of work on mitigating climate change is happening and to think that the additional resources we are able to direct are better used on big potential threats that are getting less attention at the moment.

What longtermism is directing people to work on should be taken in the context of what work is already being done: the world is currently spending close to nothing on protecting and improving the long-run future. As such, longtermists can and do disagree about how much of such spending would be ideal, while all agree that it makes sense to spend and do far more. For the last couple of years, roughly $200 million has been spent yearly on longtermist cause areas, and about $20 billion has so far been committed by philanthropists engaged with longtermist ideas (these posts give a good overview of the funding situation). For comparison, the world spends ~$60 billion on ice cream each year. Longtermism is shifting efforts at the margin of this situation — increasing the amount of money going towards longtermism, not redirecting all global resources away from their current purposes.

Finally, tractability. There seem to be feasible ways to reduce existential risks, which makes working on them worthwhile. Some examples for reducing biorisks include advocacy and policy work to prevent “gain-of-function” research, in which pathogens are made more deadly or more infectious; work to improve our ability to rapidly develop flexible vaccines that can be applied to a novel disease; and work to build an early-detection capability that more systematically and proactively tests for new pathogens. These projects are feasible and seem likely to reduce biorisks. There are many more such projects, both for biorisks and for other types of risks.

I do not want to make it seem as if all longtermist work trades off against nearer-term or more certain benefits. Some longtermist work also helps with near-term problems, like work to improve institutional decision making. Likewise, some work to reduce existential risks also helps prevent non-existential catastrophes, like work to prevent pandemics or improve food security during catastrophes. Many longtermists interested in preventing biological existential risks worked on COVID, and on projects that are likely to prevent the next COVID [17]. Some also argue that, even focusing purely on current generations, the risk of an existential catastrophe in the next few decades is high enough that it makes sense to work on reducing these risks for the sake of people who are alive now.

Longtermism tells people to work on existential risks because they seem very important, neglected, and tractable. It does not say that these are the only important things in the world. And it does seem odd to accuse those who work to prevent one kind of suffering of being callous for not working on another — one struggles to imagine Torres telling someone who is campaigning against genocide that they are being heartless for not focusing on homicides in their community. Deciding where to devote our limited time and resources is a painful and difficult decision. Longtermism is one framework for helping us decide. We do not pretend that making this choice is easy or pleasant, and we wish there were no constraints on our ability to work on every important problem at once.

Limiting Conditions

A related concern to longtermism’s handling of non-existential catastrophes is whether longtermism could be used to justify committing harms, perhaps even serious ones, if it helped prevent or lower the chance of an existential catastrophe.

Of course, someone who doesn’t actually subscribe to longtermism could simply use its ideas as cover in a clearly disingenuous way — for example, an autocrat could claim he is acting to benefit the long-term future when really he is just refusing to take into account the needs of his current subjects for his own gain. But this is not a feature unique to longtermism, and it does not seem that it should count strongly against it, since so many ideologies could be used this way.

A related criticism is that longtermism actually suggests we do things which would ultimately harm humanity’s long-term future, and that it is therefore self-defeating. The better conclusion here is not that longtermism is self-defeating, but that it simply doesn’t suggest doing these things when there are better options. To the extent that longtermism seems to suggest we do things which would be bad by its own lights, this is likely a sign that these criticisms apply to an overly simplistic version of longtermism — not that they undercut the essence of the view.

But what about the more worrying example of someone who has carefully considered and understood longtermism, and who believes that its conclusions instruct them to do something harmful? Ideologies — particularly utopian ideologies — have led to some of the gravest atrocities in recent history. There is an open question of whether longtermism is more susceptible to this than other ideologies.

It does not seem — either in practice or in theory — that longtermists are using or will use their philosophy to justify harm.

There is one argument to be made just from the character and dispositions of longtermists: Those who are interested in longtermism often got into this field because they wanted to reduce suffering on the largest scale they could. These are not individuals who take causing harm lightly. These people are working to prevent nuclear war, bioweapons attacks, and the misuse of powerful new technologies — clearly they are attentive to the myriad ways that humans are vulnerable to injury and death, and are working continuously to reduce them. Many longtermists donate regularly to altruistic causes. A significant portion of longtermists are vegetarian or vegan out of concern for the welfare of animals, revealing a desire to reduce suffering regardless of species. It seems that in practice — if you look at what longtermists are doing — they are unusually careful not to cause harm.

But what about future longtermists? Are there structural or philosophical features of longtermism that prevent it from being used to justify harm?

Torres ignores that this is a concern that longtermists themselves have raised, discussed, and written on. There are pieces on why you can’t take expected value too seriously (one of them from Bostrom, which might temper Torres’ accusations that Bostrom naively and dangerously used expected value to justify insane actions — Bostrom was one of the earliest in the longtermist world to counsel against simply using expected value to guide actions). There are multiple papers written on the problem of fanaticism (while Torres says this is a term “some” longtermists embrace, he cites one single-authored academic philosophy paper, and neglects to mention the multiple papers which point to fanaticism as a serious concern) [18]. The websites of multiple major longtermist organizations feature pieces on why effective altruists and longtermists should not cause harm even in pursuit of laudable goals. There is also a strong emphasis on moral uncertainty, which tempers any tendency towards extreme actions that are not robustly good by the lights of multiple views.

Torres almost makes the connection that this is a live part of longtermist discussion when he includes a thoughtful quote from Olle Haggstrom, in which Haggstrom worries about a potential lack of limiting conditions in longtermism. Instead of recognizing that the existence of this quote reveals something healthy and promising about longtermism’s ability to address these concerns, Torres only remarks that despite this quote Haggstrom “perplexingly… tends otherwise to speak favourably of longtermism.” It should be noted that Haggstrom resents the way this quote has been cherry-picked: by showing only his (reasonable) criticism, Torres makes it seem as if Haggstrom rejects longtermism when in fact he supports it.

Longtermism is not all-encompassing and dogmatic. Figuring out what matters in the world and how we should act are some of the hardest and most important problems we face. By highlighting the importance of mitigating existential risks, longtermists do not pretend to have solved what to do — they have proposed one plausible answer (out of potentially many more) that they are continuously scrutinizing, questioning, challenging, and improving.

Nor is longtermism wedded to a particular view of ethics. In fact, two key longtermist researchers, Toby Ord and William MacAskill, wrote a book about how to take ‘moral uncertainty’ seriously. There are many different moral frameworks that can support longtermism [19] — not just utilitarianism, as some have argued. Longtermists also place value on finding conclusions that are supported by multiple views. Having spent time in this community of researchers, I’ve been surprised by how pluralistic longtermism is. There are longtermist leftists and libertarians, Christians and Jews, utilitarians and deontologists [20]. Because longtermism recognizes the difficulty of its project and the grave importance of getting it right, it is open to criticism. Critics are invited to speak in prominent forums, and longtermists themselves frequently criticize the view, holding that criticism is stimulating and healthy, and improves the ideas that make up longtermism.

Does potential inherently include transhumanism, space expansionism, and total utilitarianism?

Torres claims that one can “unpack what longtermists mean by our ‘long-term potential’” into “three main components: transhumanism, space expansionism, and a moral view closely associated with what philosophers call ‘total utilitarianism’.” However, there is wide disagreement among longtermists about what exactly potential involves.

Potential could mean many things

As I said before, when longtermists speak about potential, they mean first and foremost all the sentient beings who could exist in the future and experience the world. There is reason to believe that future beings could have even better lives than present beings: there is a general trend of humans getting healthier and wealthier, living longer, and becoming more literate. These trends could potentially continue far into the future, meaning that if some catastrophe wipes out humanity and prevents all future generations from existing, those beings won’t get to experience those good things.

But if we don’t go extinct, what exactly do we want? There is much more agreement in the longtermist world about what we want to avoid than about what we want to attain. And that’s OK! It seems appropriate to recognize that we do not know exactly what future generations will want, and to focus on giving them self-determination. That’s why we want to avoid extinction, but also various kinds of “lock-in,” where decisions are made that are hard for future people to reverse (like the surveillance-enabled totalitarianism example above).

Most longtermists are not utopians: they very deliberately do not have some well-described grand future in mind. Many longtermists do believe that the world can be far, far better than it is today. But it could be better in so many ways, including ways we’re only dimly aware of. This means we want to keep our options open, and not be so presumptuous as to declare, or overly constrain, what the future should look like. Some have even suggested the need for a long period of reflection in which humanity works out its values before it engages in any more ambitious or irreversible projects.

Some longtermists do support space expansionism, transhumanism, and total utilitarianism — here’s why

Torres argues that potential is a much more laden concept than I have laid out above, one that does not just denote the experiencing beings who could exist, but also specific facts about their existence. Some longtermists do think our potential will only be realized if we become transhuman and spacefaring. I want to explain why they might think this.

First, it is important to note that transhumanism refers to an eclectic mixture of ideas and is, in my experience, a fringe view within longtermism; for any particular ‘transhumanist’ claim or goal, there is a good chance that most longtermists have never heard of it, let alone agree with it. But I will still offer a basic argument for why some longtermists do see transhumanism as part of fulfilling our potential. It seems there are many ways we can improve our existence. Over time, humans have invented medicines and health treatments that reduce physical suffering and extend our lifespan. These seem to be very good developments. Transhumanism, in its most basic form, is just an extension of this: can we be healthier, live longer, and be more empowered in our physical forms? Those who think becoming transhuman is part of fulfilling our potential are focusing on the quality aspect of future lives: they want future generations not just to exist, but to exist with even better experiences than we have.

Torres mentions the connection between transhumanism and eugenics to suggest that those who support transhumanism are similarly sullied. Transhumanism has been connected with dark practices. So has modern medicine. It would seem wrong to argue that anyone who supported medical trials was morally suspect because some medical trials had been conducted on prisoners of war or done in other unethical ways. Obviously, conducting medical trials like that is wrong, just as it would be wrong to use transhumanism in the way that eugenicists used it. But going back to the basic premise — that there are ways to improve the human condition, and the transhumanist project focuses on finding those ways — it seems plausible that this could be part of fulfilling humanity’s potential.

Space expansionism is the next element that Torres views as an inherent component of what longtermists mean by potential. Again, it is not inherently part of longtermism’s definition of potential, but there is a reason why some might include it: the future could be home to far more people than are currently alive, perhaps spread much further across space than today — just as we far outnumber our forager ancestors and are spread far more widely than they were. There are other benefits to space expansionism too: resources brought back from space could improve the lives of those living on earth; settlements on more planets could protect against certain existential risks; and settlements on new planets could allow for a greater diversity of ways of living, with their residents able to try out new political or economic systems. Ultimately, all of these come back to more people living worthwhile lives. But we simply don’t need to decide today whether a spacefaring future would be best — what matters is that future people have the opportunity to decide.

Utilitarianism and the Total View

This leads right into the discussion of population ethics, a thorny and complex topic that is likely too esoteric to lay out fully here. Torres uses a thought experiment about comparing the sizes of imaginary populations to try to discredit a view within population ethics called the “total view.” The example does not seem to represent a choice we will ever actually face in the real world. Philosophers have debated population ethics for decades, and it is generally agreed that all views have highly counterintuitive implications. It is therefore flawed to say “look, this view implies something that seems weird, therefore it must be wrong,” since by this standard no view would be correct. The total view does not seem to face more counterintuitive implications than other views.

Torres moves on from the total view to discussing utilitarianism more broadly (which does not have to be combined with the total view). Torres is wrong when he claims longtermism is “utilitarianism repackaged.” One does not have to be a total utilitarian, or any kind of utilitarian, to be a longtermist [21]. Torres argues that all longtermists are utilitarians, which is simply false. I am not. Nor is Bostrom [22]. Nor are many others. That being said, there are a significant number of utilitarian longtermists, and Torres does not do this ethical view justice with his description.

Utilitarianism aims to improve the lives of all sentient beings, giving equal moral consideration to the wellbeing of all individuals. Utilitarianism’s focus on impartiality about whom to help is radical in a world where most people feel tight allegiances to those near them, such as their own race or nationality. Many people find a philosophy that instructs one to impartially consider the value of all beings — on which someone in a distant country could be worth as much as you or someone near and dear to you — sensible and compelling; it seems natural to extend this to future generations. Utilitarianism can also point to its track record: Torres gives short shrift to a philosophy that was far ahead of the curve on topics like abolitionism, animal rights, and women’s rights.

Utilitarianism comes in many varieties, so it is hard to speak for all of them, but what Torres seems to miss is that utilitarianism is not some abstract aim: it is ultimately grounded in the welfare of conscious beings, because someone must be around to have the positive experiences this view values. In that respect, utilitarianism is humanistic. It underlines that there is no abstract sense of value outside the lives of living, experiencing beings.

Specific blueprints or inspiration?

Torres primarily uses two pieces to justify his very specific vision of what longtermism aims at: one of the final chapters of Toby Ord’s The Precipice and Bostrom’s Letter from Utopia. These pieces are meant to provide two speculative, inspirational pictures of what the future might hold, not to lay out precise guidelines for what realizing our potential involves. They are not meant to predict the future or to instruct it. Torres misses the huge emphasis within Ord’s work on “the long reflection” [23], essentially some window of time in which humanity can, as a whole, reflect on what it wants to be and to achieve. Obviously, the long reflection is idealized and may never happen for a range of reasons, but the fact that Ord presents it as desirable reveals something key about his (and many other longtermists’) view of humanity’s potential: we do not know exactly what realizing it looks like; future humanity has to work that out for itself.

Torres pulls out two quotes from Bostrom and Ord to try to prove that they view transhumanism as inherently part of realizing humanity’s potential, but the quotes don’t say that. Instead, they say that what has to be avoided is permanently taking away this choice from future humanity. Bostrom wants to avoid “permanent foreclosure of any possibility of this kind of transformative change” and Ord wants to avoid “forever preserving humanity as it is now.” Both focus on the “permanent” and the “forever” — they want to avoid lock-in, which is actually radically empowering to future generations: not forcing them to become transhuman, but fighting to preserve the choice for them to become what they want.

Torres concludes his argument on potential as transhumanism, space expansionism, and total utilitarianism by saying “[t]hat is what our ‘vast and glorious’ potential consists of: massive numbers of technologically enhanced digital posthumans inside huge computer simulations spread throughout our future light cone.” Yes, some longtermists might support these as elements of fulfilling our potential. Others might view “fulfilling our potential” as involving a flourishing earth-based humanity that stays embodied but lives out the next billion years on a more peaceful, prosperous, equal, and healthy planet. Some might reject the idea of becoming “posthuman” through enhancement, some might reject a highly technological future, some might reject the idea that existing in a virtual or simulated environment could be as good as existing in the real one. These are real questions that future generations will need to work out. And what is most clear is that Torres is wrong to present this as the sole or consensus view on what longtermism aims for — longtermism aims to avoid extinction or lock-in, and to give future generations the chance to work out what they want.

Technological Development

As Torres concludes his Aeon piece, he turns towards longtermism’s relationship to technological development. He claims it is “self-defeating,” essentially that longtermism’s support for advanced technological development will bring about existential risks. Torres writes in his concluding paragraph that “technology is far more likely to cause our extinction before this distant future event than to save us from it.” This is a claim that many longtermists would actually agree with — as demonstrated by their recognition that the largest existential risks are anthropogenic in nature, particularly from advanced technologies like AI and biotechnology [24].

What Torres misses in this section is that longtermists are not acting alone in the world. Longtermists cannot unilaterally decide to stop technological progress, despite Torres implying that they can. They are keenly aware of the dangers of technology, and “steering” technological progress is a core strategy of longtermism. Longtermist researchers typically do not advocate for the wholesale pause or reversal of technological progress because, short of a disaster, that seems deeply implausible. As mentioned above, longtermism pays attention to the “tractability” of various problems and strategies. Many longtermists would likely slow the development of certain technologies if they could.

Moreover, advocating for a pause or reversal would likely cost longtermists the opportunity to do something that is possible: directing technological development in ways that are better and safer than it would otherwise go. Longtermist researchers frequently work with the safety teams at leading AI labs like DeepMind and OpenAI. Longtermism has originated research on “differential technological development,” essentially how to develop safe and defensive technologies faster than offensive and dangerous ones, how to slow or speed the development of various technologies, and what order technologies should arrive in. In the biosecurity realm, longtermist researchers are working to improve lab safety and to prevent “gain-of-function” research in biology. These are the hallmarks of a philosophy and movement that take the risks of technology very seriously, and are working urgently to mitigate them.

Longtermists also see the benefits of technology for mitigating certain other risks, and for generally improving standards of living. Take the fight against climate change: developing better technology for clean energy is a core tool in our arsenal.

Conclusion

Torres opens his Aeon piece by listing risks like pandemics, nuclear war, nanotechnology, geoengineering, and artificial intelligence. He believes that fears about extinction are based on “robust scientific conclusions.” He seems to think extinction would be very bad and he believes “you should care about the long term.” But he claims, vehemently, that he is not a longtermist. I would argue that Torres is a longtermist. He pays attention to the value of the future and he connects reaching it to overcoming certain large-scale risks. That being said, I don’t care what Torres calls himself. Longtermism is not an identity and certainly not an ideology — it is a shared project animated by concern for the long-run future, which can and should contain many conflicting viewpoints.

What is important is that we work to set the world on a positive trajectory and work to reduce existential risks, both to protect the present generation from harm and to ensure that there will be future generations living worthwhile lives. We should aim to leave a better world for our descendants stretching far, far into the future. That future might be embodied and limited to this planet. It might be populated by barely recognizable beings scattered throughout the galaxy. I think that Torres and I can agree that that is for future generations to decide. Let’s ensure they have the chance to.

……………..

Although this piece is long, it used to be much longer. If there is some point I failed to address, please reach out, since I may already have written something on it. For example, I can share sections on longtermism’s relationship to 1) Nature, 2) Surveillance and Preemptive War, and 3) Seeking Influence, which didn’t make it into the final draft in an attempt to be concise.

End Notes

[1] Will MacAskill, “What We Owe the Future.”

[2] Barnosky et al. 2011

[3] Wolf & Toon 2015

[4] Will MacAskill, “What We Owe the Future.”

[5] From John Adams’s Preface to his A Defence of the Constitutions of Government of the United States. In: The Works of John Adams, Second President of the United States: with a Life of the Author, Notes and Illustrations, by his Grandson Charles Francis Adams (Boston: Little, Brown and Co., 1856). 10 volumes. Vol. 4, p. 298. https://oll.libertyfund.org/title/adams-the-works-of-john-adams-vol-4#Adams_1431-04_948

[6] Toby Ord, “The Precipice.”

[7] https://www.nickbostrom.com/ethics/infinite.html, https://www.nickbostrom.com/papers/pascal.pdf, https://www.nickbostrom.com/ethics/dignity-enhancement.pdf

[8] E.g. Hsiang, Solomon M., Marshall Burke, and Edward Miguel. “Quantifying the influence of climate on human conflict.” Science 341.6151 (2013).

[9] Dell, Melissa, Benjamin F. Jones, and Benjamin A. Olken. Climate change and economic growth: Evidence from the last half century. No. w14132. National Bureau of Economic Research, 2008.

[10] “The breakup of the stratocumulus clouds is more rapid than it would be in nature because of the unrealistically small thermal inertia of the underlying slab ocean” Tapio Schneider, Colleen M. Kaul, and Kyle G. Pressel, ‘Possible Climate Transitions from Breakup of Stratocumulus Decks under Greenhouse Warming’, Nature Geoscience 12, no. 3 (March 2019): 163–67

[11] https://www.existential-risk.org/concept.html#:~:text=As%20noted%2C%20an%20existential%20risk,the%20entire%20future%20of%20humankind.

[12] CO2 stays in the atmosphere for hundreds of thousands of years! That certainly seems to qualify as long-term effects that we should be wary of saddling our descendants with.

[13] While I use “people” here, we should also consider animals. Any sentient being seems worth our moral consideration.

[14] Some other examples which might make this intuitive: If we learned that an asteroid was coming in 200 years to destroy us, should we ignore that because the people involved are merely potential? When we store nuclear waste, should we only worry about storing it safely for several generations, or take the additional resources to store it until it is no longer dangerous?

[15] This thought experiment is taken from Parfit. https://wmpeople.wm.edu/asset/index/cvance/videos

[16] If we look purely at EAs, we get a number of several thousand, although there are likely more non-EAs also working on these problems: https://forum.effectivealtruism.org/posts/zQRHAFKGWcXXicYMo/ea-survey-2019-series-how-many-people-are-there-in-the-ea

[17] Some examples: https://www.fhi.ox.ac.uk/the-effectiveness-and-perceived-burden-of-nonpharmaceutical-interventions-against-covid-19-transmission-a-modelling-study-with-41-countries/, https://www.nature.com/articles/d41586-021-02111-7

[18] Not a paper, but a quote from someone seen as a leading longtermist researcher highlighting fanaticism as a problem: https://twitter.com/anderssandberg/status/1452561591304605698

[19] Toby Ord, The Precipice, pages 65-81, and The Case for Strong Longtermism.

[20] https://link.springer.com/article/10.1007/s42048-018-0002-3; also https://plato.stanford.edu/entries/justice-intergenerational/#CurrInteJust

[21] See: The Precipice, pages 65-81; The Case for Strong Longtermism; https://globalprioritiesinstitute.org/wp-content/uploads/Stefan-Riedener_Existential-risks-from-a-Thomist-Christian-perspective.pdf

[22] https://www.nickbostrom.com/ethics/infinite.html, https://www.nickbostrom.com/papers/pascal.pdf, https://www.nickbostrom.com/ethics/dignity-enhancement.pdf

[23] Toby Ord, “The Precipice,” 297-298.

[24] https://forum.effectivealtruism.org/tag/anthropogenic-existential-risk and The Precipice.