Response to Torres’ ‘The Case Against Longtermism’
This short post responds to some of the criticisms of longtermism in Torres’ minibook: Were the Great Tragedies of History “Mere Ripples”? The Case Against Longtermism, which I came across in this syllabus.
I argue that while many of the criticisms of Bostrom strike true, newer formulations of longtermism and existential risk – most prominently Ord’s The Precipice (but also Greaves, MacAskill, etc) – do not face the same challenges. I split the criticisms into two sections: the first on problematic ethical assumptions or commitments, the second on problematic policy proposals.
Note that I both respect and disagree with all three authors. Torres’ piece is insightful and thought-provoking, as well as polemical; Ord’s book is a great restatement of the ethical case, though I disagree with his prioritisation of climate change, nuclear weapons and collapse; and Bostrom is a groundbreaking visionary, though one can dispute many of his views.
Problematic ethical assumptions or commitments
Torres argues that longtermism rests on assumptions and makes commitments that are problematic and unusual/niche. He is correct that Bostrom has a number of unusual ethical views, and in his early writing he was perhaps overly fond of a contrarian ‘even given these incredibly conservative assumptions the argument goes through’ framing. But Torres does not sufficiently appreciate that longtermist philosophers have largely acknowledged these problems, and have (re)formulated longtermism so that it does not require these assumptions and commitments.
Total utilitarianism
Torres suggests that longtermism is based on an ethical assumption of total utilitarianism, the view that we should maximise wellbeing by adding together the wellbeing of every individual in a group. Such a ‘more is better’ ethical view accords significant weight to trillions of future individuals. He points out that total utilitarianism is not the majority view amongst moral philosophers.
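To make the aggregative structure explicit, here is a minimal sketch in standard notation (the symbols are mine, purely illustrative): total utilitarianism ranks an outcome by the sum

V = \sum_{i=1}^{N} w_i

where w_i is the lifetime wellbeing of individual i and N is the number of individuals who ever live. Any additional life with positive wellbeing increases V, so an outcome containing trillions of flourishing future people scores vastly higher than one that does not – this is the ‘more is better’ feature at issue.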
However, although total utilitarianism strongly supports longtermism, longtermism doesn’t need to be based on total utilitarianism. One of the achievements of The Precipice is Ord’s arguments pointing out the affinities between longtermism and other ethical traditions, such as conservatism, obligations to the past, and virtue ethics. One can be committed to a range of ethical views and still endorse longtermism.
Trillions of simulations on computronium
Torres suggests that the scales are tilted towards longtermism by including in the calculation quadrillions of simulations of individuals living flourishing lives. The view that such simulations would be moral agents, or that this future is desirable, is certainly unusual.
But one doesn’t have to be committed to this view for the argument to work. The argument goes through if we assume that humanity never leaves Earth, and simply survives until the Earth is uninhabitable – or even more conservatively, survives the duration of an average mammalian species. There are still trillions of future individuals, whose interests and dignity matter.
‘Reducing risk from 0.001% to 0.0001% is not the same as saving thousands of lives’
Torres implies that longtermism is committed to a view of the form that reducing risk from 0.001% to 0.0001% is morally equivalent to saving e.g. thousands of present-day lives. This is a clear example of early Bostrom stating his argument in a philosophically robust but very counterintuitive way. Worries about this framing have been common for over a decade, in the debate over ‘Pascal’s Mugging’.
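To see where this equivalence comes from, consider a back-of-the-envelope expected-value calculation (the population figure is the conservative trillion-person estimate from the previous section, used purely for illustration). Reducing extinction risk by \Delta p when N future lives are at stake saves \Delta p \times N lives in expectation:

\Delta p \times N = (10^{-5} - 10^{-6}) \times 10^{12} = 9 \times 10^{6}

That is, a seemingly negligible 0.0009 percentage point reduction in risk corresponds to roughly nine million expected future lives – far more than thousands. This structure, in which an arbitrarily small probability shift can be made to outweigh any present-day good, is what invites the ‘Pascal’s Mugging’ worry.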
However, longtermism does not have to be stated in such a way. The probabilities are unfortunately likely higher – for example, Ord gives a 1 in 6 (~17%) probability of existential catastrophe this century – and the achievable reductions in risk are likely higher too. That is, with the right policies (e.g. robust arms control regimes) we could potentially reduce existential risk by 1-10%. Specifically on Pascal’s Mugging, a number of decision-theoretic responses have been proposed, which I will not discuss here.
Transhumanism and space settlement & ‘Not reaching technological maturity = existential risk’
Torres suggests that longtermism is committed to transhumanism and space settlement (in order to expand the number of future individuals), and argues that Bostrom bakes this commitment into existential risk through a negative definition of existential risk as any future that does not achieve technological maturity (through extinction, plateauing, etc).
However, while Bostrom certainly does think this future is ethically desirable, longtermism is not committed to it. Torres underplays the crucial changes Ord makes with his definition of existential risk as the “destruction of humanity’s potential” and the institution of the “Long Reflection” to decide what we should do with that potential. Long Reflection proponents specifically propose not engaging in transhumanist enhancement or substantial space settlement before the Long Reflection is completed. Longtermism is not committed to any particular outcome of the Long Reflection. For example, if after the Long Reflection humanity decided never to become post-human and never to leave Earth, this would not necessarily be viewed by longtermists as a destruction of humanity’s potential, but simply as one choice about how to spend that potential.
Problematic policy proposals
Torres argues that longtermists are required to endorse problematic policy proposals. I argue that they are not – I personally would not endorse these proposals.
‘Continue developing technology to reduce natural risk’
Torres argues that longtermists are committed to continued technological development for transhumanist/space settlement reasons – and to prevent natural risks – but that this is “nuts” because (as he fairly points out) longtermists themselves argue that natural risk is tiny compared to anthropogenic risk.
However, the more common longtermist policy proposal is differential technological development – trying to foster and speed up the development of risk-reducing (or more generally socially beneficial) technologies, and to slow down the development of risk-increasing (or socially harmful) ones. This is not a call to continue technological development in order to become post-human or to reduce asteroid/supervolcano risk – it is a call to differentially advance technology, on the assumption that overall technological development is hard or impossible to stop. I would agree with this assumption, but one may reasonably question it, especially when phrased as a form of strong ‘technological completism’ (any technology that can be invented will be invented).
Justifies surveillance
Torres argues against the “turnkey totalitarianism” (extensive and intrusive mass surveillance and control to prevent misuse of advanced technology) explored in Bostrom’s ‘Vulnerable World Hypothesis’, and implies that longtermism is committed to such a policy.
However, longtermism does not have to be committed to such a proposal. In particular, one can simply object that Bostrom has a mistaken threat model. The existential risks we have faced so far (nuclear and biological weapons, climate change) have largely come from state militaries and large companies, and the existential risks we may soon face (from new biotechnologies and transformative AI) will also come from these threat sources. The focus of existential risk prevention should therefore be on states and companies. Risks from individuals and small groups are relatively much smaller, so the comparatively small benefits of the kind of mass surveillance Bostrom explores mean it is not justified on a cost-benefit analysis.
Nevertheless, in the contrived hypothetical of ‘anyone with a microwave could have a nuclear weapon’, would longtermism be committed to restrictions on liberty? I address this under the next heading.
Justifies mass murder
Torres argues that longtermists would have to be willing to commit horrendous acts (e.g. destroy Germany with nuclear weapons) if it would prevent extinction.
This is a classic objection to all forms of consequentialism and utilitarianism – from the Trolley Problem to the Colosseum objection. There are many classic responses, ranging from disputing the hypothetical to pointing out that other ethical views are also committed to such an action.
It is not a unique objection to longtermism, and loses some of its force as longtermism does not have to be based on utilitarianism (as I said above). I would also point out that it is an odd accusation to level, as longtermism places such high priority on peace, disarmament and avoiding catastrophes.
Justifies giving money to the rich rather than the extreme poor, which is a form of white supremacy
Torres suggests that longtermism is committed to donating to the rich rather than to those in extreme poverty (or indeed to animals). He further argues that this reinforces “racial subordination and maintain[s] a normalized White privilege.”
However, longtermism is not committed to donating (much less transferring wealth from poor countries) to present rich people. Longtermists might in practice donate to NGOs or scientists in the developed world, but the ultimate beneficiaries are future generations. Indeed, the same might be true of other cause areas, e.g. work on a malaria vaccine or clean meat. Torres does not seem to give much weight to the extent to which longtermists recognise this as a moral dilemma and feel deeply conflicted about it – most longtermists began as committed to ending the moral crimes of extreme poverty, or of factory farming. There are many huge tragedies, but one must unfortunately choose where to spend one’s limited time and resources.
Longtermism is committed to the view that future generations matter morally. They are moral equals. When someone is born is a morally irrelevant fact, like their race, gender, nationality or sexuality. Furthermore, present people are in an unjust, exploitative power imbalance with future generations. Future generations have no voice or vote in our political and economic systems. They can do nothing to affect us. Our current political and economic systems are set up to overwhelmingly benefit those currently alive, often at the cost of exploiting, and loading costs onto, future generations.
This lack of recognition of moral equality, lack of representation, power imbalance and exploitation shares many characteristics with white supremacy/racism/colonialism and other unjust power structures. It is ironic to accuse a movement arguing on behalf of the voiceless of being a form of white supremacy.