Three mistakes in the moral mathematics of existential risk (David Thorstad)
Link post
Abstract
Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation: focusing on cumulative risk rather than period risk; ignoring background risk; and neglecting population dynamics. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk: the importance of treating existential risk as an intergenerational coordination problem; a surprising dialectical flip in the relevance of background risk levels to the case for existential risk mitigation; renewed importance of population dynamics, including the dynamics of digital minds; and a novel form of the cluelessness challenge to longtermism.
Introduction
Suppose you are an altruist. You want to do as much good as possible with the resources available to you. What might you do? One option is to address pressing short-term challenges. For example, GiveWell (2021) estimates that $5,000 spent on bed nets could save a life from malaria today.
Recently, a number of longtermists (Greaves and MacAskill 2021; MacAskill 2022b) have argued that you could do much more good by acting to mitigate existential risks: risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013, p. 15). For example, you might work to regulate chemical and biological weapons, or to reduce the threat of nuclear conflict (Bostrom and Ćirković 2011; MacAskill 2022b; Ord 2020).
Many authors argue that efforts to mitigate existential risk have enormous value. For example, Nick Bostrom (2013) argues that even on the most conservative assumptions, reducing existential risk by just one-millionth of one percentage point would be as valuable as saving a hundred million lives today. Similarly, Hilary Greaves and Will MacAskill (2021) estimate that early efforts to detect potentially lethal asteroid impacts in the 1980s and 1990s had an expected cost of just fourteen cents per life saved. If this is right, then perhaps an altruist should focus on existential risk mitigation over short-term improvements.
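Bostrom's figure follows directly from his conservative estimate of roughly 10^16 expected future lives. The arithmetic can be checked in a few lines (the 10^16 figure is Bostrom's; the code itself is just an illustration):

```python
# Bostrom (2013) conservatively estimates ~10^16 future human lives.
future_lives = 1e16

# "One-millionth of one percentage point": 10**-6 of 10**-2.
risk_reduction = 1e-6 * 1e-2  # = 1e-8

# Expected lives saved by that tiny reduction in existential risk.
expected_lives_saved = future_lives * risk_reduction
print(f"{expected_lives_saved:,.0f}")  # 100,000,000 — a hundred million lives
```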
There are many ways to push back here. Perhaps we might defend population-ethical assumptions such as neutrality (Narveson 1973; Frick 2017) that cut against the importance of creating happy people. Alternatively, perhaps we might introduce decision-theoretic assumptions such as risk aversion (Pettigrew 2022), ambiguity aversion (Buchak forthcoming), or anti-fanaticism (Monton 2019; Smith 2014) that tell against risky, ambiguous, and low-probability gambles to prevent existential catastrophe. We might challenge assumptions about aggregation (Curran 2022; Heikkinen 2022), personal prerogatives (Unruh forthcoming), and rights used to build a deontic case for existential risk mitigation. We might discount the well-being of future people (Lloyd 2021; Mogensen 2022), or hold that pressing current duties, such as reparative duties (Cordelli 2016), take precedence over duties to promote far-future welfare.
These strategies set themselves a difficult task if they accept the longtermist’s framing, on which existential risk mitigation is not simply better, but orders of magnitude better than competing short-termist interventions. Is it really so obvious that we should not save future lives at an expected cost of fourteen cents per life? While some moves, such as neutrality, may carry the day against even astronomical numbers, many of the moves on this list would be bolstered when joined with a complementary maneuver: questioning the longtermist’s moral mathematics.
In this paper, I argue that many leading models of existential risk mitigation systematically neglect morally relevant considerations in determining the value of existential risk mitigation. This has two effects. First, debates about the value of existential risk mitigation are mislocated, because many of the most important parameters are neither modeled nor discussed. Second, the value of existential risk mitigation is inflated by many orders of magnitude. I look at three mistakes in the moral mathematics of existential risk: mishandling of cumulative risk (Section 3), background risk (Section 4), and population dynamics (Section 5). This will help us to gain a better understanding of the factors relevant to valuing existential risk mitigation. And under many assumptions, once these mistakes are corrected, the value of existential risk mitigation will be far from astronomical.
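To preview the first of these mistakes in miniature: on a simple model where an extinction risk r recurs every century, survival decays geometrically and expected survival time is 1/r centuries, so what matters is sustained reductions in per-period risk rather than a one-off change. A minimal sketch of this standard geometric model (my own illustration, not a model taken from the paper):

```python
def expected_centuries(r: float) -> float:
    """Mean centuries until extinction when each century carries an
    independent extinction probability r (geometric distribution)."""
    return 1.0 / r

def survives(r: float, n: int) -> float:
    """Probability of surviving n centuries at constant period risk r."""
    return (1.0 - r) ** n

# Permanently halving *period* risk doubles expected survival time...
print(expected_centuries(0.01))   # 100.0 centuries
print(expected_centuries(0.005))  # 200.0 centuries

# ...but a one-off halving of this century's risk barely moves
# long-run survival, since every later century still carries full risk.
print(survives(0.01, 1000))                      # ~4.3e-05
print(survives(0.005, 1) * survives(0.01, 999))  # ~4.3e-05
```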
Reflecting on these mistakes in the moral mathematics of existential risk raises at least four classes of positive lessons for longtermism and the study of existential risk, discussed in Section 6. There, we will see the importance of treating existential risk mitigation as a difficult intergenerational coordination problem (Section 6.1); a surprising dialectical flip in the relevance of background risk levels to the case for existential risk mitigation (Section 6.2); renewed importance of population dynamics, including the demographics of digital minds (Section 6.3); and a novel form of the cluelessness challenge to longtermism (Section 6.4). But first, let us begin with some clarificatory remarks (Section 2).
Read the rest of the paper