Hi David,
Thanks for sharing this.
My main reaction is that I was puzzled by the framing. It is obviously an allusion to Parfit’s ‘Five Mistakes in Moral Mathematics’. But there are major differences. Parfit was objecting to pieces of maths that are embedded in our common-sense understanding of morality, such as the Share-of-the-Total View. He argued that the true maths of morality is different to that. You are complaining about three modelling assumptions about the empirics of risk over time and population over time. You don’t present any disagreement with the moral mathematics itself (which is just a big expected value calculation over the total wellbeing of all future people). And you don’t even suggest that mistakes were made in the modelling, just that the models may rely on assumptions that weren’t foregrounded and could thus be misleading.
I liked the work you did on foregrounding those assumptions, but felt a bit let down by the framing. I expected a Parfit-mark-II piece that showed how our commonsense understanding of the ethics of longtermism relied on mistaken moral assumptions, but instead found a piece that mainly just suggested different modelling assumptions (and in my view, assumptions that are more misleading than those in the pieces you critique).
The framing also sounds to my ear to be a bit insulting to your peers. To some extent, any paper in moral philosophy where the author disagrees with another person could be reframed to be about a mistake in the other person’s moral reasoning, but I’m glad authors don’t typically choose that frame. Instead, they say that the opponent said P, while this piece argues not-P, or that the opponent assumed Q, while here are some reasons that Q is not a safe assumption. This keeps the focus on the content.
This is a lot of text re a piece’s framing, but I wanted to lay it out because (at least for me) the framing really does distract from the content.
(I’ll split further reactions on particular parts into separate comments)
Regarding the ‘first mistake’, you correctly show that survival of a species for a billion years requires reaching a low per-period level of risk (averaging roughly 1 in a billion per year). I don’t disagree with that and I doubt Bostrom would either. No complex species has yet survived so long, but that is partly because there have been less than 1 billion years since complex life began. But there are species (or at least families) that have survived almost the whole time, such as the Nautilus (which has survived 500 million years). So risk levels comparable to 1 in a billion per year do occur. For Bostrom’s modelling of the EV of risk reduction to work, he just needs there to be at least a small chance (say 1 in 1 million) that risk declines to such a level or beyond. That sounds eminently plausible to me, and my best guess of this probability would be much higher.
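To make the survival arithmetic concrete, here is a minimal sketch (the candidate risk levels are mine, chosen for illustration): with a constant annual extinction risk r, the chance of surviving T years is (1 − r)^T ≈ e^(−rT), so a billion-year future requires r of around 1 in a billion per year.

```python
# Minimal sketch (illustrative numbers, not Bostrom's): with constant annual
# extinction risk r, P(survive T years) = (1 - r)**T, roughly exp(-r*T).
import math

T = 1e9  # one billion years

for r in [1e-7, 1e-8, 1e-9, 1e-10]:
    p_survive = math.exp(-r * T)
    print(f"annual risk {r:.0e}: P(survive a billion years) = {p_survive:.3f}")

# Only once r is near 1/T (about 1 in a billion per year) or below does
# billion-year survival become likely: the low risk level must be *reached*,
# but no single intervention needs to cause the whole drop.
```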
You say that “there is a clear sense in which the drop in existential risk that Bostrom envisions is not small, but instead very large”. But note that this is not the drop in existential risk that needs to be caused by the intervention Bostrom is evaluating. He relies on there being at least a slender possibility that risk levels fall to something like those of the safest species on Earth, but the intervention doesn’t need to bring that about.
So on this ‘first mistake’, I agree that it is often also useful to think of things in per-period risk, and that this could provide a sanity check. But in this case, I think Bostrom’s estimate passes that sanity check, so I don’t think he has made any kind of mistake here.
Regarding the ‘second mistake’, I don’t see how it is very different from the first one. If there remains high average per-period risk, then the expected benefits of avoiding near-term risk are indeed greatly lowered — from ‘overwhelming’ to just ‘large’. In effect, they start to approach the level of risk to currently existing people (which is sometimes argued to be so large already that we don’t need to talk about future generations).
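To put a rough number on that drop (the risk levels here are my own illustration): under a constant per-year risk r, humanity’s expected future lifespan is 1/r years, so persistently high risk caps the stakes at something large rather than astronomical.

```python
# Rough illustration (risk levels are mine): with constant annual risk r,
# expected future lifespan = sum over t of t * r * (1-r)**(t-1) = 1/r years.
for r in [1e-3, 1e-6, 1e-9]:
    print(f"constant annual risk {r:.0e}: expected future ~ {1 / r:,.0f} years")

# At a persistently high 0.1%/year, the expected future is ~1,000 years:
# a large stake, but far from the vast futures that make single-period
# risk reduction look overwhelmingly valuable.
```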
But it doesn’t seem unreasonable to me for Millett and Snyder-Beattie to model things with an expected lifespan for humanity equal to that of a typical species. It is true that if risk stays high, then we won’t get that, but risk staying high would be a more contentious assumption. And uncertainty about the final rate tends to increase the expectation. E.g. if there were even a 1 in 400 chance that we last as long as the Nautilus, then that alone would make M & SB’s assumption an underestimate. Again, I can’t see any ‘mistake’ here.
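To spell out that 1 in 400 arithmetic (I am assuming the commonly cited typical-species lifespan of about a million years as the benchmark; the exact figure M & SB use may differ):

```python
# The Nautilus branch alone can exceed a typical-species expectation.
p_nautilus = 1 / 400
nautilus_years = 500e6           # Nautilus-like survival: 500 million years
typical_species_years = 1e6      # assumed typical-species benchmark

contribution = p_nautilus * nautilus_years
print(f"expected years from the Nautilus branch alone: {contribution:,.0f}")
print(f"already exceeds the benchmark: {contribution > typical_species_years}")
# 500 million / 400 = 1.25 million years, so even setting every other
# possibility to zero, the expectation is above the typical-species figure.
```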
I was actually much more intrigued by your comment about a systematic overestimate due to an implicit assumption of independence between the variables they estimate. I’d have loved to see that developed instead.
There is also room for an interesting critique of EV of risk reduction as the best measure. Your arguments generally put pressure on the idea that the estimates of M & SB (or other people’s duration estimates) are typical of the probability distribution. That is, they might be OK as estimates of the expectations (means), but they get much of that EV from the extreme tail of the distribution. And we might have Pascalian concerns about cases like that, where there is a decent case that we shouldn’t compare prospects like this by their expectations.
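A toy two-outcome distribution (entirely invented) shows how an estimate can be fine as a mean while nearly all of that mean comes from the tail:

```python
# Invented two-outcome distribution: the mean is dominated by a rare tail.
p_tail = 1 / 1000
short_years = 1e3        # 999/1000: a short future
tail_years = 1e9         # 1/1000: an astronomically long future

mean = (1 - p_tail) * short_years + p_tail * tail_years
print(f"mean duration:   {mean:,.0f} years")         # ~1,001,000 years
print(f"median duration: {short_years:,.0f} years")  # 1,000 years
print(f"share of mean from the tail: {p_tail * tail_years / mean:.1%}")
# The mean may be a perfectly good expectation, yet ~99.9% of it comes from
# the 1-in-1000 branch -- the kind of case where Pascalian worries bite.
```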
‘But risk staying high would be a more contentious assumption.’ Why? I take it this is really the heart of the disagreement, so it would be good to hear what makes you think this.
I thought section 6.1 on ‘Cumulative risk and intergenerational coordination’ was very good. Many people (including those promoting action on existential risk) neglect how important it is that we get risk down and then keep it down. This is a necessary part of what I call existential security in my section of The Precipice devoted to our longterm strategy. And it is not easy to achieve. One strategy I talk about is implementing a constitution for humanity, committing future generations to work within their own diminishing share of a finite existential risk budget.
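For concreteness, here is one way such a budget could work (the halving allocation is my illustration, not the proposal in The Precipice): if generation n is allotted at most ε/2^n of a total budget ε, cumulative risk stays below ε forever.

```python
# Sketch of a finite existential risk budget (the halving rule is my own
# illustration): generation n may incur at most eps / 2**n risk, so the
# shares sum to eps and cumulative risk is bounded however long we last.
eps = 0.01  # total risk budget across all future generations

survival = 1.0
for n in range(1, 51):
    survival *= 1 - eps / 2**n
print(f"P(surviving 50 generations) = {survival:.6f}  (always above {1 - eps})")
# Since the product of (1 - x_n) is at least 1 - sum(x_n), survival stays
# above 1 - eps for any horizon.
```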
I broadly agree with section 6.2 on dialectical flips, which is why I made broadly the same point in The Precipice (p 275):
“This is contrary to our intuitions, as people who estimate risk to be low typically use this as an argument against prioritizing work on existential risk.”
I was thus a bit surprised to see my book cited in that section as something endorsing the Time of Perils, but not as something that had made the same point you are making about why high per-period risk reduces the EV of work on single-period risk reduction.
Re the third ‘mistake’, there is a long history of thinking that carrying capacity is a decent proxy for long-term population. Is it a good proxy? Probably not in many situations. Is it better than extrapolating out the current growth dynamics for millions of years? Probably. My guess is that it is a simple, defensible rough model here. And by laying out separate estimates for different scales being reached, there is also a pretty good sensitivity analysis. I think you are right that this could be improved by adding cases of permanent population collapse to the sensitivity analysis. But it won’t change the EV much. So again, I wonder if a superior critique would be: these estimates are more or less correct in EV terms, but we should be suspicious of EV.
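A toy sensitivity check (every probability and population below is invented) shows why a collapse scenario barely moves the EV:

```python
# Toy sensitivity analysis (all numbers invented): adding a
# permanent-collapse scenario barely moves an EV dominated by
# large-scale scenarios.
scenarios = {  # name: (probability, total future population)
    "earthbound carrying capacity": (0.89, 1e16),
    "interstellar settlement":      (0.01, 1e24),
    "permanent collapse":           (0.10, 1e10),
}

ev = sum(p * pop for p, pop in scenarios.values())
print(f"overall EV: {ev:.3e} future people")
for name, (p, pop) in scenarios.items():
    print(f"  {name}: {p * pop / ev:.2%} of the EV")
# Even a 10% chance of permanent collapse contributes a vanishing share of
# the EV, because the small-probability interstellar branch dominates --
# which is exactly why one might instead be suspicious of EV itself.
```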
Thanks Toby! Comments much appreciated.