Thanks for the comment, Michael.
On wild animal suffering
You raise a good point, and I do think WAS persisting into the long-term future is a serious concern. That said, I think the distinction between incidental and intentional suffering is absolutely crucial from a longtermist perspective.
Agents who value ecosystems or nature aesthetically don’t have “create suffering” as a terminal value. The suffering is a byproduct—one they might be open to eliminating if they could do so without destroying what they actually care about. That makes this amenable to Pareto improvements: keep the ecology, remove the suffering. It’s at least conceivable that those who value ecosystems would be open to interventions that reduce suffering in nature, though they’d probably dislike doing so via advanced technology like nanobots. (They might be open to more “natural” interventions; more on that in a moment.)
It’s also worth noting that WAS at its current Earthly scale isn’t an s-risk (by definition, s-risks entail vastly more suffering than currently exists on Earth). For it to become one, you’d need agents who actively spread it to other star systems, insist that all the animals keep suffering, and refuse any intervention. At that point, you’re arguably describing something that could be called “ecological fanaticism”: dogmatic certainty, a simplistic nature-good/intervention-evil dichotomy, and willingness to perpetuate vast suffering in service of that ideology. Admittedly, this is a bit of a definitional stretch, but it’s at least in the neighborhood.
As an aside, I think it’s worth noting that a lot of people already care about reducing wild animal suffering in certain ways. Videos of people rescuing wild animals—dogs from drowning, deer stuck on ice—get millions of views and enthusiastic responses. There seems to be broad latent demand for reducing animal suffering when it’s made salient. The vast majority of wild animal suffering persists not because people terminally value it, but because we lack the resources and technology to do much about it right now. That will change with ASI.
What’s more, fanatics will resist compromise and moral trade. Someone who likes nature and has a vague preference to keep it untouched, but isn’t fanatically locked into this, would presumably allow you to eliminate the suffering if you offered enough resources in return (provided you did it in a way that doesn’t offend their sensibilities—superintelligent agents might come up with ways of doing that). It’s plausible that altruistic agents will own at least some non-trivial fraction of the cosmic endowment and would be happy to spend it on exactly such trades. Fanatical agents, by contrast, won’t trade or compromise.
Where I think the concern about fanaticism becomes most acute is with agents who believe that deliberately creating suffering is morally desirable—e.g., extreme retributivist attitudes that want to inflict eternal torment. If people with such values have access to ASI, the resulting suffering could dwarf WAS by orders of magnitude, especially factoring in intensity. That’s the type of scenario we’re trying to draw attention to.
On the atrocity table and intentional deaths
I also received a somewhat similar concern via DM: filtering for intentional deaths and then finding fanaticism is circular reasoning. I don’t think it is, because intentional ≠ ideologically fanatical. You can have intentional mass killing driven by strategic interest, resource extraction, personal megalomania, etc. (And the table does indeed include two non-fanatical examples.) The finding is that among the worst intentional mass killings, most involved ideological fanaticism. This is a substantive empirical result, not a tautology.
Including famines wouldn’t even change the picture that much. You’d add the British colonial famines, the Chinese famine of 1907, and Mao’s Great Leap Forward (though the Great Leap Forward was itself clearly driven by fanatical ideological zealotry, and certain ideologies—colonialism, laissez-faire ideology, etc.—probably also substantially contributed to the British famines in India).
More importantly, once you start including famines, why not also include pandemics? And once you include pandemics, why not deaths from disease more generally—cancer, heart disease, etc.? And why not include deaths from aging then? Obviously, the vast majority of deaths since 1800 were not due to fanaticism; most were from hunger, disease, and aging.
But with sufficiently advanced technology, you won’t have deaths from disease, hunger, or aging. These deaths don’t reveal anything about terminal preferences. Intentional deaths do. That’s why, from a longtermist perspective, focusing on intentional deaths isn’t cherry-picking; it’s studying the thing that actually matters for predicting what the long-term future looks like.
ASI will give agents enormous control over the universe, so the future will be shaped primarily by the terminal values of whoever controls that technology. Unintentional mass death from incompetence or nature (like aging) is terrible, but solvable.
Lastly, I worry that we’re getting too hung up on the atrocity table. Even in a world where ideological fanaticism had resulted in only a few historical atrocities, I’d still be concerned about it as a long-term risk. The table is just one outside-view/historical argument among several for why we should take fanaticism seriously. The core reasons for worrying about fanaticism are mostly discussed in these sections.