Space settlement and the time of perils: a critique of Thorstad
Given the rate at which existential risks seem to be proliferating, it’s hard not to suspect that unless humanity comes up with a real game-changer, in the long run we’re stuffed. David Thorstad has recently argued that this poses a major challenge to longtermists who advocate prioritising existential risk. The more likely an x-risk is to destroy us, Thorstad notes, the less likely there is to be a long-term future. Nor can we solve the problem by mitigating this or that particular x-risk—we would have to reduce all of them. The expected value of addressing x-risks may not be so high after all. There would still be an argument for prioritising them if we are passing through a ‘time of perils’ after which existential risk will sharply fall. But this, Thorstad argues, is unlikely to be the case.
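To see the structure of the worry, here is a rough numerical sketch. The risk figures are illustrative assumptions of mine, not Thorstad’s: with a constant per-century extinction risk, the expected length of the future is simply the reciprocal of that risk, so eliminating one hazard among several only helps modestly while the others persist.

```python
# Illustrative sketch of the structural point; the risk numbers are
# assumptions chosen for illustration, not figures from Thorstad's paper.

def expected_future_centuries(per_century_risk):
    """Expected number of centuries survived under a constant per-century
    extinction risk (mean of a geometric distribution)."""
    return 1.0 / per_century_risk

total_risk = 0.20  # assumed total per-century extinction risk
one_risk = 0.10    # assumed contribution of a single hazard

print(expected_future_centuries(total_risk))             # 5 centuries in expectation
print(expected_future_centuries(total_risk - one_risk))  # 10 centuries with that hazard removed
# Removing one major hazard merely doubles the expected future; the vast
# long-term payoffs longtermists invoke require *all* the risks to fall.
```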
Thorstad raises a variety of intriguing questions which I plan to tackle in a later post, picking up in part on Owen Cotton-Barratt’s insightful comments here. In this post I’ll focus on a particular issue—his claim that settling outer space is unlikely to drive the risk of human extinction low enough to rescue the longtermist case. Like other species, ours seems more likely to survive if it is widely distributed. Some critics, however, argue that space settlements would still be physically vulnerable, and even writers sympathetic to the project maintain they would remain exposed to dangerous information. Certainly many, perhaps most, settlements would remain vulnerable. But would all of them?
First let’s consider physical vulnerability. Daniel Deudney and Phil (Émile) Torres have warned of the possibility of interplanetary or even interstellar conflict. But if we or other sentient beings spread to other planets, the vast distances involved would make travel between them time-consuming. On the one hand, that would seem to preclude any United Federation of Planets to keep the peace, as Torres notes; on the other, it would make warfare difficult and—very likely—pointless, just as it once was between Europe and the Americas. It’s certainly possible, as Thorstad notes, that some existential threat could doom us all before humanity gets to this point, but it doesn’t seem like a cert.
Deudney seems to anticipate this objection, and argues that ‘the volumes of violence relative to the size of inhabited territories will still produce extreme saturation….[U]ntil velocities catch up with the enlarged distances, solar space will be like the Polynesian diaspora—with hydrogen bombs.’ But if islands are far enough apart, the fact that weapons could obliterate them wouldn’t matter if there were no way to deliver the weapons. It would still matter, but less so, if it took a long time to deliver the weapons, allowing the targeted island to prepare. Ditto, it would seem, for planets.
Suppose that’s right. We might still not be out of the woods. Deudney warns that ‘giant lasers and energy beams employed as weapons might be able to deliver destructive levels of energy across the distances of the inner solar system in times comparable to ballistic missiles across terrestrial distances.’ But he goes on to note that ‘the distances in the outer solar system and beyond will ultimately prevent even this form of delivering destructive energy at speeds that would be classified as instantaneous.’ That might not matter so much if the destructive energy reached its target in the end. Still, I’d be interested to hear whether any EA Forum readers know if interstellar death rays of this kind are feasible at all.
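For what it’s worth, a crude back-of-envelope check using the standard diffraction limit (the parameters below are my own illustrative assumptions, not drawn from Deudney) suggests that beam spreading alone is a serious obstacle at interstellar range:

```python
# Back-of-envelope: diffraction-limited spreading of a laser beam.
# Spot diameter at the target ~ 2.44 * wavelength * distance / aperture.
# Parameters are illustrative assumptions, not taken from the post.

wavelength = 1e-6     # metres (near-infrared laser)
aperture = 10.0       # metres (a very large emitting mirror)

def spot_diameter_m(distance_m):
    return 2.44 * wavelength * distance_m / aperture

AU = 1.5e11           # metres (Earth-Sun distance)
LIGHT_YEAR = 9.46e15  # metres

print(spot_diameter_m(AU))              # ~37 km across one astronomical unit
print(spot_diameter_m(4 * LIGHT_YEAR))  # ~9 million km at the nearest stars
# At interstellar range the beam is smeared over millions of kilometres,
# so the energy arriving per square metre of target is minuscule.
```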
There’s also the question of why war would occur. Liberals maintain that economic interdependence promotes peace, but as critics have long pointed out, it also gives states something to fight about. Bolivia and Bhutan don’t get into wars with each other because they don’t—I assume—interact very much at all. One theory about why there’s been so little interstate conflict within South America is that geographical barriers such as the Amazon not only make it hard to fight other countries but also deprive them of reasons for fighting. The same might be true of space settlements.
Even if space settlements were invulnerable to physical attack, this needn’t mean they would be safe. Information—such as misaligned AI or pernicious ideologies—could still spread at the speed of light. ‘Many risks, such as disease, war, tyranny and permanently locking in bad values’, Toby Ord writes in The Precipice, ‘are correlated across different planets: if they affect one, they are somewhat more likely to affect the others too. A few risks, such as unaligned [artificial general intelligence] and vacuum collapse, are almost completely correlated: if they affect one planet, they will likely affect all.’ This leads Ord to conclude that space colonization would be insufficient to eliminate existential risk.
This might be true for vacuum collapse, but is it for misaligned AI? For misaligned AI to be transmitted to other worlds, it would take not only a sender but also recipients. Ditto for designer diseases, computer viruses and so forth. While many extraterrestrial civilisations would no doubt maintain contact with others, it seems improbable that all would. Some groups that settled new planets would do so to get away from Earth civilisation for religious, moral or aesthetic reasons. They might have the explicit goal of preserving humanity from existential risk. Many such groups would deliberately isolate themselves, in part out of fear of the scenarios Ord discusses, and work hard to prevent contact with other planets.
These controls probably wouldn’t be airtight. Even the totalitarian states of the twentieth century couldn’t stop everyone from listening to foreign radio broadcasts. And misaligned superintelligence might be extremely clever at tricking other planets into tuning in. Still, some of these civilisations would probably find ways to make it difficult, notably by developing artificial superintelligence of their own. If some ASIs succeeded in cutting off their worlds from communication, they might survive indefinitely. Alternatively, some might voluntarily renounce modern technology, or suffer a natural or anthropogenic cataclysm that returned them to Neolithic conditions, and survive a long time. If there were enough settlements, only a minority would have to survive in this way for there to be a big long-term future. In his Philosophy and Public Affairs paper, Thorstad explicitly brackets the effects of AI, and as I’ll argue in a later post, that puts a big asterisk on his argument.
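The arithmetic here is forgiving. On the (admittedly strong) assumption that isolated settlements’ fates become roughly independent, even a small per-settlement chance of riding out a correlated catastrophe adds up quickly as the number of settlements grows:

```python
# Toy calculation: probability that at least one of n isolated settlements
# survives, assuming (a strong assumption, and contrary to Ord's worry about
# correlated risks) that each survives independently with probability p.

def at_least_one_survives(n_settlements, p_survive):
    return 1 - (1 - p_survive) ** n_settlements

for n in (10, 100, 1000):
    print(n, round(at_least_one_survives(n, 0.01), 5))
# 10 -> ~0.096, 100 -> ~0.634, 1000 -> ~0.99996
```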
None of this shows that human beings ought to expand into space—at least not without further argument. Trying to do so might, as Deudney and Torres argue, create new existential risks that are worse than the ones we already face, or—by spreading life to other planets—multiply the sum of wild animal suffering enormously. Even if some space settlements worked out great, the overall outcome could be bad if most became dystopias. What the possibilities do suggest, however, is that, contra Thorstad, there could well be an astronomical amount at stake in how we address both existential and suffering risks.