I teach international relations and political theory at the University of Nottingham. Much of my research relates to intergenerational ethics, existential risk, or both. I’ve examined cases of ongoing great power peace, and argued that nuclear war is inevitable in the long term if we try to perpetuate nuclear deterrence. I’ve also written extensively about the ethics of climate change, argued that governments should make more use of public debt to address it, and proposed solutions to the non-identity problem and the mere addition paradox.
Matthew Rendall
Thanks! So far as I know, you’re right about interstellar travel. But suppose we got a good bit of dispersal within the solar system, say, ten settlements. There seems a reasonable chance that at least some would deliberately break off communication with the rest of the solar system and develop effective means of policing this. They would then—so far as I can tell—be immune to existential risks transmitted by information—e.g., misaligned AI.
It’s true that they could still be vulnerable to physical attack, such as a killer probe, but how likely is this? It’s conceivable that either human actors or misaligned ASI could decide to wipe out or conquer hermit settlements elsewhere in the solar system, but that strikes me as rather improbable. They’d have to have a strange set of motives.
It might also be hard to do. Since the aggressor would have to project power across a huge distance, we might expect the offence-defence balance to favour the defence, so long as the potential victims had means of detecting a probe or some other attack. (This wouldn’t be true, however, if the reason the settlements had ‘gone off the grid’ was that they had returned to pre-modern conditions, either by choice or by catastrophe.)
Space settlement and the time of perils: a critique of Thorstad
It’s in his book Inequality, chapter 9. Ingmar Persson makes a similar argument about the priority view here: https://link.springer.com/article/10.1023/A:1011486120534.
Larry Temkin has noted an independent reason for doubting the person-affecting restriction stated in section 2.1. Suppose on a wellbeing scale of 1-100 we can create either
A. Kolya, Lev and Maksim, each on 50, or
B. Katya on 40, Larissa on 50 and Maria on 60.
Many would think A better than B, either because it is more equal or because it is better for the worse-off (understood de dicto). But it is not better for any particular person.
Stephen Van Evera [1] argues that for purposes of explaining the outbreak of war, what’s most important is not the objective offence-defence balance (he thinks it usually favours the defence), but rather what states believe it to be. If they believe it favours the offence (as Van Evera and some other scholars argue that they did before World War I), war is more likely.
It seems as if perceptions should matter less in the case of cyberattacks. Whereas a government is unlikely to launch a major war unless it thinks either that it has good prospects of success or that it faces near-certain defeat if it doesn’t, the costs of a failed cyberattack are much lower.
Mearsheimer does claim that states rationally pursue security. However, the assumption that states are rational actors—shared by most contemporary realists—is a huge stretch. The original—and still most influential—statement of neorealist theory, Kenneth Waltz’s Theory of International Politics, did not employ a rational actor assumption, but rather appealed to natural selection—states that did not behave as if they sought to maximize security would tend to die out (or, as Waltz put it, ‘fall by the wayside’). In contrast to Mearsheimer, Waltz at least offered a rationale for the assumption of security-seeking, rather than simply stipulating it.
In subsequent publications, Waltz argued that states would be very cautious with nuclear weapons, and that the risk of nuclear war was very low—almost zero. Setting aside the question of whether almost zero is good enough in the long term, this claim is very questionable. From outside the realist paradigm, Scott Sagan has argued that internal politics are likely to predispose some states—particularly new nuclear states with military-dominated governments—to risky policies.
In a recent critique of both Waltz and Mearsheimer (https://journals.sagepub.com/doi/full/10.1177/00471178221136993), I myself argue that (a) on Waltz’s natural selection logic, we should actually expect great powers to act as if they were pursuing influence, not security—which should make them more risk-acceptant; and (b) Sagan’s worries about internal politics leading to risky nuclear policies are plausible even within neorealist theory, properly conceived (for the latter argument, see my section ‘Multilevel selection theory’).
Bottom line: When you dig down into neorealist logic, the claim that states will be cautious and competent in dealing with nuclear weapons starts to look really shaky. Classical realists like Hans Morgenthau and John Herz had a better handle on the issue.
Gorsuch wrote that the law would impose costs on farmers but that it ‘serve[d] moral and health interests of some (disputable) magnitude for in-state residents.’ These considerations were, supposedly, ‘incommensurable’, and should thus be left up to the voters.
Interestingly, he does not specify whether he means human or non-human in-state residents. Almost surely he meant the former. The magnitude of the interests involved becomes indisputably overwhelming if we factor in the latter. However, the rationale for respecting the judgement of the voters is correspondingly weakened, since the majority of those affected are disenfranchised.
Indeed. Sharing the work—and the links—was an important step to ensure it lives on in the work of other writers. I’d seen two of the papers, but not the dissertation, which I expect to eventually read and draw on. It’s very sad he won’t be able to turn it into a book.
Good you’re doing this! One suggestion concerning your argument: when assessing the impact of nuclear weapons, it’s helpful to think about their likely effects over the short and long term.
Nuclear war probably is ‘somewhat, but not extremely unlikely’ over the next few decades. If we retain nuclear weapons for centuries, on the other hand, it’s very likely indeed.
Similarly, I agree that it’s important to recognise that nuclear weapons have significant advantages for their possessors (as you mention in your ‘9 mistakes’), including increasing their security in the short term. But a state that tries to keep up nuclear deterrence forever is almost surely dooming itself to an eventual nuclear war.
I elaborate on these considerations in a recent paper: https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.13142. Here, as elsewhere, we need to take the long view as well as the short one.
Thanks! It seems to me that we should be cautious about assuming that attackers will have the advantage. IR scholars have spent a lot of time examining the offence-defence balance in terrestrial military competition, and while there’s no consensus—even about whether a balance can be identified—I think it’s fair to say that most scholars who find the concept useful believe it tends to favour the defence. That seems particularly plausible when it’s a matter of projecting force at interstellar distances—though if space lasers are possible it could be a different matter (I’d like to know more about this, as I noted in my original post).
If, moreover, attack were possible, it might be with the aim not of destruction but of conquest. Even if a successful conquest didn’t lead to outright extinction, it could still mean astronomical suffering. That points to a problem with Thorstad’s argument, which I’ll pick up in a subsequent post: it treats existential risks as synonymous with extinction risks.