Doubting Deterrence by Denial

Link post

Reposted (with edits) from the Oxford Emerging Threats Group Journal.

Image: THAAD (cropped). The U.S. Army, Ralph Scott/Missile Defense Agency/U.S. Department of Defense, CC BY 2.0 <https://creativecommons.org/licenses/by/2.0>, via Wikimedia Commons.

Deterrence by denial refers to strategies that "seek to deter an action by making it infeasible or unlikely to succeed, thus denying a potential aggressor confidence in attaining its objective". It contrasts with deterrence by punishment: deterrence through the threat of retaliatory force. Applied to the governance of emerging technologies, the notion is simple: actors should disincentivise adversaries — whether rival states or terrorist organisations — from using emerging technologies to cause harm by ensuring they are adequately defended against them. For example, robust biosurveillance and rapid countermeasures could deter the malicious use of emerging biotechnologies; better physical and cyber defences for cyberinfrastructure could prevent AI-enabled cyberattacks; and the proliferation of satellites could deter space weaponisation and orbital attacks by improving our ability to detect space weapons and increasing the number of targets an adversary would need to take down.

This idea is gaining momentum, particularly as complexities around the offence-defence balance of dual-use technologies motivate reliance on defence-forward approaches such as differential technological development. However, there has been limited theoretical engagement with the concept. Even a cursory consideration of deterrence by denial reveals that it is limited by the challenges of information asymmetries, horizontal proliferation dynamics, and underappreciated strategic trade-offs, all compounded by sensitivities to different types of actors and technologies. Actors should not naively rely on deterrence-by-denial strategies without deeper consideration of these factors.

(In)credible Signalling and Information Asymmetries

To succeed with a deterrence-by-denial strategy, one must not only develop adequate defensive capabilities but also credibly signal that those defences would actually work, and ensure that adversaries are paying attention to this information in the first place. Strategies such as costly signalling – bearing costs to demonstrate one's commitment – are likely to fail where deterrence requires adversaries to perceive that a defence will actually succeed, not merely that one is committed to it.

In the context of emerging technologies, this is particularly difficult when there are few case studies or established norms about what effective defensive institutions should look like. It is even more difficult when there is a great degree of variance built into defensive outcomes. A costly, decades-long biosurveillance program may have little effect on biodefence if its underlying technology and bioinformatic processing are poor — precisely as seen with the United States’ BioWatch program.

Costly signalling is not the only approach; there are other routes to establishing credible signals, such as transparency and reputation-building. The problem is that the absence of consensus and the variance in defensive outcomes mean the most promising strategies are precisely those that risk giving away information an adversary can use to subvert your defences, making them double-edged swords.

All of this assumes that actors are even paying attention to defensive capabilities at this level of granularity, which cannot be taken as given for all types of actors. An important source of variance is whether the threat comes from states with sophisticated intelligence programs, groups like Aum Shinrikyo, which attempted to deploy anthrax in 1993 before its successful chemical weapons attack in 1995, or even lone wolves like Bruce Ivins — who is suspected of committing the 2001 Amerithrax attacks. This is especially problematic as the spread of dangerous capabilities increases the number and types of actors interested in causing harm. The second tension is then ensuring that relevant actors pay attention to defensive capabilities without creating attention hazards that motivate them to pursue entirely new modalities for harm. In all, establishing the right level of information asymmetry is the critical challenge, and the difficulty of credible signalling is a key reason deterrence by denial is far from a panacea.

General-Purposeness and ‘Horizontal’ Capability Proliferation

As stated, the concerns about credible signalling stem from the worry that the absence of credible signals, together with attention hazards, may motivate actors, now armed with better information about how to subvert defences, to pursue attacks they otherwise would not have. However, this cost may be worth bearing if signalling discourages enough other actors and sufficiently constrains their ability to subvert defences.

However, this second consideration cannot be brushed off so lightly. The more general-purpose a technology is, the easier it is for an adversary to subvert defences against it. For sufficiently general-purpose or fungible technologies, deterrence-by-denial strategies may not suppress capabilities so much as redirect their proliferation: instead of deepening a narrow set of capabilities in which an adversary holds a comparative advantage (vertical proliferation), the adversary widens its set of capabilities (horizontal proliferation). An important historical precedent is the development of missile defence systems during the Cold War. As defensive technologies aimed at specific threats advanced, adversaries (namely the US and USSR) responded by diversifying their offensive strategies — developing countermeasures like multiple independently targetable reentry vehicles (MIRVs), decoys, and manoeuvrable warheads.

A motivating factor for pursuing deterrence by denial is that offensive strategies fuel arms races between actors in a way that defensive strategies do not. However, the distinction between horizontal and vertical proliferation helps elucidate how deterrence-by-denial strategies may merely shape the type of race that occurs. In particular, they may generate races in which actors chase ever-greater general-purposeness and fungibility rather than the usual race-dynamic pressures towards greater quantities and potencies of technologies, as the Cold War missile defence example illustrates.

Of course, independent pressures towards general-purposeness may mean that the marginal contribution of deterrence by denial to this dynamic is low. Additionally, whether this outcome is even worse than vertical proliferation depends heavily on technological particularities and on whether the costs of scaling rise relative to the gains from diversification. The critical insight is that the extent to which deterrence by denial motivates risky horizontal proliferation is sensitive to the technology in question. Particularly in a world where AI may accelerate technological development cycles, I think there are entire classes of technologies for which pursuing a deterrence-by-denial strategy is net negative because of these horizontal proliferation dynamics.

Artificial intelligence itself is probably a case where the independent pressures towards general-purposeness mean that deterrence by denial plays an insignificant role in magnifying risks. However, the future of aerial warfare — whether drones or even nanotechnology — could represent one set of technologies where prioritising deterrence-by-denial measures may inadvertently accelerate the development of increasingly general-purpose military technologies. This is not to say that vertical proliferation of weaker drones would necessarily be a preferred outcome. However, it does raise questions about the place of deterrence by denial within the broader governance portfolio and what additional measures should accompany it to mitigate these unwanted effects. At the very least, a successful deterrence-by-denial strategy is unlikely to be a cure-all and must instead be appropriately scoped to the relevant technologies.

Deterrence-Defence Trade-offs

While deterrence by denial has its flaws as a strategy, a key reason to be excited about it is that it doubles as both a deterrence strategy and a defensive one. The marginal cost of pursuing deterrence on top of defensive capabilities can be so low that it remains cost-effective even with some downsides. However, this argument ignores necessary trade-offs between pursuing defence for its own sake and pursuing deterrence by denial.

Central to all the relevant trade-offs is the tension between credible signalling as a deterrent and maintaining secrecy to avoid pushing adversaries to subvert one's defensive capabilities. The first trade-off is that there are fundamental strategic differences in the risk-reward calculus between deterrence and defence. A salient example is the difficult question of how far along the heavy tail of catastrophic outcomes one should plan for. For deterrence, the calculus is not merely whether a greater number of actors are dissuaded than are motivated to subvert defences. One must also consider the capabilities of the remaining actors who now hold additional information about your defensive systems, even if they are a minority. This may mean prioritising secrecy to reduce the likelihood of worst-case outcomes, even at the cost of failing to deter less serious attacks. For defence, by contrast, the calculus is about how prepared one should be for extremely unlikely but severe incidents, even those not caused by adversaries. In some scenarios, secrecy may actively worsen risks. This is particularly relevant to autonomous risks from artificial general intelligence, where proposed solutions, such as making powerful AIs transparent enough to be inspected, trade off against strategic confidentiality. This is not to say that these strategic differences cannot be operationalised under a single framework; however, unifying them is not a trivial task.
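The deterrence-side calculus here can be made concrete with a toy expected-harm model. All actor counts, probabilities, and harm values below are purely hypothetical illustrations chosen to mirror the argument, not estimates from any literature:

```python
# Toy expected-harm comparison: secrecy vs. transparent signalling.
# All probabilities and harm values are hypothetical illustrations.

def expected_harm(actors):
    """Sum over actors of P(attempt) * P(success | attempt) * harm."""
    return sum(p_attempt * p_success * harm
               for p_attempt, p_success, harm in actors)

# Each actor: (P(attempt), P(success | attempt), harm if successful).
# Under secrecy, defences are not signalled: more actors attempt,
# but none holds inside knowledge of the defensive systems.
secrecy = [
    (0.8, 0.3, 100),    # unsophisticated actor, undeterred
    (0.8, 0.3, 100),    # another unsophisticated actor, undeterred
    (0.5, 0.4, 1000),   # sophisticated actor, catastrophic potential
]

# Under transparent signalling, most actors are deterred, but the
# sophisticated actor who attempts anyway can exploit the disclosed
# information about the defences (higher success probability).
signalling = [
    (0.1, 0.3, 100),    # deterred by visible defences
    (0.1, 0.3, 100),    # deterred by visible defences
    (0.5, 0.7, 1000),   # undeterred; exploits disclosed details
]

print(f"secrecy:    {expected_harm(secrecy):.0f}")     # 248
print(f"signalling: {expected_harm(signalling):.0f}")  # 356
```

Under these illustrative numbers, signalling deters more actors yet yields higher expected harm, because the one sophisticated actor left undeterred exploits the disclosed information: the heavy tail dominates the calculus, matching the case for prioritising secrecy even at the cost of failing to deter less serious attacks.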

Secondly, these divergent strategies impose additional costs through divergent tactical approaches and institutional design. These optimisation trade-offs can be costly, arising because robust defence can be invisible absent additional measures to signal defensive capabilities, and because measures that effectively signal defensive capabilities may do little for defence itself. The result may be the creation of new initiatives, strategies, or even institutions focused primarily on deterrent signalling. It must also be noted that deterrence by denial does not necessarily need to be defensive. For example, in a proposal for deterrence by denial in cyberspace, Erica D. Borghard and Shawn W. Lonergan are clear that it "[entails] maneuver [sic] and operations to achieve deterrent effects, rather than holding forces in reserve and communicating a capability and intent to use it". They propose, for example, that the US actively increase the number of cyberoperations aimed at crippling adversaries' capabilities in order to deter others. Such a program is much more costly than merely expanding defensive capabilities.

Finally, deterrence by denial may impose additional operational and administrative costs on top of robust defences, for at least three reasons. Firstly, there are concrete costs of tracking threats, gathering intelligence, and integrating defensive systems into a national security apparatus. Secondly, there are the direct costs of credible signalling, such as investing in certification processes, public testing, or in-field deployments for reputation-building. Finally, a deterrence-focused strategy requires proactively adapting to an adversary's perceptions of one's capabilities, rather than to assessments of the adversary's capabilities alone, in order to verify that deterrence is working. This can impose extensive intelligence costs in particular.

Conclusion: Towards Fine-Grained Theories of Deterrence by Denial

The success of a deterrence-by-denial strategy depends on factors ranging from the ability to credibly signal defensive capabilities to whether the particular technology of interest motivates inadvertent horizontal escalation that ultimately undermines national and international security. These considerations cannot be taken lightly, as deterrence by denial has the potential to introduce considerable costs over robust defence alone. Rapid developments in technologies such as artificial intelligence, biotechnology, and quantum computing — as well as rising threats such as Russia's recent expansion of one of its former major bioweapons facilities — make it prudent for actors to consider how they can deter and defend against emerging technology threats. However, actors should not naively rely on deterrence-by-denial strategies without seriously considering the feasibility of credible signalling, the potential for inadvertent horizontal proliferation, and the many trade-offs involved, so as to ensure such strategies are as effective as possible.