Thoughts on The Weapon of Openness

The Weapon of Openness is an essay by Arthur Kantrowitz, published by the Foresight Institute in 1989. In it, Kantrowitz argues that the long-term costs of secrecy in adversarial technology development outweigh the benefits, and that openness (defined as “public access to the information needed for the making of public decisions”) will therefore lead to better technology relative to adversaries and hence greater national security. As a result, more open societies will tend to outperform more secretive societies, and policymakers should tend strongly towards openness even in cases where secrecy is tempting in the short term.

The Weapon of Openness presents itself as a narrow attack on secrecy in technological development. In the process, however, it makes many arguments which seem to generalise to other domains of societal decision-making, and can hence be viewed as a more general attack on certain kinds of secretiveness[1]. As such, it seems worth reviewing and reflecting on the arguments in the essay and how they might be integrated with a broader concern for information hazards and the long-term future.

The essay itself is fairly short and worth reading in its entirety, so I’ve tried to keep this summary brief. Any unattributed blockquotes in the footnotes are from the original text.

Secrecy in technological development

The benefits of secrecy in adversarial technological development are obvious, at least in theory. Barring leaks, infiltration, or outright capture in war, the details of your technology remain opaque to outsiders. With these details obscured, it is much more difficult for adversaries to either copy your technology or design countermeasures against it. If you do really well at secrecy, even the relative power level of your technology remains obscured, which can be useful for game-theoretic reasons[2].

The costs of secrecy are subtler and easier to miss, but potentially even greater than the benefits. This should set alarm bells ringing for anyone familiar with the failure modes of naïve consequentialist reasoning.

One major cost is cutting yourself off from the broader scientific and technological discourse, greatly restricting the ability of experts outside the project to suggest new approaches or point out flaws in the current one. This is bad enough by itself, but it also makes it much more difficult for project insiders to enlist outside expertise during internal disputes over the direction of the project. The result, says Kantrowitz, is that disputes within secret projects have a much greater tendency to be resolved politically rather than on the technical merits: decisions get made in ways that flatter the decision-makers, those they favour and those they want to impress, while changes of approach that might embarrass those people are avoided. This might suffice for relatively simple projects involving only incremental improvements on existing technology, but when a project aims for an ambitious leap in capabilities (and hence is likely to involve several false starts and course corrections) it can be crippling[3].

This claimed tendency of secret projects to make technical decisions on political grounds hints at Kantrowitz’s second major argument[4]: that secrecy greatly facilitates corruption. By screening not only the decisions but the decision-making process from outside scrutiny, secrecy greatly reduces the incentive for decision-makers to make choices they could justify to outside scrutinisers. Given the well-known tendency of humans to respond to selfish incentives, the result is unsurprising: greatly increased toleration of waste, delay and other inefficiencies, up to and including outright corruption in the narrow sense, whenever these inefficiencies make the lives of decision-makers or those they favour easier, or increase their status (e.g. by increasing their budget)[5].

This incentive to corruption is progressive and corrosive, gradually but severely impairing general organisational effectiveness, and with it the effectiveness of the secret project itself. If the same organisation performs other secret projects in the future, the corrosion will be passed on to these successor projects in the form of normalised deviance and generalised institutional decay. Since the corrupted institutions are the very ones responsible for identifying this corruption, and are screened from most or all external accountability, the problem can be very difficult to reverse.

Hence, says Kantrowitz, states that succumb to the temptations of secret technological development may reap some initial gains, but will gradually see these gains eaten away by impaired scientific and technological exchange and accumulating corruption, until they are on net far less effective than if they’d stayed open the whole time. The implication seems to be that the US and its allies should tend much more towards openness and much less towards secrecy, at least in the technological domain in peacetime[6].

Secrecy as a short-term weapon

Finally, Kantrowitz makes the interesting argument that secrecy can be a highly effective short-term weapon, even if it isn’t a viable long-term strategy.

When a normally open society rapidly increases secrecy in response to some emergency (typically war), it initially retains the strong epistemic institutions and norms fostered by a culture of openness, and can thus continue to function effectively while reaping the adversarial advantages secrecy provides. In addition, the pressures of the emergency can provide an initial incentive for good behaviour: “the behavior norms of the group recruited may not tolerate the abuse of secrecy for personal advancement or interagency rivalry.”

As such, groups that previously functioned well in the open can continue to function well (or even better) in secret, at least for a short time. If the emergency persists for a long time, however, or if the secret institutions persist past the emergency that created them, the corrosive effects of secrecy – declining efficacy and growing corruption – will take root and grow, increasingly compromising the functionality of the organisation.

Secrecy may therefore be good tactics, but bad strategy. If true, this would explain how some organisations (most notably the Manhattan Project) have produced such impressive achievements while remaining highly secretive, while also explaining why these are exceptions to the general rule.

Speculating about this myself, I find this an ominous possibility: the gains from secrecy are clearly legible and acquired rapidly, while the costs accrue gradually and in a way that is difficult for an internal actor to spot. The initial successes justify the continuation of secrecy past the period where it provided the biggest gains, after which the accruing costs of declining institutional health make it increasingly difficult to undo. Those initial successes, if later made public, also provide the organisation with a good reputation and public support, while its declining current performance is kept secret. As a result, the organisation’s secrecy could retain both public and private support well past the point at which it becomes a net impediment to efficacy[7].

If this argument is true, it suggests that secrecy should be kept as a rare, short-term weapon in the policy toolbox. Rather than an indispensable tool of state policy, secrecy might then be regarded as analogous to a powerful but addictive stimulant: to be used sparingly in emergencies and otherwise avoided as much as possible.

Final thoughts

The Weapon of Openness presents an important-seeming point in a convincing-seeming way. Its arguments jibe with my general understanding of human nature, incentives, and economics. If true, they seem to present an important counterpoint to concerns about info hazards and information security. At the same time, the piece is an essay, not a paper, and goes to relatively little effort to make itself convincing beyond laying out its central vision: Kantrowitz provides few concrete examples and cites even fewer sources. I am, in general, highly suspicious of compelling-seeming arguments presented without evidentiary accompaniment, and I think I should be even more so when those arguments are in support of my own (pro-academic, pro-openness) leanings. So I remain somewhat uncertain as to whether the key thesis of the article is true.

(One point against that thesis that immediately comes to mind is that a great deal of successful technological development in an open society is in fact conducted in secret. Monetised open-source software aside, private companies don’t seem to be in the habit of publicly sharing the details of their products before or during development. A fuller account of the weapon of openness would need to explain why private companies don’t fail in the way secret government projects are alleged to[8].)

If the arguments given in the Weapon of Openness are true, how should those of us primarily concerned with the value of the long-term future respond? Long-termists are often sceptical of the value of generalised scientific and technological progress, and in favour of slower, more judicious differential technological development. The Weapon of Openness suggests this may be a much more difficult needle to thread than it initially seems. We may be sanguine about the slower pace of technological development[9], but the corrosive effects of secrecy on norms and institutions would seem to bode poorly for the long-term preservation of the good values required for the future to go well.

Insofar as this corrosion is inevitable, we may simply need to accept serious information hazards as part of our narrow path towards a flourishing future, mitigating them as best we can without resorting to secrecy. Insofar as it is not, exploring new ways[10] to be secretive about certain things while preserving good institutions and norms might be a very important part of getting us to a good future.


  1. ↩︎

    It was, for example, cited in Bostrom’s original information-hazards paper in its discussion of reasons one might take a robust anti-secrecy stance.

  2. ↩︎

    Though uncertainty about your power can also be very harmful, if your adversaries conclude you are less powerful than you really are.

  3. ↩︎

    Impediments to the elimination of errors will determine the pace of progress in science as they do in many other matters. It is important here to distinguish between two types of error which I will call ordinary and cherished errors. Ordinary errors can be corrected without embarrassment to powerful people. The elimination of errors which are cherished by powerful people for prestige, political, or financial reasons is an adversary process. In open science this adversary process is conducted in open meetings or in scientific journals. In a secret project it almost inevitably becomes a political battle and the outcome depends on political strength, although the rhetoric will usually employ much scientific jargon.

  4. ↩︎

    As a third argument, Kantrowitz also claims that greater openness can reduce “divisiveness” and hence increase societal unity, further strengthening open societies relative to closed ones. I didn’t find this as well explained or convincing as his other points, so I haven’t discussed it in the main text here.

  5. ↩︎

    The other side of the coin is the weakness which secrecy fosters as an instrument of corruption. This is well illustrated in Reagan’s 1982 Executive Order #12356 on National Security (alarmingly tightening secrecy) which states {Sec. 1.6(a)}: “In no case shall information be classified in order to conceal violations of law, inefficiency, or administrative error; to prevent embarrassment to a person, organization or agency; to restrain competition; or to prevent or delay the release of information that does not require protection in the interest of national security.” This section orders criminals not to conceal their crimes and the inefficient not to conceal their inefficiency. But beyond that it provides an abbreviated guide to the crucial roles of secrecy in the processes whereby power corrupts and absolute power corrupts absolutely. Corruption by secrecy is an important clue to the strength of openness.

  6. ↩︎

    We can learn something about the efficiency of secret vs. open programs in peacetime from the objections raised by Adm. Bobby R. Inman, former director of the National Security Agency, to open programs in cryptography. NSA, which is a very large and very secret agency, claimed that open programs conducted by a handful of mathematicians around the world, who had no access to NSA secrets, would reveal to other countries that their codes were insecure and that such research might lead to codes that even NSA could not break. These objections exhibit NSA’s assessment that the best secret efforts, that other countries could mount, would miss techniques which would be revealed by even a small open uncoupled program. If this is true for other countries is it not possible that it also applies to us?

  7. ↩︎

    Kantrowitz expresses similar thoughts: “The general belief that there is strength in secrecy rests partially on its short-term successes. If we had entered WWII with a well-developed secrecy system and the corruption which would have developed with time, I am convinced that the results would have been quite different.”

  8. ↩︎

    There are various possible answers to this I could imagine being true. The first is that private companies are in fact just as vulnerable to the corrosive effects of secrecy as governments are, and that technological progress is much lower than it would be if companies were more open. Assuming arguendo that this is not the case, there are several factors I could imagine being at play. I originally had an itemised list here but the Forum is mangling my footnotes, so I’ll include it as a comment for now.

  9. ↩︎

    How true this is depends on how much importance you place on certain kinds of adversarialism: how important you think it is that particular countries (or, more probably, particular kinds of ideologies) retain their competitive advantage over others. If you believe that the kinds of norms that tend to go with an open society (free, democratic, egalitarian, truth-seeking, etc) are important to the good quality of the long-term future you may be loath to surrender one of those societies’ most important competitive advantages. If you doubt the long-term importance of those norms, or their association with openness, or the importance of that association to the preservation of these norms, this will presumably bother you less.

  10. ↩︎

    I suspect they really will need to be new ways, and not simply old ways with better people. But as yet I know very little about this, and am open to the possibility that solutions already exist about which I know nothing.