Thoughts on The Weapon of Openness

The Weapon of Openness is an essay published by Arthur Kantrowitz and the Foresight Institute in 1989. In it, Kantrowitz argues that the long-term costs of secrecy in adversarial technology development outweigh the benefits, and that openness (defined as “public access to the information needed for the making of public decisions”) will therefore lead to better technology relative to adversaries and hence greater national security. As a result, more open societies will tend to outperform more secretive societies, and policymakers should tend strongly towards openness even in cases where secrecy is tempting in the short term.

The Weapon of Openness presents itself as a narrow attack on secrecy in technological development. In the process, however, it makes many arguments which seem to generalise to other domains of societal decision-making, and can hence be viewed as a more general attack on certain kinds of secretiveness[1]. As such, it seems worth reviewing and reflecting on the arguments in the essay and how they might be integrated with a broader concern for information hazards and the long-term future.

The essay itself is fairly short and worth reading in its entirety, so I’ve tried to keep this summary brief. Any unattributed blockquotes in the footnotes are from the original text.

Secrecy in technological development

The benefits of secrecy in adversarial technological development are obvious, at least in theory. Barring leaks, infiltration, or outright capture in war, the details of your technology remain opaque to outsiders. With these details obscured, it is much more difficult for adversaries to either copy your technology or design countermeasures against it. If you do really well at secrecy, even the relative power level of your technology remains obscured, which can be useful for game-theoretic reasons[2].

The costs of secrecy are more subtle, and easier to miss, but potentially even greater than the benefits. This should sound alarm bells for anyone familiar with the failure modes of naïve consequentialist reasoning.

One major cost is cutting yourself off from the broader scientific and technological discourse, greatly restricting the ability of experts outside the project to either propose new suggestions or point out flaws in your current approach. This is bad enough by itself, but it also makes it much more difficult for project insiders to enlist outside expertise during internal disputes over the direction of the project. The result, says Kantrowitz, is that disputes within secret projects have a much greater tendency to be resolved politically, rather than on the technical merits. That means making decisions that flatter the decision-makers, those they favour and those they want to impress, and avoiding changes of approach that might embarrass those people. This might suffice for relatively simple projects that involve making only incremental improvements on existing technology, but when the project aims for an ambitious leap in capabilities (and hence is likely to involve several false starts and course corrections) it can be crippling[3].

This claimed tendency of secret projects to make technical decisions on political grounds hints at Kantrowitz’s second major argument[4]: that secrecy greatly facilitates corruption. By screening not only the decisions but the decision-making process from outside scrutiny, secrecy greatly reduces the incentive for decision-makers to make decisions that could be justified to outside scrutinisers. Given the well-known general tendency of humans to respond to selfish incentives, the result is unsurprising: greatly increased toleration of waste, delay and other inefficiencies, up to and including outright corruption in the narrow sense, when these inefficiencies make the lives of decision-makers or those they favour easier, or increase their status (e.g. by increasing their budget)[5].

This incentive to corruption is progressive and corrosive, gradually but severely impairing general organisational effectiveness in ways that will obviously undermine the effectiveness of the secret project itself. If the same organisation performs other secret projects in the future, the corrosion will be passed to these successor projects in the form of normalised deviance and generalised institutional decay. Since the corrupted institutions are the very ones responsible for identifying this corruption, and are screened from most or all external accountability, this problem can be very difficult to reverse.

Hence, says Kantrowitz, states that succumb to the temptations of secret technological development may reap some initial gains, but will gradually see these gains eaten away by impaired scientific/technological exchange and accumulating corruption, until they are on net far less effective than if they’d stayed open the whole time. The implication of this seems to be that the US and its allies should tend much more towards openness and less towards secrecy, at least in the technological domain in peacetime[6].

Secrecy as a short-term weapon

Finally, Kantrowitz makes the interesting argument that secrecy can be a highly effective short-term weapon, even if it isn’t a viable long-term strategy.

When a normally open society rapidly increases secrecy as a result of some emergency pressure (typically war), it initially retains the strong epistemic institutions and norms fostered by a culture of openness, and can thus continue to function effectively while reaping the adversarial advantages provided by secrecy. In addition, the pressures of the emergency can provide an initial incentive for good behaviour: “the behavior norms of the group recruited may not tolerate the abuse of secrecy for personal advancement or interagency rivalry.”

As such, groups that previously functioned well in the open can continue to function well (or even better) in secret, at least for some short time. If the emergency persists for a long time, however, or if the secret institutions persist past the emergency that created them, the corroding effects of secrecy – on efficacy and corruption – will begin to take root and grow, eventually and increasingly compromising the functionality of the organisation.

Secrecy may therefore be good tactics, but bad strategy. If true, this would explain how some organisations (most notably the Manhattan Project) produce such impressive achievements while remaining highly secretive, while also explaining why these are exceptions to the general rule.

Speculating about this myself, this seems like an ominous possibility: the gains from secrecy are clearly legible and acquired rapidly, while the costs accrue gradually and in a way that is difficult for an internal actor to spot. The initial successes justify the continuation of secrecy past the period where it provided the biggest gains, after which the accruing costs of declining institutional health make it increasingly difficult to undo. Those initial successes, if later made public, also serve to provide the organisation with a good reputation and public support, while the organisation’s declining performance in current events is kept secret. As a result, the organisation’s secrecy could retain both public and private support well past the time at which it begins to be a net impediment to efficacy[7].
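
As a purely illustrative sketch (my own, not anything from Kantrowitz’s essay), this dynamic can be made concrete with a toy model in which the gains from secrecy arrive immediately and stay flat while the institutional costs start small and compound over time; every number and parameter below is invented for illustration only.

```python
# Toy illustration (invented parameters): flat yearly gains from secrecy versus
# institutional costs that start small and compound. The only point is that the
# cumulative balance can look strongly positive for years before it quietly
# turns and begins to fall.

def cumulative_net_benefit(years, annual_gain=1.0, initial_cost=0.05, cost_growth=1.3):
    """Cumulative net benefit of secrecy at the end of each year (hypothetical units)."""
    total, cost, series = 0.0, initial_cost, []
    for _ in range(years):
        total += annual_gain - cost  # gains are constant; costs grow each year
        cost *= cost_growth          # corrosion compounds
        series.append(round(total, 2))
    return series

print(cumulative_net_benefit(15))
# Early years show steady net gains; around year 13 the compounding costs
# overtake the annual gain and the cumulative total starts to decline.
```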

If this argument is true, it suggests that secrecy should be kept as a rare, short-term weapon in the policy toolbox. Rather than an indispensable tool of state policy, secrecy might then be regarded analogously to a powerful but addictive stimulant: to be used sparingly in emergencies and otherwise avoided as much as possible.

Final thoughts

The Weapon of Openness presents an important-seeming point in a convincing-seeming way. Its arguments jibe with my general understanding of human nature, incentives, and economics. If true, they seem to present an important counterpoint to concerns about info hazards and information security. At the same time, the piece is an essay, not a paper, and goes to relatively little effort to make itself convincing beyond laying out its central vision: Kantrowitz provides few concrete examples and cites even fewer sources. I am, in general, highly suspicious of compelling-seeming arguments presented without evidentiary accompaniment, and I think I should be even more so when those arguments are in support of my own (pro-academic, pro-openness) leanings. So I remain somewhat uncertain as to whether the key thesis of the article is true.

(One point against that thesis that immediately comes to mind is that a great deal of successful technological development in an open society is in fact conducted in secret. Monetised open-source software aside, private companies don’t seem to be in the habit of publicly sharing their product before or during product development. A fuller account of the weapon of openness would need to explain why private companies don’t fail in the way secret government projects are alleged to[8].)

If the arguments given in The Weapon of Openness are true, how should those of us primarily concerned with the value of the long-term future respond? Long-termists are often sceptical of the value of generalised scientific and technological progress, and in favour of slower, more judicious, differential technological development. The Weapon of Openness suggests this may be a much more difficult needle to thread than it initially seems. We may be sanguine about the slower pace of technological development[9], but the corrosive effects of secrecy on norms and institutions would seem to bode poorly for the long-term preservation of good values required for the future to go well.

Insofar as this corrosion is inevitable, we may simply need to accept serious information hazards as part of our narrow path towards a flourishing future, mitigating them as best we can without resorting to secrecy. Insofar as it is not, exploring new ways[10] to be secretive about certain things while preserving good institutions and norms might be a very important part of getting us to a good future.


  1. It was, for example, cited in Bostrom’s original information-hazards paper in discussion of reasons one might take a robust anti-secrecy stance. ↩︎

  2. Though uncertainty about your power can also be very harmful, if your adversaries conclude you are less powerful than you really are. ↩︎

  3. Impediments to the elimination of errors will determine the pace of progress in science as they do in many other matters. It is important here to distinguish between two types of error which I will call ordinary and cherished errors. Ordinary errors can be corrected without embarrassment to powerful people. The elimination of errors which are cherished by powerful people for prestige, political, or financial reasons is an adversary process. In open science this adversary process is conducted in open meetings or in scientific journals. In a secret project it almost inevitably becomes a political battle and the outcome depends on political strength, although the rhetoric will usually employ much scientific jargon.

    ↩︎

  4. As a third argument, Kantrowitz also claims that greater openness can reduce “divisiveness” and hence increase societal unity, further strengthening open societies relative to closed ones. I didn’t find this as well-explained or convincing as his other points so I haven’t discussed it in the main text here. ↩︎

  5. The other side of the coin is the weakness which secrecy fosters as an instrument of corruption. This is well illustrated in Reagan’s 1982 Executive Order #12356 on National Security (alarmingly tightening secrecy) which states {Sec. 1.6(a)}: “In no case shall information be classified in order to conceal violations of law, inefficiency, or administrative error; to prevent embarrassment to a person, organization or agency; to restrain competition; or to prevent or delay the release of information that does not require protection in the interest of national security.” This section orders criminals not to conceal their crimes and the inefficient not to conceal their inefficiency. But beyond that it provides an abbreviated guide to the crucial roles of secrecy in the processes whereby power corrupts and absolute power corrupts absolutely. Corruption by secrecy is an important clue to the strength of openness.

    ↩︎

  6. We can learn something about the efficiency of secret vs. open programs in peacetime from the objections raised by Adm. Bobby R. Inman, former director of the National Security Agency, to open programs in cryptography. NSA, which is a very large and very secret agency, claimed that open programs conducted by a handful of mathematicians around the world, who had no access to NSA secrets, would reveal to other countries that their codes were insecure and that such research might lead to codes that even NSA could not break. These objections exhibit NSA’s assessment that the best secret efforts, that other countries could mount, would miss techniques which would be revealed by even a small open uncoupled program. If this is true for other countries is it not possible that it also applies to us?

    ↩︎

  7. Kantrowitz expresses similar thoughts: “The general belief that there is strength in secrecy rests partially on its short-term successes. If we had entered WWII with a well-developed secrecy system and the corruption which would have developed with time, I am convinced that the results would have been quite different.” ↩︎

  8. There are various possible answers to this I could imagine being true. The first is that private companies are in fact just as vulnerable to the corrosive effects of secrecy as governments are, and that technological progress is much lower than it would be if companies were more open. Assuming arguendo that this is not the case, there are several factors I could imagine being at play. I originally had an itemised list here but the Forum is mangling my footnotes, so I’ll include it as a comment for now. ↩︎

  9. How true this is depends on how much importance you place on certain kinds of adversarialism: how important you think it is that particular countries (or, more probably, particular kinds of ideologies) retain their competitive advantage over others. If you believe that the kinds of norms that tend to go with an open society (free, democratic, egalitarian, truth-seeking, etc) are important to the good quality of the long-term future you may be loath to surrender one of those societies’ most important competitive advantages. If you doubt the long-term importance of those norms, or their association with openness, or the importance of that association to the preservation of these norms, this will presumably bother you less. ↩︎

  10. I suspect they really will need to be new ways, and not simply old ways with better people. But as yet I know very little about this, and am open to the possibility that solutions already exist about which I know nothing. ↩︎