Thanks for this; both the original work and your commentary were an edifying read.
I'm not persuaded, although this is mainly owed to the common challenge that noting considerations "for" or "against" in principle does not give a lot of evidence of what balance to strike in practice. Consider something like psychiatric detention: folks are generally in favour of (e.g.) personal freedom, and we do not need to think very hard to see how overruling this norm "for their own good" could go terribly wrong (nor look very far to see examples of just this). Yet these considerations do not tell us what the optimal policy should be relative to the status quo, still less how it should be applied to a particular case.
Although the relevant evidence can neither be fully observed nor fairly sampled, there's a fairly good prima facie case for some degree of secrecy not leading to disaster, and sometimes being beneficial. There's a wisdom-of-the-crowd account in that secrecy is the default for some "adversarial" research; it would be surprising if technological facts proved exceptions to the utility of strategic deception. Bodies that conduct "secret by default" work have often been around for decades (and the states that house them centuries), and although there's much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.
Moreover, technological secrecy has had some eye-catching successes: the NSA likely discovered differential cryptanalysis years before it appeared in the open literature; discretion by early nuclear scientists (championed particularly by Szilard) about what to publish credibly gave the Manhattan project a decisive lead over rival programs. Openness can also have some downsides - the one that springs to mind from my "field" is that Al-Qaeda started exploring bioterrorism after learning that the United States had expressed concern about the same.
Given what I said above, citing some favourable examples doesn't say much (although the nuclear weapons one may have proved hugely consequential). One account I am sympathetic to would be something like differential (or optimal) disclosure: provide information in the manner that maximally advantages good actors over bad ones. This will recommend open broadcast in many cases: e.g. where there aren't really any bad actors, where the bad actors cannot take advantage of the information (or they know it already, so disclosure lets the good actors "catch up"), where there aren't more selective channels, and so forth. But not always: there seem to be instances where, if possible, it would be better to preferentially disclose to good actors rather than bad ones - and this requires some degree of something like secrecy.
Judging the overall first-order calculus, let alone weighing this against second-order concerns (such as those noted above), is fraught: although, for what it's worth, I think "security service" norms tend closer to the mark than "academic" ones. I understand cybersecurity faces similar challenges around vulnerability disclosure, as "don't publish the bug until the vendor can push a fix" may not perform as well as one might naively hope: for example, "white hats" postponing their discoveries hinders collective technological progress, and risks falling behind a "black hat" community avidly trading tips and tricks. This consideration can also point the other way: the more able the "white hats" are relative to their typically fragmented and incompetent adversaries, the greater the danger of their work "giving bad people good ideas". The FBI or whoever may prove much more adept at finding vulnerabilities terrorists could exploit than the terrorists themselves. They would be unwise to blog their red-teaming exercises.
Thanks Greg! I think a lot of what you say here is true, and well put. I don't yet consider myself very well-informed in this area, so I wouldn't expect to be able to convince someone with a considered view that differs from mine, but I would like to get a better handle on our disagreements.
I'm not persuaded, although this is mainly owed to the common challenge that noting considerations "for" or "against" in principle does not give a lot of evidence of what balance to strike in practice.
I basically agree with this, with the proviso that I'm currently trying to work out what the considerations to be weighed even are in the first place. I currently feel like I have a worse explicit handle on the considerations militating in favour of openness than on those militating in favour of secrecy. I do think these higher-level issues (around incentives, institutional quality, etc.) are likely to be important, but I don't yet know enough to put a number on that.
Given that, and given how little actual evidence Kantrowitz marshals, I don't think someone with a considered pro-secrecy view should be persuaded by this account. I do suspect that, if such a view were to turn out to be wrong, something like this account could be an important part of why.
Bodies that conduct "secret by default" work have often been around for decades (and the states that house them centuries), and although there's much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.
Do you think there is any evidence for institutional decay due to secrecy? I'm interested in whether you think this narrative is wrong, or just unimportant relative to other considerations.
My (as yet fairly uninformed) impression is that there is also evidence of plenty of hidden inefficiency and waste in secret organisations (and indeed, given that those in such orgs would be highly motivated to use their secrecy to conceal this, I'd expect there to be more than we can see). All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?
I don't know anything about the NSA, but I think Kantrowitz would claim the Manhattan project to be an example of short-term benefits of secrecy, combined with the pressures of war, producing good performance that couldn't be replicated by institutions that had been secret for decades (see footnote 7). So what is needed to counter his narrative is evidence of big wins produced by institutions with a long history of secret research.
Judging the overall first-order calculus, let alone weighing this against second-order concerns (such as those noted above), is fraught.
By "second-order concerns", do you mean the proposed negative effect of secrecy on institutions/incentives/etc.? If so, that does seem to me to weigh more clearly in one direction (i.e. against secrecy) than the first-order considerations do. Though this probably depends a lot on what you count as first vs second order...
All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?
No, I agree with these pro tanto costs of secrecy (and the others you mentioned before). But key to the argument is whether these problems inexorably get worse as time goes on. If so, then the benefits of secrecy inevitably have a sell-by date, and once the corrosive effects spread far enough one is better off "cutting one's losses" - or never going down this path in the first place. If not, however, then secrecy could be a strategy worth persisting with, if the (~static) costs of this are outweighed by the benefits on an ongoing basis.
The proposed trend of "getting steadily worse" isn't apparent to me. Many organisations that typically do secret technical work have been around for decades (the NSA is one, most defence contractors are others, (D)ARPA, etc.). A skim of what they were doing in (say) the 80s versus the 50s doesn't give the impression they got dramatically worse, despite 30 years of secrecy's supposed corrosive impact. Naturally, the attribution is very murky (e.g. even if their performance remained okay, maybe secrecy had become much more corrosive but this was outweighed by countervailing factors like much larger investment; maybe they would have fared better under a "more open" counterfactual), but the burden of dissecting out the "being secret * time" interaction term and showing it is negative should be borne by the affirmative case.
But key to the argument is whether these problems inexorably get worse as time goes on.
Yeah, I was thinking about this yesterday. I agree that this ("inexorable decay" vs a static cost of secrecy) is probably the key uncertainty here.
"I'm not persuaded, although this is mainly owed to the common challenge that noting considerations 'for' or 'against' in principle does not give a lot of evidence of what balance to strike in practice."
I basically agree with this, with the proviso that I'm currently trying to work out what the considerations to be weighed even are in the first place. I currently feel like I have a worse explicit handle on the considerations militating in favour of openness than on those militating in favour of secrecy. I do think these higher-level issues (around incentives, institutional quality, etc.) are likely to be important, but I don't yet know enough to put a number on that.
I think the points each of you make there are true and important.
As a further indication of the value of Will's point, I think a big part of the reason we're having this discussion at all is probably Bostrom's paper on information hazards, which is itself much more a list of considerations than an attempt to weigh them up. Bostrom makes this explicit:
The aim of this paper is to catalogue some of the various possible ways in which information can cause harm. We will not here seek to determine how common and serious these harms are or how they stack up against the many benefits of information - questions that would need to be engaged before one could reach a considered position about potential policy implications.
(We could describe efforts such as Bostrom's as "mapping the space" of consequences worth thinking about further, without yet engaging in that further thought.)
It seems possible to me that we've had more cataloguing of the considerations against openness than of those for it, and thus that posts like this one can contribute usefully to the necessary step that comes before weighing up all the considerations in order to arrive at a well-informed decision. (For the same reason, it could also help slightly inform all the other decisions we unfortunately have to make in the meantime.)
One caveat to that is that a post that mostly covers just the considerations that point in one direction could be counterproductive for those readers who haven't seen the other posts that provide the counterbalance, or who saw them a long time ago. But that issue is hard to avoid, as you can't cover everything in full detail in one place, and it also applies to Bostrom's paper and to a post I'll be making on this topic soon.
Another caveat in this particular case is that there are two related reasons why decisions on whether to develop/share (potentially hazardous) information may demand somewhat more caution than the average decision: the unilateralist's curse, and the fact that hard-to-reverse decisions destroy option value.
I personally think that it's still a good idea to openly discuss the reasons for openness, even if a post has to be somewhat lopsided in that direction for brevity and given that other posts were lopsided in the other direction. But I also personally think it might be good to explicitly note those extra reasons for caution somewhere within the "mostly-pro" post, for readers who may come to conclusions on the basis of that one post by itself.
(Just to be clear, I don't see this as disagreeing with Greg's or Will's comments.)