Thanks for this; both the original work and your commentary were an edifying read.
I’m not persuaded, although this is mainly owed to the common challenge that noting considerations ‘for’ or ‘against’ in principle does not give a lot of evidence of what balance to strike in practice. Consider something like psychiatric detention: folks are generally in favour of (e.g.) personal freedom, and we do not need to think very hard to see how overruling this norm ‘for their own good’ could go terribly wrong (nor look very far to see examples of just this). Yet these considerations do not tell us what the optimal policy should be relative to the status quo, still less how it should be applied to a particular case.
Although the relevant evidence can neither be fully observed nor fairly sampled, there’s a fairly good prima facie case that some degree of secrecy does not lead to disaster, and is sometimes beneficial. There’s a wisdom-of-the-crowd account on which secrecy is the default for some ‘adversarial’ research; it would be surprising if technological facts proved exceptions to the utility of strategic deception. Bodies that conduct ‘secret by default’ work have often been around for decades (and the states that house them centuries), and although there’s much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.
Moreover, technological secrecy has had some eye-catching successes: the NSA likely discovered differential cryptanalysis years before it appeared in the open literature, and discretion by early nuclear scientists (championed particularly by Szilard) about what to publish credibly gave the Manhattan project a decisive lead over rival programs. Openness can also have downsides: the one that springs to mind from my ‘field’ is that Al-Qaeda started exploring bioterrorism after learning that the United States had expressed concern about the same.
Given what I said above, citing some favourable examples doesn’t say much (although the nuclear weapons one may have proved hugely consequential). One account I am sympathetic to is something like differential (or optimal) disclosure: provide information in the manner which maximally advantages good actors over bad ones. This will recommend open broadcast in many cases: e.g. where there aren’t really any bad actors, where the bad actors cannot take advantage of the information (or already know it, so broadcasting lets the good actors ‘catch up’), where there aren’t more selective channels available, and so forth. But not always: there seem to be instances where, if possible, it would be better to preferentially disclose to good actors rather than bad ones, and this requires some degree of something like secrecy.
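To make the shape of this concrete, here is a toy sketch in Python. The channels, reach fractions, and payoffs are all invented for illustration; it is not an implementation of anything in the original post, just the decision rule stated above.

```python
# Toy model of 'differential disclosure': pick the channel that maximises
# (expected benefit to good actors) minus (expected harm via bad actors).
# Channels and all numbers below are illustrative assumptions only.

CHANNELS = {
    # channel: (fraction of good actors reached, fraction of bad actors reached)
    "open publication":    (0.9, 0.9),
    "vetted mailing list": (0.5, 0.1),
    "no disclosure":       (0.0, 0.0),
}

def net_value(reach_good, reach_bad, value_to_good, harm_via_bad):
    """Expected net value of disclosing through a given channel."""
    return reach_good * value_to_good - reach_bad * harm_via_bad

def best_channel(value_to_good, harm_via_bad):
    return max(CHANNELS, key=lambda c: net_value(*CHANNELS[c], value_to_good, harm_via_bad))

# Where bad actors can do little with the information, open broadcast wins;
# where they can do a lot, the more selective channel (i.e. some secrecy) wins.
print(best_channel(value_to_good=10, harm_via_bad=1))   # -> open publication
print(best_channel(value_to_good=10, harm_via_bad=40))  # -> vetted mailing list
```

The hard part, of course, is estimating those payoffs at all, which is the first-order calculus I call fraught below.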
Judging the overall first-order calculus, let alone weighing this against second-order concerns (such as those noted above), is fraught: although, for what it’s worth, I think ‘security service’ norms tend closer to the mark than ‘academic’ ones. I understand cybersecurity faces similar challenges around vulnerability disclosure, as ‘don’t publish the bug until the vendor can push a fix’ may not perform as well as one might naively hope: for example, ‘white hats’ postponing their discoveries hinders collective technological progress, and risks falling behind a ‘black hat’ community avidly trading tips and tricks. This consideration can also point the other way: the more able the ‘white hats’ are than their typically fragmented and incompetent adversaries, the greater the danger of their work ‘giving bad people good ideas’. The FBI or whoever may prove much more adept at finding vulnerabilities terrorists could exploit than the terrorists themselves. They would be unwise to blog their red-teaming exercises.
Thanks Greg! I think a lot of what you say here is true, and well-put. I don’t yet consider myself very well-informed in this area, so I wouldn’t expect to be able to convince someone with a considered view that differs from mine, but I would like to get a better handle on our disagreements.
I’m not persuaded, although this is mainly owed to the common challenge that noting considerations ‘for’ or ‘against’ in principle does not give a lot of evidence of what balance to strike in practice.
I basically agree with this, with the proviso that I’m currently trying to work out what the considerations to be weighed even are in the first place. I currently feel like I have a worse explicit handle on the considerations militating in favour of openness than on those militating in favour of secrecy. I do think these higher-level issues (around incentives, institutional quality, etc.) are likely to be important, but I don’t yet know enough to put a number on that.
Given that, and given how little actual evidence Kantrowitz marshals, I don’t think someone with a considered pro-secrecy view should be persuaded by this account. I do suspect that, if such a view were to turn out to be wrong, something like this account could be an important part of why.
Bodies that conduct ‘secret by default’ work have often been around for decades (and the states that house them centuries), and although there’s much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.
Do you think there is any evidence for institutional decay due to secrecy? I’m interested in whether you think this narrative is wrong, or just unimportant relative to other considerations.
My (as yet fairly uninformed) impression is that there is also evidence of plenty of hidden inefficiency and waste in secret organisations (and indeed, given that people inside such orgs would be highly motivated to use their secrecy to conceal it, I’d expect there to be more than we can see). All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?
I don’t know anything about the NSA, but I think Kantrowitz would claim the Manhattan project to be an example of short-term benefits of secrecy, combined with the pressures of war, producing good performance that couldn’t be replicated by institutions that had been secret for decades (see footnote 7). So what is needed to counter his narrative is evidence of big wins produced by institutions with a long history of secret research.
Judging the overall first-order calculus, let alone weighing this against second-order concerns (such as those noted above), is fraught.
By “second-order concerns”, do you mean the proposed negative effect of secrecy on institutions/incentives/etc.? Because if so, that does seem to me to weigh more clearly in one direction (i.e. against secrecy) than the first-order considerations do. Though this probably depends a lot on what you count as first- vs second-order...
All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?
No, I agree with these pro tanto costs of secrecy (and the others you mentioned before). But key to the argument is whether these problems inexorably get worse as time goes on. If so, then the benefits of secrecy inevitably have a sell-by date, and once the corrosive effects spread far enough one is better off ‘cutting one’s losses’, or never going down this path in the first place. If not, however, then secrecy could be a strategy worth persisting with, if the (~static) costs are outweighed by the benefits on an ongoing basis.
The proposed trend of ‘getting steadily worse’ isn’t apparent to me. One can find many organisations which typically do secret technical work and have been around for decades (the NSA is one, most defence contractors another, (D)ARPA, etc.). A skim of what they were doing in (say) the 80s versus the 50s doesn’t give the impression they got dramatically worse despite 30 years of secrecy’s supposed corrosive impact. Naturally, the attribution is very murky (e.g. even if their performance remained okay, maybe secrecy had grown much more corrosive but this was outweighed by countervailing factors like much larger investment; maybe they would have fared better under a ‘more open’ counterfactual), but the burden of dissecting out the ‘being secret * time’ interaction term and showing it is negative should rest with the affirmative case.
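To gesture at what that dissection would involve, here is a toy simulation in Python. The data are entirely made up (no such dataset exists that I know of), and they are generated under the ~static-cost hypothesis, so the exercise just shows the regression shape the affirmative case would need: a clearly negative coefficient on the interaction term.

```python
# Toy illustration of the 'being secret * time' interaction term.
# Data are simulated under the ~static-cost hypothesis: secrecy imposes
# a fixed penalty on performance, with NO worsening over time.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
secret = rng.integers(0, 2, n)                       # 1 = 'secret by default' org
years = rng.uniform(0, 30, n)                        # years since founding
funding = 1 + 0.05 * years + rng.normal(0, 0.2, n)   # budgets grew over time too
# True model: a static -2 penalty for secrecy, and no secret*time decay.
performance = 10 - 2 * secret + 3 * funding + rng.normal(0, 2, n)

df = pd.DataFrame(dict(performance=performance, secret=secret, years=years))
fit = smf.ols("performance ~ secret * years", data=df).fit()

# The decay narrative predicts a clearly negative secret:years coefficient;
# in this static-cost world it comes out near zero, while the omitted
# funding confounder is absorbed into the 'years' term, one illustration
# of how murky the attribution is.
print(fit.params[["secret", "secret:years"]])
```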
“I’m not persuaded, although this is mainly owed to the common challenge that noting considerations ‘for’ or ‘against’ in principle does not give a lot of evidence of what balance to strike in practice.”
I basically agree with this, with the proviso that I’m currently trying to work out what the considerations to be weighed even are in the first place. I currently feel like I have a worse explicit handle on the considerations militating in favour of openness than on those militating in favour of secrecy. I do think these higher-level issues (around incentives, institutional quality, etc.) are likely to be important, but I don’t yet know enough to put a number on that.
I think the points each of you make there are true and important.
As a further indication of the value of Will’s point, I think a big part of the reason we’re having this discussion at all is probably Bostrom’s paper on information hazards, which is itself much more a list of considerations than an attempt to weigh them up. Bostrom makes this explicit:
The aim of this paper is to catalogue some of the various possible ways in which information can cause harm. We will not here seek to determine how common and serious these harms are or how they stack up against the many benefits of information—questions that would need to be engaged before one could reach a considered position about potential policy implications.
(We could describe efforts such as Bostrom’s as “mapping the space” of consequences worth thinking about further, without yet engaging in that further thought.)
It seems possible to me that we’ve had more cataloguing of the considerations against openness than of those for it, and thus that posts like this one can contribute usefully to the necessary step that comes before weighing up all the considerations in order to arrive at a well-informed decision. (For the same reason, it could also help slightly inform all the other decisions we unfortunately have to make in the meantime.)
One caveat to that is that a post that mostly covers just the considerations that point in one direction could be counterproductive for those readers who haven’t seen the other posts that provide the counterbalance, or who saw them a long time ago. But that issue is hard to avoid, as you can’t cover everything in full detail in one place, and it also applies to Bostrom’s paper and to a post I’ll be making on this topic soon.
Another caveat in this particular case is that there are two related reasons why decisions on whether to develop/share (potentially hazardous) information may demand somewhat more caution than the average decision: the unilateralist’s curse, and the fact that hard-to-reverse decisions destroy option value.
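As a quick illustration of the first of those, here is a minimal simulation of the unilateralist’s curse; the numbers are invented, and it’s just the standard setup as I understand it from Bostrom et al.’s paper on the curse, not anything specific to information hazards.

```python
# Unilateralist's curse, minimal version: several actors hold the same
# information, and it is released if ANY ONE judges release net-positive.
import numpy as np

rng = np.random.default_rng(1)
true_value = -1.0            # releasing is in fact mildly net-negative
n_actors, n_trials = 10, 10_000

released = 0
for _ in range(n_trials):
    # Each actor's estimate of the value of release is unbiased but noisy.
    estimates = true_value + rng.normal(0, 2, n_actors)
    if estimates.max() > 0:  # one optimistic actor suffices to release
        released += 1

# A single unbiased decider would release ~31% of the time with this noise;
# ten independent deciders release in ~97% of trials, despite the true harm.
print(f"released in {released / n_trials:.0%} of trials")
```

And since such releases are hard to reverse, there’s no taking it back once the most optimistic actor has acted, which is where the option-value point bites.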
I personally think that it’s still a good idea to openly discuss the reasons for openness, even if a post has to be somewhat lopsided in that direction for brevity and given that other posts were lopsided in the other direction. But I also personally think it might be good to explicitly note those extra reasons for caution somewhere within the “mostly-pro” post, for readers who may come to conclusions on the basis of that one post by itself.
(Just to be clear, I don’t see this as disagreeing with Greg or Will’s comments.)
But key to the argument is whether these problems inexorably get worse as time goes on.
Yeah, I was thinking about this yesterday. I agree that this (“inexorable decay” vs a static cost of secrecy) is probably the key uncertainty here.