By screening not only the decisions but the decision-making process from outside scrutiny, secrecy greatly reduces the incentive for decision-makers to make decisions that could be justified to outside scrutinisers. Given the well-known general tendency of humans to respond to selfish incentives, the result is unsurprising: greatly increased toleration of waste, delay and other inefficiencies, up to and including outright corruption in the narrow sense, when these inefficiencies make the lives of decision-makers or those they favour easier, or increase their status (e.g. by increasing their budget).
Relatedly, in a reply to Gregory Lewis you write:
All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?
I don’t think I know enough about this to clearly disagree or agree, but I’ve seen some arguments that I think would push against your claims, if the arguments are sound. (I’m not sure the arguments are sound, but I’ll describe them anyway.)
As you say, “secrecy greatly reduces the incentive for decision-makers to make decisions that could be justified to outside scrutinisers.” The arguments I have in mind could see that as a good thing. It could be argued that this frees organisations/individuals to optimise for what really matters, rather than for acting in ways they could easily defend after the fact. (See here for Habryka’s discussion of similar arguments, focusing on the idea of “legibility”—though I should note that he seems to not necessarily or wholeheartedly endorse these arguments.)
For example, it is often claimed that confidentiality/secrecy in cabinet or executive meetings is very important so that people will actually share their thoughts openly, rather than worrying about how their statements might be interpreted, taken out of context, used against them, etc. after the fact.
For another example, I've seen it claimed somewhere that things like credentials, the prestige of one's university, etc. are overly emphasised in hiring processes partly because people in charge of hiring aren't just optimising for the best candidate, but for the candidate they can best justify having hired if things turn out badly. There may be many "bets" they think would be better in expectation, but if those turned out poorly the hirer would struggle to justify their decision to their superiors, whereas if the person from Harvard turned out badly the hirer can claim that all the evidence looked good beforehand. (I have no idea if this is true; it's just a claim I've seen.)
In other words, in response to "Given the well-known general tendency of humans to respond to selfish incentives", these arguments might highlight that there's also a tendency for at least some people to truly want to do what they believe is "right", but to be restrained by incentives of a "justifying" or "bureaucratic" type. So "secrecy" (of a sort) could, perhaps, sometimes allow individuals/organisations to behave more effectively and have better epistemics, rather than optimising for the wrong things, hiding their real views, etc.
But again, these are just possible arguments. I don't know if I actually agree with them, and I think more empirical evidence would be great. I think there are also two particular reasons for tentative suspicion of such arguments:
They could be rationalisations of secrecy that is actually in the organisation's/individual's interest for other reasons.
There are often reasons why people should "tick the boxes" and optimise for what's "justifiable"/legible/demanded rather than "what they believe is right". This can happen when individuals are wrong about what's right, their organisations or superiors do know better, and those tick-boxes were put there for a good reason.
My own past experience as a teacher suggests (weakly and somewhat tangentially) that there’s truth to both sides of this debate.
Remarkably, and I think quite appallingly, I and most other teachers at my school could mostly operate in secret, in the sense that we were hardly ever observed by anyone except our students (who probably wouldn't tell our superiors about anything short of extreme misconduct). I do think that this allowed increased laziness, distorting results to look good (e.g., "teaching to the test" in bad ways, or marking far too leniently[1]), and semi-misconduct (e.g., large, scary male teachers standing over 13-year-olds and yelling angrily at them). This seems to tangentially support the idea that "secrecy" increases "corruption".
On the other hand, the school, and curriculum more broadly, also had some quite pointless or counterproductive policies. Being able to “operate in secret” meant that I could ditch the policies that were stupid, not waste time “ticking boxes”, and instead do what was “really right”.
But again, the caveat should be added that it’s quite possible that the school/curriculum was right and I was wrong, and thus I would’ve been better off being put under the floodlights and forced to conform. I tried to bear this sort of epistemic humility in mind, and therefore “go my own way” only relatively rarely, when I thought I had particularly good reasons for doing so.
This all also makes me think that the pros and cons of secrecy will probably vary between individuals and organisations, partly based on something like how "conscientious" or "value aligned" the individual is. In the extreme, a highly conscientious and altruistic person with excellent morals and epistemics to begin with might thrive if able to operate somewhat "secretly", as they are then freed to optimise for what really matters. (Though other problems could of course occur.) Conversely, for someone with a more typical mix of conscientiousness, self-interest, flawed beliefs about what's right, and flawed beliefs about the world, secrecy may instead free them to act self-interestedly, corruptly, or based on what they think is right but is actually worse than what others would've told them to do.