Exploring the Streisand Effect

A few weeks ago I did some fairly extensive reading on the Streisand effect. Since I now don’t expect that work to lead to any more formal academic writing, I’m publishing my thoughts on the topic here. This first post will provide a general overview of existing work on the effect; I hope to follow up later with thoughts on what EAs in particular can learn from this.


TL;DR: The Streisand effect occurs when efforts to suppress the spread of information cause it to spread more widely. It occurs for several reasons: because censorship is interesting independent of the information being censored; because it provides a good signal that the information being suppressed is worth knowing; and because it is offensive, triggering instinctive opposition in both target and audience. Would-be censors can risk triggering Streisand effects for a variety of rational and irrational reasons; however, even if a given incident has strong potential to backfire, a significant actual Streisand effect typically requires concerted and media-savvy opposition. It is difficult to estimate the actual rate of Streisand effects as a proportion of attempts at censorship; however, even if “general” Streisand effects that reach national news are very rare, smaller effects that stay localised to particular communities can remain a major concern.


Since its coinage in 2005, the Streisand effect has become a well-known internet phenomenon, and a serious concern for those seeking to limit the spread of information. The effect has many different definitions, but broadly speaking refers to the phenomenon whereby seeking to suppress information results in it becoming more widely known than it otherwise would have been.

A few famous examples[1] to set the scene:

  • In 2003, Barbra Streisand sued a photographer for posting a photograph of her house online, as part of a series documenting coastal erosion in California. She lost the suit, and the resulting publicity led to vastly more people viewing the photograph. This case later became the trope namer.

  • In 2013, French intelligence agencies attempted to suppress an obscure Wikipedia article about a military radio station, going so far as to arrest a Wikipedia editor and force him to delete the page. The page was quickly restored by an editor outside France and soon became the most-viewed page on French Wikipedia.

  • In 2012, the British High Court ordered five British ISPs to block access to the filesharing site The Pirate Bay. The case featured prominently in national news, and the Bay received record levels of traffic during and after the court case.

These examples typify the usual patterns of famous Streisand-effect cases: most involve either celebrities or corporations seeking to protect their reputations or intellectual property, or governments seeking to protect state secrets or silence dissent. The plaintiff is typically powerful, heavy-handed, and surprisingly heedless of public relations. The typical defendant is (or can be presented as) a scrappy underdog, with access to enough media savvy to capture public sympathy. But even if the plaintiff has defenders, that can just make it worse: an ongoing conflict can be even more interesting than a one-off outrage, and a clash of sacred values can be the juiciest conflict of all.

It’s clear that the Streisand effect is a real phenomenon in at least some cases, but why? What conditions cause some attempts at censorship to succeed, while others not only fail, but explode in the censors’ faces? And how big a problem is the Streisand effect, really, for those who wish to manage the information shared by others?

Some definitions and distinctions

As stated slightly differently above, a good general definition of the Streisand effect is: The phenomenon whereby an attempt to suppress information has the unintended consequence of causing it to spread more widely.

Put more concisely, censorship is newsworthy.

The Streisand effect is thus a particular instance of censorship backfire, in which attempted suppression of information results in unanticipated negative consequences for the would-be censor.

Before we discuss why the Streisand effect occurs, it’s worth distinguishing it from a few related concepts:

  • Firstly, the Streisand effect specifically refers to cases where attempted censorship is counterproductive: where the information to be suppressed becomes more widely known than in the counterfactual. This should be distinguished from censorship that is merely ineffective, failing to reduce the spread of the information as much as the would-be censor would like. This is quite important: a high risk that your attempt to suppress information fails is very different, decision-theoretically, from a risk that it actually causes the information to spread (see the toy sketch after these distinctions).

  • Secondly, the Streisand effect should be distinguished from other ways in which censorship can backfire: most obviously, by damaging the reputation of the censor[2]. If your attempt to suppress information attracts so much opprobrium that it turns out to be net-bad for you, but does not actually result in the counterfactual spread of the information in question, then that is a case of censorship backfire, but is importantly different from the Streisand effect[3].

A third category of things that are distinct from the classic Streisand effect, but similar enough that it is often worth discussing them together, is counterproductive secrecy. That is, cases where, instead of causing information spread by attempting to change the actions of others, you cause it by being ostentatiously secretive yourself. This is certainly a real thing: if you make it known that you possess valuable secret information, people will want it. There’s a reason so many people try to hack the NSA. Some of the dynamics of this are similar to parts of the Streisand effect, but the lack of an interpersonal censor/censee[4] dynamic makes it different enough that I think it’s worth discussing under a different heading. My focus in this piece will be on the central case of counterproductive censorship, but there is certainly much that could valuably be said about counterproductive secrecy.
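To make the first of these distinctions concrete, here is a toy expected-value sketch in Python. All the numbers are invented for illustration – the spread figures and probabilities are assumptions, not estimates. The point is simply that if failed censorship merely leaves spread at its baseline, attempting suppression always weakly reduces expected spread, whereas even a small chance of genuine counterproduction can flip the decision.

```python
# Toy model (all numbers are illustrative assumptions, not estimates).
# It contrasts censorship that merely *fails* (spread stays at baseline)
# with censorship that is *counterproductive* (spread amplified above baseline).

BASELINE_SPREAD = 1_000  # people who would learn the secret if you do nothing

def expected_spread(p_success: float, p_backfire: float,
                    suppressed: int, amplified: int) -> float:
    """Expected number of people who learn the secret after a censorship attempt.

    Three outcomes: success (spread reduced), quiet failure (baseline spread),
    or backfire (spread amplified above baseline).
    """
    p_quiet_failure = 1.0 - p_success - p_backfire
    return (p_success * suppressed
            + p_quiet_failure * BASELINE_SPREAD
            + p_backfire * amplified)

# If failure can only mean "no change", censorship weakly dominates inaction:
print(expected_spread(p_success=0.5, p_backfire=0.0,
                      suppressed=100, amplified=0))       # 550.0 < 1000

# A small chance of a large backfire can reverse the verdict entirely:
print(expected_spread(p_success=0.5, p_backfire=0.05,
                      suppressed=100, amplified=50_000))  # 3000.0 > 1000
```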

Decomposing the Streisand effect

So long as the possession of these writings were attended by danger, they were eagerly sought and read: when there was no longer any difficulty in securing them, they fell into oblivion.

– Tacitus[5]

Where does the Streisand effect come from? Broadly speaking, the phenomenon of censorship attracting unwanted attention can be decomposed into at least three parts: that censorship is per se interesting; that it provides a signal that the information being suppressed is valuable; and that it is often offensive, motivating adversarial seeking-out of the information. The stronger each of these factors is in a given case, the greater the chance of provoking a large Streisand effect.

Censorship is interesting

Firstly, the attempt to suppress information another wants shared, often by threatening, petitioning for, or exacting punishment on the censee, is per se interesting to onlookers. Conflict and drama have always attracted interest and attention, and stories about conflict and drama communicate valuable information about society: who is powerful, who is not, how particular kinds of conflicts are handled, what various important people believe about various things. People pay attention because these things are interesting and worth knowing about irrespective of the information being suppressed, and as a side effect are more likely to learn about (and remember) the information.

Examples:[6]

  • The Internet Watch Foundation vs Wikipedia controversy was a case of high drama over two clashing sacred values – freedom of expression vs protection of the vulnerable – mixed with entertaining farce. It also involved dramatic unintended consequences, bold stands on principle, and entertaining mutual misunderstanding. The information in dispute – an explicit album cover – was of relatively little importance to anyone involved, but it sure got a lot of extra attention as a result of the whole mess.

  • As Streisand plaintiffs go, Mario Costeja González is one of the more sympathetic. All he wanted was not to be primarily known for long-settled social security debt! But in the process, he sued Google, was counter-sued, and the case went to the Court of Justice of the European Union and established a major legal precedent. This is a very interesting story. And as a result, Costeja González’s social security debt is now extremely well-known.

Censorship is a signal

In general, people don’t exert effort to suppress information willy-nilly. There is generally some reason they think this particular information is worth censoring. As such, discovering that someone tried to suppress a given piece of information is good evidence that that information is valuable, and hence worth exposing.

What we mean by “valuable” information depends upon the nature of the audience. For a journalist, valuable information is information that makes a good, newsworthy, clickable story. For an activist, valuable information is information you can use to sway public opinion. For an adversary, valuable information is information you can exploit to gain some kind of advantage over your opponent. For a consumer, valuable information might be information that makes you feel more informed about the world, or that simply entertains you. Regardless, if it is discoverable that you have suppressed information, that is likely to be a signal to somebody that that information is worth making a special effort to uncover.

One of the loudest ways to broadcast that signal is to register your attempt at censorship in a public forum constantly watched by nosy spectators – say, a court of law. For non-government actors, this is often the only way to compel others to comply with your attempt at censorship, thus putting would-be censors in a catch-22: you can hope your target quietly complies with your cease-and-desist letter, but if they don’t, your secret is probably hopelessly doomed whether or not you win. Some jurisdictions, like the UK, allow actors to try to circumvent this problem through so-called “super-injunctions” that prohibit not only the sharing of the information but also any reporting of the fact that such a prohibition is in place; this can work, but if the super-injunction later comes to light it provides an extra-strong signal that there is something worth knowing here[7].

The signalling effect of concealing information is most obvious in the case of counterproductive secrecy I mentioned above, but it can also be an important factor in counterproductive censorship[8].

Examples:

  • In the Pirate Bay example at the start of this post, the blocking of the website sent a strong signal to onlookers: “here is a place you can get lots of good content for free”.

  • A recent example – in February 2020 Apple sought to block publication of the German-language book App Store Confidential, which it asserted contained “a multitude of business secrets” and “confidential” “business practices”. Confidential business practices of one of the world’s most successful companies? That sounds like a book worth buying! The book shot to #2 on Amazon’s German best-seller charts.

Censorship is offensive

This one doesn’t need much elaboration. It is difficult to censor information without making enemies. Censorship is coercive both for the direct censee and for their potential audience (who might prefer to know the information given freely by the censee, but now can’t), and is also offensive to those people who hold freedom of expression as an important and fundamental right. In many prominent cases, the would-be censor also behaves in a heavy-handed, authoritarian manner that upsets an even larger number of people, further stoking the story.

This outrage seems to manifest as the Streisand effect in a couple of different ways. Firstly, outrage holds the attention: angry people are much more likely to read, share and remember a story, resulting in it spreading much further (and persisting much longer) than it otherwise would[9]. Secondly, perceived attempts to pressure and control people stimulate psychological reactance, resulting in them taking actions to spite the would-be controller – which in this case means seeking out and sharing the information in question.

The importance of this factor is evident in the headlines used to report on Streisand-effect stories, which often emphasise the bad behaviour of the would-be censor and the fact that they “don’t want you to see/know” the information in question. Nevertheless, it can be hard to distinguish this effect from the other two, and in many cases all three are tightly bound together.

Examples:

  • Streisand’s actions in the trope namer example – suing a nonprofit researcher for $50 million for a trivial and inadvertent violation of privacy – were clearly outrageous, and people were correspondingly outraged. This led to the widespread sharing of the story and the mass attention on the offending photo.

  • In the French-radio-station example, the authorities involved acted in a way widely perceived as heavy-handed bullying, not to mention technically ignorant. Their principal victim also happened to be the chairman of Wikimedia France, giving him a substantial megaphone to voice his outrage and so stimulate controversy.

  • There are several cases on record of businesses threatening or suing customers over bad reviews, an idea that makes virtually everyone angry. Union Street Guest House, the most notorious case, was deluged with bad reviews from people who had never stayed there but were furious about its “no bad review” policy[10].

Why provoke the Streisand effect?

Given all these reasons censorship can backfire, why do would-be censors try to suppress information? Why trigger the Streisand effect rather than allow the information to wallow in news-free obscurity?

Error theory

The simplest answer to this question is simply that the would-be censors are making a mistake: for one reason or another, they expect their attempt at censorship to successfully suppress the information, when instead it causes it to spread more widely. This is a very plausible answer: human folly is one of the few constants in the world, and most of the famous cases of the Streisand effect certainly seem deeply ill-conceived.

There are a number of mistakes that could cause a would-be censor to fail to anticipate the Streisand effect. They could underestimate the visibility of their actions, the strength of public aversion to them, or the degree to which they provide a signal that the information being suppressed is worth knowing. They could overestimate the likelihood of the information spreading widely without their intervention, or the pliability of their target. Even if aware of the Streisand effect in the abstract, they may not realise that this particular attempt at censorship is likely to be counterproductive.

Between them, these diverse mistakes provide ample opportunity for triggering the Streisand effect. This was especially true in the early decades of the internet, when people had not yet adapted to its effects on news and culture; it is still true today. That said, in researching the existing literature on the Streisand effect I’ve been frustrated by the crowing how-could-they-have-been-so-stupid attitude that generally accompanies popular coverage. I think there are a number of reasons someone might knowingly risk triggering the Streisand effect in service of some larger goal.

Clear-eyed trade-offs

Morality/deterrence

It is well-known that people will accept disproportionate costs to punish what they see as immoral behaviour; the willingness of victims to endure the enormous cost of the court system is a case in point. These actions are personally costly but socially beneficial, in that they create an incentive to not carry out the behaviour that incurs the costly punishment.

Many instances of the Streisand effect with an individual plaintiff take this form: the plaintiff realises their secret is out whatever happens, but seeks to punish the defendant for violating their privacy or other rights. I expect Mario Costeja González eventually realised that his case against Google was getting him more publicity rather than less, but he carried on fighting on principle, and established a precedent that, for better or worse, allowed many others to conceal information more easily.

Less sympathetically, larger actors can also exploit the deterrent effect. The actions and threats of North Korea probably dramatically increased The Interview’s public profile, but I’d bet both filmmakers and especially producers/distributors will be substantially warier in similar cases going forward. Indeed, in these cases the increased publicity given by the Streisand effect can[11] work in the censor’s favour, increasing the breadth and strength of the deterrent effect.

Rational gambling

It’s not clear how common large Streisand effects actually are, as a proportion of attempts at censorship (see below). If the base rate is low, then the threat of backfire may not have a large effect on the expected value of censorship. In the same way a maritime trade company accepts a certain frequency of shipwreck and piracy, a certain (low) frequency of Streisand effects might be an acceptable price to pay for the broader benefits of censorship to the censor[12].
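As a toy illustration of this logic, here is a minimal Python sketch. Again, every parameter is an assumption made up for the example, not an estimate: the point is just that even when at least one backfire becomes more likely than not over many attempts, a low enough per-attempt rate can leave the censor’s overall “portfolio” comfortably net-positive in expectation.

```python
# Illustrative only: how a low per-attempt backfire rate compounds over many
# attempts, as in the shipping analogy. All parameters are made-up assumptions.

def p_at_least_one_backfire(p_backfire: float, n_attempts: int) -> float:
    """Probability that at least one of n independent attempts backfires."""
    return 1.0 - (1.0 - p_backfire) ** n_attempts

def portfolio_expected_value(p_backfire: float, n_attempts: int,
                             gain_per_success: float,
                             loss_per_backfire: float) -> float:
    """Expected net value of a whole 'portfolio' of censorship attempts,
    treating every non-backfire as a success for simplicity."""
    expected_backfires = p_backfire * n_attempts
    expected_successes = (1.0 - p_backfire) * n_attempts
    return (expected_successes * gain_per_success
            - expected_backfires * loss_per_backfire)

# With a 1% backfire rate, 100 attempts make at least one backfire more
# likely than not...
print(p_at_least_one_backfire(0.01, 100))  # ~0.63

# ...but if each success is worth 1 unit and each backfire costs 20, the
# portfolio is still comfortably net-positive in expectation:
print(portfolio_expected_value(0.01, 100,
                               gain_per_success=1, loss_per_backfire=20))  # 79.0
```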

Such a gamble is especially attractive when the individual making the decisions is not the one assuming the risk. In this case, there is a principal-agent problem at work: the agent assumes less reputational risk, and so is prepared to accept a higher risk of backlash. The attorney representing Streisand in the house-photograph case has a long track record of representing celebrities in privacy cases: he has mostly been successful, and if he loses some, well, his name hasn’t been attached to any embarrassing internet phenomena. We can probably assume that he’s not the only attorney who might sometimes have done better by his clients (but not by himself) to advise them to be a tad less litigious.

Creating noise

Censorship creates a signal that there is something here worth knowing. But if you consistently attempt to censor, that signal progressively weakens until it is no longer useful in any particular case. Similarly, if you are well-known to be generally secretive and censorious, any particular instance of you being secretive and censorious is less interesting, and hence less newsworthy. Taking a few hits from the Streisand effect early on might therefore be worth it to protect other secrets in the long run, if you can maintain your general censoriousness in the face of serious opposition.

This theory makes some sense, but I’d be cautious about giving it too much weight in practice. While generally being censorious can reduce the signalling value of particular cases of censoriousness, your general censoriousness can itself become a signal, motivating people to seek what it is you want so badly to hide. Not to mention the fact that this attitude is likely to damage your reputation and motivate others to uncover your secrets to spite you, independent of any particular instance. Overall I’d say this strategy (being generally secretive so people don’t know which secrets are important) works better for organisations (e.g. Apple, the NSA) than individuals, and even there needs to be backed up with serious cybersecurity competence.

Taking the long view

Sometimes the number of people accessing some particular information now is less important to the censor than the accessibility of that information over time. In this case, a spike in the attention paid to the information now may be worth an ongoing reduction in the accessibility of that information, especially if that short-term attention is superficial and transitory.

This can be the case with government censorship, especially when the target of censorship is a platform rather than a specific story. Blocking YouTube might spur increased attention and access in the short term, but if that interest isn’t sustained, the ongoing trivial inconvenience of accessing the site might help dampen ongoing dissent in the future[13].

Of course, this is an extremely delicate balancing act. That short-term backlash could greatly increase awareness of the tools and techniques needed to bypass government censorship, creating an ongoing problem[14]. Or the backlash could cause so much short-term reputational damage that it provokes serious resistance: no use making your life easier in five years if it gets you deposed now. In general, the extent to which this is an effective tactic vs a more sophisticated kind of error is a political-science question I don’t feel qualified to answer[15]; I just wanted to flag that it could, in principle, go either way.

Amplification and mitigation

We’ve discussed different mechanisms by which attempted censorship can give rise to the Streisand effect, and a range of reasons why a would-be censor might (rationally or irrationally) risk triggering such a backlash. But what kind of features make an attempt at censorship more or less likely to produce a Streisand effect?

There are a few different framings one can use to look at this question. To begin with, we can return to the three contributing factors from earlier: that censorship is interesting, that it provides a good signal, and that it is offensive. We can thus predict that attempts at censorship that score especially highly on these measures will be at greatest risk of triggering the Streisand effect.

  • Censorship is especially interesting when it is especially dramatic (involving well-known personalities or institutions, high-drama controversy and conflict, clashes of sacred values, etc.) or when it is strange or unusual in some way (e.g. if the information being censored seems to be something people don’t normally censor; if it is the test case for some new law or principle; or if the would-be censor or target is acting strangely). It is less interesting when it appears dull and routine; the appearance of being dull and routine is thus precious to the censor, and often studiously maintained.

  • Censorship provides an especially strong signal when the observer already has reason to value information about the would-be censor (e.g. because they are a rival or a celebrity); when the would-be censor seems to be trying especially hard to conceal the information (e.g. through super-injunctions); when the act of censorship is highly visible (e.g. a court order); or when the censors explicitly lay out why the information is valuable (e.g. Apple claiming that App Store Confidential contains business secrets). The signal is weaker if the would-be censor is obscure; if the means of censorship are inconspicuous; or if public information about the reason for censorship is vague and uninteresting.

  • Censorship is especially offensive, and hence triggers especially strong reactance, when the would-be censor seems to be behaving especially badly, and especially when they are seen to be abusing their power, misusing the law, reacting disproportionately, or demonstrating poor personal character in some way[16]. Similarly, censorship will be especially offensive if the target of that censorship is (or can be presented as) especially sympathetic, or demonstrates especially good character. Conversely, censorship is less offensive when these factors are reversed, with the censor appearing sympathetic and reasonable, and the target unsympathetic[17].

Thus, a Streisand effect is more likely when the would-be censor misestimates or mispredicts one or more of these factors[18].

This framing, however, is largely censor-centric, and so misses out one of the most important factors determining whether a Streisand effect occurs: the actions of the target. The individual or organisation being censored often has a huge amount of influence over the outcome: if they roll over quietly there is little chance of a major backlash, while if they fight back in a media-savvy fashion a major Streisand effect is much more likely. In these latter cases, the Streisand effect is thus better seen as the result of a contest between the censor and the target to control the public narrative.

In his various papers[19] about backfire dynamics in censorship and repression, Brian Martin claims that censors use five main methods to “inhibit outrage” over an incident[20] and so reduce the probability of a Streisand effect:

  1. Reducing visibility of the action (e.g. through cover-ups)

  2. Devaluing the target of the action

  3. Interpreting events in a favourable light

  4. Legitimising their response through the use of official channels

  5. Incentivising those involved to stay quiet (or follow their preferred line) through threats, bribes, etc.

This list naturally suggests a corresponding list of actions available to a target of censorship (or their allies) seeking to increase outrage:

  1. Increasing visibility of the action

  2. Arguing for the value of the target, and perhaps devaluing the would-be censor

  3. Interpreting events in a negative, outrageous light

  4. Delegitimising the action (e.g. by rejecting official channels as corrupt)

  5. Resisting incentives to keep quiet, and perhaps incentivising others to speak out

Of course, not all would-be censors and targets are able (or willing) to utilise all of these tactics; the full range is perhaps only really available to authoritarian governments. Private citizens bringing defamation suits, for example, have limited ability to reduce the visibility of their actions beyond avoiding publicity during the trial. However, they can still use many of the other strategies, such as legitimisation (through the use of the courts), positive framing (as a fight to protect one’s good name against unjustified slander), devaluing the target (as a liar or reckless spreader of falsehoods) and incentivisation (threats of punitive damages and offers of settlement).

In this framing, whether an attempt at censorship leads to a Streisand effect depends on which actor is better able to execute their corresponding strategies. In this contest, the would-be censor is typically more powerful[21] in many ways, but the target has a key advantage: for the censor, any publicity is bad publicity. The more the target is able to raise the profile of the controversy, the more likely a Streisand effect is to occur. The three factors discussed above – the interestingness, offensiveness, and signalling value of censorship – serve to aid the target in this goal, and the stronger they are the more likely the target is to succeed, if they try.

How common is the Streisand effect?

Finally, we turn to the question of just how frequent the Streisand effect actually is. Is it a universal phenomenon that every would-be censor should fear? Or is it flashy and painful but ultimately rare, like getting struck by lightning?

Among internet journalists, the Streisand effect is often treated like an iron law of the universe: the just comeuppance of anyone foolish enough to try to suppress information in the digital age. There’s a lot of breathless rhetoric around “Streisand’s Law” and “When you try to hide something on the web, everyone sees it”. If you believe this coverage, the Streisand effect is just an inevitable consequence of the way the internet works[22].

I’m not convinced. It seems notable to me that most coverage of the effect seems to recycle some subset of the same ten or so core examples, with a somewhat larger number of minor cases. In any case, the frequency of news articles about a subject isn’t a great way of gauging its true relative frequency. There’s an obvious evidence-filtering problem here: we can see the numerator, but not the denominator[23].
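To illustrate the filtering problem with invented numbers (a minimal Python sketch – nothing here is an empirical estimate): two worlds with wildly different backfire rates can produce identical visible coverage, because only the backfires leave a public trace.

```python
# Toy selection-effect sketch (all numbers are made up). Quiet successes and
# quiet failures generate no news; only backfires are newsworthy.

def observed_stories(total_attempts: int, backfire_rate: float) -> int:
    """News stories generated by a population of censorship attempts,
    assuming every backfire (and nothing else) makes the news."""
    return round(total_attempts * backfire_rate)

# World A: censorship usually backfires, but attempts are rare.
print(observed_stories(total_attempts=100, backfire_rate=0.5))       # 50

# World B: backfire is rare, but attempts are extremely common.
print(observed_stories(total_attempts=50_000, backfire_rate=0.001))  # 50

# Both worlds produce 50 stories: the numerator we can see is identical, and
# the denominator (total attempts) is exactly the quantity we can't observe.
```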

I don’t know, but I suspect that, in fact, cases of the Streisand effect strong enough to reach the general news media are very rare. Most attempts at censorship, I predict, either result in the information being suppressed, or are ineffective without sparking a major backlash. As this article points out, almost none of the vast number of takedown requests received by Google result in any kind of significant Streisand effect. Or, as one person I spoke to about this put it, “most people respond to cease and desist letters by ceasing and desisting”.

I’m not the only one with this impression. Brian Martin says that “most [attempts at censorship] do not backfire, even when they have the potential to do so”. But Martin, with his clear sympathy for the targets of censorship, has reason to paint a picture of a dire threat. Actual attempts to quantify the frequency of censorship backfire seem very thin on the ground: most studies that attempt to analyse data on this issue show only that the Streisand effect can occur, not how often it actually does[24].

My best guess is that serious Streisand effects are common enough to be of concern to censors in cases that seem predisposed to them (see previous section), but rare enough that most attempts at censorship (of various kinds) do not provoke a backlash in the general media. In many cases, however, the “general media” may not be the main concern.

This article describes the case of Maldives Scuba Diving, a diving company that sued the owners of a popular scuba-diving forum over allegations made by forum users that its equipment was unsafe[25]. This case did not make the national news; I hadn’t heard of it before stumbling across that article. But news of the lawsuit got “a great deal of unwelcome attention” among scuba divers, including spreading news of the company’s recent name change “far and wide in the scuba diving community at large”.

It’s not totally clear how that case worked out (it seems to have ended in a private settlement of some kind), but the general lesson is stark: if the harm you are concerned about – to your reputation, your intellectual property, or the world – can be inflicted by a limited community of individuals, then your attempts at information control don’t need to trigger a dramatic, “general” Streisand effect to be counterproductive. A localised scandal that reaches the communities you are concerned about will suffice. For a scuba-diving company, a scandal in the scuba-diving community is more than bad enough; the rest of the world is almost irrelevant.

These smaller, weaker, more localised Streisand effects seem likely to be much easier to trigger than big general scandals – and hence much more common. As a result, for those whose concern is concentrated in particular communities[26], these mini-Streisand incidents probably constitute the greatest danger.

Acknowledgements

Thanks to the EA Long-Term Future Fund for funding to work on this and related issues, the Centre on Long-Term Risk for the idea to look into the Streisand Effect, and Gregory Lewis for reviewing this post.


  1. The Wikipedia page and this BBC article are two good repositories of (commonly-claimed) examples of the Streisand effect.

  2. For example, in the case of authoritarian governments, visible censorship “might signal regime weakness”, emboldening the opposition (Hobbs & Roberts, 2018). For private individuals and non-authoritarian governments, meanwhile, the risk is mostly that people just dislike you more.

  3. I actually think quite a few commonly-cited examples of the Streisand effect fall primarily into this category. For example, the case of Martha Payne, a Scottish student who was banned from blogging about the quality of her school meals. The ban created a lot of controversy and bad press for the local council (which had ordered it), and probably did make the blog more widely known than it otherwise would have been, but the blog had already made the national news multiple times and was moderately widely known. While the actions of the council were clearly counterproductive, I suspect more harm was done by the reputational damage than by any further spreading of the blog.

  4. “Censee” (meaning “target of censorship”) does not appear to be a generally accepted word. It is used in at least one scholarly paper, though, and seems like an obvious choice with no obvious alternatives, so I’m going to use it here.

  5. As quoted in Jansen & Martin (2015).

  6. Most Streisand-effect examples that made it to the national and international news combine more than one of these aspects; my aim with these example lists is to pull out a few cases where the particular effect being discussed seems like the biggest factor.

  7. Not to mention the fact that journalists and many others find super-injunctions (and other kinds of secret court) so offensive that they are likely to try extra-hard to unearth the information just to spite you (see next section).

  8. One of the few academic papers to explicitly focus on the Streisand effect is Hagenbach & Koessler (2017), who come at the question from a game-theoretic perspective. In their (frankly fairly contrived) model, fully rational actors who appreciate the signalling value of their actions never fall into the Streisand effect (unless they make a different mistake, like misassessing the visibility of their actions), but actors who fail to appreciate the signalling effects of their actions routinely over-censor.

  9. This obviously links back to the first effect, that censorship is interesting independent of the information being censored.

  10. Note that “this business sued a customer for leaving a bad review” communicates very relevant information for potential customers, thus making these cases also good examples of the signalling theory.

  11. That is, it can, but doesn’t always do so: if the backlash is large enough the would-be censor can end up seeming impotent, which will vitiate any intended deterrence.

  12. Of course, if you can better identify which attempts at censorship are likely to result in Streisand effects, you can increase the expected payoff of censorship by avoiding those cases. On the other hand, if the rate is sufficiently low it might not be worth the cost of doing so.

  13. “Because consumers of media are impatient, even small increases in the price of information imposed by censorship can have large negative effects on information consumption.” (Hobbs & Roberts, 2018)

  14. See, e.g., Hobbs & Roberts (2018), “How Sudden Censorship Can Increase Access to Information”, which finds that “blocking of the popular social networking website Instagram in China disrupted the habits of millions of individuals accustomed to visiting that site and increased evasion of the Great Firewall”, increasing subsequent traffic to long-blocked Twitter and Facebook.

  15. This broadly goes for the other strategies in this section as well.

  16. It is notable how many famous cases of the Streisand effect can be accurately described as “bullying”. Barbra Streisand sues an obscure researcher for a huge sum for posting a photo of her house. A French intelligence agency hauls in a prominent Wikipedia editor with no relation to the article in question and forces him under threat of prosecution to delete it. McDonald’s expends huge amounts of resources to silence a couple of unimportant activists who can’t even afford to pay their lawyers. Plaintiffs in Streisand cases seldom appear sympathetic.

  17. However, in this case, you might still be in for a severe Streisand effect as the result of the first factor (conflict is interesting).

  18. For government censorship, one important way for a censor to hit all three factors at once is if censorship is dramatic and sudden: for example, if a popular website goes from fully available to fully blocked all at once. This is dramatic (and thus interesting), upsetting (people’s lives and routines are badly disrupted) and provides a good signal that blocking that source of information was of urgent importance for the censors.

  19. Two examples here and here. There are others, but they’re all fairly similar to one another, so it’s not really necessary to read more than one.

  20. “Incident” here is a fairly general term referring to anything a powerful group might want to manage public relations about, but Martin’s papers typically focus on incidents of government repression or censorship.

  21. When the target of censorship is more powerful than, or even similarly powerful to, the would-be censor, a Streisand effect (or some other kind of censorship backfire) seems very likely.

  22. This article is a rare voice of scepticism among the popular press.

  23. “To fully appreciate the capacity of powerful groups to [avoid the Streisand effect], it is necessary to examine cases that did not backfire—cases in which there was no Streisand effect. The flaw in looking only at instances of the Streisand effect is that there is no control group; the cases examined are potentially atypical.” — Jansen & Martin (2015)

  24. Nabi (2014) finds that several prominent cases of media censorship in Turkey and Pakistan led to large (but transient) spikes in search volume for the blocked content in those countries, as well as spikes in searches for various anti-censorship tools (VPNs etc). Xue et al. (2016) found that republication of URLs that had been delisted on Google did not on average lead to a spike in traffic, but did in some cases. Hobbs & Roberts (2018) find that the sudden blockage of Instagram in mainland China decreased traffic to Instagram (i.e. not a direct Streisand effect) but increased traffic to Twitter and Facebook. Pan & Siegel (2020) find that imprisoned Saudi dissenters reduced their criticism of the regime after release, but that their Twitter followers became (non-significantly) more critical, while the behaviour of other dissenters did not change.

  25. Specifically, that it had been responsible for a tainted-air incident that had killed one diver and sickened ten others, and that it had changed its name in the wake of that incident.

  26. These might include companies that cater to particular interest groups; individuals who value their reputation in particular niche communities; and law-enforcement and other organisations concerned about bad actors among particular expert communities.