Consider the following argument:
1) Over time, humanity will discover more superweapons. At the moment these are mostly accessible only to state actors, but eventually they will become accessible to smaller groups and individuals.
2) The (edit: potential for unilateral action) means that if a large number of groups gain access, a catastrophic use of such a weapon is almost guaranteed to occur.
3) It seems unrealistic to believe that we could ever completely prevent such terrorism from occurring without at least minimally-invasive mass surveillance. I don’t believe that we could obtain this result via education without it being, in effect, brainwashing. Maybe you could genetically engineer people to be less violent, but fundamentally changing our psychology is terrifying as well.
4) Minimally-invasive mass surveillance would focus purely on threats above a particular scale and ignore everything more minor (a toy sketch of such a threshold-gated filter follows this list). Given sufficiently advanced technology, we might be able to prevent humans from accessing the collected information in any other circumstance.
5) While it is possible that a superintelligence might be able to talk everyone into accepting that this is a reasonable policy, I am unsure enough about that claim to believe it is worthwhile to start building support for minimally-invasive mass surveillance now, as it will undoubtedly be reflexively opposed by many people who don’t appreciate the stakes.
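To make point 4 concrete, here is a minimal sketch of the kind of threshold-gated filter I have in mind. Everything in it, from the names to the casualty-scale threshold to the assumed automated threat model, is a hypothetical illustration rather than a real or proposed system:

```python
from dataclasses import dataclass

# Hypothetical threshold: only threats estimated to exceed this casualty
# scale are ever surfaced to human analysts. The number is arbitrary.
MASS_CASUALTY_THRESHOLD = 10_000

@dataclass
class Observation:
    source: str
    content: str
    estimated_casualty_scale: int  # assumed output of some automated threat model

def gate(observation: Observation) -> None:
    """Escalate only potential mass-casualty threats; everything below the
    threshold is dropped without any human ever seeing it."""
    if observation.estimated_casualty_scale >= MASS_CASUALTY_THRESHOLD:
        escalate_to_human_review(observation)
    # Below-threshold observations are never persisted or shown to anyone,
    # which is what would make the scheme "minimally invasive".

def escalate_to_human_review(observation: Observation) -> None:
    # Placeholder for a vetted human review process.
    print(f"Escalating potential large-scale threat from {observation.source}")
```

The hard part, of course, is everything this sketch assumes away: a reliable automated threat estimator, and guarantees that below-threshold data genuinely never reaches a human.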
It’s likely that I have seen this term mentioned somewhere in the past, but if so, the source is long gone from my memory.
What do you think about this argument?
Update: I was linked to this TED talk by Nick Bostrom, where he discusses the possibility that we might need such surveillance.
I think this is an important question. My honest answer is that I’m very unsure: it seems plausible both that implementing or advocating for such a policy would be wise and that it would be counterproductive.
The following thoughts and links will hopefully be more useful than that answer:
1. This seems very reminiscent of the arguments Bostrom makes in the paper The Vulnerable World Hypothesis, especially his “easy nukes” thought experiment, which asks how we could respond if nuclear weapons had turned out to be easy for anyone to make.
(Perhaps half-remembering this paper is what leads you to say “It’s likely that I have seen this term mentioned somewhere in the past”.)
2. Some other relevant sources include:
https://forum.effectivealtruism.org/posts/xoxbDsKGvHpkGfw9R/problem-areas-beyond-80-000-hours-current-priorities#Global_governance
https://forum.effectivealtruism.org/posts/xoxbDsKGvHpkGfw9R/problem-areas-beyond-80-000-hours-current-priorities#Surveillance
https://www.effectivealtruism.org/articles/ea-global-2018-the-future-of-surveillance/
3. I think this is an important topic. In my draft series on “Crucial questions for longtermists”, one question I list is “Would further development or deployment of surveillance technology increase risks from totalitarianism and dystopia? By how much?”
I’m also considering including an additional “topic” that would contain a more thorough set of “crucial questions” on the matter.
4. I don’t think the unilateralist’s curse is quite the right term in your argument. The potential for huge harms from unilateral action is indeed key, but the unilateralist’s curse is something more specific.
Essentially, the curse is about a specific way in which the random distribution of misjudgement can lead to the “most optimistic” person acting, and thereby to harm occurring, despite the actor themselves having genuinely aimed to do good. (I think it’s also meant to describe cases where this happens even though people’s average estimates are accurate, rather than systematically overly optimistic, though I can’t remember that for sure.)
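To make the mechanism concrete, here is a toy Monte Carlo sketch of the curse (my own illustration with arbitrary parameters, not something from the paper):

```python
import random

def simulate(true_value=-1.0, n_agents=5, noise_sd=2.0, trials=100_000):
    """Each agent receives an unbiased noisy estimate of an action's true
    value. Under the unilateral rule the action happens if ANY agent's
    estimate is positive; under majority rule, only if most are."""
    unilateral = majority = 0
    for _ in range(trials):
        estimates = [random.gauss(true_value, noise_sd) for _ in range(n_agents)]
        if max(estimates) > 0:  # one optimist suffices to trigger the action
            unilateral += 1
        if sum(e > 0 for e in estimates) > n_agents / 2:
            majority += 1
    return unilateral / trials, majority / trials

uni, maj = simulate()
print(f"P(harmful action taken): unilateral = {uni:.2f}, majority = {maj:.2f}")
# With these (arbitrary) parameters, roughly 0.84 vs. 0.17: the action's
# true value is negative and every estimate is unbiased, yet the most
# optimistic agent usually triggers it anyway.
```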
This could apply in cases like well-intentioned but harmful dual-use research, or in well-intentioned release of hazardous information. Interestingly, it could also apply to widely promoting this sort of “vulnerable world” argument; it’s possible that:
- the people who would do so are those who overestimate the expected value of surveillance, preventive policing, etc.
- the “real” expected value is negative
- just a few people widely promoting this sort of argument is enough for major harm to occur, because the idea can then be picked up by others, acquire a life of its own, etc.
In any case, the possibility of well-intentioned yet extremely harmful actions, and the way the unilateralist’s curse boosts their likelihood, does provide an additional reason for surveillance. But the case for surveillance doesn’t have to rest on that, and you seem most focused on malicious use (e.g., terrorism).
5. I’ve collected a bunch of sources related to the topics of the unilateralist’s curse, downside risks/accidental harm, and information hazards, which might be of interest to you or other readers.
Hope that’s helpful!
Thanks for posting such a detailed answer!
Which would be more dystopian to you: DNA engineering to ensure that the distribution of human behavior excludes unilateral destruction, or super-surveillance?
I personally think DNA engineering at least has some positive points, too, while surveillance is purely a necessary evil.
DNA engineering has some positive points, but imagine the power that significant control over its citizens’ personalities would give a government. That shouldn’t be underestimated.
Background: I am an information science student who has taken a class on the societal aspects of surveillance.
My gut feeling is that advocating for or implementing “mass surveillance” targeted at preventing individuals from using weapons of mass destruction (WMDs) would be counterproductive.
First, were a mass surveillance system aimed at controlling WMDs to be set up, governments would lobby for it to be used for other purposes as well, such as monitoring for conventional terrorism. Pretty soon it wouldn’t be minimally invasive anymore; it would just be a general-purpose mass surveillance system.
Second, a surveillance system of the scope that Bostrom has proposed (“ubiquitous real-time worldwide surveillance”) would itself be an existential risk to liberal democracy. The problem is that a ubiquitous surveillance system would create the feeling that surveillees are constantly being watched. Even if it had strong technical and institutional privacy guarantees and those guarantees were communicated to the public, people would likely not be able to trust it; rumors of abuse would only make establishing trust harder. People modify their behavior when they know they are being watched or could be watched at any time, so they would be less willing to engage in behaviors that are stigmatized by society even if the Panopticon were not explicitly looking out for those behaviors. This feeling of constantly being watched would stifle risk-taking, individuality, creativity, and freedom of expression, all of which are essential to sustain human progress.
I think that a much more limited suite of targeted surveillance systems, combined with other mechanisms for arms control, would be a lot more promising while still being effective at controlling WMDs. Such limited surveillance systems are already used in gun control: for example, the U.S. federal government requires licensed dealers to keep records of gun sales for at least 20 years, and many U.S. states and other countries keep records of who is licensed to own a gun. Some states also require gun owners to report lost or stolen guns in order to fight gun trafficking. These surveillance measures can be designed to balance gun owners’ privacy interests with the public’s interest in reducing gun violence. We could regulate synthetic biology much as we regulate guns: for example, companies that sell synthetic-biology products or desktop DNA synthesizers could be required to maintain records of transactions.
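As a rough illustration of what such a record-keeping requirement might look like, here is a minimal sketch of a transaction record, loosely modeled on gun-sale record retention. The field names, the licensing scheme, and the screening step are hypothetical assumptions, not any existing standard:

```python
from dataclasses import dataclass
from datetime import date

RETENTION_YEARS = 20  # assumed, by analogy with U.S. gun-sale records

@dataclass
class SynthesisOrderRecord:
    buyer_name: str
    buyer_license_id: str   # assumes some licensing regime exists
    item: str               # e.g., a DNA synthesis order or a desktop synthesizer
    sequence_hash: str      # digest of any ordered DNA sequence, for later audits
    passed_screening: bool  # whether the order passed a (hypothetical) pathogen screen
    order_date: date

    def retention_expires_year(self) -> int:
        """Earliest year the record may be destroyed."""
        return self.order_date.year + RETENTION_YEARS
```

The point of the sketch is just that, as with gun records, the data collected is narrow and transaction-scoped rather than ubiquitous.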
However, I don’t expect this targeted approach to work as well for cyber weapons. Because computers are general-purpose, cyber weapons can theoretically be developed and executed on any computer, and trying to prevent the use of cyber weapons by surveilling everyone who owns a computer would be extremely inefficient (since the vast majority of people who use computers are not creating cyber weapons) and impractical (because power users could easily uninstall any spyware planted on their machines). Also, because computers are ubiquitous and often store a lot of sensitive personal information, this form of surveillance would be extremely unpopular as well as invasive. Strengthening cyber defense seems like a more promising way to prevent harm from cyber attacks.
I agree that such a system would be terrifying. But I worry that its absence would be even more terrifying. Limited surveillance systems work decently for gun control, but once someone can kill tens of thousands or even millions instead of a hundred, I suspect that approach will break down.