Vulnerable world hypothesis

Last edit: 2 May 2021 22:28 UTC by EA Wiki assistant

The vulnerable world hypothesis (VWH) is the view that there exists some level of technology at which civilization almost certainly gets destroyed unless extraordinary preventive measures are undertaken. The hypothesis was introduced by Nick Bostrom in a 2019 paper (Bostrom 2019).

Historical precedents

Versions of the VWH were suggested prior to Bostrom's statement of it, though not defined precisely or analyzed rigorously. An early expression is arguably found in a 1945 address by Bertrand Russell to the House of Lords concerning the detonation of atomic bombs over Hiroshima and Nagasaki and its implications for the future of humanity (Russell 1945: 89). (Russell frames his concerns specifically in terms of nuclear warfare, but as Toby Ord has argued (Ord 2020: ch. 2), early discussions of existential risk were typically framed this way because, at the time, nuclear weapons were the only known technology with the potential to cause an existential catastrophe.)

All that must take place if our scientific civilization goes on, if it does not bring itself to destruction; all that is bound to happen. We do not want to look at this thing simply from the point of view of the next few years; we want to look at it from the point of view of the future of mankind. The question is a simple one: Is it possible for a scientific society to continue to exist, or must such a society inevitably bring itself to destruction? It is a simple question but a very vital one. I do not think it is possible to exaggerate the gravity of the possibilities of evil that lie in the utilization of atomic energy. As I go about the streets and see St. Paul’s, the British Museum, the Houses of Parliament and the other monuments of our civilization, in my mind’s eye I see a nightmare vision of those buildings as heaps of rubble with corpses all round them. That is a thing we have got to face, not only in our own country and cities, but throughout the civilized world as a real probability unless the world will agree to find a way of abolishing war. It is not enough to make war rare; great and serious war has got to be abolished, because otherwise these things will happen.

Bibliography

Bostrom, Nick (2019) The vulnerable world hypothesis, Global Policy, vol. 10, pp. 455–476.

Bostrom, Nick & Matthew van der Merwe (2021) How vulnerable is the world?, Aeon, February 12.

Christiano, Paul (2016) Handling destructive technology, AI Alignment, November 14.

Hanson, Robin (2018) Vulnerable world hypothesis, Overcoming Bias, November 16.

Huemer, Michael (2020) The case for tyranny, Fake Nous, July 11.

Karpathy, Andrej (2016) Review of The Making of the Atomic Bomb, Goodreads, December 13.

Manheim, David (2020) The fragile world hypothesis: complexity, fragility, and systemic existential risk, Futures, vol. 122, pp. 1–8.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

Piper, Kelsey (2018) How technological progress is making it likelier than ever that humans will destroy ourselves, Vox, November 19.

Rozendal, Siebe (2020) The problem of collective ruin, Siebe Rozendal’s Blog, August 22.

Russell, Bertrand (1945) The international situation, The Parliamentary Debates (Hansard), vol. 138, pp. 87–93.

Sagan, Carl (1994) Pale Blue Dot: A Vision of the Human Future in Space, New York: Random House.

Related entries

anthropogenic existential risks | differential progress | existential security | global governance | international organizations | terrorism
