Vulnerable world hypothesis

Last edit: 14 Jul 2022 9:50 UTC by Leo

The vulnerable world hypothesis (VWH) is the view that there exists some level of technology at which civilization almost certainly gets destroyed unless extraordinary preventive measures are undertaken. VWH was introduced by Nick Bostrom in 2019.[1]

Historical precedents

Versions of VWH were suggested prior to Bostrom’s statement of it, though without being defined precisely or analyzed rigorously. An early expression is arguably found in a 1945 address by Bertrand Russell to the House of Lords concerning the detonation of atomic bombs over Hiroshima and Nagasaki and its implications for the future of humanity.[2] (Russell frames his concerns specifically in terms of nuclear warfare but, as Toby Ord has argued,[3] early discussions of existential risk were generally framed this way, because at the time nuclear weapons were the only known technology with the potential to cause an existential catastrophe.)

All that must take place if our scientific civilization goes on, if it does not bring itself to destruction; all that is bound to happen. We do not want to look at this thing simply from the point of view of the next few years; we want to look at it from the point of view of the future of mankind. The question is a simple one: Is it possible for a scientific society to continue to exist, or must such a society inevitably bring itself to destruction? It is a simple question but a very vital one. I do not think it is possible to exaggerate the gravity of the possibilities of evil that lie in the utilization of atomic energy. As I go about the streets and see St. Paul’s, the British Museum, the Houses of Parliament and the other monuments of our civilization, in my mind’s eye I see a nightmare vision of those buildings as heaps of rubble with corpses all round them. That is a thing we have got to face, not only in our own country and cities, but throughout the civilized world as a real probability unless the world will agree to find a way of abolishing war. It is not enough to make war rare; great and serious war has got to be abolished, because otherwise these things will happen.

Further reading

Bostrom, Nick (2019) The vulnerable world hypothesis, Global Policy, vol. 10, pp. 455–476.

Bostrom, Nick & Matthew van der Merwe (2021) How vulnerable is the world?, Aeon, February 12.

Christiano, Paul (2016) Handling destructive technology, AI Alignment, November 14.

Hanson, Robin (2018) Vulnerable world hypothesis, Overcoming Bias, November 16.

Huemer, Michael (2020) The case for tyranny, Fake Nous, July 11.

Karpathy, Andrej (2016) Review of The Making of the Atomic Bomb, Goodreads, December 13.

Manheim, David (2020) The fragile world hypothesis: complexity, fragility, and systemic existential risk, Futures, vol. 122, pp. 1–8.

Piper, Kelsey (2018) How technological progress is making it likelier than ever that humans will destroy ourselves, Vox, November 19.

Rozendal, Siebe (2020) The problem of collective ruin, Siebe Rozendal’s Blog, August 22.

Sagan, Carl (1994) Pale Blue Dot: A Vision of the Human Future in Space, New York: Random House.

Related entries

anthropogenic existential risk | differential progress | existential security | global governance | international organization | terrorism | time of perils

1. Bostrom, Nick (2019) The vulnerable world hypothesis, Global Policy, vol. 10, pp. 455–476.

2. Russell, Bertrand (1945) The international situation, The Parliamentary Debates (Hansard), vol. 138, pp. 87–93, p. 89.

3. Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, ch. 2.

“The Vulnerable World Hypothesis” (Nick Bostrom’s new paper)
Hauke Hillebrandt · 9 Nov 2018 11:20 UTC · 24 points · 6 comments · 1 min read · (nickbostrom.com)

The case for delaying solar geoengineering research
John G. Halstead · 23 Mar 2019 15:26 UTC · 53 points · 22 comments · 5 min read

AGI in a vulnerable world
AI Impacts · 2 Apr 2020 3:43 UTC · 17 points · 0 comments · 1 min read · (aiimpacts.org)

[Question] Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention?
Chris Leong · 1 Jul 2020 23:32 UTC · 26 points · 6 comments · 1 min read

Mike Huemer on The Case for Tyranny
Chris Leong · 16 Jul 2020 9:57 UTC · 24 points · 5 comments · 1 min read · (fakenous.net)

A toy model for technological existential risk
RobertHarling · 28 Nov 2020 11:55 UTC · 10 points · 2 comments · 4 min read

Some thoughts on risks from narrow, non-agentic AI
richard_ngo · 19 Jan 2021 0:07 UTC · 36 points · 2 comments · 8 min read

Assessing Climate Change’s Contribution to Global Catastrophic Risk
HaydnBelfield · 19 Feb 2021 16:26 UTC · 27 points · 8 comments · 38 min read

Nuclear Strategy in a Semi-Vulnerable World
Jackson Wagner · 28 Jun 2021 17:35 UTC · 27 points · 0 comments · 18 min read

Civilizational vulnerabilities
Vasco Grilo · 22 Apr 2022 9:37 UTC · 7 points · 0 comments · 3 min read

My thoughts on nanotechnology strategy research as an EA cause area
Ben Snodin · 2 May 2022 9:41 UTC · 136 points · 17 comments · 33 min read

Enlightenment Values in a Vulnerable World
Maxwell Tabarrok · 18 Jul 2022 11:54 UTC · 64 points · 18 comments · 31 min read

On the Vulnerable World Hypothesis
Catherine Brewer · 1 Aug 2022 12:55 UTC · 44 points · 12 comments · 14 min read

AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts
Jordan Arel · 6 Dec 2022 22:36 UTC · 5 points · 4 comments · 3 min read

Wikipedia is not so great, and what can be done about it.
Rey Bueno · 12 Dec 2022 20:06 UTC · 15 points · 1 comment · 16 min read · (www.reddit.com)

Open-source LLMs may prove Bostrom’s vulnerable world hypothesis
Roope Ahvenharju · 14 Apr 2023 9:25 UTC · 14 points · 2 comments · 1 min read

Open Agency model can solve the AI regulation dilemma
Roman Leventov · 9 Nov 2023 15:22 UTC · 4 points · 0 comments · 2 min read

Preserving our heritage: Building a movement and a knowledge ark for current and future generations
rnk8 · 30 Nov 2023 10:15 UTC · −9 points · 0 comments · 12 min read

[Question] What am I missing re. open-source LLM’s?
another-anon-do-gooder · 4 Dec 2023 4:48 UTC · 1 point · 2 comments · 1 min read

The Journal of Dangerous Ideas
rogersbacon · 13 Feb 2024 15:43 UTC · −26 points · 1 comment · 5 min read · (www.secretorum.life)