Information hazard

Last edit: 2 Jun 2021 14:05 UTC by Pablo

An information hazard (also known as an infohazard) is a risk arising from the spread of true information. The concept was introduced by Nick Bostrom in a 2011 paper (Bostrom 2011).

An important ethical and practical issue is how information hazards should be treated. To what extent should people suppress the acquisition and dissemination of information that may cause harm? The answer depends both on one’s moral views—for instance, whether new knowledge is good in itself, or whether it is wrong to restrict personal liberty—and on one’s empirical views about the likely outcome of such suppression.


Bibliography

Aird, Michael (2020) Collection of all prior work I found that seemed substantially relevant to information hazards, Effective Altruism Forum, February 24.
Many additional resources on this topic.

Bostrom, Nick (2011) Information hazards: a typology of potential harms from knowledge, Review of Contemporary Philosophy, vol. 10, pp. 1–35.

Sandberg, Anders (2020) Anders Sandberg on information hazards, Slate Star Codex meetup, July 5.

Related entries

accidental harm | misinformation | unilateralist’s curse

Information hazards: a very simple typology

willbradshaw · 13 Jul 2020 16:54 UTC
56 points
3 comments · 2 min read

What are information hazards?

MichaelA · 5 Feb 2020 20:50 UTC
17 points
3 comments · 4 min read

Terrorism, Tylenol, and dangerous information

Davis_Kingsley · 23 Mar 2019 2:10 UTC
84 points
15 comments · 3 min read


Fin · 17 Sep 2019 2:41 UTC
79 points
10 comments · 18 min read

Exploring the Streisand Effect

willbradshaw · 6 Jul 2020 7:00 UTC
41 points
4 comments · 20 min read

Kevin Esvelt: Mitigating catastrophic biorisks

EA Global · 3 Sep 2020 18:11 UTC
30 points
0 comments · 24 min read

Open Communication in the Days of Malicious Online Actors

Ozzie Gooen · 6 Oct 2020 23:57 UTC
35 points
10 comments · 7 min read

Assessing global catastrophic biological risks (Crystal Watson)

EA Global · 8 Jun 2018 7:15 UTC
8 points
0 comments · 9 min read

George Church, Kevin Esvelt, & Nathan Labenz: Open until dangerous — gene drive and the case for reforming research

EA Global · 2 Jun 2017 8:48 UTC
7 points
0 comments · 1 min read

What areas are the most promising to start new EA meta charities—A survey of 40 EAs

Joey · 23 Dec 2020 12:24 UTC
138 points
12 comments · 16 min read

[Question] Is an increase in attention to the idea that ‘suffering is bad’ likely to increase existential risk?

dotsam · 30 Jun 2021 19:41 UTC
2 points
6 comments · 1 min read

Towards a longtermist framework for evaluating democracy-related interventions

22tom · 28 Jul 2021 13:23 UTC
81 points
3 comments · 36 min read

[Question] Infohazards: The Future Is Disbelieving Facts?

Prof.Weird · 22 Nov 2020 7:26 UTC
2 points
0 comments · 1 min read

Thoughts on The Weapon of Openness

willbradshaw · 13 Feb 2020 0:10 UTC
29 points
17 comments · 8 min read

X-risks of SETI and METI?

geoffreymiller · 2 Jul 2019 22:41 UTC
18 points
11 comments · 1 min read

[Link] Thiel on GCRs

Milan_Griffes · 22 Jul 2019 20:47 UTC
28 points
11 comments · 1 min read

Why making asteroid deflection tech might be bad

MichaelDello · 20 May 2020 23:01 UTC
21 points
10 comments · 6 min read

Examples of Successful Selective Disclosure in the Life Sciences

tessa · 19 Aug 2021 18:38 UTC
50 points
2 comments · 4 min read