Information hazard

Last edit: 29 May 2022 17:33 UTC by Pablo

An information hazard (also known as an infohazard) is a risk of harm arising from the dissemination, or potential dissemination, of true information. The concept was introduced by Nick Bostrom in a 2011 paper.[1]

An important ethical and practical issue is how information hazards should be treated. To what extent should people suppress the acquisition and dissemination of information that may cause harm? The answer depends both on one’s moral views—for instance, whether new knowledge is good in itself, or whether it is wrong to restrict personal liberty—and on one’s empirical views about the likely outcomes of such suppression.

Further reading

Aird, Michael (2020) Collection of all prior work I found that seemed substantially relevant to information hazards, Effective Altruism Forum, February 24.
Many additional resources on this topic.

Bostrom, Nick (2011) Information hazards: a typology of potential harms from knowledge, Review of Contemporary Philosophy, vol. 10, pp. 1–35.

Piper, Kelsey (2022) When scientific information is dangerous, Vox, March 30.

Sandberg, Anders (2020) Anders Sandberg on information hazards, Slate Star Codex meetup, July 5.

Related entries

accidental harm | misinformation | unilateralist’s curse

1. Bostrom, Nick (2011) Information hazards: a typology of potential harms from knowledge, Review of Contemporary Philosophy, vol. 10, pp. 1–35.

Information hazards: a very simple typology

Will Bradshaw · 13 Jul 2020 16:54 UTC
65 points
3 comments · 2 min read · EA link

What are information hazards?

MichaelA🔸 · 5 Feb 2020 20:50 UTC
38 points
3 comments · 4 min read · EA link

Kevin Esvelt: Mitigating catastrophic biorisks

EA Global · 3 Sep 2020 18:11 UTC
32 points
0 comments · 22 min read · EA link
(www.youtube.com)

Terrorism, Tylenol, and dangerous information

Davis_Kingsley · 23 Mar 2019 2:10 UTC
88 points
15 comments · 3 min read · EA link

Bioinfohazards

Fin · 17 Sep 2019 2:41 UTC
89 points
8 comments · 18 min read · EA link

Exploring the Streisand Effect

Will Bradshaw · 6 Jul 2020 7:00 UTC
46 points
4 comments · 20 min read · EA link

Types of information hazards

Vasco Grilo🔸 · 29 May 2022 14:30 UTC
15 points
1 comment · 3 min read · EA link

[Question] How to navigate potential infohazards

more better · 4 Mar 2023 21:28 UTC
16 points
7 comments · 1 min read · EA link

Conjecture: Internal Infohazard Policy

Connor Leahy · 29 Jul 2022 19:35 UTC
34 points
3 comments · 19 min read · EA link

Hashmarks: Privacy-Preserving Benchmarks for High-Stakes AI Evaluation

Paul Bricman · 4 Dec 2023 7:41 UTC
4 points
0 comments · 16 min read · EA link
(arxiv.org)

Assessment of AI safety agendas: think about the downside risk

Roman Leventov · 19 Dec 2023 9:02 UTC
6 points
0 comments · 1 min read · EA link

My thoughts on nanotechnology strategy research as an EA cause area

Ben Snodin · 2 May 2022 9:41 UTC
137 points
17 comments · 33 min read · EA link

We summarized the top info hazard articles and made a prioritized reading list

Corey_Wood · 14 Dec 2021 19:46 UTC
41 points
2 comments · 22 min read · EA link

Assessing global catastrophic biological risks (Crystal Watson)

EA Global · 8 Jun 2018 7:15 UTC
9 points
0 comments · 9 min read · EA link
(www.youtube.com)

Biosecurity Culture, Computer Security Culture

Jeff Kaufman 🔸 · 30 Aug 2023 17:07 UTC
129 points
10 comments · 2 min read · EA link

Managing risks while trying to do good

Wei Dai · 1 Feb 2024 14:24 UTC
42 points
5 comments · 2 min read · EA link

Causal diagrams of the paths to existential catastrophe

MichaelA🔸 · 1 Mar 2020 14:08 UTC
51 points
11 comments · 13 min read · EA link

[Question] Is an increase in attention to the idea that ‘suffering is bad’ likely to increase existential risk?

dotsam · 30 Jun 2021 19:41 UTC
2 points
6 comments · 1 min read · EA link

[Question] How much pressure do you feel against externally expressing views which do not conform to those of your manager or organisation?

Vasco Grilo🔸 · 10 Feb 2024 9:05 UTC
60 points
16 comments · 5 min read · EA link

George Church, Kevin Esvelt, & Nathan Labenz: Open until dangerous — gene drive and the case for reforming research

EA Global · 2 Jun 2017 8:48 UTC
9 points
0 comments · 1 min read · EA link
(www.youtube.com)

Countermeasures & substitution effects in biosecurity

ASB · 16 Dec 2021 21:40 UTC
87 points
6 comments · 3 min read · EA link

Open Communication in the Days of Malicious Online Actors

Ozzie Gooen · 6 Oct 2020 23:57 UTC
38 points
10 comments · 7 min read · EA link

Questions for further investigation of AI diffusion

Ben Cottier · 21 Dec 2022 13:50 UTC
28 points
0 comments · 11 min read · EA link

What areas are the most promising to start new EA meta charities—A survey of 40 EAs

Joey🔸 · 23 Dec 2020 12:24 UTC
150 points
13 comments · 16 min read · EA link

Towards a longtermist framework for evaluating democracy-related interventions

Tom Barnes · 28 Jul 2021 13:23 UTC
96 points
5 comments · 30 min read · EA link

80,000 Hours career review: Information security in high-impact areas

80000_Hours · 16 Jan 2023 12:45 UTC
56 points
10 comments · 11 min read · EA link
(80000hours.org)

Are we dropping the ball on Recommendation AIs?

Raphaël S · 23 Oct 2024 19:37 UTC
5 points
0 comments · 1 min read · EA link

AI can exploit safety plans posted on the Internet

Peter S. Park · 4 Dec 2022 12:17 UTC
5 points
3 comments · 1 min read · EA link

[Question] Should the forum have posts (or comments) only viewable by logged-in forum members

Jeremy · 4 Apr 2022 17:40 UTC
21 points
6 comments · 1 min read · EA link

[Question] AI Ethical Committee

eaaicommittee · 1 Mar 2022 23:35 UTC
8 points
0 comments · 1 min read · EA link

Introducing spirit hazards

brb243 · 27 May 2022 22:16 UTC
9 points
2 comments · 2 min read · EA link

Thoughts on AGI organizations and capabilities work

RobBensinger · 7 Dec 2022 19:46 UTC
77 points
7 comments · 5 min read · EA link

Why making asteroid deflection tech might be bad

MichaelDello · 20 May 2020 23:01 UTC
27 points
10 comments · 6 min read · EA link

[Question] How to disclose a new x-risk?

harsimony · 24 Aug 2022 1:35 UTC
20 points
9 comments · 1 min read · EA link

Could a single alien message destroy us?

Writer · 25 Nov 2022 9:58 UTC
40 points
5 comments · 1 min read · EA link

[Question] Infohazards: The Future Is Disbelieving Facts?

Prof.Weird · 22 Nov 2020 7:26 UTC
2 points
0 comments · 1 min read · EA link

X-risks of SETI and METI?

Geoffrey Miller · 2 Jul 2019 22:41 UTC
18 points
11 comments · 1 min read · EA link

Thoughts on The Weapon of Openness

Will Bradshaw · 13 Feb 2020 0:10 UTC
32 points
17 comments · 8 min read · EA link

The $100,000 Truman Prize: Rewarding Anonymous EA Work

Drew Spartz · 22 Sep 2022 21:07 UTC
37 points
47 comments · 4 min read · EA link

Examples of Successful Selective Disclosure in the Life Sciences

Tessa A 🔸 · 19 Aug 2021 18:38 UTC
51 points
2 comments · 4 min read · EA link

[Question] Has private AGI research made independent safety research ineffective already? What should we do about this?

Roman Leventov · 23 Jan 2023 16:23 UTC
15 points
0 comments · 5 min read · EA link

Prometheus Unleashed: Making sense of information hazards

basil.icious · 15 Feb 2023 6:44 UTC
0 points
0 comments · 4 min read · EA link
(basil08.github.io)

Technical Report on Mirror Bacteria: Feasibility and Risks

Aaron Gertler 🔸 · 12 Dec 2024 19:07 UTC
193 points
14 comments · 1 min read · EA link
(purl.stanford.edu)

How can we improve Infohazard Governance in EA Biosecurity?

Nadia Montazeri · 5 Aug 2023 12:03 UTC
167 points
27 comments · 4 min read · EA link

AI-based disinformation is probably not a major threat to democracy

Dan Williams · 24 Feb 2024 20:01 UTC
63 points
8 comments · 10 min read · EA link

A beginner’s introduction to AI-driven biorisk: Large Language Models, Biological Design Tools, Information Hazards, and Biosecurity

NatKiilu · 3 May 2024 15:49 UTC
6 points
1 comment · 16 min read · EA link

[Link] Thiel on GCRs

Milan_Griffes · 22 Jul 2019 20:47 UTC
28 points
11 comments · 1 min read · EA link