
Information hazard

Last edit: May 29, 2022, 5:33 PM by Pablo

An information hazard (also known as an infohazard) is a risk arising from the spread of true information. The concept was introduced by Nick Bostrom in a 2011 paper.[1]

An important ethical and practical question is how information hazards should be treated: to what extent should people suppress the acquisition and dissemination of information that may cause harm? The answer depends both on one's moral views (for instance, whether new knowledge is good in itself, or whether it is wrong to restrict personal liberty) and on one's empirical views about the likely outcomes of such suppression.

Further reading

Aird, Michael (2020) Collection of all prior work I found that seemed substantially relevant to information hazards, Effective Altruism Forum, February 24.
Many additional resources on this topic.

Bostrom, Nick (2011) Information hazards: a typology of potential harms from knowledge, Review of Contemporary Philosophy, vol. 10, pp. 1–35.

Piper, Kelsey (2022) When scientific information is dangerous, Vox, March 30.

Sandberg, Anders (2020) Anders Sandberg on information hazards, Slate Star Codex meetup, July 5.

Related entries

accidental harm | misinformation | unilateralist’s curse

  1. Bostrom, Nick (2011) Information hazards: a typology of potential harms from knowledge, Review of Contemporary Philosophy, vol. 10, pp. 1–35.

Information hazards: a very simple typology
Will Bradshaw · Jul 13, 2020, 4:54 PM
65 points
3 comments · 2 min read · EA link

What are information hazards?
MichaelA🔸 · Feb 5, 2020, 8:50 PM
38 points
3 comments · 4 min read · EA link

Terrorism, Tylenol, and dangerous information
Davis_Kingsley · Mar 23, 2019, 2:10 AM
88 points
15 comments · 3 min read · EA link

Kevin Esvelt: Mitigating catastrophic biorisks
EA Global · Sep 3, 2020, 6:11 PM
32 points
0 comments · 22 min read · EA link
(www.youtube.com)

Bioinfohazards

Fin · Sep 17, 2019, 2:41 AM
89 points
8 comments · 18 min read · EA link

Exploring the Streisand Effect
Will Bradshaw · Jul 6, 2020, 7:00 AM
46 points
4 comments · 20 min read · EA link

[Question] How to navigate potential infohazards
more better · Mar 4, 2023, 9:28 PM
16 points
7 comments · 1 min read · EA link

Types of information hazards
Vasco Grilo🔸 · May 29, 2022, 2:30 PM
15 points
1 comment · 3 min read · EA link

Conjecture: Internal Infohazard Policy
Connor Leahy · Jul 29, 2022, 7:35 PM
34 points
3 comments · 19 min read · EA link

Assessment of AI safety agendas: think about the downside risk
Roman Leventov · Dec 19, 2023, 9:02 AM
6 points
0 comments · 1 min read · EA link

Hashmarks: Privacy-Preserving Benchmarks for High-Stakes AI Evaluation
Paul Bricman · Dec 4, 2023, 7:41 AM
4 points
0 comments · 16 min read · EA link
(arxiv.org)

Managing risks while trying to do good
Wei Dai · Feb 1, 2024, 2:24 PM
42 points
5 comments · 2 min read · EA link

Assessing global catastrophic biological risks (Crystal Watson)
EA Global · Jun 8, 2018, 7:15 AM
9 points
0 comments · 9 min read · EA link
(www.youtube.com)

Countermeasures & substitution effects in biosecurity
ASB · Dec 16, 2021, 9:40 PM
87 points
6 comments · 3 min read · EA link

[Question] Is an increase in attention to the idea that ‘suffering is bad’ likely to increase existential risk?
dotsam · Jun 30, 2021, 7:41 PM
2 points
6 comments · 1 min read · EA link

George Church, Kevin Esvelt, & Nathan Labenz: Open until dangerous — gene drive and the case for reforming research
EA Global · Jun 2, 2017, 8:48 AM
9 points
0 comments · 1 min read · EA link
(www.youtube.com)

My thoughts on nanotechnology strategy research as an EA cause area
Ben Snodin · May 2, 2022, 9:41 AM
137 points
17 comments · 33 min read · EA link

We summarized the top info hazard articles and made a prioritized reading list
Corey_Wood · Dec 14, 2021, 7:46 PM
41 points
2 comments · 22 min read · EA link

Open Communication in the Days of Malicious Online Actors
Ozzie Gooen · Oct 6, 2020, 11:57 PM
38 points
10 comments · 7 min read · EA link

Causal diagrams of the paths to existential catastrophe
MichaelA🔸 · Mar 1, 2020, 2:08 PM
51 points
11 comments · 13 min read · EA link

Questions for further investigation of AI diffusion
Ben Cottier · Dec 21, 2022, 1:50 PM
28 points
0 comments · 11 min read · EA link

Towards a longtermist framework for evaluating democracy-related interventions
Tom Barnes🔸 · Jul 28, 2021, 1:23 PM
96 points
5 comments · 30 min read · EA link

What areas are the most promising to start new EA meta charities—A survey of 40 EAs
Joey🔸 · Dec 23, 2020, 12:24 PM
150 points
13 comments · 16 min read · EA link

80,000 Hours career review: Information security in high-impact areas
80000_Hours · Jan 16, 2023, 12:45 PM
56 points
10 comments · 11 min read · EA link
(80000hours.org)

Biosecurity Culture, Computer Security Culture
Jeff Kaufman 🔸 · Aug 30, 2023, 5:07 PM
130 points
10 comments · 2 min read · EA link

[Question] How much pressure do you feel against externally expressing views which do not conform to those of your manager or organisation?
Vasco Grilo🔸 · Feb 10, 2024, 9:05 AM
60 points
16 comments · 5 min read · EA link

AI-based disinformation is probably not a major threat to democracy
Dan Williams · Feb 24, 2024, 8:01 PM
63 points
8 comments · 10 min read · EA link

[Question] AI Ethical Committee
eaaicommittee · Mar 1, 2022, 11:35 PM
8 points
0 comments · 1 min read · EA link

AI can exploit safety plans posted on the Internet
Peter S. Park · Dec 4, 2022, 12:17 PM
5 points
3 comments · 1 min read · EA link

Thoughts on The Weapon of Openness
Will Bradshaw · Feb 13, 2020, 12:10 AM
32 points
17 comments · 8 min read · EA link

The $100,000 Truman Prize: Rewarding Anonymous EA Work
Drew Spartz · Sep 22, 2022, 9:07 PM
37 points
47 comments · 4 min read · EA link

A beginner’s introduction to AI-driven biorisk: Large Language Models, Biological Design Tools, Information Hazards, and Biosecurity
NatKiilu · May 3, 2024, 3:49 PM
6 points
1 comment · 16 min read · EA link

Are we dropping the ball on Recommendation AIs?
Raphaël S · Oct 23, 2024, 7:37 PM
5 points
0 comments · 1 min read · EA link

Examples of Successful Selective Disclosure in the Life Sciences
Tessa A 🔸 · Aug 19, 2021, 6:38 PM
51 points
2 comments · 4 min read · EA link

Can Knowledge Hurt You? The Dangers of Infohazards (and Exfohazards)
A.G.G. Liu · Feb 8, 2025, 3:51 PM
12 points
0 comments · 1 min read · EA link
(www.youtube.com)

Fact Check: 57% of the internet is NOT AI-generated
James-Hartree-Law · Jan 17, 2025, 9:26 PM
1 point
0 comments · 1 min read · EA link

[Question] Has private AGI research made independent safety research ineffective already? What should we do about this?
Roman Leventov · Jan 23, 2023, 4:23 PM
15 points
0 comments · 5 min read · EA link

Prometheus Unleashed: Making sense of information hazards
basil.icious · Feb 15, 2023, 6:44 AM
0 points
0 comments · 4 min read · EA link
(basil08.github.io)

[Question] Should the forum have posts (or comments) only viewable by logged-in forum members
Jeremy · Apr 4, 2022, 5:40 PM
21 points
6 comments · 1 min read · EA link

[Question] Quick Q) Information hazards in open source lit reviews
SofiiaF · Jan 27, 2025, 9:32 PM
4 points
0 comments · 3 min read · EA link

[Link] Thiel on GCRs
Milan Griffes · Jul 22, 2019, 8:47 PM
28 points
11 comments · 1 min read · EA link

Technical Report on Mirror Bacteria: Feasibility and Risks
Aaron Gertler 🔸 · Dec 12, 2024, 7:07 PM
244 points
18 comments · 1 min read · EA link
(purl.stanford.edu)

Thoughts on AGI organizations and capabilities work
RobBensinger · Dec 7, 2022, 7:46 PM
77 points
7 comments · 5 min read · EA link

How can we improve Infohazard Governance in EA Biosecurity?
Nadia Montazeri · Aug 5, 2023, 12:03 PM
168 points
27 comments · 4 min read · EA link

Consider keeping your threat models private.
Miles Kodama · Feb 1, 2025, 12:29 AM
18 points
2 comments · 4 min read · EA link

Why making asteroid deflection tech might be bad
MichaelDello · May 20, 2020, 11:01 PM
27 points
10 comments · 6 min read · EA link

[Question] How to disclose a new x-risk?
harsimony · Aug 24, 2022, 1:35 AM
20 points
9 comments · 1 min read · EA link

Introducing spirit hazards
brb243 · May 27, 2022, 10:16 PM
9 points
2 comments · 2 min read · EA link

Could a single alien message destroy us?
Writer · Nov 25, 2022, 9:58 AM
40 points
5 comments · 1 min read · EA link

[Question] Infohazards: The Future Is Disbelieving Facts?
Prof.Weird · Nov 22, 2020, 7:26 AM
2 points
0 comments · 1 min read · EA link

X-risks of SETI and METI?
Geoffrey Miller · Jul 2, 2019, 10:41 PM
18 points
11 comments · 1 min read · EA link