Did “information hazard” originate in EA? There are plenty of results on Google for “dangerous information” and “dangerous knowledge”, which I think mean almost the same thing, although I suppose “information hazard” refers to the risk itself, while “dangerous information” and “dangerous knowledge” refer to the information/knowledge, and might suggest likely harm rather than just risk.
One aspect of how “information hazard” tends to be conceptualised that is fairly new[1], apart from the term itself, is the idea that one might wish to be secretive out of impartial concern for humankind, rather than for selfish or tribal reasons[2].
This especially applies in academia, where the culture and mythology are strongly pro-openness. Academics are frequently secretive, but typically in a selfish way that is seen as going against their shared ideals[3]. The idea that a researcher might be altruistically secretive about some aspect of the truth of nature is pretty foreign, and to me is a big part of what makes the “infohazard” concept distinctive.
[1] Not 100% unprecedentedly new, or anything, but rare in modern Western discourse pre-Bostrom.
[2] I think a lot of people would view those selfish/tribal reasons as reasonable/defensible, but still different from e.g. worrying that such-and-such scientific discovery might damage humanity-at-large’s future.
[3] Brian Nosek talks about this a lot – academics mostly want to be more open but view being so as against their own best interests.
Is discourse around lying/concealing information out of altruistic concern really that rare in Western cultures?
I feel like lying about the extent of pandemics for “your own good” is a tragic pattern that’s frequently repeated in history, and that altruistic motivations (or at least justifications) are commonly presented for why governments do this.
“Think of the children” and moral-panic justifications for censorship seem extremely popular.
Academia, especially in the social sciences and humanities, also strikes me as extremely pro-concealment (either actively or, more commonly, passively, by believing we should not gather the information in the first place) on topics it views as objectionable, for explicitly altruistic reasons.
Another example might be public health messaging. E.g., I’ve heard anecdotal claims that it’s a deliberate choice not to emphasize, say, the absolute risk of contracting HIV per instance of unprotected sex with an infected person.
Good question/point! I definitely didn’t mean to imply that EAs were the first people to recognise the idea that true information can sometimes cause harm. If my post did seem to imply that, that’s perhaps a good case study in how easy it is to fall short of my third suggestion, and thus why it’s good to make a conscious effort on that front!
But I’m pretty sure the term “information hazard” was publicly introduced in Bostrom’s 2011 paper. And my sentence preceding that example was “It seems to me that people in the EA community have developed a remarkable number of very useful concepts or terms”.
I said “or terms” partly because it’s hard to say when something is a new concept vs an extension or reformulation of an old one (and the difference may not really matter). I also said that partly because I think new terms (jargon) can be quite valuable even if they merely serve as a shorthand for one specific subset of all the things people sometimes mean by another, more everyday term. E.g., “dangerous information” and “dangerous knowledge” might sometimes mean (or be taken to mean) “information/knowledge which has a high chance of being net harmful”, whereas “information hazard” conveys just a non-trivial chance of at least some harm.
As for whether it was a new concept: the paper provided a detailed treatment of the topic of information hazards, including a taxonomy of different types. I think one could argue that this amounted to introducing the new concept of “information hazards”, which was similar to and built on earlier concepts such as “dangerous information”. (But one could also argue against that, and it might not matter much whether we decide to call it a new concept vs an extension/new version of existing ones.)
All good points!