Did “information hazard” originate in EA? Plenty of results on Google for “dangerous information” and “dangerous knowledge”, which I think mean almost the same thing, although I suppose “information hazard” refers to the risk itself, while “dangerous information” and “dangerous knowledge” refer to the information/knowledge and might suggest likely harm rather than just risk.
One aspect of how “information hazard” tends to be conceptualised that is fairly new[1], apart from the term itself, is the idea that one might wish to be secretive out of impartial concern for humankind, rather than for selfish or tribal reasons[2].
This especially applies in academia, where the culture and mythology are strongly pro-openness. Academics are frequently secretive, but typically in a selfish way that is seen as going against their shared ideals[3]. The idea that a researcher might be altruistically secretive about some aspect of the truth of nature is pretty foreign, and to me is a big part of what makes the “infohazard” concept distinctive.
Not 100% unprecedentedly new, or anything, but rare in modern Western discourse pre-Bostrom.
I think a lot of people would view those selfish/tribal reasons as reasonable/defensible, but still different from e.g. worrying that such-and-such scientific discovery might damage humanity-at-large’s future.
Brian Nosek talks about this a lot: academics mostly want to be more open but view being so as against their own best interests.
Is discourse around lying/concealing information out of altruistic concern really that rare in Western cultures?
I feel like lying about the extent of pandemics for “your own good” is a tragic pattern that’s frequently repeated in history, and that altruistic motivations (or at least justifications) are commonly presented for why governments do this.
“Think of the children” and moral panic justifications for censorship seem extremely popular.
Academia, especially in the social sciences and humanities, also strikes me as being extremely pro-concealment (either actively or more commonly passively, by believing we should not gather information in the first place) on topics which they actually view as objectionable for explicitly altruistic reasons.
Other examples might be public health messaging. E.g. I’ve heard anecdotal claims that it’s a deliberate choice not to emphasize, say, the absolute risk of contracting HIV per instance of unprotected sex with an infected person.
Good question/point! I definitely didn’t mean to imply that EAs were the first people to recognise the idea that true information can sometimes cause harm. If my post did seem to imply that, that’s perhaps a good case study in how easy it is to fall short of my third suggestion, and thus why it’s good to make a conscious effort on that front!
But I’m pretty sure the term “information hazard” was publicly introduced in Bostrom’s 2011 paper. And my sentence preceding that example was “It seems to me that people in the EA community have developed a remarkable number of very useful concepts or terms”.
I said “or terms” partly because it’s hard to say when something is a new concept vs an extension or reformulation of an old one (and the difference may not really matter). I also said that partly because I think new terms (jargon) can be quite valuable even if they merely serve as a shorthand for one specific subset of all the things people sometimes mean by another, more everyday term. E.g., “dangerous information” and “dangerous knowledge” might sometimes mean (or be taken to mean) “information/knowledge which has a high chance of being net harmful”, whereas “information hazard” just conveys at least a non-trivial chance of at least some harm.
As for whether it was a new concept: the paper provided a detailed treatment of the topic of information hazards, including a taxonomy of different types. I think one could argue that this amounted to introducing the new concept of “information hazards”, which was similar to and built on earlier concepts such as “dangerous information”. (But one could also argue against that, and it might not matter much whether we decide to call it a new concept vs an extension/new version of existing ones.)
All good points!