Note: The contents of this post are taken from my blog post linked with this post.
# Intro
As part of my research into global existential risks, I came across the conundrum of information hazards, which simply refers to the potential misuse of scientific or technological information by bad actors in ways that might turn into a global risk for everyone.
This problem is most acute in biotechnological research, where increasingly open genomic data and ever-cheaper protein synthesis techniques could be misused to develop and mass-produce humanity-threatening super-pathogens (“super” referring to resistance against all known forms of human countermeasure). This is a particularly interesting conundrum, as the fine line between openness and secrecy marks the difference between better scientific research and global catastrophe.
It is easy to say that all biotech information and research should be classified by the labs that generate it. But this conservative approach would dry up the flow of useful information and data that is so important for inspiring and facilitating future research. Although it would guard against information hazards, it would also mean the end of future scientific research and technological innovation, leaving humanity helpless in the face of an attack by a naturally emerging pathogen.
At the other end of the spectrum, completely free proliferation of potentially biohazardous information, if accessed by a malicious organisation or even a lone bad actor with the required expertise and materials, could spell doom for humanity.
Thus, the right degree of censorship of scientific information and data lies somewhere in the grey middle of this spectrum. But where exactly is that point?
The secrecy-point question is more nuanced than it looks. Consider this scenario: a biotech lab publishes seemingly innocuous, non-hazardous information. The information is accessed by a malicious group that was missing exactly that piece to bring its “evil project” to fruition. In light of the newly published (and innocuous-looking) data, the group can now derive the hazardous information it needs and put its plan into action. Admittedly, this scenario is quite narrow (one can argue that if the malicious group has the resources for independent R&D, it could generate whatever information it needs by itself, or that such expertise is not known to be within the resource constraints of any known terrorist outfit). Fair enough, but my point is to show the subtleties and intricacies surrounding such censorship and governance issues, which truly call for a departure from the black-and-white binary approach taken by governance organisations today.
# Case studies
US government policy on dual-use research is restricted solely to work that falls under one of its seven classes of “experiments of concern” or involves a subset of organisms on the Federal Select Agent List. It is not hard to imagine emerging research that falls outside this classification and is nonetheless of catastrophic concern to humanity.
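To make the scope problem concrete, here is a minimal sketch of how a purely list-based policy classifies research, and why anything outside the enumerated lists passes through unflagged. This is my own toy model in Python; the class and organism names are illustrative placeholders, not the official lists.

```python
# Toy model of a list-based dual-use oversight policy (illustrative only:
# the class and organism names below are placeholders, not official lists).

EXPERIMENTS_OF_CONCERN = {
    "enhance_transmissibility",
    "confer_resistance_to_countermeasures",
    "increase_virulence",
    # ...plus the remaining enumerated classes
}

SELECT_AGENT_SUBSET = {"Bacillus anthracis", "Variola major"}  # small sample

def flagged_for_review(experiment_class: str, organism: str) -> bool:
    """Flag work only if it matches one of the enumerated lists."""
    return (experiment_class in EXPERIMENTS_OF_CONCERN
            or organism in SELECT_AGENT_SUBSET)

# Novel, genuinely worrying work on an unlisted organism sails through:
print(flagged_for_review("novel_delivery_mechanism", "unlisted organism"))  # False
```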
Or take this second scenario, which seems to defy common sense. It is beneficial to publish hazardous research that is easy to discover: publication benefits good actors, while making little difference to bad actors, who would discover the information themselves sooner or later. This loudly opens a new can of worms: where do we draw the line for “easy to discover”?
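One way to see both the force of this argument and its blurriness is a back-of-the-envelope expected-value model. The model and its numbers are my own illustrative assumptions, not from any policy literature: the idea is that publication only adds the counterfactual harm, i.e. the harm weighted by the chance that bad actors would not have discovered the information on their own.

```python
def net_value_of_publishing(benefit_to_defenders: float,
                            harm_if_misused: float,
                            p_independent_discovery: float) -> float:
    """Toy expected-value model of publishing hazardous information.

    If bad actors would rediscover the result anyway with probability
    p_independent_discovery, publication only adds the counterfactual
    harm: the harm weighted by the chance they would NOT have found it.
    """
    counterfactual_harm = harm_if_misused * (1.0 - p_independent_discovery)
    return benefit_to_defenders - counterfactual_harm

# "Easy to discover" means p close to 1, so publishing looks clearly positive:
print(net_value_of_publishing(10.0, 100.0, p_independent_discovery=0.95))  # 5.0
# Nudge p down slightly and the sign flips: hence the line-drawing problem.
print(net_value_of_publishing(10.0, 100.0, p_independent_discovery=0.85))  # -5.0
```

On this toy model, the whole dispute over “easy to discover” is a dispute over the value of p_independent_discovery, which nobody can measure.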
Moreover, keeping useful information hidden may lead to unintentional misuse of currently deployed technology, or slow down preventive measures in the event of a large-scale risk. An apt commentary on the reassessment of a publication concerning the identification of a subtype of the neurotoxic botulinum toxin reads as follows:
> Although it is ethical to identify and mitigate DURC, it is also an ethical imperative to enable others to counter potential harm with good. With critical national security and public health at stake, it is unethical to impede research competitors for personal, professional, or commercial motives. Likewise, excessive government regulation is not helpful if it slows the progress of countermeasure development (Keim, 2016, p. 333).
# Incentive games
There are various notorious and perverse incentive games at work too. Take, for instance, the condemnation of biological weaponry in the 1925 Geneva Protocol, which alerted the Japanese to the potential utility of biological weapons and eventually inspired them to develop and use bioweapons of their own during WW2. Incentives act at every level of society. Climbing the academic ladder forces researchers to publish their work, unfortunately often bypassing ethics and common sense, just to be promoted or to receive funding. They are often left at the discretion of policy-makers or fund dispensers who have little to no idea of the research's potential impact, whether good or bad.
# The bigger picture
Scientific and technological progress is raising more existential questions than ever before, and at a rate that is simply beyond the grasp of the average person and barely understood by a select few. Taking a meta look at the information-hazard problem, it can be summarised as our inability to infer the intentions of other human beings.

Take, for instance, the open problem in security and blockchain dubbed the “Oracle problem”, which asks whether information generated outside the chain can be validated by the chain itself. If one examines the question carefully (a leisure I do not exercise in this post), it becomes apparent that the fundamental problem lies at the interface between the real world and its projection into digital space. For example, in web2 the identity of a person is closely tied to their account in some database. This abstraction worked because that person's existence was vouched for by some central authority. But with the central authority eliminated, who backs the data? How do we even define identity in such a case? The premise behind the abstraction breaks down. The authentication situation is not as dire as it looks, as people are working on workarounds, but my aim here is the bigger picture.
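As a minimal illustration of why the Oracle problem is ultimately a trust problem rather than a cryptographic one, here is a sketch under loud assumptions: plain Python stands in for a smart contract, a shared-secret HMAC stands in for the public-key signatures real oracle systems use, and every name is hypothetical. The “chain” can verify that a known oracle signed a claim, but nothing on-chain can verify that the claim is true.

```python
import hashlib
import hmac

ORACLE_KEY = b"trusted-oracle-secret"  # trust bottoms out here, off-chain

def oracle_attest(claim: str) -> tuple[str, str]:
    """Off-chain oracle signs an arbitrary claim about the real world."""
    tag = hmac.new(ORACLE_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, tag

def on_chain_verify(claim: str, tag: str) -> bool:
    """All the 'chain' can check is that the trusted key signed the claim;
    whether the claim is actually true lies outside its reach."""
    expected = hmac.new(ORACLE_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

claim, tag = oracle_attest("temperature_in_paris=19C")  # could be a lie
print(on_chain_verify(claim, tag))  # True, whether or not it is a lie
```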
# Solutions?
So, we understand that this is an important issue. But what is the solution? To be honest, there doesn't seem to be a foolproof one, because the problem sits on a different plane, as noted in the previous section. There are various suggestions, such as assessing the harmful (and beneficial) impact of releasing a piece of information beforehand, or consciously disclosing information in a way that benefits good actors while making it difficult for bad actors to use in malicious plans (“security by obscurity”). Identifying the entities who access a particular piece of information may be another option, but I only see it eventually devolving into authoritarian, monopolistic control over information, since “someone” has to decide whether a given piece of information should be made available to a given entity.
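The worry about that last option can be made concrete with a deliberately bare sketch (my own illustration, hypothetical names): however sophisticated the vetting logic plugged into it, some single party ends up owning the allow/deny decision.

```python
# Whoever maintains this set is the de facto censor, no matter how the
# vetting process is dressed up. All names are hypothetical.
APPROVED_ENTITIES = {"public-health-lab-A", "university-B"}

def grant_access(entity: str) -> bool:
    """Release the information only to vetted entities."""
    return entity in APPROVED_ENTITIES

print(grant_access("public-health-lab-A"))     # True
print(grant_access("independent-researcher"))  # False
```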
# Conclusion
In the timeless words of Dickens, I think this is “the best of times, the worst of times”: an exciting time to ask questions, a vibrant time for research, and an amazing time to be alive. The answers we give ourselves in this century may well decide our fate.
# Acknowledgements and References
Lewis et al., “Information Hazards in Biotechnology”, Risk Analysis, 2019
Prometheus Unleashed: Making sense of information hazards