Hmm, my guess is that you’re underrating the dangers of making information more easily accessible, even when that information is already theoretically out “in the wild.” My guess is that most terrorists are not particularly competent, conscientious, or creative.[1] It seems plausible, even likely, to me that better collations of publicly available information in some domains can substantially increase the risk and scale of harmful activities.
Take your sarin gas example.
You suggested that people could make sarin gas via instructions that were easily findable in this 1995 article:
“How easy is it to make sarin, the nerve gas that Japanese authorities believe was used to kill eight and injure thousands in the Tokyo subways during the Monday-morning rush hour?
“Wait a minute, I’ll look it up,” University of Toronto chemistry professor Ronald Kluger said over the phone. This was followed by the sound of pages flipping as he skimmed through the Merck Index, the bible of chemical preparations. Five seconds later, Kluger announced, “Here it is,” and proceeded to read not only the chemical formula but also the references that describe the step-by-step preparation of sarin, a gas that cripples the nervous system and can kill in minutes.
“This stuff is so trivial and so open,” he said of both the theory and the procedure required to make a substance so potent that less than a milligram can kill you.”
I think it is clearly not the case that terrorists in 1995, with the resources and capabilities of Aum Shinrikyo, could trivially make and spread sarin gas so potent that less than a milligram can kill you, such that the only thing stopping them was a lack of willingness to kill many people. I believe this because in 1995, Aum Shinrikyo had the resources, capabilities, and motivations of Aum Shinrikyo, and they were not able to trivially make highly potent and concentrated sarin gas.
Aum intended to kill thousands of people with sarin gas, and produced enough to do so. But they a) were not able to get the gas to a sufficiently high level of purity, and b) had issues with dispersal. In the 1995 Tokyo subway attack, they ended up killing 13 people, far fewer than the thousands they intended.
Aum also had bioweapons and nuclear weapons programs. In the 1990s, they were unable to be “successful” with either[2], despite considerable resources.
[1] No offense intended to any members of the terror community reading this comment.
[2] My favorite anecdote is that they attempted to cultivate a batch of botulinum toxin. Unfortunately, Aum’s lab security protocols were so lax that a technician fell into the fermentation tank. The man almost drowned, but was otherwise unharmed.
So let me put it this way: if there is a future bioterrorist attack involving, say, smallpox, we can disaggregate quite a few elements in the causal chain leading up to it:
1. The NIH published the entire genetic sequence of smallpox for the world to see.
2. Google indexed that webpage and made it trivially easy to find.
3. Thanks to electricity and internet providers, folks can use Google.
4. The attackers now need access to a laboratory and all the right equipment. Either they need to have enough resources to create their own laboratory from scratch, or else they need to access someone else’s lab (in which case they run a significant risk of being discovered).
5. They need a huge amount of tacit knowledge in order to be able to actually use the lab—knowledge that simply can’t be captured in text or replicated from text (no matter how detailed). Someone has to give them a ton of hands-on training.
6. An LLM could theoretically speed up the process by giving them a detailed step-by-step set of instructions.
7. They are therefore able to actually engineer smallpox in the real world (not just generate a set of textual instructions).
The question for me is: How much of the outcome here depends on step 6 as the key element, without which the end outcome wouldn’t occur?
Maybe a future LLM would provide a useful step 6, but anyone other than a pre-existing expert would always fail at step 4 or 5. Alternatively, maybe all the other steps would let someone do this in reality, and an accurate and complete LLM (in the future) would just make the process 1% faster.
I don’t think the current study sheds any light whatsoever on those questions (it has no control group, and it has no step at which subjects are asked to do anything in the real world).
In a way, the sarin story confirms what I’ve been trying to say: a list of instructions, no matter how complete, does not mean that people can literally execute those instructions in the real world. Indeed, having tried to teach my kids to cook, I know that even making something as simple as scrambled eggs requires lots of experience and tacit knowledge.
Aum intended to kill thousands of people with sarin gas, and produced enough to do so. But they a) were not able to get the gas to a sufficiently high level of purity, and b) had issues with dispersal. In the 1995 Tokyo subway attack, they ended up killing 13 people, far less than the thousands that they intended.
IIRC b) was largely a matter of the people getting nervous and not deploying it in the intended way, rather than a matter of a lack of metis.