If there is a future bioterrorist attack involving, say, smallpox, we can disaggregate quite a few elements in the causal chain leading up to that:
1. The NIH published the entire genetic sequence of smallpox for the world to see.
2. Google indexed that webpage and made it trivially easy to find.
3. Thanks to electricity and internet providers, folks can use Google.
4. Would-be attackers now need access to a laboratory and all the right equipment: either they have enough resources to create their own laboratory from scratch, or they need to access someone else's lab (in which case they run a significant risk of being discovered).
5. They need a huge amount of tacit knowledge in order to be able to actually use the lab, knowledge that simply can't be captured in text or replicated from text, no matter how detailed. Someone has to give them a ton of hands-on training.
6. An LLM could theoretically speed up the process by giving them a detailed step-by-step set of instructions.
7. They are thereby able to actually engineer smallpox in the real world (not just generate a set of textual instructions).
The question for me is: How much of the outcome here depends on step 6 as the key element, without which the end outcome wouldn't occur?
Maybe a future LLM would provide a useful step 6, but anyone other than a pre-existing expert would always fail at step 4 or 5. Alternatively, maybe all the other steps already let someone do this in reality, and an accurate and complete LLM (in the future) would just make it 1% faster.
I don’t think the current study sheds any light whatsoever on those questions (it has no control group, and it has no step at which subjects are asked to do anything in the real world).