Also, if you’re worried about low-IQ people being able to create mayhem, I think the least of our worries should be that they’d get their hands on a detailed protocol for creating a virus or anything similar (see, e.g., https://www.nature.com/articles/nprot.2007.135) -- hardly anyone would be able to understand it anyway, let alone have the real-world skills or equipment to do any of it.
Yes, the information is available on Google. The question, in our eyes, is more about whether a future model could successfully walk an unskilled person through the process without the person needing to understand it at all.
The paper attempts to walk a careful line: warning the world that the same information in more capable models could be quite dangerous, without actually increasing the likelihood of someone using the current open-source models (which it is too late to control!) to make biological weapons.
If there are specific questions you have, I’d be happy to answer.
“future model could successfully walk an unskilled person through the process without the person needing to understand it at all.”
Seems very doubtful. Could an unskilled person really be “walked through” this process (https://www.nature.com/articles/nprot.2007.135) just by slightly more elaborate instructions? The real barriers to something as complex as synthesizing a virus seem to be 1) lack of training, skill, and tacit knowledge, and 2) lack of equipment and supplies. Detailed instructions are already out there.
My interpretation of the Gopal paper is that LLMs do meaningfully change the risks:
They’ll allow you to make progress without understanding, say, the Luo paper or the technology involved.
They’ll tell you what equipment you’d need, where to get it, how to get it, and how to operate it. Or they’ll tell you how to pay someone else to do parts of the work for you without arousing suspicion.
Perhaps model this as having access to a helpful amoral virologist?