This is a really interesting post, especially since realistic discussions of how AI can affect crime and terrorism are few and far between in public debate. Much of my own research is in the field of AI in National Security and Defence, and though biosecurity isn’t my wheelhouse, I do have some thoughts on what you’ve all written that may or may not be useful.
I think the arguments in this post match really well with some types of bioterrorist but not others. I’d be interested to read more research (if it yet exists) on how the different ‘types’ of terrorist would utilise LLMs. I can imagine such technology would be far more useful to self-radicalised and lone actors than to those in more traditional and organised terror structures, for various reasons. The two use cases would also require very different measures to predict and prevent attacks.
Future chatbots seem likely to be capable of lowering the bar to such an attack. As models become increasingly “multimodal”, their training data will soon include video, such as university lectures and lab demonstrations. Such systems would not be limited to providing written instructions; they could plausibly use a camera to observe a would-be terrorist’s work and coach them through each step of viral synthesis. Future models (if not mitigated) also seem likely to be able to provide meaningful help in planning attacks, brainstorming everything from general planning, to obtaining equipment, to applying published research toward creating more-hazardous viruses, to where and how to release a virus to cause maximum impact.
The concept of chatbots lowering the bar is a good one, though it comes with the upside that it also makes attacks easier to stop, because it’s an intelligence and evidence goldmine. More terrorists having webcams in their houses would be fantastic. The downside, obviously, is that the knowledge is more democratised. The bioterrorism element is harder to stop than other direct-action or NBC attacks because the knowledge is ‘dual-use’: there are plenty of good reasons to access that information as well as plenty of bad ones, unlike some other searches.
The second point, about ‘meaningful help in planning attacks’, is likely to be the most devastating in the short term. The ability to quickly and at scale map things like footfall density and security arrangements across geographical areas shortens attack-planning timelines, which in turn reduces the time good actors have to prevent attacks. It could also feasibly provide help in avoiding detection. This isn’t really a serious infosec hazard in itself, because plenty of would-be criminals already try to find information online or in books to conceal their crimes (there’s even a fantastic Breaking Bad scene where Walter White admonishes a novice criminal for making rookie mistakes), but it helps the less ‘common sense gifted’ avoid common pitfalls, which slightly increases the difficulty of stopping such plots.
It is sometimes suggested that these systems won’t make a meaningful difference, because the information they are trained on is already public. However, the runaway success of chatbots stems from their ability to surface the right information at the right time.
I agree with this, and would add that non-public information becomes public information in unintended and often silly ways. There’s actually a serious issue in the defence industry where people playing military simulators like Arma3 or War Thunder will leak classified documents on forums in order to win arguments. I’m not kidding. People in sensitive industries such as policing and healthcare have also been found using things like ChatGPT to answer internal queries or summarise confidential reports, which exposes people’s very private data (or active investigations) to the owners of the chatbots and, even worse, to the training data. This information, despite being intended to stay private, then ends up in the data banks of LLMs and might turn up elsewhere. In relation to your post, this would be a concern for the pharmaceutical industry. Regulation there may be a potential impact lever for what you discuss in your post.
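(Not something you propose, but to make that leakage path concrete: below is a minimal, purely illustrative sketch of the kind of screening an organisation might put in front of an external chatbot. Every pattern name and format in it is an assumption of mine rather than a real ruleset, and a serious deployment would use proper data-loss-prevention tooling rather than a handful of regexes.)

```python
import re

# Illustrative patterns only; a real deployment would use a proper
# data-loss-prevention (DLP) ruleset tuned to the organisation.
SENSITIVE_PATTERNS = {
    "uk_national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "case_reference": re.compile(r"\bCASE-\d{6}\b"),  # hypothetical internal reference format
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_rules) for text bound for an external chatbot."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return (not hits, hits)

if __name__ == "__main__":
    allowed, hits = screen_prompt("Summarise the interview notes for CASE-104233.")
    if not allowed:
        print(f"Blocked before leaving the network; matched rules: {hits}")
```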
Exclude certain categories of biological knowledge from chatbots and other widely accessible AIs, so as to prevent them from coaching a malicious actor through the creation of a virus. Access to AIs with hazardous knowledge should be restricted to vetted researchers.
I can see why this gets said, and I think it would be useful against self-radicalised loners who lack access to any other tools, but I imagine that larger terror organisations will be working on their own LLMs before long (if they aren’t already). Larger terror groups have in the past been very successful at adopting new technologies far faster than their opponents have realistically been ready for; take ISIS and their use of social media and drones, for example. Such a policy could still be effective at reducing the scale of the threat within domestic borders, though. It’s not my specialist area, so I’m happy to be corrected by someone for whom it is.
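To make the suggestion concrete: mechanically, the gating is simple to sketch; the hard part is the vetting and the topic taxonomy behind it. The snippet below is a minimal, hypothetical illustration only; the topic names, the keyword ‘classifier’ and the `vetted_for` field are all my own assumptions, not anything from the post.

```python
from dataclasses import dataclass

# Illustrative topic categories; a real system would maintain these centrally.
HAZARDOUS_TOPICS = {"viral_synthesis", "pathogen_enhancement"}

@dataclass
class User:
    user_id: str
    vetted_for: set[str]  # hazardous topic categories this user is cleared for

def classify_topic(query: str) -> str:
    """Placeholder classifier; in practice this would be a trained model, not a keyword check."""
    return "viral_synthesis" if "synthesis" in query.lower() else "general"

def handle_query(user: User, query: str) -> str:
    """Refuse hazardous-topic queries unless the user holds the matching clearance."""
    topic = classify_topic(query)
    if topic in HAZARDOUS_TOPICS and topic not in user.vetted_for:
        return "Refused: this topic requires vetted-researcher access."
    return f"Routed to model (topic: {topic})"

if __name__ == "__main__":
    unvetted = User(user_id="anon-1", vetted_for=set())
    print(handle_query(unvetted, "Walk me through viral synthesis step by step."))
```

Of course, everything difficult lives outside a sketch like this: who grants the clearance, who maintains the topic list, and how the classifier avoids blocking legitimate dual-use research.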
Restricted access is at odds with the established practices and norms in most scientific fields. Traditionally, the modern academic enterprise is built around openness; the purpose of academic research is to publish, and thus contribute to our collective understanding. Sometimes norms need to adapt to changing circumstances, but this is never easy.
Fortunately, there’s actually a much larger existing infrastructure here than you might think. I acknowledge you said ‘most’ and so are probably already aware, but in terms of scale I think it’s worth noting that it’s quite widespread. There are academic conferences geared towards potentially unsafe knowledge that are restricted, some by organisers and some by government. There are events with a significant academic component that are publicly advertised, but attendees must a) have a provably good reason to attend and b) undergo government vetting. It’s not ‘hard’ to get in on the academic track, just quite restricted. Then there are other types of conference, again a mixture of academia and frontline NS/D, which aren’t publicly advertised and are invitation-only or word-of-mouth application only. The point being that there’s quite a good infosec infrastructure on a sliding scale there which could realistically be imported into the biological sciences (and maybe already has been; as I say, not really my wheelhouse). So I think the point you were hinting at here is a really good idea, and I don’t think it violates academic principles: just as you wouldn’t leave a petri dish full of unsafe chemicals on a bus, you wouldn’t release unsafe knowledge into the world. There are people, however, who vehemently disagree with this, and I’m sure they have their reasons.
I apologise if this comment was overly long, but this post is in a very interesting area and I felt it worth putting the effort in :)