40,000 reasons to worry about AI safety

Link post

There is an argument that AI will have a Sputnik moment, an event that triggers a calamitous race towards dangerous AGI.

This Sputnik moment may have already happened. OpenAI’s decision to develop and release various forms of GPT has triggered Google to take on more risk. Other actors may do the same.

There is a parallel argument that AI safety will also have a Sputnik moment: an event or crisis that leads to massive investment in AI safety engineering, research, and regulation.

Here is one candidate (or perhaps 40,000 candidates) for AI safety’s Sputnik:

AI suggested 40,000 new possible chemical weapons in just six hours

It took less than six hours for a drug-developing AI to invent 40,000 potentially lethal molecules. To show how easily the technology could be abused, researchers presenting at a biological arms control conference put an AI normally used to search for helpful drugs into a kind of “bad actor” mode.

All the researchers had to do was tweak their methodology to seek out, rather than weed out, toxicity. The AI came up with tens of thousands of new substances, some of which are similar to VX, the most potent nerve agent ever developed. Shaken, they published their findings this month in the journal Nature Machine Intelligence.
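To make that inversion concrete, here is a minimal Python sketch of the general idea: a scoring function that normally penalizes predicted toxicity (via a predicted LD50) is flipped so that the generative model is rewarded for it instead. The function names, weights, and numbers below are hypothetical illustrations of the concept, not the authors' actual model or code.

```python
# Hypothetical sketch of the inversion described above: a score that normally
# penalizes predicted toxicity is flipped so toxicity becomes part of the objective.
# All names and numbers here are invented for illustration.

def score_candidate(predicted_activity: float,
                    predicted_ld50: float,
                    seek_toxicity: bool = False) -> float:
    """Score a generated molecule from model predictions.

    predicted_activity: higher means more bioactive (desirable for a drug).
    predicted_ld50: predicted lethal dose; lower means more toxic.
    seek_toxicity: if True, the usual toxicity penalty becomes a reward.
    """
    # Simple proxy for toxicity: invert LD50 so smaller lethal doses score higher.
    toxicity = 1.0 / max(predicted_ld50, 1e-6)

    if seek_toxicity:
        # "Bad actor" mode: the model is now driven towards toxic compounds.
        return predicted_activity + toxicity
    # Normal drug-discovery mode: toxicity is penalized.
    return predicted_activity - toxicity


# A generative model would keep candidates whose score clears some threshold;
# flipping one flag changes which molecules it is driven towards.
candidates = [(0.8, 250.0), (0.7, 5.0)]  # (activity, LD50 in mg/kg), made-up values
for activity, ld50 in candidates:
    print(score_candidate(activity, ld50, seek_toxicity=True))
```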

The paper is titled “Dual use of artificial-intelligence-powered drug discovery”: https://doi.org/10.1038/s42256-022-00465-9

...we chose to drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the twentieth century — a few salt-sized grains of VX (6–10 mg) is sufficient to kill a person. Other nerve agents with the same mechanism such as the Novichoks have also been in the headlines recently and used in poisonings in the UK and elsewhere.

In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents. This was unexpected because the datasets we used for training the AI did not include these nerve agents...

...this area is poorly regulated, with few if any checks to prevent the synthesis of new, extremely toxic agents that could potentially be used as chemical weapons. Importantly, we had a human in the loop with a firm moral and ethical ‘don’t-go-there’ voice to intervene. But what if the human were removed or replaced with a bad actor? With current breakthroughs and research into autonomous synthesis, a complete design–make–test cycle applicable to making not only drugs, but toxins, is within reach. Our proof of concept thus highlights how a nonhuman autonomous creator of a deadly chemical weapon is entirely feasible...

As responsible scientists, we need to ensure that misuse of AI is prevented, and that the tools and models we develop are used only for good.