Still no strong evidence that LLMs increase bioterrorism risk

https://www.lesswrong.com/posts/ztXsmnSdrejpfmvn7/propaganda-or-science-a-look-at-open-source-ai-and

Linkpost from LessWrong.

The claims from the piece I most agree with are:

  1. Academic research does not show strong evidence that existing LLMs increase bioterrorism risk.

  2. Policy papers make overly confident claims about LLMs and bioterrorism risk, and cite papers that do not support such confident claims.

I’d like to see better-designed experiments that generate high-quality evidence on whether future frontier models increase bioterrorism risk, as part of evals conducted by groups like the UK and US AI Safety Institutes.