Hi Stuart,
Thanks for your feedback on the paper. I was one of the authors, and I wanted to emphasize a few points.
The paper's central claim is not that current open-source models like Llama-2 help those seeking bioweapons more than traditional search engines or even print text do. While I think this is likely true, given how helpful the models were for planning and assessing feasibility, current models can also mislead users and hallucinate key details. I myself am quite uncertain about how these factors trade off against e.g. using Google – you can bet on that very question here. A controlled study like the one RAND is running could help address this question.
Instead, we are much more concerned about the capabilities of future models. As LLMs improve, they will offer more streamlined access to knowledge than traditional search. I think this is already apparent in the fact that people routinely use LLMs for information they could have obtained online or in print. Weaknesses in current LLMs, like hallucinating facts, are priority issues for AI companies to solve, and I feel pretty confident we will see a lot of progress in this area.
Nevertheless, judging by the response to the paper, it’s apparent that we didn’t communicate the distinction between current and future models clearly enough, and we’re making revisions to address this.
The paper argues that because future LLMs will be much more capable, and because existing safeguards can be easily removed, we need to worry about this issue now. That includes devising policies that incentivize AI companies to develop safe AI models that cannot be fine-tuned to strip out safeguards. The nice thing about catastrophe insurance is that if robust evals (an area that still needs much more work) demonstrate that an open-source LLM is safe, then coverage will be far cheaper. That said, we still have a lot more work to do to understand how regulation can effectively limit the risks of open-source AI models, partly because the issue of model weight proliferation has been so neglected.
I’m curious about your thoughts on some of the below questions since I think they are at the crux of figuring out where we agree/disagree.
1. Do you think that future LLMs will enable bioterrorists to a greater degree than traditional tools like search engines or print text?
2. If yes, do you think the difference will be significant enough to warrant regulations that incentivize developers of future models to release them only once properly safeguarded (or not at all)?
3. Do you think there are specific areas of knowledge around engineering and releasing exponentially growing biological agents that should be restricted?
Thanks again for your input!
Nice post. I would also add that Sam’s podcast with Toby Ord discussed many EA-related concepts, including the GWWC pledge. I signed up as a direct result of that podcast, and I would expect there was a similar spike to the one seen after the Will MacAskill episodes.