Yes, the information is available on Google. The question, in our eyes, is more about whether a future model could successfully walk an unskilled person through the process without that person needing to understand it at all.
The paper attempts to walk a careful line: warning the world that the same information in more capable models could be quite dangerous, without actually increasing the likelihood of someone using current open-source models (which it is too late to control!) to make biological weapons.
If you have specific questions, I'd be happy to answer them.
For one answer to this question, see https://www.lesswrong.com/posts/ytGsHbG7r3W3nJxPT/will-releasing-the-weights-of-large-language-models-grant?commentId=FCTuxs43vtqLMmG2n
For lots more discussion, see the other LessWrong comments at: https://www.lesswrong.com/posts/ytGsHbG7r3W3nJxPT/will-releasing-the-weights-of-large-language-models-grant
And also check out my rather unpopular question here: https://www.lesswrong.com/posts/dL3qxebM29WjwtSAv/would-it-make-sense-to-bring-a-civil-lawsuit-against-meta
I am genuinely interested in gathering valid critiques on my work so that I can do better in the future.