In case you were wondering, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models completely avoids any discussion of the fact that releasing a dual-use model could potentially be dangerous or that the impacts of any such models should be evaluated before use. This is a truly stunning display of ball-dropping.
Update: I just checked NIST AI 600-1 as well: the report is extremely blasé about CBRN hazards from general AI (admitting, though, that “chemical and biological design tools might pose risks to society or national security”). It cites the RAND report claiming the current generation of models poses no such risks beyond what web search already provides, neglecting to mention that those results applied only to a model released over an API. As far as the report is concerned, these risks just need to be “carefully monitored”.
Sounds like we need some people to submit some comments!