What would it take for a model to be capable of supporting bioterrorism? Or, more modestly, to get consistently useful results in technical domains, on par with human research scientists?
An LLM is f(fuzzy map of human text, prompt) = [a distribution over tokens, proportional to the probability that a human would emit each token].
You might assume this gives median human intelligence, but since “errors are noisy” while “correct answers repeat again and again”, an LLM emitting the most probable token is somewhat above median human intelligence at tasks that are well represented in text this way.
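A minimal sketch of that argument, with hypothetical numbers chosen only to illustrate the effect: if wrong answers are scattered across many different variants while the correct answer is always the same string, the most probable single answer can be correct even when most individual humans get it wrong.

```python
# Toy simulation (my numbers, not from the comment above): 40% of human
# answers are the single correct string, the other 60% are spread across
# many distinct wrong strings. The *median* human is wrong, but the
# *argmax* over the answer distribution is still the correct answer.
import random
from collections import Counter

random.seed(0)

def sample_human_answer(p_correct=0.4, n_wrong_variants=50):
    """One 'human' answer: correct with probability p_correct,
    otherwise a wrong answer drawn uniformly from many variants."""
    if random.random() < p_correct:
        return "correct"
    return f"wrong_{random.randrange(n_wrong_variants)}"

answers = [sample_human_answer() for _ in range(100_000)]
counts = Counter(answers)

modal_answer, _ = counts.most_common(1)[0]
print("share answering correctly:", counts["correct"] / len(answers))  # ~0.4
print("most probable single answer:", modal_answer)                    # 'correct'
```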
This does not necessarily extend to [complex technical situations that are poorly represented in text and require vision, smell, and touch as well as fine robotic control] or to [actions that evolve the state of a laboratory toward an end goal].
It seems like you would need actual training data from actual labs, right? So as long as Meta doesn’t train on that kind of information, the model won’t be any better than Google at helping a bioterrorist with their technical challenges? Or am I badly wrong somewhere?
I am saying that the common facts we both have access to do not support your point of view. It never happened. There are no cases of “very powerful, short-term useful, profitable or military technologies” that were effectively banned in the last 150 years.
You have to go back to the 1240s to find a reference class match.
These strongly worded statements I just made are trivial for you to disprove. Find a counterexample. I am quite confident and will bet up to $1000 you cannot.