Thanks for posting this; it seems like valuable work.
I’m particularly interested in using MLOSS to intentionally shape AI development. For example, could we identify key areas where releasing particular MLOSS can increase safety or extend the time to AGI?
Finding ways to guide AI development towards narrow and simple AI models can extend AI timelines, which is complementary to safety work:
https://www.lesswrong.com/posts/BEWdwySAgKgsyBzbC/satisf-ai-a-route-to-reducing-risks-from-ai
In your opinion, what traits of a particular piece of MLOSS determine whether it increases or decreases risk?