Yep, AI safety people tend to oppose sharing model weights for future dangerous AI systems.
But it’s not certain that (operator-aligned) open-source powerful AI entails doom. To a first approximation, it entails doom iff “offense” is much more efficient than “defense,” which depends on context. But absent super-effective monitoring to make sure that others aren’t making weapons/nanobots/whatever, or super-efficient defenses against such attacks, my intuition is that offense is heavily favored.