Just as a partial reply, it seems weird to me to claim that the groups that are both best able to demonstrate safety and most technically capable of doing so (the groups making the systems) should get a free pass to tell other people to prove that what they are doing is unsafe. That’s a really bad incentive.
And I think basically everywhere in the western world, for the past half century or so, we require manufacturers and designers to ensure their products are safe, implicitly or explicitly. Houses, bridges, consumer electronics, and children’s toys all get certified for safety. Hell, we even license engineers in most countries and make it illegal for non-licensed engineers to do things like certify building safety. That isn’t a democratic control, but it clearly puts the burden of proof on the makers, not on those claiming a product might be unsafe.
Sure, there are regulations on manufactured products. But these regulations are generally based on decades of experience with the technologies, and they were only put in place after people started to see harm. They weren’t conceived a priori, before the technologies had any sizable impact.
...and this risk isn’t predictable on priors?
(But if we had decades of experience with computer-based systems not reliably doing exactly what we wanted, you’d admit that this degree of caution on systems we expect to be powerful would be reasonable?)
That’s not how modern risk assessment works. Risk registers and mitigation planning are based on proactively identifying risks. To the extent that this doesn’t occur before something is built or deployed, it is, at the very least, a failure of the engineering process. (It also seems somewhat perverse to argue that we need to protect innovation in a specific domain by sticking to the way regulation happened long in the past.)
And in the cases where engineering and scientific analysis has identified risks in advance, but no regulatory system is in place, the legal system has been clear that there is liability on the part of the producers. Given those widely acknowledged dangers, it seems clear that if model developers ignore a known or obvious risk, they are liable for criminal negligence. This isn’t the same as restricting by-default-unsafe technologies like drugs and buildings, but at the very least, I think you should agree that one needs to make an argument for why ML models should be treated differently from other technologies with widely acknowledged dangers.