First: I completely agree that several of the modifications are egregious and lack any logical explanation, primarily the removal of whistleblower protections. However, I think it's also important to recognize that SB 1047 has flaws and isn't perfect, and that we should welcome constructive feedback both for and against it. Some level of reasonable compromise when pushing forward unprecedented policy like this is always going to happen, for better or worse.
IMHO, the biggest problems with the bill as originally written were the ability to litigate against a company before any damages had actually occurred, and, more importantly, the glaring loopholes created by fixed-FLOP thresholds for oversight. Anybody with an understanding of machine learning training pipelines could point out any number of easy circumventions (e.g., splitting a large training run into multiple smaller, checkpointed runs, or segmenting and modularizing the models themselves).
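To make that concrete, here is a minimal back-of-envelope sketch (my own illustration, not from the bill text): assuming a hypothetical fixed FLOP threshold and the common ~6·N·D training-compute approximation, the same total compute can be booked as several checkpointed stages that each stay under the line.

```python
# Illustrative sketch only: the threshold value, parameter count, and token
# counts below are hypothetical placeholders, not figures from SB 1047.

THRESHOLD_FLOPS = 1e26  # hypothetical fixed oversight threshold

def total_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the common ~6 * N * D approximation."""
    return 6 * params * tokens

# One monolithic run: clearly over the threshold.
full_run = total_flops(params=1e12, tokens=3e13)        # ~1.8e26 FLOPs
print(full_run > THRESHOLD_FLOPS)                        # True -> covered

# Same total compute, accounted for as four sequential stages that each
# resume from the previous checkpoint; each stage stays under the line.
stages = [total_flops(params=1e12, tokens=7.5e12) for _ in range(4)]
print(any(s > THRESHOLD_FLOPS for s in stages))          # False per stage
print(sum(stages) > THRESHOLD_FLOPS)                     # True in aggregate
```

The specific numbers are placeholders; the point is just that a per-run accounting rule invites exactly this kind of splitting unless aggregate compute is what's counted.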
We also need to be humble and open-minded about unintended consequences (e.g., it's possible this bill pushes some organizations toward open-source or open-weight distribution of models, or encourages big tech to relocate AI work to states with less regulation). If we treat all of industry as 'The Enemy' we risk losing key allies in the AI research space (both individuals and organizations).
the ability to litigate against a company before any damages had actually occurred
Can you explain why you find this problematic? It's not self-evident to me, because we do this for other things too, e.g. drunk driving, or pharmaceuticals needing to pass safety testing.
I'm not sure I follow your examples and logic; perhaps you could explain, because drunk driving is in itself a serious crime in every country I know of. Are you suggesting it should be criminal merely to develop an AI model, regardless of whether it's commercialized or released?
Regarding pharmaceuticals: yes, they certainly do need to pass several phases of clinical research and development to prove sufficient safety and efficacy, because by definition the FDA approves drugs to treat specific diseases. If those drugs don't do what they claim, people die. The many reasons for regulating drugs should be obvious. However, there is no similar regulation of software. Developing a drug discovery platform, or even the drug itself, is not a crime (as long as it's not released).
You could just as easily extrapolate to individuals. We cannot legitimately litigate (sue) or prosecute someone for a crime they haven't committed; this is why we have due process and basic legal rights. (Technically anything can be litigated with enough money thrown at it, but you can't sue for damages unless damages actually occurred.)
Drunk driving is illegal because it risks doing serious harm. It’s still illegal when the harm has not occurred (yet). Things can be crimes without harm having occurred.