Excited to see this! I’d be most excited about case studies of standards in fields where people didn’t already have clear ideas about how to verify safety.
In some areas, it’s pretty clear what you’re supposed to do to verify safety. Everyone (more-or-less) agrees on what counts as safe.
One of the biggest challenges with AI safety standards will be the fact that no one really knows how to verify that a (sufficiently-powerful) system is safe. And a lot of experts disagree on the type of evidence that would be sufficient.
Are there examples of standards in other industries where people were quite confused about what “safety” would require? Are there examples of standards that are specific enough to be useful but flexible enough to deal with unexpected failure modes or threats? Are there examples where the standards-setters acknowledged that they wouldn’t be able to make a simple checklist, so they requested that companies provide proactive evidence of safety?
One of the biggest challenges with AI safety standards will be the fact that no one really knows how to verify that a (sufficiently-powerful) system is safe. And a lot of experts disagree on the type of evidence that would be sufficient.
While overcoming expert disagreement is a challenge, it is not one that is as big as you think. TL;DR: Deciding not to agree is always an option.
To expand on this: the fallback option in a safety standards creation process, for standards that aim to define a certain level of safe-enough, is as follows. If the experts involved cannot agree on any evidence-based method for verifying that a system X is safe enough according to the level of safety required by the standard, then the standard being created will simply, and usually implicitly, declare that there is no route by which system X can comply with the safety standard. If you are required by law, say by EU law, to comply with the safety standard before shipping a system into the EU market, then your only legal option will be to never ship that system X into the EU market.
For AI systems you interact with over the Internet, this ‘never ship’ translates to ‘never allow it to interact over the Internet with EU residents’.
I am currently in the JTC21 committee, which is running the above standards creation process to write the AI safety standards in support of the EU AI Act, the Act that will regulate certain parts of the AI industry insofar as it wants to ship legally into the EU market. (Legal detail: if you cannot comply with the standards, the Act will give you several other options that may still allow you to ship legally, but I won’t get into explaining all those here. These other options will not give you a loophole to evade all expert scrutiny.)
Back to the mechanics of a standards committee: if a certain AI technology, when applied in a system X, is well known to make that system radioactively unpredictable, it will not usually take long for the technical experts in a standards committee to agree that there is no way they can define any method in the standard for verifying that X will be safe according to it. The radioactively unsafe cases are the easiest cases to handle.
That being said, in all but the most trivial of safety engineering fields, there are complicated epistemics involved in deciding when something is safe enough to ship; this is complicated whether you use standards or not. I have written about this topic, in the context of AGI, in section 14 of this paper.
Maybe there’s something in early cybersecurity? That is, we’re not really sure precisely how people could be harmed through these systems (like the nascent internet), but there’s plenty of potential for harm in the future?
Are there examples of standards in other industries where people were quite confused about what “safety” would require?
Yes, medical robotics is one I was involved in. Though there, the answer is often just to wait for the first product to hit the market (nothing quite there yet does full autonomous surgery), and then copy its approach.
As it stands, the medical standards don’t cover much ML, so the companies have to come up with the reasoning themselves for convincing the FDA in the audit. In practice this means many companies just don’t risk it, and do something robotic but surgeon-controlled, or use classical algorithms instead of deep learning.