It's a great question. I see Safety Cases more as a meta-framework in which you can use different kinds of evidence. Other risk management techniques can be used as evidence in a Safety Case (e.g. this paper uses a Delphi method).
Also I think Safety Cases are attractive to people in AI Safety because:
1) They offer flexibility in the kinds of evidence and reasoning that are allowed. From skimming, it seems to me that many of the other risk management practices you linked are stricter about the kinds of arguments or evidence that can be brought.
2) They strive to comprehensively prove that overall risk is low. I think most of the other techniques don't let you make claims such as "overall risk from a system is <x%" (which AI Safety people want).
3) I might be wrong here, but it seems to me that many other risk management techniques require you to understand the system and its environment decently well, whereas this is very difficult for AI Safety.
Overall, you might well be right that other risk management techniques have been overlooked and we shouldn't just focus on Safety Cases.
Yeah, 1 and 3 seem right to me, thanks.
On 2, I think there are quite a number of techniques that give you quantitative risk estimates, and it's quite routine in safety engineering and often required (e.g. to demonstrate that you have achieved a 1e-4 fatality threshold and that any further risk reduction is impractical). I don't fully understand most of the techniques listed in ISO 31010, but it seems that a number of them do give quantitative risk estimates as a result of the risk evaluation process, e.g. Monte Carlo simulation, Bayesian networks, F-N diagrams, VaR, toxicological risk assessment, etc.
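To make the kind of output these techniques produce concrete, here's a minimal Monte Carlo sketch. The failure modes and probabilities are invented for illustration, not estimates for any real system:

```python
import random

# Hypothetical per-deployment failure probabilities -- invented numbers,
# purely illustrative, not estimates for any real system.
P_MISUSE = 1e-3        # a misuse attempt succeeds
P_MISALIGNMENT = 5e-4  # a misalignment failure occurs

N = 1_000_000  # simulated deployments
failures = 0
for _ in range(N):
    # A trial counts as a failure if either failure mode occurs.
    if random.random() < P_MISUSE or random.random() < P_MISALIGNMENT:
        failures += 1

# Converges on 1 - (1 - P_MISUSE) * (1 - P_MISALIGNMENT) ~= 1.5e-3,
# i.e. exactly the kind of "overall risk from the system is < x%" claim
# mentioned in point 2 above.
print(f"Estimated overall risk: {failures / N:.2e}")
```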
In case you haven't already seen it, this paper on risk modelling uses FTA (fault tree analysis) and Bayesian networks to estimate risks quantitatively.
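The arithmetic behind fault tree quantification is also simple to sketch. Here's a toy top-event calculation; the events, numbers, and gate structure are invented for illustration and are not taken from the paper:

```python
# Toy fault tree quantification -- all names and numbers are invented.
# Basic events are assumed independent.

def and_gate(*ps):
    # All child events must occur: product of probabilities.
    out = 1.0
    for p in ps:
        out *= p
    return out

def or_gate(*ps):
    # At least one child event occurs: 1 - product of complements.
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

# Misuse path: an attacker tries AND evals missed the dangerous
# capability AND deployment monitoring misses the attempt.
p_misuse = and_gate(0.02, 0.10, 0.05)  # = 1.0e-4

# Accident path: a misaligned action occurs AND oversight misses it.
p_accident = and_gate(0.01, 0.03)      # = 3.0e-4

# Top event: harm via either path.
p_top = or_gate(p_misuse, p_accident)
print(f"P(top event) = {p_top:.1e}")   # ~4.0e-4
```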