Thanks for writing this. I continue to be deeply frustrated by the “accident vs. misuse” framing.
In fact, one reason I am writing this comment is that I think this post itself endorses that framing to too great an extent. For instance, I do not think it is appropriate to describe this simply as an accident:
engineers disabled an emergency brake that they worried would cause the car to behave overly cautiously and look worse than competitor vehicles.
I have a hard time imagining that they didn’t realize this would likely make the cars less safe; I would say they made a decision to prioritize ‘looking good’ over safety, perhaps rationalizing it by saying it wouldn’t make much difference and/or that they didn’t have a choice because their livelihoods were at risk (which perhaps they were).
Now that I’ve got the whinging out of the way: thank you again for writing this. I found the distinction between “AI risks with structural causes” and “‘Non-AI’ risks partly caused by AI” quite valuable, and I hope it will be widely adopted.
In fact, one reason I am writing this comment is that I think this post itself endorses that framing to too great an extent.
Probably agree with you there
I do not think it is appropriate to describe this [the Uber crash] simply as an accident
Also agree with that. I wasn’t trying to claim it is simply an accident—there are also structural causes (i.e. bad incentives). As I wrote:
Note that this could also be well-described as an “accident risk” (there was some incompetence on behalf of the engineers, along with the structural causes). [emphasis added]
If I were writing this again, I wouldn’t use the phrase “well-described” (it’s unclear what I actually meant, and it sounds like I’m making a stronger claim than I was). Maybe I’d say “can partly be described as an accident”.
But today, I think this framing mostly just introduces unnecessary/confusing abstraction. The main point in my head now is: when stuff goes wrong, it can be due to malintent, incompetence, or incentives. Often it’s a complicated mixture of all three. Make sure your thinking about AI risk takes that into account.
And sure, you could carve up risks into categories, where you’re like:
if it’s mostly incompetence, call it an accident
if it’s mostly malintent, call it misuse
if it’s mostly incentives, call it structural
But it’s pretty unclear what “mostly” means, and moreover it just feels kind of unnecessary/confusing.
I recently learned that in law, there is a breakdown along these lines:
Intent (~=misuse)
Oblique Intent (i.e. a known side effect)
Recklessness (known chance of side effect)
Negligence (should’ve known chance of side effect)
Accident (couldn’t have been expected to know)
This seems like a good categorization.