> In fact, I am writing this comment because I think this post itself endorses that framing to too great an extent.
Probably agree with you there.
> I do not think it is appropriate to describe this [the Uber crash] simply as an accident
Also agree with that. I wasn’t trying to claim it is simply an accident—there are also structural causes (i.e. bad incentives). As I wrote:
> Note that this could also be well-described as an “accident risk” (there was some incompetence on the part of the engineers, along with the structural causes). [emphasis added]
If I were writing this again, I wouldn’t use the phrase “well-described” (it’s unclear what I actually meant, and it sounds like I’m making a stronger claim than I was). Maybe I’d say “can partly be described as an accident” instead.
But today, I think this mostly just introduces unnecessary/confusing abstraction. The main point in my head now is: when stuff goes wrong, it can be due to malintent, incompetence, or the incentives. Often it’s a complicated mixture of all three. Make sure your thinking about AI risk takes that into account.
And sure, you could carve up risks into categories, where you’re like:
- if it’s mostly incompetence, call it an accident
- if it’s mostly malintent, call it misuse
- if it’s mostly incentives, call it structural
But it’s pretty unclear what “mostly” means, and moreover it just feels kind of unnecessary/confusing.
I recently learned that in law, there is a breakdown along these lines:
- Intent (~= misuse)
- Oblique intent (i.e. a known side effect)
- Recklessness (a known chance of a side effect)
- Negligence (should’ve known there was a chance of a side effect)
- Accident (couldn’t have been expected to know)
This seems like a good categorization.
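If it helps to see why the categories nest the way they do, here’s a minimal sketch of that breakdown as a decision procedure. (The `Culpability` enum, the boolean inputs, and `classify` are hypothetical names of my own for illustration, not legal terms of art beyond the category labels themselves.)

```python
from enum import Enum

class Culpability(Enum):
    INTENT = "intent"                  # ~= misuse
    OBLIQUE_INTENT = "oblique intent"  # harm was a known side effect
    RECKLESSNESS = "recklessness"      # known chance of the side effect
    NEGLIGENCE = "negligence"          # should've known there was a chance
    ACCIDENT = "accident"              # couldn't have been expected to know

def classify(intended_harm: bool,
             knew_harm_was_side_effect: bool,
             knew_chance_of_harm: bool,
             should_have_known_chance: bool) -> Culpability:
    """Walk the categories from most to least culpable."""
    if intended_harm:
        return Culpability.INTENT
    if knew_harm_was_side_effect:
        return Culpability.OBLIQUE_INTENT
    if knew_chance_of_harm:
        return Culpability.RECKLESSNESS
    if should_have_known_chance:
        return Culpability.NEGLIGENCE
    return Culpability.ACCIDENT

# e.g. engineers who knew there was some chance of failure but shipped anyway:
print(classify(False, False, True, False))  # Culpability.RECKLESSNESS
```

Note that the ordering does the work: each question is only asked once the more culpable categories above it have been ruled out, which is what makes the five categories mutually exclusive.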