I’m curious about the somewhat hedged word choices in the second paragraph, like “could” and “perhaps.” The case for great, even extreme, harm from AI misuse seems a lot more straightforward than AI doom. Misuse of new, very powerful technologies has caused at least significant harm (including body counts) in the past with some consistency, so I would assume the pattern would follow with AI as well.
I’m allowing for the possibility that we hit another AI winter, and the new powerful technology just doesn’t arrive in our lifetime. Or that the technology is powerful for some things, but remains too unreliable for use in life-critical situations and is kept out of them.
I think it’s likely that AI will have at least an order of magnitude or two greater body count than it has now, but I don’t know how high it will be.
I once worked on a program with the DoD to help buy up loose MANPADS in Libya. There’s a linear causal relationship between portable air defense systems and harm. Other ordnance has a similar relationship.
The relationship becomes tenuous when we move from the world of atoms to the world of bits. I struggle to see how new software could pose novel risks to life and limb. That doesn’t mean developers of self-driving vehicles or autopilot functions in aircraft should ignore safety in their software design; what I’m suggesting is that those considerations are not novel.
If someone advocates that we treat neural networks unlike any other system in existence today, I would imagine the burden of proof would be on them to justify this new approach.