I once worked on a program with DoD to help buy up loose MANPADS in Libya. There’s a direct causal relationship between portable air defense systems and harm, and other ordnance has a similar relationship.
The relationship becomes tenuous when we move from the world of atoms to bits. I struggle to see how new software could pose novel risks to life and limb. That doesn’t mean developers of self-driving vehicles or autopilot functions in aircraft should ignore safety in their software design; what I’m suggesting is that those considerations are not novel.
If someone advocates that we treat neural networks unlike any other system in existence today, the burden of proof is on them to justify that new approach.