Hi, I’m thinking about a possibly new approach to AI safety. Call it AI monitoring and safe shutdown.
Safe shutdown riffs on the idea of the big red button, but adapts it for use in simpler systems. If there were a big red button, who would get to press it, and how? Answering that involves talking to law enforcement, legal, and policy people. Big red buttons might be useful for non-learning systems: large autonomous drones and self-driving cars are two systems that might suffer from software failings and need to be shut down safely if possible (or precipitously, if the risks of a hard shutdown are less than those of continued operation).
The monitoring side of things asks what kind of registration and monitoring we should have for AIs and autonomous systems. Building on work on aircraft monitoring, what would the needs around autonomous systems be?
Is this a neglected/valuable cause area? If so, I’m at an early stage and could use other people to help out.