“No one really likes safety, they like features” – Stefan Seltz-Axmacher lamented in his 2020 open letter announcing the end of Starsky Robotics. After founding and leading a company obsessed with making driverless trucks safer, reducing the chance of a fatal accident from 1 in a thousand to 1 in a million, he announced that it had to shut down for lack of investor interest. Investors weren’t impressed by the thousandfold improvement in safety that Starsky Robotics achieved. Instead, they preferred the new features brought forth by Starsky’s competitors, such as the ability to change lanes automatically or drive on surface streets. This misaligned incentive structure favors businesses willing to take on risks that are clearly destructive in the world of driverless vehicles, and it can lead to catastrophic consequences as AI systems advance more broadly. If features are appealing but safety isn’t, who will invest in making sure language models are convincing writers yet do not massively deceive the public? Who will ensure that weaponized AI systems react efficiently to threats while also accurately interpreting ambiguous human values like the law of war? As AI capabilities advance, safety will often need to take priority over features. Who will be up to the test?