Good find, thanks! I'm not keen on instructing teams to run bug bounties to the exclusion of other mechanisms, so I'm not particularly enthusiastic about this.
It looks like this would focus on the information security of the AI systems being used (i.e. can this weapon's AI be hacked?) rather than testing for potential vulnerabilities arising from the AI systems themselves.