Regulatory markets for AI safety
Artificial Intelligence
A political think tank to refine regulatory markets for AI safety and push for their adoption in as many countries as possible. Jack Clark, Gillian K. Hadfield: “We propose a new model for regulation to achieve AI safety: global regulatory markets. We first sketch the model in general terms and provide an overview of the costs and benefits of this approach. We then demonstrate how the model might work in practice: responding to the risk of adversarial attacks on AI models employed in commercial drones.”
Establishing such markets worldwide is probably hard and slow. This and similar proposals also focus on safety regulation just before deployment. But that assumes that all development and testing systems are perfectly sandboxed, so that an AGI cannot break out at any stage other than intentional deployment. That assumption seems unwarranted to me. So the regulation would have to go much deeper than testing the final product: it would also have to cover the safety of the development process.
For starters, someone could be contracted to conduct a historical analysis of how long it took for similar forms of regulation to take hold.
Of all the proposals for certification or safety consulting for AI safety, this one seems the most promising to me (though that’s not a high bar). I would feel mildly safer with something like this in place.
It would also be good to offer whistleblower bounties for AI safety and biosafety!