Has anyone considered setting up independent arbitration courts to keep artificial intelligence research labs ethically aligned? Here is the problem and a possible (poorly thought out) solution:
US courts would probably be overly punitive if they were policing AI research. They would likely be unintentionally regressive, limit the growth of the field, and create a situation in which worse actors in other jurisdictions get AGI first. They would probably push AI research offshore. The strong hammer of the law is neither well equipped nor agile enough to thread this small needle. At worst, federal regulation of AI would be dangerous and harmful; at best, it would be clumsy. Any rules governing AI labs would also have to be international and agreeable to all parties.
Private arbitration courts already work well for many corporations, so there is a lot of precedent here. Both parties voluntarily enter into arbitration and agree to be bound by the outcome.
Imagine independent courts established to ensure AI safety measures are taken. Individuals from different labs could peek into one another's labs, "view the Soviet warheads", and verify that everyone is behaving responsibly. If one lab has repeated ethics concerns, the other major AI labs and corporations could agree not to work with it, raise the concern with its local government, or claim a deposit that every participating lab agreed in advance to forfeit after too many ethics violations.
Individual AI labs benefit from sharing research, promoting a healthy community, and working together toward the common good of a safe launch of AGI, if it happens.
There are many out-of-the-box ways to run this. All AI labs could pool money for the courts in return for shared research; Open Phil could establish grants for smaller nonprofits to create courts; grant-making organizations could withhold money for ethics violations. There are many other possible arrangements. There is a rich history of creative ways for individuals to solve collective action and externality problems voluntarily, often more effectively than top-down monopolistic legal systems.
There could also be strong public-image incentives to take part in the courts. It would tarnish Google's image if DeepMind were the only major AI lab that didn't play by the community's rules, and activists could draw attention to this.
Forgive me if this has been thought of before; it's just an idea I had.