I would be surprised if we could do much to slow AI, but I agree that at least a few people should look into this approach.
I think it could be a highly valuable project for someone to form a community around this as long as they were careful not to allow the discussion of extreme options within the group.
A good place for a group like this to start could be looking at previous efforts where new technologies have been regulated successfully.
Examples: CRISPR on human embryos, the Treaty on the Prohibition of Nuclear Weapons, the Biological Weapons Convention, and, from a slightly different angle, the Antarctic Treaty, which seemed to be effective at slowing down something that was competitive between countries while still allowing important scientific research to continue through international cooperation.
Maybe looking at some of these case studies could be a good starting point for considering the similarities/differences to AI? E.g.
Were the above regulations on technology that was already in use, or emerging?
Was public opinion important in pushing for the regulation? Or was the public hardly involved at all?
Who was incentivised to develop this new technology/research? Who was it funded by? (companies, governments, philanthropy, NGOs, users)
Who stood to benefit from the technology being developed? Did people make counterfactual arguments at the time? (e.g. if we develop the tech further, it’s extremely likely that great medicine will be developed, alongside something dangerous / untenable)
Thank you, that’s great. I’d be keen to start a project on this. For whoever is interested, please DM me and we can start brainstorming and form a group etc.
Great suggestion. I would love to see someone diving deeper into these topics.
Thanks for bringing up the idea of case studies.
It would also be useful to study verification, compliance and enforcement of these regulations: “Trust, but verify.”