[Question] Slowing down AI progress?
Whenever I start to feel too anxious about the risks from misaligned AI, I find myself wishing the community would invest in slowing down AI progress to buy time for alignment researchers, and that this plan would actually work without causing other serious problems. What is the EA consensus? Would this be a good solution? A desperate one? How much has the community thought about this, and what are some conclusions or suggestions for next steps? I've only found this recent blog post on the topic.
A few suggestions for next steps:
Support investigative journalism into AI progress and safety. Something similar to this, but for AI: "Bankman-Fried Family Donates $5 Million to ProPublica," a grant supporting reporting on biosecurity and pandemic preparedness.
Support non-governmental organizations that campaign for international laws and treaties regulating AI. The regulation of autonomous weapons might be a good starting point. See the Campaign to Stop Killer Robots and Lethal Autonomous Weapons.
Support national laws and agencies regulating advanced AI. For the US, see this bipartisan bill and this proposal to establish a federal agency.
Relevant discussion from a couple of days ago: https://astralcodexten.substack.com/p/why-not-slow-ai-progress
Has anything else been written on this topic?
I would be surprised if we could do much to slow AI, but I agree that at least a few people should look into this approach.
I think forming a community around this could be a highly valuable project, as long as the organizers were careful not to allow discussion of extreme options within the group.
A good place for a group like this to start could be looking at previous efforts where new technologies have been regulated successfully.
Examples: CRISPR on human embryos, the Treaty on the Prohibition of Nuclear Weapons, and the Biological Weapons Convention. From a slightly different angle, the Antarctic Treaty seems to have been effective at slowing down something competitive between countries while still allowing important scientific research to continue through international cooperation.
Maybe looking at some of these case studies could be a good starting point for considering the similarities and differences with AI? E.g.
Were the above regulations on technology that was already in use, or emerging?
Was public opinion important in pushing for the regulation? Or was the public hardly involved at all?
Who was incentivised to develop this new technology/research? Who was it funded by? (companies, governments, philanthropy, NGOs, users)
Who stood to benefit from the technology being developed? Did people make counterfactual arguments at the time? (e.g. if we develop the tech further, it's extremely likely that great medicine will be developed alongside something dangerous or untenable)
Great suggestion. I would love to see someone diving deeper into these topics.
Thanks for bringing up the idea of case studies.
It would also be useful to study verification, compliance and enforcement of these regulations: “Trust, but verify.”
Thank you, that’s great. I’d be keen to start a project on this. If you’re interested, please DM me and we can start brainstorming and form a group.