Supporting global coordination in AI development: Why and how to contribute to international AI standards
(Link post for the FHI GovAI technical report Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development)
You likely haven’t thought about international standards much, if at all. Some may recognize the leading international standards bodies ISO and IEEE from camera settings and WiFi specs. There are tens of thousands more standards, covering everything from cybersecurity practices to environmental protection to artificial intelligence. Even among those who work on them, standards have a reputation for being notoriously dull. But do not fall asleep: standards can be an important tool to support the beneficial development of advanced AI.
There are serious concerns that competitive dynamics in AI development may lead to underinvestment in safety and reduced regulatory oversight. Governments around the world are rushing to release national AI strategies in order to accelerate development within their borders. Individual labs may face incentives to race in the development of advanced AI systems. International standards can help mitigate these dangerous possibilities. Because they are internationally developed and globally disseminated, standards can support trust among nations and developers. Standards can promote concrete beneficial practices, including a global focus on AI safety. Standards can also encourage beneficial partial openness among labs, prompting them to share information on dangers and mitigation strategies.
International standards bodies are actively developing standards for AI today, but the leading AI labs focused on advanced capabilities are notably absent from these standardization efforts. Greater focus is needed on standards with long-term beneficial impacts. Furthermore, mechanisms already exist to disseminate and enforce these nominally voluntary standards around the world, but they need further research, planning, and testing before they can be used successfully in practice. Engagement from those in the EA community with relevant expertise can support these developments.
With the Center for the Governance of AI at Oxford, I have published a white paper on AI standards that makes the case for engaging in international standards development, outlines ongoing efforts, and recommends specific courses of action to push beneficial standards efforts forward. You can find the paper on the Future of Humanity Institute’s website here.
Are there historical cases where you think international technical standards have been especially effective in preventing harm by being both cost-effective and widely followed? (That is, something like “saving a good number of DALYs compared to how much it cost companies to follow them”.)
I’m familiar with international prohibitions against certain types of weaponry, which seem to have been fairly effective, but nothing comes to mind on the consumer side (though I’m sure examples do exist).
I have worked on developing energy efficiency standards. Sometimes they come from international organizations and are adopted by individual countries, but sometimes individual countries develop standards that then go international (I gave an example of this near the end of my 80,000 Hours podcast episode). This has happened quite a bit with the US Energy Star program. I agree with the original poster that standards development can indeed be dull, but I think this is an important effort.
This is great! I think it could be worth emphasizing more that you’re essentially making a linkpost for the FHI / GovAI technical report Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development.
Thanks, Jonas. I’ve edited to make the link clear from the top.