Supporting global coordination in AI development: Why and how to contribute to international AI standards
(Link post for the FHI GovAI technical report Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development)
You likely haven’t thought about international standards much, if at all. Some may recognize leading international standards bodies ISO and IEEE from camera settings and WiFi specs. There are tens of thousands more standards, on everything from cybersecurity practices to environmental protection to artificial intelligence. Standards have a reputation for being notoriously dull, even among those who work on them. But do not fall asleep—standards can be an important tool to support the beneficial development of advanced AI.
There are serious concerns that competitive dynamics in AI development may lead to underinvestment in safety and reduced regulatory oversight. Governments around the world are rushing to release national AI strategies in order to accelerate developments within their borders. Individual labs may face incentives to race in the development of advanced AI systems. International standards can help mitigate these dangerous possibilities. Internationally developed and globally disseminated, standards can support trust among nations and developers. Standards can promote concrete beneficial practices, including a global focus on AI safety. Standards can also encourage beneficial partial openness among labs, in which labs share information on dangers and mitigation strategies.
International standards bodies are actively working on standards development for AI today. But leading AI labs with a focus on advanced capabilities are notably absent from these standardization efforts. Greater focus is needed on standards with long-term beneficial impacts. Furthermore, mechanisms already exist to disseminate and enforce these nominally voluntary standards around the world, but they need further research, planning, and testing before they can be used successfully in practice. Engagement from those in the EA community with relevant expertise can support these developments.
With the Center for the Governance of AI at Oxford, I have published a white paper on AI standards that makes the case for engaging in international standards, outlines ongoing efforts, and recommends specific courses of action to push beneficial standards efforts forward. You can find the paper on the Future of Humanity Institute’s website here.