My Feedback to the UN Advisory Body on AI

Background and Some Comments:

  • The UN High-Level Advisory Body on AI released its interim report, which was open for public feedback.

  • My general approach was to target points that might have been overlooked by other organizations in the space, and to cover a variety of ideas rather than homing in on one (which seemed most apt given the 3,000-character limit on each section).

  • The report covers many aspects of AI governance beyond safety, which is my focus; I chose to set those other aspects aside when brainstorming.

  • Do note that there were 2-3 other sections I didn’t respond to, and that I used AI tools to make my ideas as concise as possible because of the hard character limits. In hindsight, a couple of my responses could have benefited from pointing to comparable models where a similar approach has already succeeded.

  • I am sharing this with the intention of getting general feedback and perhaps inspiring others to also respond to Public Voice Opportunities.

Opportunities and Enablers:

1. Global Compute Supply Chain and Algorithmic Oversight

Short Summary:

  • Monitoring the global compute supply chain is essential for ensuring equitable access to computational resources necessary for AGI development.

  • Simultaneously, oversight of algorithmic advancements is critical, as these can significantly boost AGI capabilities and potentially alter the balance of technological power.

Recommended Action:

  • Establish a dedicated global authority to oversee the compute supply chain and algorithmic advancements. This body would track developments in computational resources and algorithmic efficiency, regulate access to prevent monopolization, and ensure that breakthroughs are ethically aligned and equitably distributed. Its mandate would include developing guidelines to manage the proliferation of advanced algorithms that could accelerate AGI capabilities, ensuring a balanced and responsible approach to technological progress. (An illustrative sketch of the kind of record such a body might track follows below.)
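Purely as an illustration of what such a body could track in practice, below is a minimal, hypothetical sketch of a registry record for large compute clusters. Every field name and the reporting threshold are my own assumptions for illustration, not an existing standard or anything proposed in the report.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative only: a hypothetical record a global oversight body might keep
# for each large compute cluster it tracks. Field names and the threshold
# below are assumptions, not an existing standard.
@dataclass
class ComputeRegistryEntry:
    operator: str                        # organization operating the cluster
    jurisdiction: str                    # country or region of operation
    accelerator_count: int               # number of AI accelerators (GPUs/TPUs)
    peak_flops: float                    # aggregate peak throughput, FLOP/s
    declared_use: str                    # e.g. "frontier model training"
    export_license_ids: list[str] = field(default_factory=list)
    last_audit: Optional[date] = None    # most recent independent audit

    def exceeds_reporting_threshold(self, threshold_flops: float = 1e20) -> bool:
        """Flag entries above a (hypothetical) compute reporting threshold."""
        return self.peak_flops >= threshold_flops
```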

2. Global Cooperative Framework for AGI Development and Rapid Response Dialogue

Short Summary:

  • A global cooperative framework that facilitates rapid dialogue and collaboration among international stakeholders is vital for addressing the challenges and leveraging the opportunities presented by AGI advancements. This approach emphasizes fostering international cooperation, sharing research and innovations, and establishing mechanisms to fast-track responses to emerging AGI technologies and incidents.

Recommended Action:

  • Implement a Global Cooperative Framework for AGI Development that encourages collaborative innovation and the sharing of research findings and best practices.

  • Establish protocols for accelerated international dialogue and cooperation on AGI governance, including the development of rapid response mechanisms to AGI advancements and incidents. This framework should aim to reduce geopolitical tensions, promote a unified approach to AGI governance, and ensure a timely and coordinated global response to the multifaceted challenges posed by AGI technologies.

Institutional Functions that an international governance regime for AI should carry out:

1. Mandatory AGI System Deactivation Protocols

Short Summary:

  • The inclusion of reliable “off-switch” protocols in AGI systems is fundamental for enabling immediate deactivation when risks emerge, ensuring human control over AGI under all circumstances.

Recommended Action:

  • Introduce legislation requiring AGI systems to incorporate independently verified, fail-safe deactivation mechanisms as a standard safety feature.

2. Independent AGI Auditing Framework

Short Summary:

  • Independent audits of AGI systems for ethical, safety, and compliance standards are necessary to ensure accountability and adherence to global norms.

Recommended Action:

  • Implement a global framework mandating the regular auditing of AGI systems by accredited bodies, focusing on ethical adherence, safety standards, and compliance with international norms.

3. AGI Deceptive Alignment Certification

Short Summary:

  • Addressing deceptive alignment in AGI systems through rigorous testing and certification is essential to verify genuine alignment with ethical guidelines and societal values.

Recommended Action:

  • Create a certification process for AGI systems, including stringent testing scenarios to evaluate ethical alignment, with certification required for deployment.

4. High-risk AGI Application Licensing

Short Summary:

  • A licensing regime for high-risk AGI applications will ensure thorough scrutiny based on safety, security, and ethical assessments before deployment.

Recommended Action:

  • Enact a global licensing regime that subjects high-risk AGI applications to comprehensive evaluations, ensuring they meet established global standards for safety and ethics.

5. Global AGI Incident Reporting System

Short Summary:

  • A system for global reporting of AGI incidents, including near-misses, is essential for shared learning and proactive risk management.

Recommended Action:

  • Create a system for the global reporting of AGI incidents to support early identification of risks and coordinate international responses to AGI-related incidents (see the illustrative sketch below).
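To illustrate the kind of structured data that would make shared learning and coordinated responses possible, here is a minimal, hypothetical sketch of an incident-report schema. The severity categories and field names are my own assumptions for illustration, not an existing standard.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

# Illustrative only: a hypothetical schema for reports submitted to a global
# AGI incident registry. Categories and fields are assumptions meant to show
# the kind of structure that supports shared learning across jurisdictions.
class Severity(Enum):
    NEAR_MISS = "near_miss"
    MINOR_HARM = "minor_harm"
    MAJOR_HARM = "major_harm"
    SYSTEMIC = "systemic"

@dataclass
class IncidentReport:
    reported_at: datetime
    reporting_entity: str          # developer, deployer, or national authority
    system_description: str        # capability class rather than vendor identity
    severity: Severity
    summary: str                   # what happened, in plain language
    contributing_factors: list[str] = field(default_factory=list)
    mitigations_taken: list[str] = field(default_factory=list)
    cross_border_impact: bool = False  # would trigger a coordinated international response
```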

Any other feedback on the Interim Report:

1. The Problem of Using Interoperability in Human Rights Protections

Issue:

The push for AI systems to be interoperable—able to work across various jurisdictions and organizations—risks prioritizing technical compatibility over human rights protections.

Consequence:

Stakeholders might adopt the lowest level of protection, creating a “race to the bottom” scenario where fundamental human rights are undermined.

Solution:

Establish international norms setting high standards for human rights protections to ensure interoperability does not compromise fundamental rights.

2. Acknowledge AI as a Potential Existential Risk

Acknowledgement: Recognizing AI as a potential existential risk is crucial; it highlights concerns that AI advancements could surpass human control and pose threats to humanity’s survival.

Policy Response: Implement stringent safety standards, continuously monitor AI developments, and foster global consensus on limiting hazardous AI technologies.

Objective: Prevent scenarios where AI could cause irreversible harm, promoting a proactive and precautionary policy-making approach.

3. Recognizing the Role of Youth

Future Stakeholders: Youth represent not just the inheritors of tomorrow but active stakeholders in shaping the future landscape of AI. Their perspectives, innovative ideas, and unique insights into digital technology make them invaluable to the process of AI policy development.

Direct Impact: As the generation that will live with the long-term outcomes and consequences of today’s AI policies, young people have a vested interest in ensuring these frameworks are robust, equitable, and aligned with future societal needs.

Proposed Approach for Inclusion:

  • Policy Co-creation: Engage youth in AI policy development through forums, consultations, and think tanks that specifically aim to gather their insights and proposals.

  • Capacity Building: Invest in educational and training programs focused on AI ethics, governance, and policy to equip young people with the necessary knowledge to contribute effectively.

  • Global Representation: Ensure diverse representation from young individuals across different regions, cultures, and socioeconomic backgrounds to capture a wide range of perspectives and solutions.