After reviewing the report, my comments are as follows:
Opportunities and Enablers:
Democratizing access to AI capabilities and infrastructure: expand access to AI tools, datasets, compute resources, and educational opportunities, especially for underrepresented groups and regions.
Provide incentives for AI applications focused on social good, sustainability, and inclusive growth; the UN could highlight exemplary AI projects aligned with the SDGs.
Risks and Challenges:
Include intrinsic safety as one of the principles in AI design and risk assessment:
Intrinsic safety should be integrated into the design, development, and deployment of AI systems from the ground up to mitigate risks. This means building in safeguards, constraints, and fail-safe mechanisms that prevent AI systems from causing unintended harm, even in cases of failure or misuse.
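To make the principle concrete, the safeguards described above can be sketched as a simple design pattern: the system's output path is wrapped in hard constraints and a fail-safe default, so that a failure or out-of-envelope output degrades to a safe state rather than an action. This is a minimal illustrative sketch only; all names (`SafeActuator`, `check_bounds`, `SAFE_DEFAULT`) are hypothetical and not drawn from the report.

```python
# Minimal sketch of "intrinsic safety" as a design pattern: constraints and
# a fail-safe default are built into the output path itself, so the system
# cannot act outside its permitted envelope even when the model misbehaves.

SAFE_DEFAULT = 0.0  # fail-safe output: neutral / no action

def check_bounds(value, low=-1.0, high=1.0):
    """Hard constraint: reject any command outside the permitted envelope."""
    return low <= value <= high

class SafeActuator:
    def __init__(self, model):
        self._model = model  # any callable producing a control value

    def act(self, observation):
        try:
            command = self._model(observation)
        except Exception:
            # Fail-safe mechanism: on any internal failure, fall back to
            # the safe default instead of propagating the error.
            return SAFE_DEFAULT
        if not isinstance(command, float) or not check_bounds(command):
            # Built-in safeguard: out-of-envelope commands are blocked.
            return SAFE_DEFAULT
        return command

# Even a misbehaving model cannot drive the actuator outside bounds.
faulty_model = lambda obs: 42.0          # violates the constraint
actuator = SafeActuator(faulty_model)
print(actuator.act(None))                # 0.0 (safe default)
```

The point of the pattern is that safety does not depend on the model behaving correctly: the constraint and fallback sit outside the model and apply unconditionally.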
Guiding Principles to guide the formation of new global governance institutions for AI:
Enforcement Mechanisms: clearly define the authority and mechanisms by which these institutions can enforce their decisions and policies.
Institutional Functions that an international governance regime for AI should carry out:
Drawing on the FDA’s Adverse Event Reporting System (AERS), a similar system could be established as a tool for a global AI safety monitoring program.
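To illustrate what an AERS-style system for AI might collect, the sketch below shows a minimal incident record and registry supporting a simple safety-signal query. The field names and severity scale are illustrative assumptions, not a proposed standard or anything specified in the report.

```python
# Minimal sketch of an AERS-style AI adverse event record and registry.
# Fields and the 1-5 severity scale are assumptions for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIAdverseEvent:
    system_name: str   # the AI system involved
    reported_on: date  # date the event was reported
    severity: int      # assumed scale: 1 (minor) .. 5 (critical)
    description: str   # free-text account of the harm or near-miss

class IncidentRegistry:
    """Collects reports and supports simple safety-signal queries."""
    def __init__(self):
        self._events = []

    def report(self, event: AIAdverseEvent):
        self._events.append(event)

    def critical_events(self, threshold: int = 4):
        """Surface high-severity reports for regulator review."""
        return [e for e in self._events if e.severity >= threshold]

# Usage: reports accumulate; regulators query for high-severity signals.
registry = IncidentRegistry()
registry.report(AIAdverseEvent("chatbot-x", date(2024, 1, 5), 2,
                               "minor misinformation"))
registry.report(AIAdverseEvent("med-triage-y", date(2024, 2, 1), 5,
                               "unsafe dosage advice"))
print(len(registry.critical_events()))  # 1
```

As with the FDA system, the value lies less in any single report than in the aggregate: a shared registry lets regulators spot recurring failure patterns across deployments.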