Designed for whom? A question AI governance frameworks have yet to answer
The people most likely to be harmed by ungoverned AI are among the least represented in the rooms where governance decisions are being made.
My background is in conflict management and humanitarian research. My research on humanitarian recognition of and response to conflict-related sexual violence against men and boys documented how international protection frameworks systematically failed to reach certain populations despite being designed with good intentions. This failure did not occur because the need was absent, but because the frameworks were never designed around those populations in the first place.
The very pattern I documented in humanitarian recognition and response settings across sub-Saharan Africa now seems to be repeating in AI governance. Existing frameworks, such as the EU AI Act and the recommendations of the UN AI advisory body, were finalised largely by people from a narrow set of geographic and cultural contexts. Meanwhile, the communities most exposed to the potential harms of AI deployment, such as those in conflict-affected settings and across the Global South, are considered only as an afterthought once the frameworks are complete, if they are considered at all.
This is not a new problem. It is a familiar one in a new domain. And the lesson from humanitarian recognition and response is clear: passive universalism does not work. However comprehensive it appears on paper, a framework that is not intentionally designed for a specific population will not reach or protect that population.
Drawing on this argument, I am currently developing a policy brief with specific recommendations for bodies such as the UN AI advisory body and for humanitarian organisations that are increasingly deploying AI tools in field settings. The argument is grounded in my primary research as well as in the growing body of AI governance literature that acknowledges this gap without yet addressing it structurally.
Two questions for those working in this space: Is this gap being addressed somewhere I am not seeing? And which framing would be most useful to practitioners and policymakers: one that leads with the humanitarian angle, or one that leads with the AI governance gaps directly?