I’m not sure I agree with the premise of this argument: that the concept of AI for good is faulty, because it can’t solve all the problems.
I don’t think “AI for good” claims to solve all the problems. Absolutely let’s take issue with the idea that AI is going to resolve everything, but that doesn’t mean it can’t help with anything.
But I’m not worried that AI won’t touch the fundamental problems of “social structures, economic pressures, and unequal opportunities”. I’m worried that it already is, and is moving the dial in the wrong direction. Automation moves wealth and power away from individuals and towards companies. Concentration of wealth and power in the hands of an ever-smaller number of individuals and companies is exactly what drives economic and social problems, and inequality.
Unless AI is governed and managed appropriately, it’s going to be part of the problem, more than part of the solution.
I think this op-ed sets out some of these issues really well: https://nathanlawkc.substack.com/p/its-time-to-build-a-democracy-ai
You’ve absolutely nailed it. Thank you for this incredibly insightful comment.
I want to wholeheartedly agree with your core point: my deepest fear isn’t just that “AI for Good” won’t solve these fundamental problems, but that mainstream AI development, as it currently stands, is actively exacerbating them. You’ve perfectly articulated the mechanism behind this: the automation-driven concentration of wealth and power.
To clarify the premise of my original post: I don’t believe the concept of “AI for Good” is inherently flawed, nor is my critique that “AI for Good is deficient because it can’t solve every problem.” My critique is aimed at the narrative’s focus. I am concerned that the “AI for Good” movement often directs our attention and resources towards more palatable, surface-level issues. Meanwhile, the far more powerful, fundamental engine of commercial AI development relentlessly fuels the very structural inequalities we claim to be fighting.
This is exactly what I see in some of the projects I’ve encountered. For instance:
An AI project that assists agriculture by solving pest and disease problems is a benefit to humanity. In practice, however, it doesn’t necessarily benefit the small farmer. Large corporations have natural advantages of scale, while individual farmers have limited resources, so agricultural AI might not raise farmers’ incomes and could instead accelerate land consolidation by large enterprises.
Another project develops play-and-learn hardware for children from impoverished families, with the aim of giving them better educational resources. This is certainly helpful to some extent, but such hardware is often unaffordable for the very families it aims to help, who typically must prioritize immediate subsistence over long-term educational investments.
Medical AI developed for doctors in remote areas might never reach them. Furthermore, such AI doesn’t necessarily lower healthcare costs for the average person and could instead risk becoming a tool for profit and exploitation by certain institutions.
Your point and mine are two sides of the same coin, and together they paint a grim picture:
My argument is that the “good” side of AI often has a focus that is too narrow, neglecting the deepest forms of suffering.
Your argument is that the dominant, commercial side of AI is actively making the root causes of this suffering worse.
This leads to a terrifying conclusion: our “AI for Good” efforts, however well-intentioned, risk becoming a rounding error—a fig leaf hiding a much larger, systemic trend towards greater inequality.
This brings me to a follow-up question that I’d love to hear your (and others’) thoughts on:
Given this reality, what is the most effective role for the “AI for Good” community? Should we continue to focus on niche applications? Or should our primary focus shift towards advocacy, governance, and creating “counter-power” AI systems—tools designed specifically to challenge the concentration of wealth and power you described? How do we stop applying bandages and start treating the disease itself?
Yes, we are in total agreement. https://gradual-disempowerment.ai/ is a scary and relevant description of the concentration of wealth and power.
I think it’s about the framing of AI for good. The “AI for good” narrative mostly asks “what can AI do?”, and as you say, this just leads to sticking plasters, and at worst to technical people designing solutions to problems they don’t really understand.
I think the question in AI for good instead needs to be “how do we do AI?”. This means looking at how the public are involved in the development of AI, how people can have a stake, and how the public, rather than corporations, can oversee and benefit from AI.
https://publicai.network/ are making headway on some of this thinking.
Personally, I don’t think there’s a tension between niche applications of AI and governance or counter-power AI systems. I think the answer is to create the niche applications with the public, and in ways that empower the public. For example, how can the public have greater control over their data and share in the profits from its use in AI?