I think it’s about the framing of AI for good. The “AI for good” narrative mostly asks “what can AI do?”, and as you say, this just leads to sticking plasters—and at worst, it’s technical people designing solutions to problems they don’t really understand.
I think the question in AI for good instead needs to be “How do we do AI?”. This means looking at how the public are involved in the development of AI, how people can have a stake, and how the public—rather than corporations—can oversee and benefit from AI.
Personally, I don’t think there’s a tension between niche applications of AI and governance/counter-power AI systems. I think the answer is to create the niche applications with the public, and in ways that empower the public. For example, how can the public have greater control over their data and share in the profits from its use in AI?
Yes, we are in total agreement. https://gradual-disempowerment.ai/ is a scary and relevant description of the concentration of wealth and power.
https://publicai.network/ are making headway on some of this thinking.