In the paper she co-authored, Gebru makes a good case that AI technologies already in use are harming marginalized communities and are likely to harm them further. In this Wired article, however, Gebru associates EA with the harms caused by existing and likely future AI technologies. She claims that because major investors in AI are or were involved in funding AI safety research, that research has been co-opted by the investors' interests. She identifies those interests with the investors' narrow financial agendas, which show no regard for the marginalized communities likely to be impacted by the use of current AI technologies.
I think it's worth exploring to what extent her actual agenda, one targeting the environmental, social, and economic harms and exploitation that AI research involves right now, could be accomplished, regardless of her error in believing that EA is co-opted by financial interests pushing for increasingly harmful AI technologies.
I’m thinking about how to solve problems like:
the carbon footprint of the hardware and software used to train and deploy AI, and its disproportionate near-term impact on marginalized communities.
the social harms of deployable, tunable LLMs used, for example, as propaganda generators.
the social harms of now open-sourced, limitation-free image generators (and upcoming video generators), such as the Washington Post article linked from Gebru's piece discusses.
the exploitation of labor used to produce AI datasets.
technological unemployment caused by AI technology.
the concentration of power in organizations deploying AGI technology.
Fundamentally, an ambiguous pathway toward AI safety is one shared by a path toward an AI utopia and a path toward an AI dystopia. The best way to thoroughly disprove Gebru's core belief, that EA is co-opted by money-hungry, hegemonic Silicon Valley billionaires, would be to focus on the substantive AI impact concerns she raises.
The suggestions outlined in her paper seem appropriate to me. If LLMs were removed from public access and kept as R&D experiments only, I would not miss them. If automatic speech recognition (ASR) were limited to uses such as caption generation, I would feel good about it. But what do you think?