I think it is a bad idea to set up a database of negative articles on EA, or to spend too much time worrying about them:
It would be an attention sink to spend time tediously rebutting this stuff—effective altruists’ time is valuable, and a classic failure mode of online movements is to become “too online” until you are a bunch of internet atheists compiling databases of arguments and fallacies with which to do battle against an equally dedicated army of internet creationists.
EA is in some ways essentially an elite movement—we’re not trying to be as viral as we can possibly be (if we were, our main mode of communication wouldn’t be asking people to read long dry nonfiction essays on the Forum!) to appeal to the widest possible audience. Instead we’re trying to be as insightful and correct as we can possibly be, in order to appeal to smart people who respect the truth. These smart, careful people are exactly the kind of people who are least likely to be swayed by obviously dumb, bad-faith hit-pieces that deploy the language of wokeism to make nonsensical attacks in random directions.
By contrast, setting up an organized database of “misinformation” and trying to dispatch internet footsoldiers to crusade against our enemies would likely be a huge turn-off to those smart, careful people. When I think of a group that does this stuff, I think “scientology” or maybe “oppressive governments” or “fringe political movements like antifa” or other paranoid and crazy organizations/individuals.
This makes sense. Definitely a strong argument for a closed or limited-access database, or no database at all.
It would be an attention sink to spend time tediously rebutting this stuff—effective altruists’ time is valuable
I think this is definitely true for most people but not all. I’ve met lots of people affiliated with EA who have mundane software engineering jobs and are mainly interested in contributing casually every now and then.
a classic failure mode of online movements is to become “too online” until you are a bunch of internet atheists compiling databases of arguments and fallacies with which to do battle against an equally dedicated army of internet creationists
Strong agree on this one, although I think the justifications are only the tip of the iceberg. The risks are much greater IMO, especially related to social media, but it involves information I’m not willing to talk about here on a public forum.
These smart, careful people are exactly the kind of people who are least likely to be swayed by obviously dumb, bad-faith hit-pieces that deploy the language of wokeism to make nonsensical attacks in random directions.
I somewhat disagree on this one. I used to be a strong advocate for actively preventing large numbers of woke nonsensical people from dominating EA and trying to turn it into one of Bernie Sanders’s cause areas. But now I think that mostly, people start out obsessed with the language of nonsensical wokeism and gradually choose to become smart, careful people after meeting large numbers of other people who are already careful and smart. Everyone has to start somewhere, and some people have better starting points than others.
trying to dispatch internet footsoldiers to crusade against our enemies would likely be a huge turn-off
I think this is pretty easy to prevent. Just put a disclaimer at the top of the database telling people not to do that. You don’t even need to make it limited-access, although that would help.
The only reason journalists use misinformation to target EA is that they know there’s absolutely nothing stopping them, like a bully targeting the smallest kids on a playground. It’s basically open season. Increasing awareness (or even accountability) makes sense here.
In the paper she co-authored, Gebru makes a good case that real AI technologies deployed today are harming marginalized communities and have the potential to harm them further. However, in this Wired article, Gebru associates EA with the harms caused by existing and likely future AI technologies. She claims that because major investors in AI are or were involved in funding AI safety research, the research itself is co-opted by those investors’ interests. Gebru identifies those interests with the investors’ narrow financial agendas, which show no regard for the marginalized communities likely to be affected by the use of current AI technologies.
I think it’s worth exploring to what extent her actual agenda, which targets the environmental, social, and economic harms and exploitation involved in AI research today, could be accomplished, regardless of her mistaken belief that EA is co-opted by financial interests pushing for increasingly harmful AI technologies.
I’m thinking about how to solve problems like:
carbon footprint of AI training and deployment hardware and software, and its disproportionate near-term impacts on marginalized communities (see the back-of-envelope sketch after this list).
social harms of deployable and tunable LLMs used, for example, as propaganda generators.
social harms of now open-sourced, limitation-free image generators (and upcoming video generators), such as those discussed in the WAPO article linked from Gebru’s piece.
exploitation of labor to produce AI datasets.
technological unemployment caused by AI technology.
concentration of power with organizations deploying AGI technology.
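On the first item, here is a minimal back-of-envelope sketch of how one might estimate the carbon footprint of a single large training run. Every figure in it (GPU count, run length, power draw, PUE, grid carbon intensity) is an illustrative assumption chosen for the example, not a measured value:

```python
# Back-of-envelope carbon estimate for a hypothetical AI training run.
# All input figures below are illustrative assumptions, not measured values.

gpu_count = 1_000        # assumed number of accelerators
training_days = 30       # assumed duration of the run
watts_per_gpu = 400      # assumed average draw per accelerator, in watts
pue = 1.2                # assumed datacenter power usage effectiveness
kg_co2_per_kwh = 0.4     # assumed grid carbon intensity (kg CO2 per kWh)

hours = training_days * 24
energy_kwh = gpu_count * (watts_per_gpu / 1000) * hours * pue
emissions_tonnes = energy_kwh * kg_co2_per_kwh / 1000

print(f"Energy used: {energy_kwh:,.0f} kWh")                 # ~345,600 kWh
print(f"Estimated emissions: {emissions_tonnes:,.0f} t CO2") # ~138 t CO2
```

Even with these rough numbers, the point is that the estimate scales linearly with each factor, so the carbon intensity of the grid matters as much as the hardware itself.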
Fundamentally, an ambiguous pathway toward AI safety is one shared by both the path toward an AI utopia and the path toward an AI dystopia. The best way to thoroughly disprove Gebru’s core belief, that EA is co-opted by Silicon Valley money-hungry hegemonic billionaires, would be to focus on the substantive AI impact concerns that she raises.
The suggestions outlined in her paper are appropriate, in my view. If LLMs were removed from public access and kept as R&D experiments only, I would not miss them. If ASR were limited to uses such as caption generation, I would feel good about it. But what do you think?