Curious what people on this site think about Timnit Gebru’s (a person I genuinely respect) criticism of EA as a “white savior, colonial, incredibly well funded (of course) ideology, driving so much of “AGI” discourse in “AI”, with strands of it intersecting with literal eugenics”?
It would be nice to have some specific examples of these things. This particular criticism, in my view, is just an attempt to associate EA with Bad Things so that people also think of EA as a Bad Thing. There are no actual arguments in this statement, and no specific claims to oppose. (Except that EA is incredibly well funded, which is true, but also not inherently good or bad, and therefore does not need to be defended.)

If I'm being charitable: many arguments are like this, especially when you only have 140 characters. This is a bad argument, but it's far from a uniquely bad one. The burden of proof is on Timnit to provide evidence for these accusations, but they may have done so somewhere else, just not in this tweet. (I assume it's a tweet because of its length and, let's face it, its dismissiveness. Twitter is known for such things.)

If I'm not being charitable: the point of a vague argument like this is that it places the burden of proof on the accused. The defense being asked for is for EAs to present specific examples of actions proving they aren't "colonial" or "white savior"-esque. This is a losing game from the start, because the terms are vague enough that you can always argue that a given action proves nothing or isn't good enough, and that someone could always be doing more to decolonise their thoughts and actions. The only winning move is not to play.

Which interpretation is correct? I don't know enough about Timnit Gebru to say. I'd say that if Timnit is known for presenting nuanced, concrete arguments in other mediums or on other topics, this argument is probably a casualty of Twitter, and the charitable reading is the appropriate one.
I have a comment on specifically the part about AGI discourse.
Imagine there's a community that seems unusually concerned about the risk that some asteroid will soon hit the Earth and kill everyone: you, your parents, your friends, your fellow citizens, the people of every other country, and so on.
If I became aware of that community, the question I’d be most interested in answering is “are they correct about the asteroid?”
It wouldn’t occur to me to make confident statements about how the community is biased or ill-intentioned in various ways before having assessed their object-level arguments.
Maybe if I could persuasively refute the arguments, and doing so were so easy that I couldn't help but wonder how anyone could believe something so deeply flawed, maybe then I would speculate about what might be going wrong in people's thinking. Even so, I'd still open my critique with a thorough refutation of their arguments.
I have several acquaintances from developing countries who like EA in part because it prioritizes global poverty, though of course this view isn't shared by everyone from developing countries.
It's very unclear whether she's read any of the work besides the torrid Torres pieces.
In this world, there isn't much that hasn't been touched by white-savior colonialism. Perfectionism and productivity are good in moderation; when those things are taken to harmful extremes, you fall into narcissism, racism, and oppression. In one of the first chapters of Doing Good Better, they talk about how helpful aid was in countries in Africa, but nowhere mention the reasons it wasn't helpful, which goes against effectiveness.
I like the philosophy of effectiveness, but there needs to be a better way for the community to determine what’s effective.
It also depends on expertise. Sometimes it's just a matter of staying in your lane. Someone who isn't aware of social issues isn't going to be a champion for them, just as I won't be for AI. I'm trying to learn more about it, but I won't be as good at it as I am at my main interests.