The thread arose from this related conversation about sentient AIs being compared to people with disabilities (where everyone agreed that such analogies are harmful).
This response feels like it is making unnecessary concessions in an attempt to appease someone who will probably never be satisfied. For example, Habiba says
Of course we should be working on harms of tech right now also!
But this is not at all obvious! There are strong arguments that the contemporary ‘harms’ of tech are vastly overstated, and even if they were not, it seems unlikely that we should be working on them, given their vastly lower scope/neglectedness/tractability than the other issues EAs focus on. I would be very surprised if any credible cost-benefit analysis suggested that short-term tech harms were a better cause area than third-world poverty, factory farming, and existential risks.
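To make the sort of comparison I have in mind concrete, here is a toy sketch of an ITN-style prioritisation. Every number below is a placeholder I invented for illustration, not a real estimate:

```python
# Toy ITN-style cause comparison. Every number is an illustrative
# placeholder, not a real estimate.
causes = {
    # cause: (scale, neglectedness, tractability), each scored 0-10
    "third-world poverty":   (9, 6, 8),
    "factory farming":       (8, 8, 6),
    "existential risk":      (10, 9, 4),
    "short-term tech harms": (4, 2, 5),
}

def priority(scores):
    scale, neglectedness, tractability = scores
    # Simple multiplicative model: a low score on any one factor
    # drags the whole cause down.
    return scale * neglectedness * tractability

for cause in sorted(causes, key=lambda c: priority(causes[c]), reverse=True):
    print(f"{cause}: {priority(causes[cause])}")
```

On any remotely similar inputs, short-term tech harms come out far behind the other three, which is the point of the CBA remark above.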
Similarly, Habiba contrasts
cold “number crunching”
with
caring, thoughtful folks who truly care about helping others
But these by no means need to be in conflict. I think any reasonable evaluation of EAs will find many who are quite unemotional and who do indeed do a lot of number crunching—the latter is, after all, a core part of cost-effectiveness estimates, and hence of the EA movement. But that doesn’t mean they don’t “truly care”—it’s that number-crunching is the best way of executing on that caring.
Despite what seem to me like large concessions, I doubt this sort of approach is ever going to convince people like Gebru. If you look at her argumentative approach, here and elsewhere, it makes heavy use of rhetorical and emotional appeals and is generally skeptical of the role of impartial reason. Her epistemic approach seems incompatible with the one the EA movement is trying to promote. For example, it is natural for EAs to want to make comparisons between things—e.g. to say “depression is worse than malaria, but it’s cheaper to fix malaria”—in a way that seems profane to such people.

I’m not sure whether the suggestion here is the result of simply misunderstanding the nature of analogy, but it is clearly not the case that we can take arguments about historical moral progress, replace ‘disabled people’ with ‘sighted, hearing people’, and act as if this does not change the argument! Such comparisons are a necessary part of EA thought. Similarly, the principle of charity—trying to understand others’ points of view and address the most plausible version of them, rather than attacking strawmen and making ad hominem accusations—is an important part of EA epistemic practices.
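For what it’s worth, the depression/malaria comparison above is just a cost-effectiveness calculation. A minimal sketch, with the burden and cost figures invented purely for the example:

```python
# The depression/malaria comparison as plain arithmetic. Burden and
# cost figures are invented purely for illustration.
interventions = {
    # intervention: (DALYs averted per case, cost per case in USD)
    "treat depression": (2.0, 1000.0),
    "prevent malaria":  (0.5, 50.0),
}

for name, (dalys, cost) in interventions.items():
    print(f"{name}: {dalys / cost:.4f} DALYs averted per dollar")

# Even though each case of depression is "worse" here (2.0 vs 0.5
# DALYs), malaria prevention averts five times as much suffering per
# dollar, so it wins on the margin.
```

The whole point of the comparison is that the worse problem need not be the better buy—and you simply cannot reach that conclusion without putting the two on a common scale.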
Rather than trying to paper over our differences, I think we should offer a polite but firm defense of our views. Pretending there is no conflict between EA and ‘woke’ ideology seems like it could only be achieved by sacrificing a huge part of what makes EA a distinct and valuable social movement.
Thanks for noting! Habiba responded: https://twitter.com/FreshMangoLassi/status/1485769468634710020
Your comment has aged well.
I really like her response :)