Just wanted to flag that AI scientist Timnit Gebru has written a tweet thread criticizing the AI safety field and the longtermist paradigm, quoting the Phil Torres Aeon essay. I would appreciate it if someone could put out a kind, thoughtful response to her thread. Since Gebru is a prominent, respected person in the mainstream AI ethics research community, inconsiderate responses to her thread (especially personal attacks) by EA community members run the risk of making the movement look bad.
The thread arose from this related conversation about sentient AIs being compared to people with disabilities (where everyone agreed that such analogies are harmful).

Thanks for noting! Habiba responded: https://twitter.com/FreshMangoLassi/status/1485769468634710020
This response feels like it is making unnecessary concessions in an attempt to appease someone who will probably never be satisfied. For example, Habiba says
Of course we should be working on harms of tech right now also!
But this is not at all obvious! There are strong arguments that the contemporary “harms” of tech are vastly overstated, and even if they were not, it seems unlikely that we should be working on them, given their vastly lower scope/neglectedness/tractability than other issues EAs focus on. I would be very surprised if any credible CBA suggested that short-term tech harms were a better cause area than third-world poverty, factory farms, and existential risks.
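To make concrete the kind of comparison I have in mind, here is a toy scale/neglectedness/tractability calculation in Python. Every figure in it is invented purely for illustration; the point is the shape of the reasoning, not the inputs.

# Toy scale/tractability/neglectedness comparison. All figures are
# made up for illustration; a real CBA would need defensible
# empirical estimates for each cause area.

causes = {
    # name: (scale in DALYs at stake per year,
    #        fraction of the problem solvable by doubling effort,
    #        current annual funding in dollars)
    "third-world poverty":   (3e8, 0.05, 3e11),
    "factory farming":       (1e9, 0.01, 2e8),
    "short-term tech harms": (5e6, 0.02, 1e10),
}

def marginal_good_per_dollar(scale, tractability, funding):
    # Crude heuristic: the good done by the next dollar is roughly
    # proportional to scale * tractability / current funding.
    return scale * tractability / funding

for name, (scale, tract, funding) in causes.items():
    print(f"{name:>22}: {marginal_good_per_dollar(scale, tract, funding):.1e} DALYs per $")

On anything like these inputs, short-term tech harms come out far behind per marginal dollar, which is all the argument needs.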
Similarly, Habiba contrasts
cold “number crunching”
with
caring, thoughtful folks who truly care about helping others
But these by no means need to be in conflict. I think any reasonable evaluation of EAs will find many who are quite unemotional and who do a lot of number crunching; the latter is, after all, a core part of cost-effectiveness estimates, and hence of the EA movement. But that doesn’t mean they don’t “truly care”; it’s that number-crunching is the best way of executing on that caring.
Despite what seem to me like large concessions, I doubt this sort of approach is ever going to convince people like Gebru. Her argumentative approach, here and elsewhere, relies heavily on rhetorical and emotional appeals and is generally skeptical of the role of impartial reason. Her epistemics seem incompatible with those the EA movement is trying to promote. For example, it is natural for EAs to want to make comparisons between things, e.g. to say “depression is worse than malaria, but it’s cheaper to fix malaria”, in a way that seems profane to such people. I’m not sure whether the suggestion here results from simply misunderstanding the nature of analogy, but it is clearly not the case that we can take arguments about historical moral progress, replace “disabled people” with “sighted, hearing people”, and act as if this does not change the argument! Such comparisons are a necessary part of EA thought. Similarly, the principle of charity (trying to understand others’ points of view and address the most plausible version of them, rather than attacking strawmen and making ad hominem accusations) is an important part of EA epistemic practices.
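To spell out the depression/malaria comparison with a toy calculation (again in Python, and again with every figure hypothetical):

# The kind of cross-cause comparison that seems profane to this style
# of critic. Both the burden and the cost figures are placeholders.

dalys_averted_per_depression_case = 0.4  # depression is worse per case (hypothetical)
dalys_averted_per_malaria_case = 0.2     # malaria is less bad per case (hypothetical)

cost_per_depression_case = 500.0  # dollars to treat one case (hypothetical)
cost_per_malaria_case = 5.0       # dollars to prevent one case (hypothetical)

# Per dollar, the less severe problem is still the better buy:
print(dalys_averted_per_depression_case / cost_per_depression_case)  # 0.0008 DALYs per $
print(dalys_averted_per_malaria_case / cost_per_malaria_case)        # 0.04 DALYs per $

Refusing to run numbers like these is not more caring; it just means helping fewer people.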
Rather than trying to paper over our differences, I think we should offer a polite but firm defense of our views. Pretending there is no conflict between EA and “woke” ideology seems like it could only be achieved by sacrificing a huge part of what makes EA a distinct and valuable social movement.

Your comment has aged well.

I really like her response :)
I find it a bit frustrating that most critiques of AI safety work or longtermism in general seem to start by constructing a strawman of the movement. I’ve read a ton of stuff by self-proclaimed longtermists and would consider myself one, and I don’t think I’ve ever heard anyone seriously propose decreasing existential risk by 0.0000001 percent instead of lifting a billion people out of poverty. I’m sure people have, but it’s certainly not a mainstream view in the community.
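For what it’s worth, the expected-value arithmetic behind that strawman is easy to write down, and writing it down shows that the comparison turns entirely on contested inputs. The future-population figures below are illustrative assumptions, not estimates anyone in particular endorses:

# The naive expected-value arithmetic the strawman attributes to
# longtermists. Both future-population figures are illustrative only.

def expected_lives(future_population, risk_reduction):
    # Lives saved in expectation by shifting extinction probability.
    return future_population * risk_reduction

risk_reduction = 1e-9  # the "0.0000001 percent" in question

print(expected_lives(1e16, risk_reduction))  # 1e7: loses to a billion people out of poverty
print(expected_lives(1e35, risk_reduction))  # 1e26: swamps it

Even the caricature only goes through under astronomical assumptions about the future population, which is the premise critics should actually be arguing about.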
And as others have rightly pointed out, there’s a strong case to be made for caring about AI safety or engineered pandemics or nuclear war even if all you care about are the people alive today.
The critique also does the “guilt by association” thing, where it tries to make the movement look bad by associating it with people the author knows are unpopular with their audience.