Disclaimer: I read it a while ago, and this is a quick reproduction from memory. I also have a poor memory of some of the weirder chapters (the Christianity one, for instance). These also do not express my personal opinions, but rather steelmans and reframings of the book.
I’m from the continental tradition and have read a lot of the memeplex (e.g. Donna Haraway, Marcuse, and Freire). I’ll try to make this short summary more EA-legible:
1. The object-level part of its criticism draws upon qualitative data from animal activists who take a higher risk of failure but pursue more abolitionist approaches. The criticism is then that the marginal change pushed by EA makes abolition harder for the following reasons: (a) a lack of coordination with, and respect for, the animal rights activists on the left, and specifically the history there; (b) how funding distorts the field, eats up talent, and competes against the left; and (c) how activists have to bend themselves to be epistemically scrutable to EA.
An EA steelman of a similar line of thinking is EAs who are strongly against working for OpenAI or DeepMind at all, because doing so safety-washes and pushes capabilities anyway. The criticism here is that the way EA views problems means EA will only go toward solutions that are piecemeal rather than transformative. A lot of Marxists felt similarly about welfare reform, in that it quelled the political will for “transformative” change to capitalism.
For instance, they would say a lot of companies are pursuing RLHF in AI safety not because it’s the correct way to go, but because it’s the easiest low-hanging fruit (even if it produces deceptive alignment).
2. Secondarily, there is a values-based criticism in the animal rights section that EA is too utilitarian, which leads to: (a) preferring charities that lessen animal suffering only in narrow senses, and (b) when EA does take risks with animal welfare, doing so in a more technocratic way that is therefore prone to market hype, as with alternative proteins.
A toy example that might help: something like cage-free eggs illustrates (a) because it makes the egg company better able to dissolve criticism, and (b) because it reflects a lack of imagination about ending egg farming overall and sets up a false counterfactual.
3. On global poverty, it makes a few claims:
a. The motivation towards quantification is a selfish one, citing Herbert Marcuse’s arguments on how neoliberalism has captured institutions. Specifically, the argument criticises Ajeya Cotra’s 2017 talk about effective giving, claiming it reflects a selfish internal psychological need for quantification and for finding comfort in that quantification.
b. The counterfactual to poverty and the possible set of actions are much larger than EA admits, because EA doesn’t consider the amount of collective action possible. The author sets out consciousness-raising examples of activism that at first glance look “small” and “intractable” but spark big upheavals (funnily, naming Greta Thunberg among Black social justice activists, which offended my sensibilities).
c. EA runs interference for rich people, providing them cover from criticism and from potential political action against them (probably the weakest claim of the bunch).
I think a lot of the anti-quantification-type arguments that EAs thumb their noses at should be reframed, because they are neither as weak as they seem nor as uncommon within EA. For instance, consider the arguments that SPARC and other sorts of community-building efforts are successful because they introduce people to transformative ideas: it’s not any specific activity, but the combination of community and vibes, broadly construed, that leads to really talented people doing good.
4. Longtermism doesn’t get much of a mention because of publishing timing. There’s just a meta-criticism that the switchover from neartermism to longtermism reproduces the same pattern of thinking, with a subtle intellectual shift: EAs used to say activism and systemic change were too moonshot, but now they’re doing longtermism.
I feel like a lot of the cruxes of how you receive these criticisms depend on what memeplex you buy into. I think people who are pattern-matching to Torres-type hit pieces are going to be pleasantly surprised. These are real dyed-in-the-wool leftists. It’s not so much weird gotchas aimed at getting retweets from Twitter beefs and libs; it’s written for leftist students, and it seems more targeted towards the animal activism side and specific instances of clashes between left animal activists and EA.
An EA steelman of a similar line of thinking is EAs who are strongly against working for OpenAI or DeepMind at all, because doing so safety-washes and pushes capabilities anyway. The criticism here is that the way EA views problems means EA will only go toward solutions that are piecemeal rather than transformative. A lot of Marxists felt similarly about welfare reform, in that it quelled the political will for “transformative” change to capitalism.
For instance, they would say a lot of companies are pursuing RLHF in AI safety not because it’s the correct way to go, but because it’s the easiest low-hanging fruit (even if it produces deceptive alignment).
I want to address this point not to argue against the animal activists’ point, but because it is a bad analogy for that point. The argument against working for safety teams at capabilities orgs, or against RLHF, is not that they reduce x-risk to an “acceptable” level, causing orgs to give up on further reductions, but rather that they don’t reduce x-risk.
This is a fantastic comment. If there’s an EA who’s able to interpret the continental/lefty/Frankfurt memeplex for a majority analytical/decoupling/mistake-theory audience, I think this could be a very high-impact thing to do on the Forum! Part of why EA is bad at dealing with criticism like this is (imo) that a lot of the time we don’t really understand what the critics are saying, and as you point out: “I feel like a lot of the cruxes of how you receive these criticisms depend on what memeplex you buy into.”
Definitely going to spend a lot of my weekend reading the articles and adding to the collaborative review that’s going around.
One major thing you bring up in this review is that it is a very lefty-oriented piece of criticism. To me, this just confirms my prior that EA needs to recognise that this is where its biggest pushback is going to come from, both in the intellectual space and in what ordinary people’s opinions of EA will be informed by (especially the younger the age profile). While we might be able to ‘EA-Judo’ away some of the criticisms, or turn others into purely empirical disagreements where we are ready to update, there are others where the movement might have to say more openly: ‘Cause X reduces suffering, decreases x-risk, and decreases the chance of a global revolution against capitalism, and that’s OK.’ (So, personal note: reviewing lefty critiques of EA has just shot up my list of things I want to post about on the Forum.)
A majority of the pieces are not written in academic form, even though most include citations from academic sources. The most obviously academic pieces are 9 by Adams, 15 by Sanbonmatsu, and 16 by Crary.
I would categorize the book as largely “normal”. It pulls from a group of writers whose backgrounds and writing styles vary.
The highest-level takeaways (not my own views, except where “I/I’d” is included):
EA is missing relevant data due to its over-reliance on quantifiable data
Effective does not equal impactful
Lack of localized knowledge and interventions reduces sustainability, adoption (trust), and overall impact
The lack of diversity, equity, and inclusion in the community produces worse outcomes and less impact. The same is said regarding considerations of [racial] justice.
EA neglects engagement with non-EA movements and actors; in addition to worse EA outcomes, it harms otherwise positive work. In short, EA undervalues solidarity.
I’d liken this to something along the lines of “EA doesn’t play nicely with the other kids in the sandbox”.
EA is too rigid and does not fare well in complex situations
EA lacks compassion/is cold, and though it is commonly argued this improves outcomes, it is more harmful than not
EA relies upon and reifies systems that may be causing disproportionate harm; it fails to consider that radical changes outside of its scope may be the most impactful
EA is an egotistical philosophy and community; it speaks and acts with a certainty it shouldn’t have
Yes, a lot of the first volume focuses on animal welfare. Still, I do think many of the takeaways I included might be echoed by critics in other cause areas.
I don’t read academic criticism a lot, so what’s the context of a book like this? Is it normal? What does it imply, if anything?
Thanks for this excellent elucidation!
Oh, I’ve read the first two chapters. What it implies is that they do not like EA’s encroachment into the animal welfare space.