A few notes to start:
This is my first Forum post.
I received an early copy of the book from a colleague. Though I agree with some of what is included in it, this post does not serve as a holistic endorsement of the book or the ideas therein.
This post is written in a personal capacity. The views expressed do not represent those of Effective Altruism DC (EA DC) or any other organization with which I am affiliated.
My thanks to Manuel Del Río Rodríguez for his post from 17th January 2023, before the book was released: "Book Critique of Effective Altruism".
Why I Read The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism:
Criticismâsuch as thisâis a gift. I care deeply about addressing the issues that are discussed in this volume, and I believe the contributorsâ perspectives are valuable for me when I am reflecting on how I think about and work on them (especially relative to other communities), as well as how our work is perceived. When I can engage with thoughtful criticism with openness and deep reflection, I learn, grow as a person, and make better decisions. I appreciate the time that the contributors put into this book and that they also care about making the world as good a place as it can be.
I have read and enjoyed past works by many of the authors included in the volume, not least Adams, Crary, Gruen, and Srinivasan.
I have been engaging with [what later became] EA since 2011 (having first learned about it by reading The Life You Can Save in 2009), but I did not begin full-time community building until 2022. I began this work because I hold (and still do) that EA ideas, funding, and work have had positive impacts and will continue to have positive impacts across numerous axes. I also hold (and still do) that unnecessary harm has occurred along the way and that we need to do better, particularly in professionalizing affiliated organizations and making the community more diverse, equitable, and inclusive. I believe this book can help us do good better.
[Added at 19:00 on 8 February] Many of the critiques found in the book do not reflect how most engaged EAs interpret the ideas or the community; rather than indicting the author(s) for this, try to empathize with how they may have come to this conclusion and follow their arguments from there.
I encourage you to read the book and to share your perspectives (or a summary) with your local group, EA Anywhere, and/or on this Forum post (or in a post of your own).
The remainder of this post includes the book summary, chapter titles, and reviews from the publisher, Oxford University Press.
Book Summary:
The Good It Promises, the Harm It Does is the first edited volume to critically engage with Effective Altruism (EA). It brings together writers from diverse activist and scholarly backgrounds to explore a variety of unique grassroots movements and community organizing efforts. By drawing attention to these responses and to particular cases of human and animal harms, this book represents a powerful call to attend to different voices and projects and to elevate activist traditions that EA lacks the resources to assess and threatens to squelch. The contributors reveal the weakness inherent within the ready-made, top-down solutions that EA offers in response to many global problems, and offer in their place substantial descriptions of more meaningful and just social engagement.
Table of Contents:
Foreword – Amia Srinivasan
Acknowledgments
About the Contributors
Introduction – Carol J. Adams, Alice Crary, and Lori Gruen
"How Effective Altruism Fails Community-Based Activism" – Brenda Sanders
"Effective Altruism's Unsuspecting Twenty-First Century Colonialism" – Simone de Lima
"Anti-Blackness and the Effective Altruist" – Christopher Sebastian
"Animal Advocacy's Stockholm Syndrome" – Andrew deCoriolis, Aaron S. Gross, Joseph Tuminello, Steve J. Gross, and Jennifer Channin
"Who Counts? Effective Altruism and the Problem of Numbers in the History of American Wildlife Conservation" – Michael D. Wise
"Diversifying Effective Altruism's Long Shots in Animal Advocacy: An Invitation to Prioritize Black Vegans, Higher Education, and Religious Communities" – Matthew C. Halteman
"A Christian Critique of the Effective Altruism Approach to Animal Philanthropy" – David L. Clough
"Queer Eye on the EA Guys" – pattrice jones
"A Feminist Ethics of Care Critique of Effective Altruism" – Carol J. Adams
"The Empty Promises of Cultured Meat" – Elan Abrell
"How 'Alternative Proteins' Create a Private Solution to a Public Problem" – Michele Simon
"The Power of Love to Transform Animal Lives: The Deception of Animal Quantification" – Krista Hiddema
"Our Partners, The Animals: Reflections from a Farmed Animal Sanctuary" – Kathy Stevens
"The Wisdom Gained from Animals who Self-Liberate" – Rachel McCrystal
"Effective Altruism and the Reified Mind" – John Sanbonmatsu
"Against 'Effective Altruism'" – Alice Crary
"The Change We Need" – Lori Gruen
Coda – "Future-Oriented Effective Altruism: What's Wrong with Longtermism?" – Carol J. Adams, Alice Crary, and Lori Gruen
Index
Reviews:
"The story of Effective Altruism is told here not by its proponents, but by those engaged in liberation struggles and justice movements that operate outside of Effective Altruism's terms. There is every possibility that Effective Altruists will ignore what these voices have to say. That would be a deep shame, and what's more, a betrayal of a real commitment to bring about a better world." – Amia Srinivasan, Chichele Professor of Social and Political Theory at All Souls College, Oxford
"Effective Altruism has made big moral promises that are often undermined by its unwillingness to listen attentively to the voices of its detractors, especially those from marginalized communities. In this vital, stimulating volume, we hear from some of the most important of these voices on some of the most important criticisms of Effective Altruism, including its racism, colonialism, and technocratic rationalism. This book is essential, inviting reading for both Effective Altruists and their critics." – Kate Manne, Associate Professor at the Sage School of Philosophy, Cornell University
"What could possibly go wrong when a largely white and male alliance of academics, business and nonprofit arrivistes, and obscenely rich donors reduce complex situations to numbers and plug those numbers into equations that claim to offer moral and strategic clarity about how we should live in a suffering world? In this book, dissenting activists and academics speak passionately and plainly about what has gone wrong, and provide an armamentarium for those keen to free action and imagination from the alliance's outsized grip on the work of liberation." – Timothy Pachirat, author of Every Twelve Seconds: Industrialized Slaughter and the Politics of Sight
Disclosure: I work at an animal advocacy organisation funded by ACE and EA Funds.
I finished reading this book. It's almost entirely on animal advocacy. I think the book would benefit quite a lot if the authors focused on narrow and specific claims and provided all the evidence to make really strong cases for these claims. Instead, many authors mention many issues without going much deeper than pre-existing debates on the topic. I can't say I have seen much new material, but I already work on animal advocacy, so I keep reading about this topic all the time. Maybe it's good to collect existing criticisms into a book format.
I think the strongest criticism in the book that gets repeated quite a lot is the problem of measurability bias in animal advocacy. I keep thinking about this too, and I hope we find better ways to prioritise interventions in animal advocacy. Here's MacAskill talking about measurability bias some time ago:
"here's one thing that I feel gets neglected: The value of concrete, short-run wins and symbolic actions. I think a lot about Henry Spira, the animal rights activist that Peter Singer wrote about in Ethics into Action. He led the first successful campaign to limit the use of animals in medical testing, and he was able to have that first win by focusing on science experiments at New York's American Museum of Natural History, which involved mutilating cats in order to test their sexual performance after the amputation. From a narrow EA perspective, the campaign didn't make any sense: the benefit was something like a dozen cats. But, at least as Singer describes it, it was the first real win in the animal liberation movement, and thereby created a massive amount of momentum for the movement.
I worry that in current EA culture people feel like every activity has to be justified on the basis of marginal cost-effectiveness, and that the fact that an action would constitute some definite and symbolic, even if very small, step towards progress – and be the sort of thing that could provide fuel for a further movement – isn't 'allowable' as a reason for engaging in an activity. Whereas in activism in general these sorts of knock-on effects would often be regarded as the whole point of particular campaigns, and that actually seems to me (now) like a pretty reasonable position (even if particular instances of that position might often be misguided)."
Yet I think the authors in this book jump too quickly from "You can't measure all the impacts" to "Support my favourite thing".
I think animal advocates have been trying these symbolic gestures for years and it's not clear how much they're now helping farmed animals, and it doesn't seem like much (but counterfactuals are tricky). There's still a lot of this kind of work going on, because non-EA advocates are willing to support it.
Furthermore, EAA has been supporting some of these, like the Nonhuman Rights Project and Sentience Politics, just not betting huge on them. NhRP was previously an ACE standout charity for several years, although now I think they basically only get Movement Grants from ACE, which are smaller. There's work to try to get octopus farming banned before it grows. I think we still support bans on fur farming, foie gras, etc. when good opportunities arise. And corporate welfare campaigns also have symbolic value and build momentum, and the fact that they're so often successful, even often against pretty big targets, probably helps with the momentum.
And we have taken big bets on plant-based substitutes and cultured meat for years, with little apparent impact for animals so far.
At first glance, I was worried that a lot of it would be low-quality criticisms that attack strawmen of EA, and the other comments in this thread basically confirm this.
It's astounding how often critics of EA get basic things about EA wrong. The two most salient ones that I see are:
"EA is founded on utilitarianism" – no, it's not. EA is loosely based on utilitarianism, but as Erich writes, EA is compatible with a broader range of ethical frameworks, particularly beneficentric ones.
Corollary: if you morally value other things besides welfare (e.g. biodiversity), come up with some way to trade off those moral goods against each other and compare interventions using your all-things-considered metric.
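As a toy sketch of what such an all-things-considered metric could look like — every intervention name, score, and weight below is made up for illustration:

```python
# Hypothetical scores per $1M for two interventions, on two moral axes.
# These numbers are invented placeholders, not real estimates.
interventions = {
    "corporate_welfare_campaign": {"welfare": 9.0, "biodiversity": 0.5},
    "habitat_restoration":        {"welfare": 1.0, "biodiversity": 8.0},
}

# Your own moral weights: how much you value each good, in relative terms.
weights = {"welfare": 0.7, "biodiversity": 0.3}

def combined_score(scores, weights):
    """Weighted sum across moral goods; the simplest linear trade-off."""
    return sum(weights[axis] * value for axis, value in scores.items())

# Rank interventions by the all-things-considered score.
ranked = sorted(
    interventions.items(),
    key=lambda kv: combined_score(kv[1], weights),
    reverse=True,
)
for name, scores in ranked:
    print(name, round(combined_score(scores, weights), 2))
```

A linear weighted sum is only one choice; the point is just that once you commit to some explicit trade-off, interventions that serve different moral goods become comparable.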
"EA only cares about things that can be easily measured" – again, no. It's great when there are empirical studies of cost-effectiveness, but we all recognize that's not always possible. In general, the EA movement has become more open to acting under greater uncertainty. What ultimately matters is estimating impact, not measuring it. Open Phil had back-of-the-envelope calculations for a vast set of cause areas in 2014. EAs have put pages and pages of effort into trying to estimate the impact of things that are hard to measure, like economic growth and biosecurity interventions.
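To illustrate the estimate-rather-than-measure point, here's a minimal back-of-the-envelope Monte Carlo sketch. All input ranges are made-up placeholders, not real figures for any intervention:

```python
import random

random.seed(0)  # reproducible illustration

def sample_impact():
    """One draw of total impact, from rough subjective input ranges."""
    animals_affected = random.uniform(1e5, 1e6)      # per campaign (guess)
    years_of_effect = random.uniform(1, 5)           # persistence (guess)
    suffering_reduction = random.uniform(0.05, 0.3)  # fractional (guess)
    return animals_affected * years_of_effect * suffering_reduction

# Propagate the uncertainty by sampling, then summarize.
samples = sorted(sample_impact() for _ in range(10_000))
mean = sum(samples) / len(samples)
p05, p95 = samples[500], samples[9500]
print(f"mean ~ {mean:,.0f}; 90% interval ~ [{p05:,.0f}, {p95:,.0f}]")
```

Nothing here is measured; every input is a subjective range, and the output is an estimate with explicit uncertainty, which is exactly the kind of reasoning a back-of-the-envelope calculation formalizes.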
To be fair, my responses to these criticisms above still assume that you can quantify good done. But in principle, you can compare the impact of interventions without necessarily quantifying them (e.g. using ordinal social welfare functions); it's just a lot easier to make up numbers and use them.
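A hypothetical sketch of that ordinal idea: rank interventions under each moral criterion (no cardinal scores at all) and keep only the Pareto-undominated options. The interventions and rankings below are invented:

```python
# Rank of each intervention under each criterion (1 = best); made up.
rankings = {
    "welfare":  {"A": 1, "B": 2, "C": 3},
    "fairness": {"A": 1, "B": 3, "C": 2},
}

def dominates(x, y, rankings):
    """True if x ranks at least as high as y on every criterion
    and strictly higher on at least one (Pareto dominance)."""
    at_least = all(r[x] <= r[y] for r in rankings.values())
    strictly = any(r[x] < r[y] for r in rankings.values())
    return at_least and strictly

options = ["A", "B", "C"]
undominated = [x for x in options
               if not any(dominates(y, x, rankings) for y in options)]
print(undominated)  # -> ['A']
```

Note that B and C are incomparable (each beats the other on one criterion), so with purely ordinal information you often get a set of defensible options rather than a single winner — which is part of why making up numbers is so much more convenient.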
On the one hand, I really wish people who want to criticize EA would actually do their fucking homework and listen to what EAs themselves have said about the things they're criticizing. On the other hand, if people consistently mistakenly believe that EA is only utilitarianism and only cares about measurable outcomes, then maybe we need to adjust our own messaging to avoid this.
So if you're wondering why EAs are reluctant to engage with "deep criticisms" of EA principles, maybe it's because a lot of them miss the mark.
I think it's wrong to think of it as criticism in the way EA thinks about criticism, which is "tell me X thing I'm doing is wrong so I can fix it"; rather, it highlights a set of existing fundamental disagreements. I think the book is targeted at an imagined left-wing young person who the authors think would be "tricked" into EA because they misread certain claims that EA puts forward. It's a form of memeplex competition. Moreover, I do think some of the empirical details about the effect ACE has on the wider community can inform a lot of EAs about coordinating with wider ecosystems in cause areas and about common communication failure modes.
Well, no. I gather that the goal of these criticisms is to "disprove EA" or "argue that EA is wrong". To the extent that they attack strawmen of EA instead of representing it for what it is and arguing against that, they've failed to achieve that goal.
FWIW, I read your comment as agreeing with zchuang's. They say that the book aims to convince its target audience by highlighting fundamental differences, and you say it aims to disprove EA. Highlighting (and, more specifically, sometimes at least, arguing against) fundamental principles of EA seems like it's in the "disproving EA" bucket to me.
(I agree with both of these perspectives, but I only read a single essay, which zchuang called one of the weird ones, so take that with a grain of salt.)
I would really like to read a summary of this book. The reviews posted here (edit: in the original post) do not actually give much insight as to the contents. I'm hoping someone will post a detailed summary on the forum (and, as EAs love self-criticism, fully expect someone will!).
Thanks very much, Kyle, for starting this thread. I put the penultimate draft of my essay up at PhilPapers.org, for anyone who might be interested in reading Chapter 6, "Diversifying EA Long Shots: An Invitation to Prioritize Black Vegans, Higher Education, and Religious Communities."
Abstract, keywords, and free download are available here: https://philpapers.org/rec/HALQEA-2 . Please note that the pagination mostly tracks with the published version, but there are limits to my rinky-dink MS-Word formatting efforts to make the penultimate draft (which is slightly different in places) read as much like the published version as possible. :)
Writing this essay was a fun learning experience for me (though I'm sure you'll see I'm still on the learning curve) and I especially appreciated the always careful, always kind critical feedback I received from members of the EA community including JD Bauman, Caleb Parikh, Dominic Roser, and Zak Weston.
Feedback welcome here, but I confess to being old, not very lively online, and more of a reader and a ruminator than a forum commenter. Much gratitude for the opportunity to post!
Update: There was a printing error in the original pdf I posted, which led to pp. 82-83 appearing twice while pp. 84-85 were omitted. I have corrected the error. Apologies to anyone who may have downloaded the incomplete version.
I want to adhere to forum norms and maintain a high quality in my posts, but this is tempting me to throw all that out the window. Of course, I will read a summary if one is provided, but going over these chapter titles, this book could just as well be a caricature of wokeness. Prioritizing Black Vegans? Queer Eye on the EA Guys? The celebratory quote complaining about white males getting it all wrong? Not to mention chapter 11 sounds seriously reminiscent of degrowthers. "Sure, alternative proteins ended factory farming, but they didn't overthrow capitalism."
My priors on this having any value to the goal of doing the most good are incredibly low.
The Black Vegans one is about different consumer price elasticities between racial groups along various axes.
Queer Eye on the EA Guys is about different measures of animal suffering and coordination between EA and animal activists broadly.
Chapter 11 I also expected to be about degrowthers, but it's about regulatory capture and Jevons paradox.
Moreover, I think naming conventions for left-wing texts just have this effect. It depends on how the audience pattern-matches, I guess. Also, Queer Eye on the EA Guys is just a funny pun. It's an interesting read for me personally at least. I don't think it changed any of my opinions or actions.
So I'm tempted to read it because I like to engage with criticism that someone has spent a long time writing, but having read the article they wrote to preface it (https://blog.oup.com/2022/12/the-predictably-grievous-harms-of-effective-altruism/) I imagine I'm gonna hear the flaws and be like "no, these are features, not bugs".
From the above article:
Yes, seems reasonable not to fund stuff until you know effectiveness (though I've now read this example in the book and they might be cost-effective since they seem to be well attended).
Yeah seems right
So while I think that it's possible to overquantify, yeah, I probably am skeptical that local histories are going to outcompete an effective intervention.
So I guess I predict I'm gonna think "a couple of useful examples, boy these people don't like us, yeah we didn't want to do that anyway, okay yeah no that's an actual fair criticism".
And if we don't respond, it will be all "you didn't read our book, you don't like criticism".
And if I respond like that it will be "you haven't really engaged with it".
So yeah, unsure how to respond. I guess, do I think that if I read and wrote a response it would be interestingly engaged with? No, not really – the book's tone is combative, as if it's doing the least possible work to engage but wants to say it tried.
So yeah, I have little confidence that reading this book will start an actual discussion. Maybe I'll talk to the authors on twitter xxox
The EA movement doesn't ignore interventions that can't be easily measured, though. As I stated in another comment, what matters is being able to estimate impact, not being able to measure it directly, e.g. through RCTs.
I actually think that local knowledge, indigenous knowledge, etc. can be helpful for informing the design of interventions, but they're best used as an input to the scientific method, not a substitute for it.
Aside: I think these paragraphs are examples of opportunities for "EA judo": one can respond, as I did, that they're disagreeing with the reasoning methods that EA allegedly uses without disagreeing with the core principle that doing good effectively is important.
That seems like a reasonable assessment. I do think the authors would be willing to discuss their pieces, but I do not know how worthwhile it would be (though, admittedly, I think a public debate could make for an interesting event).
I read "A Christian Critique of the Effective Altruism Approach to Animal Philanthropy" as a sampling. I picked it simply because it piqued my interest. I don't know whether it's representative of the book as a whole. Some thoughts ...
This essay is clearly not aimed at me, since it critiques EA from the point of view of Christian ethics, and while there are definitely Christian EAs, I personally find Christianity (and by extension, Christian ethics) highly implausible. I also find deontology and consequentialism sounder than virtue ethics. So it's no surprise that I find the author's worldview in the essay unconvincing, but the essay also presents arguments that don't really rely on Christianity being true, which I'll get to in a bit.
The essay proceeds roughly along these lines:
EA is founded on utilitarianism, and utilitarianism has issues.
In particular, there are problems when applying EA to animal advocacy.
The author's Christian ethical framework is superior to the EA framework when deciding where to donate money to help animals.
Now, as for what I think about it ...
First, on utilitarianism.
The essay states: "Effective Altruism is founded on utilitarianism, and utilitarianism achieves simplicity in its consideration of only one morally relevant aspect of a situation. [...] The heart of what's wrong with Effective Altruism is a fundamental defect of utilitarianism: there are important morally relevant features of any situation that are not reducible to evaluating the results of actions and are not measurable or susceptible to calculation. This makes it inevitable that features of a situation to which numbers can be assigned are exaggerated in significance, while others are neglected."
This is the key argument that the essay presents against EA: utilitarianism is wrong since it dismisses non-welfarist goods, and therefore EA is wrong since it's a subset of utilitarianism.
I take this to be an argument about the philosophy of EA, not about the way it's practiced. But IMO it's false to say that EA is founded on utilitarianism (assuming we take "founded" to mean "philosophically grounded in" rather than "established alongside"). I think the premises EA relies on are weaker than that; they're something more like beneficentrism: "The view that promoting the general welfare is deeply important, and should be amongst one's central life projects."
This ends up mattering, because it means that EA can be practiced perfectly well while accepting deontic constraints, or non-welfarist values. I reckon you just need to think it's good to promote the good (this works for many different, though not all, definitions of the good), and to actually put that into practice and do it effectively.
There's no point in re-litigating the soundness of utilitarianism here, but though I lean deontological, as mentioned I find consequentialism (and utilitarianism) more plausible than Christian and/or virtue ethics. Anyway, I think even if utilitarianism were wrong or bad, EA would still be good and right, on grounds similar to beneficentrism.
Second, on measuring and comparing.
The essay argues that, though EAs love quantifying and measuring things, and then comparing things in light of that, this is a false promise: "All [EA] is doing is taking one measurable feature of a situation and representing it as maximal effectiveness. A Christian ethical analysis of making decisions about spending money, or anything else, would always be concerned to bring due attention to all the ethical moving parts."
With animals in particular, it's extremely hard to compare different kinds of good, and we should take a pluralistic approach to doing so: "How do you decide between supporting an animal sanctuary offering the opportunity for previously farmed animals to live out the remainder of their lives in comfort, or a campaign to require additional environmental enrichment in broiler chicken sheds, or the promotion of plant-based diets? Each is likely to have beneficial impacts on animals, but they are of very different kinds. The animal sanctuary is offering current benefits to the particular group of animals it's looking after. If successful, the broiler chicken campaign is likely to affect many more animals, but with a smaller impact on each. If the promotion of plant-based diets is successful on a large scale, it could reduce the demand for broiler chickens together with other animal products, but it might be hard to demonstrate the long-term effects of a particular campaign."
For example, giving to the farm sanctuary provides a lot of good that isn't easily measurable: "People have the experience of coming to a farmed animal sanctuary and encountering animals that are not being used in production systems. They have an opportunity to recognize the particularities of the animals' lives, such as what it means for this kind of animal to flourish. This encounter might well be transformative in the person's understanding of their relationship with farmed animals." And a farm sanctuary may better allow humans to develop their virtue: "It would be hard to measure the effectiveness of that kind of education and character development in Effective Altruism terms."
As an aside, here's an issue I have with virtue ethics. I think it's perverse to think that doing something good for an animal (or human) is good because it allows one to develop one's virtue. Surely it's good to save animals from the horrific suffering they're subjected to in factory farms for the sake of the animals themselves, and the important thing here is what happens to them, what's good and bad for those whose suffering cries out that we do something?
So when I read: "If [...] you take the shortcut of just getting people to buy plant-based meat because it tastes good or costs less, as soon as either of those things change in a particular context and it becomes advantageous for people to behave in ways that result in bad treatment of animals, they have no reason to do otherwise." I can't help but think, Well, if I'm a pig in a factory farm, I probably don't give a fuck whether people stop eating meat because they prefer the taste of Impossible Pork or because they Saw The Light, I just want to get out of my shit-filled seven-by-two-feet gestation crate!
(Of course, if getting people to See The Light is the best way of getting fewer sows in gestation crates, I think EAs would happily endorse that strategy! That's just an empirical question. But it's quite a different thing to say that getting people to See The Light is better even though it leads to more pigs in gestation crates.)
Next, the author presents the systemic change argument against EA. In particular, the essay argues that EA's focus on measurements and data (1) causes EAs to be short-sighted, focusing on small, measurable wins at the expense of large, hard-to-measure wins, and (2) causes EAs to ignore or miss harder-to-measure second-order effects.
(The author does write that EAs could just do the better thing if there's a better thing to do. But this won't help, because EA's definition of "better" is lacking: it still dismisses (writes the author) all non-welfarist goods.)
I don't want to rehash that debate here as it's already been discussed at length elsewhere.
Third, the author presents an alternative to EA.
Don't get your hopes up, though. "The bad news is that there is no simple alternative Christian procedure for identifying the best options for giving."
Nonetheless, the author ventures three thoughts ...
First, you should trust your judgment: "Do not be tempted by claims of Effective Altruism or any other scheme to offer an objective rational basis for your decision. This is complicated stuff. It is much more complicated than any decision-making system can deal with. Your own commitments are likely to be a better initial basis for decision-making than any claimed objective system."
This seems basically like "trust your intuition / don't listen to others" to me, but I think people's intuition is often wrong and inconsistent, that listening to others allows you to form better views, and that if you care about achieving some goal (e.g. helping animals), you really should look at the evidence and use reason (though your intuitions are also evidence).
Second, remember that the most salient cause isn't necessarily the best: "It is easy to get the public to be concerned about big fluffy animals like pandas that they've seen in nature documentaries and who live far away. It is harder to get people interested in the farmed animals who live in warehouses not far away but hidden from view."
I, and I'd imagine all EAs, agree with this one! I also think it's in tension with the first suggestion: often people's commitments and personal judgments are closely connected with what they've been exposed to, because why wouldn't they be?
Third, don't ask for too much: "It is unhelpful to think that you are searching for the single most effective way your money can be used. Instead, you are looking for a good way to support a project that aligns with your priorities, is well-run, and looks like it has a good chance of achieving its goals."
I guess this may be true (though depressing) if it's true that we're clueless and can't compare causes. For reasons mentioned above, I think we can (and must) compare, but I get why the author ends up here given their other beliefs.
Going back to relying just on intuition and not listening to others would also seem pretty unvirtuous (unwise/imprudent) to me, but (without having read the chapter), I doubt the author would go that far, given his advice to look "for a good way to support a project that aligns with your priorities, is well-run, and looks like it has a good chance of achieving its goals". I would also guess he doesn't mean you should never question your priorities (or moral intuitions) or investigate where specific lines of moral reasoning lead.
I think he's mostly skeptical about relying primarily on one particular system, especially any simple one, because it would be likely to miss so much of what matters and so cause harm or miss out on doing better. But I think this is something that has been expressed before by EAs, including people at Open Phil, typically with respect to worldview diversification:
(E.g. the train to crazy town) https://80000hours.org/podcast/episodes/ajeya-cotra-worldview-diversification/
https://forum.effectivealtruism.org/posts/8wWYmHsnqPvQEnapu/getting-on-a-different-train-can-effective-altruism-avoid
https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous
"Alexander Berger: And I think part of the perspective is to say look, I just trust philosophy a little bit less. So the fact that something might not be philosophically rigorous...I'm just not ready to accept that as a devastating argument against it." https://80000hours.org/podcast/episodes/alexander-berger-improving-global-health-wellbeing-clear-direct-ways/
However, it seems EAs are willing to give much greater weight to philosophical arguments and the recommendations of specific systems.
On virtue ethics (although to be clear, I've read very little about virtue ethics, so may be way off), another way we might think about this is that the virtue of charity, say, is one of the ways you capture others mattering. You express and develop the virtue of charity to help others, precisely because other people and their struggles matter. It's good for you, too, but it's good for you because it's good for others, like how satisfying your other-regarding preferences is good for you. Getting others to develop the virtue of charity is also good for them, but it's good for them because it's good for those that stand to be helped.
The argument you make against virtue ethics is also similar to an argument I'd make against non-instrumental deontological constraints (and I've also read very little about deontology): such constraints seem like a preoccupation with keeping your own hands clean instead of doing what's better for moral patients. And helping others abide by these constraints, similar to developing others' virtues, seems bad if it leads to worse outcomes for others. But all of this is supposed to capture ways others matter.
And more generally, why would it be better (or even sometimes obligatory) to do something that's worse for others overall than an alternative?
Yeah, that makes sense to me. My original reading was probably too uncharitable. Though when I read zchuang's observation further up
I now feel like maybe the author isn't warning readers about the perils of focusing on a particular worldview, but specifically about worldviews like EA, which often take one measure and optimise it in practice (even if the philosophy permits a pluralistic view on value).
It does seem like their approach would have the effect of making people defer less, or biasing them towards their original views and beliefs, though? Here's the full paragraph:
And on this …
Yeah sure, though I don't think this really gets around the objection (at least not for me; it's based on intuition, after all). Even if you build character in this way in order to help people/animals in the future, it's still the case that you're not helping the animals you're helping for their own sake; you're doing it for some other reason. Even if that other reason is to help other animals in the future, that still feels off to me.
I think this is a pretty solid objection, but I see two major differences between deontology and virtue ethics (disclaimer: I haven't read much about virtue ethics either, so I could be strawmanning it) here:
Deontological duties are actually rooted in what's good/bad for the targets of actions, whereas (in theory at least) the best way of building virtue could be totally disconnected from what's good for people/animals? (The nature of the virtue itself could not be disconnected, just the way you come by it.) E.g. maybe the best way of building moral character is to step into a character-building simulator rather than going to an animal sanctuary? It feels like (and again I stress my lack of familiarity) a virtue ethicist comes up with what's virtuous by looking at the virtue-haver (and of course what happens to others can affect that, but what goes on inside the virtue-haver seems primary), whereas a deontologist comes up with duties by looking at what's good/bad for those affected (and what goes on inside them seems primary).
Kantianism in particular has an injunction against using others as mere means, making it impossible to make moral decisions without considering those affected by the decision. (Though, yeah, I know there are trolley-like situations where you kind of privilege the first-order affected over the second-order affected.)
Edit: Also, with Kant in particular, my impression is that he doesn't go, "I've done this abstract, general reasoning and come to the conclusion that lying is categorically wrong, so therefore you should never lie in any particular instance", but rather "in any particular instance, we should follow this general reasoning process (roughly, of identifying the maxim we're acting according to, and seeing if that maxim is acceptable), and as it happens, I note that the set of maxims that involve lying all seem unacceptable". Not sure if I'm communicating this clearly …
I would expect that living your life in a character-building simulator would itself be unvirtuous. You can't actually express most virtues in such a setting, because the stakes aren't real. Consistently avoiding situations where there are real stakes seems cowardly, imprudent, uncharitable, etc. Spending some time in such simulators could be good, though.
On Kantianism, would trying to persuade people not to harm animals or to help animals mean using those people as mere means? Or, as long as they aren't harmed, is it fine? Or, as long as you're not misleading them, you're helping them make more informed decisions, which respects and even promotes their agency (even if your goal is actually not this, but just helping animals, and you just avoid misleading in your advocacy). Could showing people factory farm or slaughterhouse footage be too emotionally manipulative, whether or not that footage is representative? Should we add the disclaimer to our advocacy that any individual abstaining from animal products almost certainly has no "direct" impact on animals through this? Should we be more upfront about the health risks of veganism (if done poorly, which seems easy to do)? And add various other disclaimers and objections to give a less biased/misleading picture of things?
Could it be required that we include these issues with all advocacy, to ensure no one is misled into going vegan or becoming an advocate in the first place?
Yes, I imagined spending some time in a simulator. I guess I'm making the claim that, in some cases at least, virtue ethics may identify a right action but seemingly without giving a good (IMO) account of what's right or praiseworthy about it.
There are degrees of coercion, and I'm not sure whether to think of that as "there are two distinct categories of action, the coercive and the non-coercive, but we don't know exactly where to draw the line between them" or "coerciveness is a continuous property of actions; there can be more or less of it". (I mean by "coerciveness" here something like "taking someone's decision out of their own hands", and IMO taking it as important means prioritising, to some degree, respect for people's (and animals') right to make their own decisions over their well-being.)
So my answer to these questions is: it depends on the details, but I expect that I'd judge some things to be clearly coercive, others to be clearly fine, and to be unsure about some borderline cases. More specifically (just giving my quick impressions here):
I think it depends on whether you also have the person's interests in mind. If you do it e.g. intending to help them make a more informed or reasoned decision, in accordance with their will, then that's fine. If you do it trying to make them act against their will (for example, by threatening or blackmailing them, or by lying or withholding information, such that they make a different decision than they would have had they known the full picture), then that's using them as a mere means. (A maxim always contains its ends, i.e. the agent's intention.)
Yeah, I think it could, but I also think it could importantly inform people of the realities of factory farms. It's hard to say whether this is too coercive; it probably depends on the details again (what you show, in which context, how you frame it, etc.).
Time for a caveat: I'd never have the audacity to tell people (such as yourself) in the effective animal advocacy space what's best to do there, and anyway I give some substantial weight to utilitarianism. So what precedes and follows this paragraph aren't recommendations or anything, nor is it my all-things-considered view, just what I think one Kantian view might entail.
By "direct impact", you mean you won't save any specific animal by e.g. going vegan, you're just likely preventing some future suffering: something like that? Interesting, I'd guess not disclosing this is fine, due to a combination of (1) people probably don't really care that much about this distinction, and think preventing future suffering is ~just as good, (2) people are usually already aware of something like this (at least upon reflection), and (3) people might have lots of other motivations to do the thing anyway, e.g. not wanting to contribute to an intensively suffering-causing system, which make this difference irrelevant. But I'm definitely open to changing my mind here.
I hadn't thought about it, but it seems reasonable to me to guide people to health resources for vegans when presenting arguments in favour of veganism, given the potentially substantial negative effects of doing veganism without knowing how to do it well.
Btw, I'd be really curious to hear your take on all these questions.
What I have in mind for direct impact is causal inefficacy. Markets are very unlikely to respond to your individual purchase decisions, but we have this threshold argument that the expected value is good (maybe in line with elasticities), because in the unlikely event that they do respond, the impact is very large. But most people probably wouldn't find the EV argument compelling, given how unlikely the impact is in large markets.
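The threshold argument above can be put in simple expected-value terms. Here is a minimal sketch, with purely illustrative numbers (the threshold size and trigger probability are assumptions for the example, not empirical estimates from the literature):

```python
# Sketch of the threshold expected-value argument for individual
# purchase decisions in a large market. All numbers are illustrative
# assumptions, not empirical estimates.

def expected_impact(threshold_size: float, p_trigger: float) -> float:
    """Expected change in production from one abstained purchase.

    Assume suppliers only adjust output in discrete steps of
    `threshold_size` units, and that a single extra abstention crosses
    such a threshold with probability `p_trigger`.
    """
    return p_trigger * threshold_size

# Toy numbers: suppliers adjust in 10,000-unit steps, and one marginal
# abstention tips the market across a step with probability 1/10,000.
ev = expected_impact(threshold_size=10_000, p_trigger=1 / 10_000)
print(ev)  # expected impact is roughly one unit, even though any single
           # purchase almost certainly changes nothing
```

The point the comment makes is psychological: even if the expected value works out to roughly one unit per abstention, a 1-in-10,000 chance of mattering at all is unlikely to feel compelling to most people.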
I think it's probably good to promote health resources to new vegans and reach them pretty early with these, but I'd worry that if we pair this information with all the advocacy we do, we could undermine ourselves. We could share links to resources, like Challenge22 (they have nutritionists and dieticians), VeganHealth, and studies with our advocacy, and maybe even say being vegan can take some effort to do healthfully, and for some people it doesn't really work or could be somewhat worse than other diets for them (but it's worth finding out for yourself, given how important this is), and that seems fine. But I wouldn't want to emphasize reasons not to go vegan or the challenges of being vegan when people are being exposed to reasons to go vegan, especially for the first time. EDIT: people are often looking for reasons not to go vegan, so many will overweight them, or fall prey to confirmation bias when assessing the evidence.
I guess the other side is that deception or misleading (even by omission) in this case could be like lying to the axe murderer, and any reasonable Kantian should endorse lying in that case, and in general should sometimes endorse instrumental harm to prevent someone from harming another, including the use of force, imprisonment, etc., as long as it's proportionate and no better alternatives are available to achieve the same goal. What the Health, Cowspiracy, and some other documentaries might be better examples of deception (although the writers themselves may actually believe what they're pushing), and a lot of people have probably gone vegan because of them.
Misleading/deception could also be counterproductive, though, by giving others the impression that vegans are dishonest, or by having lots of people leave because they didn't get resources to manage their diets well, which could even give the overall impression that veganism is unhealthy.
I don't read academic criticism a lot, so what's the context of a book like this? Is it normal? What does it imply, if anything?
Disclaimer: I read it a while ago and this is a quick reproduction from memory. I also have a bad memory of some of the weirder chapters (the Christianity one, for instance). These also do not express my personal opinions but rather steelmans and reframings of the book.
I'm from the continental tradition and have read a lot of the memeplex (e.g. Donna Haraway, Marcuse, and Freire). I'll try to make this short summary more EA-legible:
1. The object-level part of its criticisms draws upon qualitative data from animal activists who take a higher risk of failure but more abolitionist approaches. The criticism is that the marginal change pushed by EA makes abolition harder because of the following: (a) lack of coordination with and respect for the animal rights activists on the left, and specifically the history there, (b) how funding distorts the field, eats up talent, and competes against the left, and (c) how they have to bend themselves to be epistemically scrutable to EA.
An EA steelman example of similar thinking is EAs who are strongly against working for OpenAI or DeepMind at all, because doing so safety-washes and pushes capabilities anyway. The criticism here is that the way EA views problems means EA will only go towards solutions that are piecemeal rather than transformative. A lot of Marxists felt similarly about welfare reform, in that it quelled the political will for "transformative" change to capitalism.
For instance, they would say a lot of companies are pursuing RLHF in AI safety not because it's the correct way to go but because it's the easiest low-hanging fruit (even if it produces deceptive alignment).
2. Secondarily, there is a values-based criticism in the animal rights section that EA is too utilitarian, which leads to: (a) preferring charities that lessen animal suffering in narrow senses, and (b) when EA does take risks with animal welfare, it's more technocratic and therefore prone to market hype with things like alternative proteins.
A toy example that might help: something like cage-free eggs would violate (a) because it makes the egg company better able to deflect criticism, and (b) because it reflects a lack of imagination about ending egg farming overall and sets up a false counterfactual.
3. Thirdly, on global poverty it makes a few claims:
a. The motivation towards quantification is a selfish one, citing Herbert Marcuse's arguments on how neoliberalism has captured institutions. Specifically, the argument criticises Ajeya Cotra's 2017 talk about effective giving, claiming it's about a selfish internal psychological need for quantification and finding comfort in that quantification.
b. The counterfactual of poverty and the possible set of actions are much larger, because EA doesn't consider the amount of collective action possible. The author sets out examples of consciousness-raising activism that at first glance seem "small" and "intractable" but spark big upheavals (funnily, naming Greta Thunberg among Black social justice activists, which offended my sensibilities).
c. EA runs interference for rich people, providing them cover against potential political action (probably the weakest claim of the bunch).
I think a lot of the anti-quantification type arguments that EAs thumb their noses at should be reframed, because they are not as weak as they seem, nor as uncommon in EA. For instance, the arguments that SPARC and other sorts of community-building efforts are successful because they introduce people to transformative ideas. E.g. it's not a specific activity done but the combination of community and vibes, broadly construed, that leads to really talented people doing good.
4. Longtermism doesn't get much of a mention because of publishing time. There's just a meta-criticism that the switch over from neartermism to longtermism reproduces the same pattern of thinking, along with the same subtle intellectual moves. E.g. EAs used to say activism and systemic change were too moonshot, but now they're doing longtermism.
I feel like a lot of cruxes of how you receive these criticisms are dependent on what memeplex you buy into. I think if people are pattern-matching to Torres-type hit pieces, they're going to be pleasantly surprised. These are real dyed-in-the-wool leftists. It's not so much weird gotchas targeted at getting retweets from Twitter beefs and libs; it's for leftist students, and seems to be more targeted towards the animal activism side and specific instances of clashes between left animal activists and EA in parts.
I want to address this point not to argue against the animal activist's point, but rather because it is a bad analogy for that point. The argument against working for safety teams at capabilities orgs, or against RLHF, is not that they reduce x-risk to an "acceptable" level, causing orgs to give up on further reductions, but rather that they don't reduce x-risk.
This is a fantastic comment. If there's an EA who's able to interpret the continental/lefty/Frankfurt memeplex for a majority analytical/decoupling/mistake-theory audience, I think this could be a very high-impact thing to do on the Forum! Part of why EA is bad at dealing with criticism like this is (imo) that a lot of the time we don't really understand what the critics are saying, and as you point out: "I feel like a lot of cruxes of how you receive these criticisms are dependent on what memeplex you buy into."
Definitely going to spend a lot of my weekend reading the articles and adding to the collaborative review that's going around.
One major thing you do bring up in this review is that it is a very lefty-oriented piece of criticism. To me, this just confirms my priors that EA needs to eventually recognise this is where its biggest pushback is going to come from, both in the intellectual space and in what ordinary people's opinions of EA will be informed by (especially the younger the age profile). While we might be able to "EA-judo" away some of the criticisms, or turn others into purely empirical disagreements where we are ready to update, there are others where the movement might have to be more open about saying "Cause X reduces suffering, decreases x-risk, and decreases the chance of a global revolution against capitalism, and that's okay". (So, personal note: reviewing lefty critiques of EA has just shot up my list of things I want to post about on the Forum.)
Thanks for this excellent elucidation!
A majority of the pieces are not written in academic form, even though most include citations from academic sources. The most obviously academic pieces are 9 by Adams, 15 by Sanbonmatsu, and 16 by Crary.
I would categorize the book as largely "normal". It pulls from a group of writers whose backgrounds and writing styles vary.
The highest-level takeaways (not my own views, except where "I"/"I'd" is included):
EA is missing relevant data due to its over-reliance on quantifiable data
Effective does not equal impactful
Lack of localized knowledge and interventions reduces sustainability, adoption (trust), and overall impact
The lack of diversity, equity, and inclusion in the community produces worse outcomes and less impact. The same is said regarding considerations of [racial] justice.
EA neglects engagement with non-EA movements and actors; in addition to worse EA outcomes, it harms otherwise positive work. In short, EA undervalues solidarity.
I'd liken this to something along the lines of "EA doesn't play nicely with the other kids in the sandbox".
EA is too rigid and does not fare well in complex situations
EA lacks compassion / is cold, and though it is commonly argued this improves outcomes, it is more harmful than not
EA relies upon and reifies systems that may be causing disproportionate harm; it fails to consider that radical changes outside of its scope may be the most impactful
EA is an egotistical philosophy and community; it speaks and acts with a certainty that it shouldn't
Oh, I've read the first two chapters. What they imply is that the authors do not like EA's encroachment into the animal welfare space.
Yes, a lot of the first volume focuses on animal welfare. Though this volume is focused on animal welfare, I do think many of the takeaways I included might be echoed by critics in other cause areas.
If you are interested in contributing to a book review, please either send me a message on the Forum or an email.
I have some interest in this, although I'm unsure whether I'd have time to read the whole book; I'm open to collaborations.
If you would like to contribute to one or several sections, that would also be helpful!
Upvote if you liked this essay:
"How Effective Altruism Fails Community-Based Activism" by Brenda Sanders
Put comments on this essay here.
There is a comment to downvote so I stay karma neutral.
Why not just ask people to use agreement votes instead of upvoting/downvoting?
Downvote me.
Are you guys retroactively downvoting for previous positive karma? Or are you just being petty?
Anonymous feedback: https://www.admonymous.co/nathanpmyoung
Have not downvoted this one, but I personally don't like these "downvote me" comments. I feel like karma doesn't matter that much and they add noise.
If you think this is important, maybe you can make a single thread of comments in your shortforms to link to, and people can downvote there?
Writing as a user, not as a moderator
I don't like them either, but people don't like it when I write lots of comments and get lots of karma.
I didn't downvote (I rarely engage in karma voting), but if I had to guess, I would say that having the entire content of the comment be "downvote me" misled people who didn't immediately understand the connection to your previous comment (i.e. it caused confusion rather than reflecting some specific plan to go against your stated purpose).
Is there any substantial engagement with the problem of wild animal suffering in the essays in the book?
Yes, there is one essay that argues against wild animal welfare interventions and argues in favour of traditional wildlife conservationism.
The chapter by Michael D. Wise?
I quickly skimmed it, and perhaps my reading here is uncharitable, but it did not actually seem to say anything substantial about the problem at all, merely offering (in themselves interesting) historical reflections on the problematic nature of conceiving the human/non-human animal relationship in terms of property or ownership, and general musings on the chasm that separates us from the lived experience of beings very different from us.
Yes, it's that one. I didn't find it very persuasive either. There isn't any other content on wild animal welfare in the book.
If there is anyone who ends up making a reading-group discussion guide or a list of discussion prompts (whether it's comprehensive or not!), I'd love to check it out and add it to my collection of EA syllabi!