Danielle Duffield and I will be running an event for people interested in animal advocacy the evening before, similar to the one I ran last year (https://www.eventbrite.co.uk/e/pre-eag-animal-advocate-short-talks-tickets-51187206312?fbclid=IwAR1cOY9tr8PZUQz1MBxUaeJ4omUO_8KZOwdJ-aXltB8lV2l7D5upzX2Ejoo)
I’d recommend The End of Animal Farming for anyone interested in animal advocacy. Here’s my short review.
Personally I found Animal Liberation by Peter Singer very inspiring as a teenager (changed me from a passive vegetarian to someone determined to make a change for animals through some form of advocacy) but I haven’t looked back at it in years.
See Lauren’s comment above on a new EAA careers/talent org.
I’m a big fan of this report and will probably recommend it to interested people as the best of the cost-effectiveness models I have seen on corporate welfare commitments.
I’m very glad for the “Ways this estimate could be misleading” section. I think it’s very important to make these wider considerations clear; they have not been made so clear in previous cost-effectiveness estimates. I also like that you make clear how you think these considerations weigh up, with the system of pluses and minuses.
It’s great that this information on various uncertainties is included and yet you are still able to provide a usable estimate of cost-effectiveness (that excludes these indirect effects). I would probably lean towards making this result less prominent in the write-up, e.g. not including it in the title. I do think that, despite your clarity on the uncertainties, it is easy for readers to pick up and focus on the final estimate and then disregard the rest of the post.
Is “altruism” the alternative to “justice” or is “wellbeing/suffering” the alternative to “justice”? I feel like the latter is something that we could aim to maximise, and that this would be consistent with EA, but not with what the majority of aspiring EAs aim to maximise. Justice seems less relevant to utilitarianism and more relevant to virtue ethics or similar (apologies, I don’t know much about philosophy).
To the extent that we consider justice as a messaging strategy, rather than something substantial that represents goals/ideology, my guess is that aspiring effective altruists should aim for non-partisanship. My guess is that alignment with established political ideologies (including “the left”, or the Democrats over the Republicans in the US) encourages increased political salience in the short term but stagnation longer term. This guess is based mostly on intuition and my research on the anti-abortion movement.
But I would agree with Zach G that, intuitively, justice messaging will have wider reach/virality than altruism messaging.
Very big fan of this post. It is one of the best substantive critiques of EA as it currently stands that I’ve read/heard in a while. There are lots of parts that I’d love to delve into more, but I’ll focus on one here, which seems to be one of the most important claims:
“The three key aspects of this argument are expert belief in plausibility of the problem, very large impact of the problem if it does occur, and the problem being substantively neglected. My argument is that we can adapt this argument to make parallel arguments for other cause areas.”
Sure, but this seems to miss out the tractability consideration. Your post barely mentions tractability or cost-effectiveness and the INT framework is really just a way of thinking about the expected value / cost-effectiveness of particular actions. I’d guess that some of the areas you list have been ignored or rejected by many aspiring EAs because they just seem less tractable at first glance.
I do think it’s important that the EA community explores plausibly high-impact cause areas, even if only a handful of individuals ever focus on them. So I’d be pleased if people took this post as an encouragement to explore the tractability of contributions to various areas that have been neglected by the EA community so far.
Could do! Not sure what sort of engagement an online course would get? I think Peter Singer had an EA online course, and GFI has made one on production methods for cellular agriculture and/or plant-based foods, I believe. Could be interesting to see what sort of take-up those got, whether they’ve led to many people becoming actively/deeply engaged, how long they took to create, and how much they cost.
I’d be interested to hear if you are aware of many examples other than Michael Plant?
I’d guess that there are some low-hanging fruit research projects that could help lots of organisations and individuals trying to maximise their positive impact across multiple cause areas (not confident on this because there are some groups whose work I am unfamiliar with).
Examples of existing research that fits this category are the recent post on “Ingredients for creating disruptive research teams” and Open Philanthropy Project’s research on the history of philanthropy.
It’s possible that having a small organisation explicitly focused on these sorts of opportunities could be worthwhile. Otherwise, if someone tried to more thoroughly list and prioritise projects, individuals could potentially work on this (and get funding through EA Grants?).
Agree with the risks of presenting such a score. Agree that scores would be very speculative and your credibility intervals would be very wide. But I’d also guess that without this sort of score/summary measure, it’s very hard to use this research in practical applications?
Perhaps a compromise is to compile these sorts of summary scores, but then to only share them with advocates or researchers that have specific purposes in mind for those figures? This way, if someone wanted to use the summary score to inform an estimate, or make some decision based off of your research here, they could do so and it would only take them a few minutes to send you an email, rather than several hours trying to come up with an equivalent estimate, using your research as a starting point.
It’s still possible that the figures would become more widely known if people find the numbers indirectly, e.g. in citations. But this seems unlikely to affect many people (unfortunately, I don’t imagine this research going viral).
PS thanks very much for this very thorough-seeming research on this difficult topic!
Enjoyed taking part. Will you post the results on the Forum? I look forward to seeing them.
1) some of the questions referred to “persons” or similar words, implying humans. Other questions referred to “others” which I interpreted as including animals. Interpreting as humans as opposed to humans and other animals (or vice versa) affected my answers in some cases. (Not sure if the wording was chosen for consistency with existing psychological scales)
2) you may get some selection effects from donating to AMF, rather than letting EAs choose between different options (e.g. picking whichever EA Fund they’d prefer)
3) Here you said you’d donate to AMF, but at the end I was told it would go to GiveDirectly
Sure! I’d guess it depends on the project. I doubt that narrowly supporting effectiveness-focused individuals or organisations would always be the best use of resources, but I’d guess that it would be in most cases (say, 70% of marginal EAA movement building resources over the next 5 years?)
Offering movement building services to some organisations might be a lower priority if you think that those organisations don’t have a particularly positive impact anyway.
There are also costs of broadening the scope of some shared resources/services; it makes coordination harder and mutual support less useful. An intuitive (though possibly slightly unfair) comparison is between the EAA Facebook discussion group and various AR or vegan groups. If someone only had time to create/manage one of those two resources, I’d much prefer the former. I think my current view on this is similar to CEA’s (e.g. see the section on “preserving value” here).
PS I’m not worried about the total scale being low, if there are opportunities that would likely be cost-effective (see this related post, if interested).
If you are intending to look into or start one of the projects listed in the post above, please comment on this thread. This may help with coordination and mutual support.
(E.g., as noted above) I’m currently planning to start an EAA podcast in the next few months. Comment below or contact me at email@example.com if you would like to share ideas or concerns (both are welcome!)
This was very interesting. There were several aspects I found surprising, such as the apparent importance of collaboration and shared physical space. Thanks for writing and sharing this.
I’m interested in the methodology, since I’ve also been working on both 1) case studies, which I hope to be able to compare at a later point, and 2) a literature review.
How long do you estimate that you spent looking at each of the case studies?
It seems that most are based on a small number of sources. Did you find that reading additional sources changed your views about a particular research team compared to the first source or two that you read? Do you expect steeply diminishing returns from investing more time into digging further into particular case studies?
“I wonder if there could be a kind of “trip advisor” type badge to recommend how well charities/interventions are doing in such a way as to encourage them to improve.”
Not quite the same, but you might be interested in https://sogive.org/
It’s often assumed in work on wild animal welfare/suffering that biodiversity and ecosystem protection are poor heuristics for representing the best interests of individual animals: the diversity of a system doesn’t tell you whether the individuals within it are suffering more or less.
Brian Tomasik has many relevant essays on his site. Here’s one example: https://reducing-suffering.org/medicine-vs-deep-ecology/
An additional point on the cross-species comparison consideration:
In comparing human to animal charities, we’re often comparing human years of healthy life lost (measured with DALYs or QALYs) to improvements in animals’ quality of life, or years of negative life prevented. There’s lots of scope for disagreement in making these comparisons.
E.g. is a year on a factory farm worse than a year of an average human’s life is good? If so, by how many orders of magnitude? I’d guess it is worse, perhaps by an order of magnitude or more.
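To make concrete how sensitive the comparison is to that judgment, here’s a minimal back-of-envelope sketch. All the numbers (animal-years affected per donation, the severity ratios tried) are hypothetical illustrations, not estimates from any of the posts discussed:

```python
# Back-of-envelope sketch of the cross-species comparison above.
# All inputs are illustrative assumptions, not real estimates.

def human_qaly_equivalents(animal_years_affected: float,
                           severity_ratio: float) -> float:
    """Convert animal-years of suffering averted into human-QALY
    equivalents, given an assumed severity ratio: how many times worse
    a year on a factory farm is than a year of an average human's life
    is good."""
    return animal_years_affected * severity_ratio

# Hypothetical input: 1,000 animal-years of suffering averted per donation.
animal_years = 1_000.0

# Spanning two orders of magnitude of disagreement about the severity ratio:
for ratio in (0.1, 1.0, 10.0):
    print(f"ratio {ratio}: {human_qaly_equivalents(animal_years, ratio)} "
          "human-QALY equivalents")
```

The point of the sketch is just that the bottom line moves linearly with the severity ratio, so a disagreement of one order of magnitude about how bad a factory-farm year is translates directly into an order-of-magnitude disagreement about which charity looks more cost-effective.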
See here for more discussion (though it’s quite an old post and Kelly has told me she would change / update sections of it, given the time).
Thanks for this, hadn’t seen that link before.
One point made there is that “likely interventions in human welfare, as well as being immediately effective to relieve suffering and improve lives, also tend to have a significant long-term impact… By contrast, no analogous mechanism ensures that an improvement in the welfare of one animal results in the improvements in the welfare of other animals.” An important long-term consideration for the effects of welfare reforms is whether they generate more momentum for further reforms for animals and for expansion of the moral circle, or whether they generate complacency. I’m currently very uncertain on this, though lean slightly towards momentum. See here for relevant considerations and evidence.
Some other posts related to considering the long-term effects of animal advocacy interventions:
1) Jacy Reese, “Why I prioritize moral circle expansion over artificial intelligence alignment”
2) Me, “How tractable is changing the course of history?” (see especially some of the considerations in “How tractable are trajectory changes towards moral circle expansion?”)
3) Brian Tomasik, “Charity Cost-Effectiveness in an Uncertain World” (not necessarily specific to animal issues, but I think there is some very useful theoretical discussion)