(posting anonymously due to working closely with some CE groups)
I like CE a lot, and think that some of their charities are great. I donate to multiple CE charities (including one animal welfare charity). I appreciate their ambition and think that what they do is very difficult, and they’ve had a lot of success in the face of that difficulty.
However, I’m very concerned about their work in the animal welfare space, and wanted to flag some of these concerns if they are considering expanding in this space. I don’t think donors should take much guidance from them, compared to OpenPhil or the EA Animal Welfare Fund, and I would personally wager that CE leading giving in the animal space would be net-negative for the space compared to the status quo (which, to be fair, is very bad already). These comments are pretty light on a serious topic, but I want to flag them to donors considering joining this funding circle. These are one person’s impressions, and I’d encourage people considering joining these projects to look into this more.
Concerns: 1. CE’s research on animal welfare is extremely low quality
There is a widely held view in the animal research community that CE’s reports on animal welfare consistently contain serious factual errors, and their research is broadly not trusted by others in the space. My personal experience with this was reading a report I was an expert on, noticing immediately that it had multiple major errors, sharing that feedback, and having it ignored due to their internal time capping practices.
Another animal advocacy research organization supposedly found CE plagiarizing their work extensively including in published reports, and CE failed to address this.
2. CE’s charities working on animal welfare have mostly not been very good, and listening to external feedback prior to launching them would have told them this would happen.
Here are some very light evaluations of CE’s animal charities:
Current CE incubated animal charities
Shrimp Welfare Project: Very promising, doesn’t do CE’s original proposed idea anymore, rough impression is that more feedback ahead of time would have told them the original idea was bad.
Animal Advocacy Careers: Okay, but not a super promising/scalable intervention, still does CE’s original proposed idea
Fish Welfare Initiative: Hasn’t worked very well, and seemed like it wouldn’t in advance, doesn’t do CE’s original proposed idea anymore, more feedback ahead of time would have told them not to do the original idea
Animal Ask: Was a bad idea prior to launching, hasn’t had much impact, people in the space were skeptical ahead of launch
Healthier Hens: Was a bad idea prior to launching, hasn’t had much impact, people in the space were skeptical ahead of launch
Animal Policy International: Was a bad idea prior to launching, too short a time has elapsed to assess
Incubating charities is difficult, and CE definitely shouldn’t expect 100% to work out. I think 1.25 charities out of 6 being good is still a quite solid success rate. But, most of these charities seemed like bad ideas to many other people in the space prior to launching, CE was given that feedback, and seemed to fail to act on it. This is also the case with their most recent batch of charities that haven’t yet launched. CE is typically much more confident in their own research than in external feedback, which seems bad given concern 1.
3. CE is fairly hostile to external feedback on their animal welfare work
CE has a fairly strong reputation of being hostile / non-collaborative in the animal welfare space. While their charities and founders tend to be very open to feedback and willing to work with others, CE itself has consistently been non-collaborative to other groups in the space to the degree that their staff are sometimes not invited to coordination events or meetings.
Hello! One point that seems important to make: “People in the space” being skeptical of a startup idea, or even being confident it’s a bad idea, is not good evidence that it’s a bad idea.
Whilst we can expect subject matter experts to be skeptical of ideas that turn out to be bad, we can also expect them to be skeptical of a lot of ideas that turn out to be good!
This is true of many extremely successful for-profit start-ups (it’s mentioned in Y Combinator lectures a lot) and of non-profits as well, including many of CE’s most successful incubated charities. If I’m not mistaken, it’s even true of CE itself! When its co-founders asked experts for their advice while considering launching the Incubation Program, the majority view was that it was a bad idea. I, for one, am glad they didn’t listen! If you look around the room you’re in now, you’re almost certainly surrounded by a number of inventions that “people in the space” said were impossible or a bad idea. The job of an entrepreneur (or a researcher identifying start-up ideas for entrepreneurs) is to figure out when “people in the space” are wrong, or at least likely enough to be wrong for it to be worth trying.
So in conclusion: Track record of CE incubated charities—good indicator. Whether or not people in the space were skeptical—not a good indicator.
I think this oversimplifies something a lot more complex, and I’m surprised it’s a justification you use for this. Of course on some level what you’re saying is correct in many cases. But imagine you recommend a global health charity to be launched. GiveWell says “you’re misinterpreting some critical evidence, and this isn’t as impactful as you think”. Charities on the ground say “this will impact our existing work, so try doing it this other way”. You launch the intervention anyway. The founders immediately get the same feedback, including from trying the intervention, then pivot to coordinating more and aligning with external experts.
This seems much more analogous to what happens in the animal space, and it seems absolutely like a good indicator that people were skeptical. Charities aren’t for-profits, which exist in a vacuum of their own profitability. They are part of a broader ecosystem.
Another animal advocacy research organization supposedly found CE plagiarizing their work extensively including in published reports, and CE failed to address this.
Hi, I am Charity Entrepreneurship (CE, now AIM) Director of Research. I wanted to quickly respond to this point.
I believe this refers to an incident that happened in 2021. CE had an ongoing relationship with an animal advocacy policy organisation, occasionally providing research to support their policy work. We received a request for some input, and over the next 24 hours we helped that policy organisation draft a note on the topic at hand. In doing so, a CE staff member copied and pasted text from a private document shared by another animal advocacy research organisation. This was plagiarism and should not have happened. I would like to note two things: first, that this did not happen in the course of our business-as-usual research process but in a rushed piece of work that bypassed our normal review process; and second, that this report was not directly published by us, and it was not made clear to the CE staff member involved that the content was going to be made into a public report (most other work for that policy organisation was used privately), although we should of course have considered this possibility. These facts do not excuse our mistake, but they are relevant for assessing the risk that this was anything more than a one-off mistake.
I was involved in responding when this issue came to light. On the day the mistake was realised we: acknowledged the mistake, apologised to the injured party, pulled all publicity for the report, and drafted an email to the policy org asking to have the person whose text was copied added as a co-author (the email was not sent until the following day, as we waited for approval from the injured party). The published report was updated. Over the next three weeks we carried out a thorough internal risk assessment, including reviewing all past reports by the same author. The other animal rights research organisation acknowledged they were satisfied with the steps taken. We found no cases of plagiarism in any other reports (the other research org agreed with this assessment), although one other tweak was made to a report to make an acknowledgment more clear.
FWIW I find mildlyanonymous’ description of this event to be somewhat misleading referring to multiple “reports” and claiming “CE failed to address this”.
CE’s reports on animal welfare consistently contain serious factual errors … noticing immediately that it had multiple major errors, sharing that feedback, and having it ignored due to their internal time capping practices.
I don’t know what this is about. I know of no case where we have ignored feedback. We are always open to receiving feedback on any of our reports from anyone at any time. I am very sorry if I or any CE staff ignored you, and I am open to hearing more about this and/or hearing about any errors you have spotted in our research. If you can share any more information on this I can look into it; please contact me (I will PM you my email address; note I am about to go on a week’s leave). It is often the case that if we receive minor critical feedback after a report is published, we do not go back and edit the report but instead note the feedback in our Implementation Note for future Charity Entrepreneurship founders; maybe that is what happened.
Thanks for sharing this! It differs from the narrative I’ve heard elsewhere in critical ways, but I don’t really know much about this situation, and just appreciate the transparency.
I went through the old emails today and I am confident that my description accurately captured what happened and that everything I said can be backed up.
First a meta note less directly connected to the response:
Our funding circles fund a lot of different groups, and there is no joint pot, so it’s closer to a moderated discussion about a given cause area than CE/AIM making granting calls. We are not looking for people to donate to us or our charities, and as far as I understand, OpenPhil and AWF do not have a participatory way to get involved other than just donating to their joint pot directly. This request is more aimed at people who want to put in significant personal time to making decisions independent from existing funding actors.
More connected response:
Thanks for the thoughts, and the support you have given our past charities. I can give a few quick comments on this. Our research team might also respond a bit more deeply.
1) Research quality: I think in general, our research is pretty unusual in that we are quite willing to publish research that has a fairly limited number of hours put into it. Partly, this is due to our research not being aimed at external actors (e.g., convincing funders, the broader animal movement, other orgs) as much as aimed at people already fairly convinced on founding a charity, and aimed at a quite specific question of what would be the best org to found. We do take an approach that is more accepting of errors, particularly ones that do not affect endline decisions connected directly to founding a charity. E.g., for starting a charity on fish in a given country, we are not really concerned about the number of fish farmed unless that number is a significant determining factor in terms of founding a charity in that space. We have gone back and forth as to how much transparency to have on research and how much time to spend per report, and have not come to a fixed answer. We are more likely to get criticism/pushback on higher transparency + lower hours per report, but typically think it will still lead to more charities that are promising in the end.
2) CE’s animal charity quality: I think both our ordering and assessment of charity quality would be different from what is described here. I also think the animal welfare funds’ and Open Phil’s (both of whom have funded the majority of these projects) assessments would also not match your description. However, in some ways, these are small differences, as our general estimate is that 2/5 charities in a given area are highly promising. It is quite a hits-based game, and that is roughly the number we would expect (and roughly how many we would rank internally) as performing really well.
2.5) Feedback on animal charities: I did a quick review of the charities that got the most positive vs negative feedback at the time of idea recommendation from the broader animal community, relative to your rank order and relative to our internal one, and did not find a correlation. Generally, I think the space is pretty uncertain, and thus the charities that got the most positive expectations were typically the ones that deviated the least from actions already taken in the space. I think that putting more time into the research reports (including getting more feedback) is one way to improve charity quality (at the cost of quantity), but I’m pretty skeptical it’s the best way. So far, the biggest predictive factor has not been idea strength but the founder team, so when thinking about where to spend marginal resources to improve charities, I would still lean that way (although it’s far from clear if that will always be the case).
3) I would be interested in doing a survey on this to get better data. I get the impression that we are seen as pretty disconnected from the animal space (and I think that is fairly true). I think we are far more involved in e.g., the EA space, both when it comes to more formal research and when it comes to softer social engagement. I think our charities tend to go deeper into whatever area they are focusing on than our team does, and I am pretty comfortable with that. I would not be surprised if we were both invited to and attended fewer coordination events and meetings connected to the animal space; we like to stay focused quite directly on the problems we are working on.
Thanks again for writing this up. I put some chance that these are issues that are correct and important enough to prioritize, and it’s valuable to get pushback and flags even if we end up disagreeing about the actions to take.
It would be helpful if you engaged with the plagiarism claims, because it is concerning that CE is running researcher training programs while failing to handle that well. I agree with the rest of what you say here as being tricky, but think that it is pretty bad that you publish the low confidence research publicly, and it’s led to confusion in the animal space.
+ 2.5 - I think if your ordering is significantly different, it’s probably fairly different from that of most people in the space, so that’s somewhat surprising/an indicator that lots of feedback isn’t reaching you all.
To be clear, I am certain that CE staff have not been invited to events in the animal welfare space due to impressions of your organization being unwilling to be cooperative.
My main view is that animal donors should seriously engage in a vetting process prior to taking large amounts of guidance on donations from CE / shouldn’t update on your research in meaningful ways. I still think CE is probably the best bet in the animal space for future new very high impact organizations, though, so it’s a tricky balance to critique CE. I’d bet that a fair number of the best giving opportunities in the animal space in 5 years will have come out of CE, but that this will also have come with a large amount of generally avoidable waste of funding and talent.
I think in general, our research is pretty unusual in that we are quite willing to publish research that has a fairly limited number of hours put into it. Partly, this is due to our research not being aimed at external actors (e.g., convincing funders, the broader animal movement, other orgs) as much as aimed at people already fairly convinced on founding a charity and aimed at a quite specific question of what would be the best org to found. We do take an approach that is more accepting of errors, particularly ones that do not affect endline decisions connected directly to founding a charity.
Do you think there are additional steps you could/should take to make this philosophy / these limitations clearer to would-be secondary users who come across your reports?
I strongly support more transparency and more release of materials (including less polished work product), but I think it is essential that the would-be secondary user is well aware of the limitations. This could include (e.g.) noting the amount of time spent on the report, the intended audience and use case for the report, the amount of reliance you intend that audience to place on the report, any additional research you expect that intended audience to do before relying on the report, and the presence of any significant issues / weaknesses that may be of particular concern to either the intended audience or anticipated secondary users. If you specifically do not intend to correct any errors discovered after a certain time (e.g., after the idea was used or removed from recommended options), it would probably be good to state that as well.
Hi, I am Charity Entrepreneurship (CE, now AIM) Director of Research. I wanted to quickly respond to this point.
– –
Quality of our reports
I would like to push back a bit on Joey’s response here. I agree that our research is quicker, scrappier, and goes into less depth than other orgs’, but I am not convinced that our reports have more errors or worse reasoning than the reports of other organisations (thinking of non-peer-reviewed global health and animal welfare organisations like GiveWell, OpenPhil, Animal Charity Evaluators, Rethink Priorities, Founders Pledge).
I don’t have strong evidence for thinking this. Mostly I am going off the number of errors that incubatees find in the reports. In each cohort we have ~10 potential founders digging into ~4-5 reports for a few weeks. I estimate there are on average roughly 0.8 non-trivial, non-major errors (i.e. something that would change a CEA by ~20%) and 0 major errors highlighted by the potential founders. This seems to be in the same order of magnitude as the number of errors GiveWell gets on scrutiny (e.g. here).
And ultimately all our reports are tested in the real world by people putting the ideas into practice. If our reports do not line up with reality in any major way, we expect to find out when founders do their own research or a charity pivots or shuts down, as MHI has done recently.
One caveat to this is that I am more confident about the reports on the ideas we do recommend than the other reports on non-recommended ideas, which receive less oversight internally (as they are less decision-relevant for founders) and less scrutiny from incubatees and from being put into action.
I note also that in this entire critique and having skimmed over the threads here no-one appears to have pointed out any actual errors in any CE report. So I find it hard to update on anything written here. (The possible exception is me, in this post, pointing to the MHI case which does seem unfortunately to have shut down in part due to an error in the initial research.)
So I think our quality of research is comparable to other orgs, but my evidence for this is weak and I have not done a thorough benchmarking. I would be interested in ways to test this. It could be a good idea for CE to run a change-our-mind contest like GiveWell’s in order to test the robustness of our research. Something for me to consider. It could also be useful (although I doubt it would be worth the effort) to have some external research evaluator review our work and benchmark us against other organisations.
[EDIT: To be clear talking here about quality in terms of number of mistakes/errors. Agree our research is often shorter and as such is more willing to take shortcuts to reach conclusions.]
– –
That said, I do agree that we should make it very, very clear in all our reports who the report is written for, why, and what the reader should take from it. We do this in the introduction section of all our reports, and I will review the introduction for future reports to make sure this is absolutely clear.
I think it is quite clear that a lot of your research isn’t at the bar of those other organizations (though I think for the reasons Joey mentioned, that definitely can be okay). For example, I think in this report, collapsing 30 million species with diverse life histories into a single “Wild bug” and then taking what appear to be completely uncalibrated guesses at their life conditions, then using that to compare to other species is just well below the quality standards of other organizations in the space, even if it is a useful way to get a quick sense of things.
[previous comment is deleted, because I accidentally sent an unfinished one]
Thanks for the example! That makes sense and makes me wonder if part of the disagreement came from thinking about different reference classes. I agree that, in general, the research we did in our first year of operations, so 2018/2019, is well below the quality standard we expect of ourselves now, or what we expected of ourselves even in 2020. I agree it is easy to find a lot of errors (that weren’t decision-relevant) in our research from that year. That is part of the reason they are not on the website anymore.
That being said, I still broadly support our decision not to spend more time on research that year. That’s because spending more time on it would have come at the cost of significant tradeoffs. At the time, there was no other organization whose research we could have relied on, and the alternative to the assessment you mention was either to not compare interventions across species (or to reduce the comparison to a simplistic metric like “the number of animals affected”), or to spend more time on research and run the Incubation Program a year later, in which case we would have lost a year of impact and might not have started the charities we did. That would have been a big loss: for example, that year we incubated Suvita, whose impact and promise were recently recognized by GiveWell, which provided Suvita with $3.3M to scale up; and we incubated Fish Welfare Initiative (FWI) and Animal Advocacy Careers, a decision I still consider to be a good one (FWI is an ACE Recommended Charity, and even though I agree with its co-founders that their impact could be higher, I’m glad they exist). We also couldn’t simply hire more staff and do things more in-depth, because it was our first year of operation and there was not enough funding and other resources available for what was, at the time, an unproven project.
I wouldn’t want to spend more time on that, especially because one of the main principles of our research is “decision-relevance,” and the “wild bug” one-pager you mention, and similar ones, were not decision-relevant. If they had been, we would not have settled on something of that quality, and we would have put more time into them.
For what it is worth, I think there are things we could have done better. Specifically, we could have put more effort into communicating how little weight others should put on some of that research. We did that by stating at the top (for example, as in the wild bug one-pager you link), “these reports were 1-5 hours time-limited, depending on the animal, and thus are not fully comprehensive.” and at the time, we thought it was sufficient. But we could have stressed epistemic status even more strongly and in more places so it is clear to others that we put very little weight on it. For full transparency, we also made another mistake. We didn’t recommend working on banning/reducing bait fish as an idea at the time because, from our shallow research, it looked less promising, and later, upon researching it more in-depth, we decided to recommend it. It wouldn’t have made a difference then because there were not enough potential co-founders in year 1 to start more charities, but it was a mistake, nevertheless.
Hi mildlyanonymous. I work on the philanthropy programs at AIM. One thing to keep in mind here is that the Foundation Program and funding circles are distinct from the Incubation Program. We never tell any funder where to donate, all funding decisions are independently made, and if you look at past funding circle updates (Meta, Mental Health), most grantees are not CE incubatees. Having worked on several funding circles, I am a huge believer that communities of grantmakers can have outsized impact compared to working alone. While I myself am a fan of CE’s animal research and incubatees, this disagreement doesn’t really have any bearing on the programs Joey was referring to in this post. All the same, thank you for your thoughts!
Thanks for sharing your thoughts! I feel like your comment would be more valuable/credible if you elaborated further on your claims. You say the ideas for the animal charities were bad, but you provide no justification, and most of them not succeeding should not update one much, given that charities being successful is unlikely on priors.
I’m mainly trying to convey what seemed to be a sentiment among many who worked in research in animal advocacy in response to seeing these ideas, though I agree with your second point.
As an example of this, I think people believed that Animal Ask and Healthier Hens both failed to account for why the animal space had consolidated work on a few specific asks over the last few years (because corporations weren’t sure how to prioritize across many asks, and focusing on just one at a time helped keep their attention focused). This feedback was conveyed to CE ahead of time but mostly ignored, and then became a route to failure for their early work.
At Animal Ask we did later hear some of that feedback ourselves, and one of our early projects failed for similar reasons. Our programs are very group-led, in that we select our research priorities based on groups looking to pursue new campaigns. This means the majority of our projects tend to focus on policy rather than corporate work, given that more groups consider new country-specific campaigns and want research to inform this decision.
In the original report from CE, they do account for the consolidation of corporate work behind a few asks. They expected the research on corporate work to be ‘ongoing’, ‘deeper’, and ‘more focused’. So strategically it would look more like research running throughout the previous corporate campaign to inform the next one, with a low probability of updating any specific ask. The expectation is that it could be many years between the formation of corporate asks.
So in fact this consolidation was highlighted in the incubation program as a reason success could have so much impact: with the large amount of resources the movement devotes to these consolidated corporate asks, ensuring they are optimised is essential.
As Ren outlined, we have a couple of recent, more detailed evaluations, and we have found that the main limitations on our impact are factors only a minority of advisors in the animal space highlighted. These are constraints from other organisations’ stakeholders: either upper management (when the campaigns team had updated on our findings but there was momentum behind another campaign) or funders (particularly individual or smaller donors, who are typically less research-motivated than OPP, EAAWF, ACE, etc.).
You can see this was the main concern for CE researchers in the original report. “Organizations in the animal space are increasingly aware of the importance of research, but often there are many factors to consider, including logistical ease, momentum, and donor interest. It is possible that this research would not be the determining factor in many cases”.
My impression is that Healthier Hens wouldn’t have caused confusion for corporations dealing with other major asks like cage-free or broiler asks, because HH was planning to work directly with different targets, specifically farms and feed mills, and in Kenya to start. Do you mean it would have just been better to further support corporate cage-free and broiler campaigns (about which you’ve stated skepticism here), or another ask the movement would consolidate to focus on?
They discuss things that didn’t go well for them here: fundraising, feed testing, delays, survey response collection and (negative results in their) split-feeding trial.
(I don’t have much sense about the impact of Animal Ask, both how much impact they’re having and why. Some of their research looks useful, but I don’t know how their work is informing decisions or at what scale.)
FWIW in the early stages of Healthier Hens, I heard some of the following pieces of feedback which IMO seem significant enough that it may have been a bad decision for CE to recommend a feed fortification charity for layer hens:
Feed costs are approximately 50% of costs for farmers, so interventions that make feed even more expensive are likely to be hard to achieve
CE’s report focuses on subsidising this feed for farmers to lessen the potential risk of the above point, but I think it misses the crucial factor that most animal funders don’t want to subsidise the animal agriculture industry without a clear mechanism for passing these costs over to industry, hence making fundraising quite hard (which did turn out to be true)
Following on, if the subsidisation avenue was not pursued, it’s not clear what leverage Healthier Hens (or any other feed fortification charity) would have over feed mills or farms to get them to significantly increase their costs of production. For example, in the report, CE says “Entrepreneurs may pivot based on their own research: for example, they may instead partner with certifiers to encourage them to include feed standards for calcium, phosphorus, and vitamin D3 in their standards” but again, this is a significant ask of farms (and therefore certifiers) which I think was glossed over in the report.
It’s also worth noting that the experts interviewed in this report were 1 free-range egg farmer, 1 animal nutritionist, and 2 Indian animal advocates (as it was originally thought to work best in India). None of them mentioned the concerns above, but the person I spoke to (involved in global corporate welfare) thought that if CE had spoken to someone with reasonable global campaigning / corporate welfare experience, these problems would have been unearthed. I’m not sure how true this is, but I thought it was relevant info for the above discussion.
(My overall view on the meta-comment by mildlyanonymous is that it’s too vague to be useful and hard to verify many things but the intention of reducing poor allocation of talented co-founders and scarce funding is important, hence suggesting improvements to CE’s research process does seem valuable)
Edited afterwards: I added “without a clear mechanism for passing these costs over to industry” to the second bullet point after Michael’s good point below.
CE’s report focuses on subsidising this feed for farmers to lessen the potential risk of the above point, but I think misses the crucial factor where most animal funders don’t want to literally subsidise the animal agriculture industry, hence making fundraising quite hard (which did turn out to be true)
I’m not sure if this really explains much or if the funders were acting rationally if it did. As one of its main interventions, SWP is currently buying and giving out electric stunners for free, which is essentially a subsidy in kind. SWP is supported by Open Phil, ACE and seems popular in the broad EA community among animal charities (I’d guess even just for the direct provision of stunners, not any legislative/corporate policy work to leverage it later), but maybe not (?) in the animal community outside of EA.
But maybe shrimp stunning looked better ex ante, given the number of shrimp it could affect per $ and better evidence supporting stunning than feed fortification for keel bone fractures. In fact, HH’s feed fortification trial actually made things worse for hens. SWP is already past a billion shrimp helped in expectation (maybe not just with stunners?). SWP had to get some evidence for the success of the intervention before scaling, but someone had to pay for that and the stunning trial.
If people are hesitant to subsidize the industry, maybe the benefits to animals vs money to industry ratio just looked much better for SWP than HH, and good enough to be worth supporting SWP stunner work but not HH.
FWIW, I think it’s worth doing more hen feed fortification trials, with different supplements or given on different schedules or doses, given the scale and severity of keel bone fractures (WFP), as well as the possibility that cage-free could be worse if and because it increases keel bone fractures.
Yeah good point re Shrimp Welfare Project! I should have said “most animal funders don’t want to subsidise the animal ag industry without a clear mechanism for passing these costs over to the industry”.
For example, in the case of SWP, my understanding is that SWP wants to get these relatively cheap stunners ($50k and only a one-off cost) for a few major producers to show both producers and retailers that it is a relatively cheap way to improve animal welfare with minimal/no impacts on productivity. Then, I believe the idea is to get retailers (e.g. like this) to commit only to sourcing from producers who stun their shrimp, thereby influencing more producers to buy these stunners out of their own pocket (and repeat until all shrimp are being stunned before slaughter).
I think the case with feed fortification with layer hens is much less obvious and less simple due to the impact of feed costs (which are significant and ongoing), so IMO it wasn’t clear to animal funders how these costs would be passed onto the industry at a later date, rather than subsidising feed fortification in perpetuity.
A smaller note is that there is also a very small number of animal funders who follow this suffering-reduction-focused theory of change, so if one major funder (e.g. OP) doesn’t fund you, this can be very problematic (as in the case of Healthier Hens). Also, many funders don’t act rationally, so it’s important that the research takes that into account (though I’m not convinced that funders were acting irrationally in this case).
But do EAs (and major funders especially) support SWP because they expect SWP to accelerate industry adoption of stunners paid for by the industry (or by others besides SWP/animal advocates)?
The ACE review barely discusses stunners, and only really in their section on room for more funding, where stunners account for essentially all the RFMF in 2024 and 2025, and there’s no mention of accelerated industry adoption of stunners not paid for by us.[1]
The EA Animal Welfare Fund grant just says “Purchase 4 stunners for producers committing to stun a minimum of 1.4k MT (~100 million) of shrimps/annum per stunner”.
Stunning equipment will break down over time and eventually need to be replaced. Maybe they’re assuming the companies will repair/replace the stunners at their own cost as they break down, but I imagine they expect this to look good with only a few years of impact per stunner (or didn’t take into account the fact that stunners will break down).
The written rationale of Open Phil’s most recent grant to SWP doesn’t mention the possibility, either: “Open Philanthropy recommended a grant of $2,000,000 over two years to the Shrimp Welfare Project. Focuses include installing stunners at major shrimp producers, reducing stocking density on shrimp farms in South Asia, and increasing industry awareness of shrimp welfare.”
Other than by SWP themselves, I haven’t seen ~any online discussion of this acceleration.
It’s possible the grantmakers are sensitive to the possibility of acceleration of industry adoption of stunners paid for by the industry and are granting in part based on this, but it doesn’t show up in their written rationales. They say very little about the stunner plans in general, though.
And should we have had similar expectations for feed fortification costs to eventually be passed on and HH to accelerate feed fortification paid for by the industry (or not us)? Eventually we can move on from cage-free asks when+where cage-free becomes the norm (or the law), say. Maybe this is complicated by the fact that many companies are international, though.
relatively cheap stunners ($50k and only a one-off cost)
(...)
impact of feed costs (which are significant and ongoing)
Stunners aren’t a one-off cost in general: they’ll need to be repaired and replaced eventually if we keep killing shrimp and want them stunned. Someone will have to pay for that, just like ongoing feed fortification. So the only question is whether and how much SWP and HH accelerate the industry (or others besides animal advocates) paying for the respective costs. And again, written grant rationales for SWP don’t mention this acceleration, so it’s not clear the grants depended on expected acceleration.
And HH wouldn’t be paying for all of the feed, just some supplements. I do think SWP’s stunners work looks more cost-effective ex ante than HH did, though.[2]
I think this highlights a methodological issue with ACE’s review process: it isn’t sufficiently sensitive to the details and ex ante cost-effectiveness of additional future funding. Its cost-effectiveness criterion is retrospective wrt outputs, but SWP’s future plans with additional future funding are very different from what its cost-effectiveness was assessed on, and the ACE review of its future plans with additional funding is very shallow.
CE’s CEA of subsidized feed fortification was 34 welfare points per dollar, assuming an overall probability of success of only 26%. The CEA for SWP assumes a 100% probability of success. If we also assumed 100% for HH, HH would be at least 34/0.26 ≈ 130 welfare points per dollar conditional on success (possibly higher, because there are still costs if it fails). The difference between conventional cage and cage-free is probably around 50 or fewer welfare points per year of life by CE’s estimates (comparing USA FF laying hens (battery cages) to a wild bird or FF beef cow, say). Corporate cage-free campaigns were 54 years of life affected/$, so this would be <2,700 welfare points/$ historically, and say <540 welfare points/$ (2,700/5) now, so I’d guess still a few times better than HH’s >130 welfare points/$ conditional on success.
SWP has some track record with stunners already, so it is reasonable to assign them a higher probability of success than HH ex ante, and this can increase the gap.
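The back-of-envelope comparison above can be sanity-checked in a few lines of Python. All figures are the CE estimates quoted in this thread, not independent data:

```python
# Back-of-envelope check of the welfare-points comparison above.
# All numbers are CE's estimates as quoted in this thread, not independent data.

hh_points_per_dollar = 34        # CE's CEA for subsidised feed fortification
hh_p_success = 0.26              # CE's assumed overall probability of success
hh_conditional = hh_points_per_dollar / hh_p_success   # points/$ conditional on success

cage_vs_cage_free = 50           # welfare points per year of life, CE's rough estimate
years_affected_per_dollar = 54   # historical corporate cage-free campaigns
cage_free_historical = cage_vs_cage_free * years_affected_per_dollar  # points/$ historically
cage_free_now = cage_free_historical / 5  # assume current campaigns ~5x less cost-effective

print(f"HH conditional on success: ~{hh_conditional:.0f} welfare points/$")
print(f"Cage-free: <{cage_free_historical} points/$ historically, <{cage_free_now:.0f} now")
print(f"Cage-free now vs HH: ~{cage_free_now / hh_conditional:.1f}x")
```

This reproduces the ~130 points/$ figure for HH and the <540 points/$ figure for current cage-free campaigns, i.e. roughly a 4x gap, consistent with “a few times better”.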
Obviously, I don’t speak for OP or the EA AWF, but they literally only publish 1-3 sentences per grant, so I’m not surprised at all that they don’t mention it, even if it is a consideration for them. That said, I might just be projecting, because this was partially the reason why I supported giving them a grant!
Agree though that stunners aren’t literally a one-off cost, but as you mention, I think the overall ratio of cost to animals helped is significantly better for shrimp stunning in my opinion, and the avenue for industry adoption is much clearer and more likely.
Just a couple of points on the original comment about AIM:
@mildlyanonymous, I’m glad you brought up the perception of the animal movement regarding AIM. I must say, I don’t have the same negative perception as you do, but this may be due to:
i) motivated reasoning on my part as an AIM incubatee, and
ii) feedback I get from the overall movement may be filtered by my interlocutors because of said affiliation
In any case, I would really invite whoever feels that AIM is ‘not collaborative with the movement’ to look again. AIM has launched or is planning to launch several organisations which are actively designed to support the movement:
To grow in Africa (AAA)
To bring in more talent into the movement (AAC)
Help orgs in the movement make better decisions (Animal Ask)
Bring in more money to a resource-strained cause area (work in progress)
If this is not the very definition of collaboration, I don’t know what is
Regarding SWP not doing what CE originally proposed we do: I’ve mentioned this openly in at least a couple of interviews (80K, HILTLS). My goal was not to demerit AIM’s research but rather to say that there is only so much one can learn from desktop research in a low-evidence space such as animal welfare, and it is the role of the founding team to explore the different permutations and see what sticks
IMO, AIM’s reports need to lay out at least a promising intervention, do a cost-effectiveness analysis on it (among other things), and see how it compares to say, cage-free campaigns to decide whether to kill it or explore deeper
I apologise in advance for not engaging further with the comments about AIM / animal movement but we are very (human) resources constrained at SWP and the case in favour of AIM has been sufficiently established IMO
Our ToC indeed aims to move the Overton window in such a way that eventually high-leverage stakeholders (e.g. retailers, certifiers) feel confident to demand the use of electrical stunning beyond the capacity of SWP to fund
On the other hand, none of our funders has included this as a strict condition because:
i) it is much harder to measure, and much more importantly
ii) the intervention looks sufficiently impactful and cost-effective without having to incorporate such second-degree effects
2. CE’s charities working on animal welfare have mostly not been very good, and listening to external feedback prior to launching them would have told them this would happen.
[...] doesn’t do CE’s original proposed idea anymore
On the point of the charities not doing CE’s originally proposed idea anymore, I want to clarify that we don’t see charities tweaking an idea as a failure but rather as the expected course of action we encourage. We are aware of the limitations of desktop research (however in-depth), and we encourage organizations to quickly update based on country visits, interactions with stakeholders, and pilot programs they run. There is just some information that a researcher wouldn’t be able to get, and they need input from someone working on the ground. For example, when Rethink Priorities was writing their report on shrimp welfare, they consulted SWP extensively to gain that perspective. Because CE charities operate in extremely neglected cause areas, there is often no other “implementer” our research team can rely on. Therefore, organizations are usually expected to change the idea as they learn in their first months of operations. I see this as a success in ingraining the values of changing one’s mind in the face of new evidence, seeking this evidence, and making good decisions on the side of co-founders with the support of their CE mentors, and we are happy when we see it happen. There is a complex trade-off to be made when balancing the learning value from more in-depth desktop research vs. more time spent on learning as one implements, and I don’t think CE always gets it right, but the latter perspective is often misunderstood and underappreciated in the EA space.
Regarding charities specifically, in general, we expect about a 2⁄5 “hit rate” (rarely because of the broad idea being bad, more often because the implementation is challenging for one reason or another), and many people, including external charity evaluators and funders, have a different assessment of some of the charities you list. That being said, if you have any specific feedback about the incubated organization’s strategies or ideas, please reach out to them. As you mentioned, they are open to hearing input and feedback. Similarly, if you have specific suggestions about how CE can improve its recommendations, please get in touch with our Director of Research at sam@charityentrepreneurship.com; we appreciate specific feedback and conversation about how we can improve. Thank you for your support of multiple CE charities so far!
I definitely agree that organizations should pivot as they learn about how an intervention works in practice. I think the errors I refer to are more of the type: a cursory glance from an animal welfare scientist could have told you your research was missing key considerations, and the charity would not have wasted time on the recommended intervention. These issues seem cheap and easy to prevent.
Thanks for clarifying! We always have an expert view section in the report, and often consult animal science specialists, but it is possible we missed something. Could you tell me where specifically we made a mistake regarding animal science that could have changed the recommendation? I want to look into it, to fact-check it, and if it is right not to make this mistake in the future.
It looks like the report has been taken down, but I think the degree to which you pushed dissolved water oxygenation for fish welfare before launching Fish Welfare Initiative is an especially strong example of this. At the time I heard skepticism from many experts. You can see a reference to that report in this post. This report is another example of something that I think would not have passed any kind of rigorous external review.
Thanks! Can you tell me more about why you think improving dissolved oxygen is not a good idea? I still consider poor dissolved oxygen to be a major welfare problem for fish in the setting where the charity is expected to operate, and improving it through various means (assuming we also keep stocking density constant or decrease it) would be good for their welfare. This has been validated in the field by FWI in this assessment and studied by others, so I’m a bit surprised. Unless you are referring to specific interventions to improve dissolved oxygen, whose cost-effectiveness I’m highly uncertain about.
And about the report you link, I broadly agree and have written about it below.
Hey, just chiming in here on behalf of the organization I co-founded (Fish Welfare Initiative). We went through AIM’s charity incubation program in 2019—their first formal cohort.
The following are a couple points I had:
1 - Echoing requests for evidence
As some people have already commented above, insofar as you have serious criticisms about various charities (CE or otherwise) it’d be helpful for you to provide some evidence for them.
In particular, it’d be interesting to learn more why you think AAC is “okay”, why Animal Ask “hasn’t had much impact”, and/or why FWI “hasn’t worked very well.”
I really would be happy to consider these arguments, but I first want to understand them.
I would personally wager that CE leading giving in the animal space would be net-negative for the space compared to the status quo (which, to be fair, is very bad already)
It’d also be helpful to know why you think the animal space, or maybe just giving in the animal space, is “very bad already”. (I know that in particular might be a lot for you to respond to though.) This brings me to my second point.
2 - Just because animal/CE charities are flawed doesn’t mean they’re not worth supporting.
One thread of your comments is one I really resonate with: The animal movement is not good enough. Our evidence is often subpar, decisions are made hastily, we don’t have the right people, etc. Unfortunately, I think this is all true.
But what should we really do differently? If, as you suggest, CE produces not super great animal charities, but it’s still (as you say) “the best bet in the animal space for future new high impact orgs”, then should we just resign ourselves to not launching and running any new animal-focused charities?
My point here is that just because something isn’t as good as we would like (e.g. IMO the best animal charities don’t have even 10% the evidence base of GiveWell’s top charities), that doesn’t mean they’re not worth doing or supporting. Sometimes I think we do ourselves a disservice by always comparing ourselves to human health/poverty alleviation charities: These human-focused orgs literally have decades or even a century more of an evidence base built up than we do. They don’t have an entrenched opposition. And they aren’t trying to change something people derive pleasure from 3 times a day.
We need to build a large and effective movement for reducing animal suffering and ending factory farming. That is going to require starting somewhere, no doubt with lots of early mistakes in the early days.
Of course, I don’t mean to say that anything goes—some ideas are still certainly too dumb to start and some charities too poorly-run to continue. However, I think we need to appreciate that we’re in the very early days of animal advocacy and we need to think about our approaches as such.
3 - On taking the advice of the EA Funds and OpenPhil over CE
This seems to be an important actionable takeaway you’d like people to have:
>>I don’t think donors should take much guidance from them, compared to OpenPhil or the EA Animal Welfare Fund
Just wanted to point this out in case you’re not already aware, but these two granting bodies already heavily grant to CE-incubated animal orgs.
For instance:
FWI has received about 5 grants from the EA AW fund over the years, and 1 grant from Open Philanthropy.
Animal Ask has received at least 1 grant from the EA AW fund and 2 grants from OpenPhil.
And I believe SWP and AAC have also received money from one or both of these funders.
So it seems like either you should think that a) CE animal orgs are actually more promising than you claimed, b) the EA AW Fund and OpenPhil are actually less promising than you implied, or c) these funds are just scraping the bottom of the barrel and grant to CE orgs for lack of better options.
Fwiw, and after talking a reasonable amount with these funders, I’m fairly confident the correct answer is mostly A here.
4 - About Fish Welfare Initiative (FWI) specifically
It’s worth noting that FWI has varied a fair bit from the original idea (see the short published report here) that CE had made when we first launched. Broadly though, CE didn’t give us that definite a direction—rather, we understood that there are serious problems with how humans raise farmed fish, that dissolved oxygen is one of them, and that we should do further research to design a specific intervention to help them. Of course it would have been better if there was better research or a more concrete direction for us to go in, but again: we are in the early days of the animal movement and there’s still not enough of an evidence base for most things.
I also agree with Karolina above that it’s not necessarily bad that charities pivot from the original idea (provided that they pivot to something useful).
As for how promising FWI is today, I’d be interested to hear (as I stated in Point 1 above) why you think FWI “hasn’t worked very well”. As I state in Point 2, I think we have certainly made loads of mistakes, but that we’re also having a moderate impact right now and investing in tackling a very important and very neglected problem. You can learn more specifically about all this in our last year in review, or also by seeing our current projects.
Also as mentioned in Point 3, we have received grants from OpenPhil and the EA AW Funds, and are a recommended charity by ACE. Perhaps you think that these organizations have made some mistake in recommending FWI, but then I think you’re in a position of doubt on the entire animal movement (which, to be fair, seems like that might be the position you are in). To that, I would say see my Point 2—these are the early days, and even though no org is perfect we need to start somewhere.
5 - Feel free to dm me
I think it’d be interesting to hear your response to some or all of these points publicly as other people seem to have similar questions, but if you feel uncomfortable doing that feel free to dm or email me. I think there’s a good chance we already know each other, in which case I’d be especially interested to chat more to come to some shared truth here.
I’ll say something I said to Joey earlier in this thread—I expect that the best animal charities in the future will come out of AIM, but it will come with a lot of avoidable waste of funds and talent due to the issues underlying my concerns. I think AIM focusing on their skills at incubating charities, and less on what I believe are weaknesses or threats (coordinating donors and research), would be much better for the space.
There is a widely held view in the animal research community that CE’s reports on animal welfare consistently contain serious factual errors
To the extent this view is both valid and widely-held, and the reports are public, it should be possible to identify at least some specific examples without compromising your anonymity. While I understand various valid reasons why you might not want to do that, I don’t think it is appropriate for us to update on a claim like this from a non-established anonymous account without some sort of support.
My goal here is not to provide this to the EA Forum, but to caution donors to do further due diligence. That said, I mentioned a few examples of more egregious research failures in another comment.
I’ll also add that the original comment still has positive karma and many agree votes, and that many of the disagree votes seem to come from AIM staff and incubatees, not necessarily from others in animal welfare research. At a minimum, I think that should be a flag for many people to take these concerns seriously.
I don’t think donors should take much guidance from them, compared to OpenPhil or the EA Animal Welfare Fund, and I would personally wager that CE leading giving in the animal space would be net-negative for the space compared to the status quo (which, to be fair, is very bad already).
If you had total control over all donations in the EA animal space, how would you change things compared to the status quo?
For the main point of your argument, I echo Vasco Grilo’s point that your critiques of specific charities would be more compelling with justification or sources backing up your views. For any given charity idea, I have no reason to think that the fact that somebody on the internet thought it was a bad idea prior to launch correlates with that idea actually being bad. Every new idea has people who are sceptical of it—that doesn’t provide much information one way or the other. I’d be more interested to see a detailed evaluation of each charity in terms of the actual impact they may have (or have not) delivered. I can only speak for my experience at Animal Ask, but a couple of recent, detailed evaluations do exist, and we invest a great deal of energy into critically evaluating our own work (and having it evaluated by others).
(As always, my views are my own, not those of my employer.)
I agree that detailed evaluations would be better than my narrative impressions. My main point is to warn people to do way more due diligence on CE. Even if the reputation is undeserved, the organization has a negative reputation among many in animal advocacy, especially in research and grant evaluation, and that is worth looking into.
I don’t really have strong views on how to allocate funds in the animal space, but I doubt it is through funding circles, which usually seem worse for charities even if they are better for donors and the space overall (e.g. the degree of dislike that charities have for the existing Farmed Animal Funders circle seems like an indicator of something important).
Firstly, I want to acknowledge that this comment has probably been pretty valuable in terms of sharing feedback for the CE team about perceptions that maybe a lot of people were unaware of, so thanks for raising some concerns that you and others might be having. I’ll also just say as a co-founder of a CE-incubated charity, I am far from impartial, but I think sharing some inside information could be helpful here.
My main response is to the first 2 comments because I have no real knowledge of the last point.
CE is setting a norm for using research or evidence (however limited) as a basis for starting a charity.
CE actually uses research & evidence to inform starting charities in the animal welfare space. This is not the norm! I think even establishing this as what you should be looking at is relatively new to the animal welfare space and should be acknowledged and praised. Currently, the majority of charities started in the animal welfare space are not backed by research or significant evidence; from what I have observed, founders usually think something is a good idea, are relatively charismatic, and subsequently get funding. So even though I agree that the research can be improved, and I think it’s helpful to flag this to Karolina and Joey, I think the starting point of CE charities is a lot stronger than that of other charities in this space. So, really, I think we should commend CE for trying to establish any kind of research as the norm and basis for starting a charity. (I think the animal rights movement could be much better if this was a standard all new charities adopted.)
CE was never extremely confident in their own research when presenting it internally to incubated charities; it was merely a foundation. They also established failure-mode thinking in our impact assessments, which is another great norm.
I can only talk about cohort 1, which was FWI and AAC. In this cohort, CE presented research suggesting the potential for something really impactful somewhere in this general space. It was then up to the co-founding team to go out and do deeper research, including getting more external feedback, to validate the research, decide whether the charity was really worth starting, and work out how to execute it best. CE also embedded into our thinking that we shouldn’t necessarily expect our charities to succeed and that we should have clear failure points to assess whether it’s worth continuing, taking into consideration the counterfactual uses of the movement’s money. Again, I think this is a great norm to establish; many charities in the animal welfare space and in other sectors do not do this. They merely carry on without these assessments. Healthier Hens declared shutting down because of this, and I think this should be celebrated, not used as a signal of poor research. So basically, if you are mad about new charities not collaborating enough, I think that’s on us, not CE.
I think your main point (which is a valid concern) is whether CE charities are net a good use of movement resources. To date, just speaking about AAC, we estimate adding over $2,000,000 of counterfactual value to other animal welfare organisations with a spend of just over $750,000 in under 5 years.
I agree with Haven’s point that the animal movement needs to do better and be better. But as you and others have said, I still think CE charities are some of the best in the movement. If we don’t try to create new good organisations addressing gaps in the movement, I don’t think we are going to realistically accelerate towards ending factory farming. The question is, do you know a better incubator programme to start new organisations than CE, or do you just want them to improve a bit?
SWP, Animal Ask, Healthier Hens, FWI and AAC have all been supported by either EA animal welfare fund or Open Phil or both (in the case of most of us) so I would be really surprised if there were that much difference between the alternative funding perspective you are suggesting here.
From AAC’s point of view, I would be interested to know your concerns on scalability, because I think there are infinite ways we can scale; it’s more about us selecting the right one. I’d love feedback on this, so feel free to DM me from your anonymous account. We have supported over 150 organisations in bringing talent into critical positions they were struggling to hire for, with 90 candidates landing positions, and have also brought in $408,000 of counterfactual funding to other organisations. Currently, we estimate (conservatively, based on most donors’ feedback) that for every $1 we spend, $2.50 of value is added to the movement, which suggests we are net positive to the movement. We have plans to double this ratio by the end of this year.
In conclusion, of course, CE has areas to improve, as we all do. Still, I think this is a pretty harsh analysis of an organisation adding a considerable amount of value and norms to the animal advocacy movement on founding charities. I think they would add a lot of value to bringing these values and norms into the donor landscape as there is a gap and CE has a pretty good track record in doing this in other donor circles like the Meta Funding Circle etc.
I think “hostile” was probably slightly too strong a word, though I will note that the original comment still has positive upvotes and many agree votes, yet no one else is defending this position in the comments, which is concerning, and it is mostly CE staff and incubatees responding
I will note that the original comment still has positive upvotes
I (and others) have strongly upvoted it because (especially post-FTX[1]) it’s important to encourage people to share concerns about unethical behavior from influential people in the ecosystem, it’s not an indication of agreement.
Agree-votes do convey a lot of information, and I’m surprised that nobody else is defending this position in the comments, given 7 people agree with you.
I found one of the examples here very unpersuasive: I read this report years ago and I distinctly remember it was very clear that it was meant to “get a quick sense of things”, only had a few hours of research behind it, and wasn’t meant to pass any kind of rigorous research. It was the first thing I read about animal welfare and it was enlightening, I’m grateful that they published it. Here is the first paragraph:
After spending considerable time on creating the best system we could for evaluating animal welfare, we applied this system to 15 different animals/breeds. This included 6 types of wild animal and 7 types of farm animal environments, as well as 2 human conditions for baseline comparisons. This was far from a complete list, but it gave us enough information to get a sense of the different conditions. Each report was limited to 2-5 hours with pre-set evaluation criteria (as seen in this post), a 1-page summary, and a section of rough notes (generally in the 5-10 page range). Each summary report was read by 8 raters (3 from the internal CE research team, 5 external to the CE team). The average weightings and ranges in the spreadsheet below are generated by averaging the assessments of these raters.
(I am not affiliated with CE, but it would be important for me to know if their research was bad)
Most of these are just “people in the space knew this wouldn’t work”. Could you share more specific criticisms? As Aidan said, the biggest successes come from projects no one else would do, so without more information this seems like a very weak criticism.
Just to note: I have a COI in commenting on this subject.
I strong downvoted your comment, as it reads to me as making bold claims whilst providing little supporting evidence. References to “lots of people in this area” could be considered to be a use case of the bandwagon fallacy.
It also concerns me that I’ve seen 5 instances of this post being disagree-voted and strong-downvoted, with a CE staff member commenting right after. Those are obviously things people have a right to do, but if it is CE staff downvoting and disagreeing, it means that, outside of CE staff, this post might have fairly strong agreement from many people. That seems like an important note, given that the post still has very positive karma and, without those votes, might have positive agreement on balance.
For what it’s worth, I have no affiliation with CE, yet I disagree with some of the empirical claims you make — I’ve never gotten the sense that CE has a bad reputation among animal advocacy researchers, nor is it clear to me that the charities you mentioned were bad ideas prior to launching.
Then again, I might just not be in the know. But that’s why I really wish this post was pointing at specific reasoning for these claims rather than just saying it’s what other people think. If it’s true that other people think it, I’d love to know why they think it! If there are factual errors in CE’s research, it seems really important to flag them publicly. You even mention that the status quo for giving in the animal space (CE excepted) is “very bad already,” which is huge if true given the amount of money at stake, and definitely worth sharing examples of what exactly has gone wrong.
I learned today that AIM reached out to at least one organisation to try to deanonymize me after I posted this. I was also told they did some amount of coordinating the responses to it. Given that and the power they hold, I won’t talk about this further, as it’s made me feel unsafe in critiquing them. This was already the reason I left these comments anonymously.
Animal Advocacy Careers: Okay, but not a super promising/scalable intervention, still does CE’s original proposed idea
Fish Welfare Initiative: Hasn’t worked very well, and seemed like it wouldn’t in advance, doesn’t do CE’s original proposed idea anymore, more feedback ahead of time would have told them not to do the original idea
Animal Ask: Was a bad idea prior to launching, hasn’t had much impact, people in the space were skeptical ahead of launch
Healthier Hens: Was a bad idea prior to launching, hasn’t had much impact, people in the space were skeptical ahead of launch
Animal Policy International: Was a bad idea prior to launching, too short a time has elapsed to assess
Incubating charities is difficult, and CE definitely shouldn’t expect 100% to work out. I think 1.25 charities out of 6 being good is still a quite solid success rate. But most of these charities seemed like bad ideas to many other people in the space prior to launching; CE was given that feedback and seemed to fail to act on it. This is also the case with their most recent batch of charities that haven’t yet launched. CE is typically much more confident in their own research than in external feedback, which seems bad given concern 1.
3. CE is fairly hostile to external feedback on their animal welfare work
CE has a fairly strong reputation of being hostile / non-collaborative in the animal welfare space. While their charities and founders tend to be very open to feedback and willing to work with others, CE itself has consistently been non-collaborative to other groups in the space to the degree that their staff are sometimes not invited to coordination events or meetings.
Hello! One point that seems important to make: “People in the space” being skeptical of a startup idea, or even being confident it’s a bad idea, is not good evidence that it’s a bad idea.
Whilst we can expect subject matter experts to be skeptical of ideas that turn out to be bad, we can also expect them to be skeptical of a lot of ideas that turn out to be good!
This is true of many extremely successful for-profit start-ups (it’s mentioned in Y-Combinator lectures a lot) and of non-profits as well, including many of CE’s most successful incubated charities. If I’m not mistaken, it’s even true of CE itself! When its co-founders asked experts for their advice when considering launching the Incubation Program, the majority view was that it was a bad idea. I, for one, am glad they didn’t listen! If you look around the room you’re in now, you’re almost certainly surrounded by a number of inventions that “people in the space” said were impossible or a bad idea. The job of an entrepreneur (or a researcher identifying start-up ideas for entrepreneurs) is to figure out when “people in the space” are wrong, or at least likely enough to be wrong for it to be worth trying.
So in conclusion: Track record of CE incubated charities—good indicator. Whether or not people in the space were skeptical—not a good indicator.
I think this oversimplifies something a lot more complex, and I’m surprised it’s a justification you use here. Of course, on some level what you’re saying is correct in many cases. But imagine you recommend a global health charity to be launched. GiveWell says “you’re misinterpreting some critical evidence, and this isn’t as impactful as you think”. Charities on the ground say “this will impact our existing work, so try doing it this other way”. You launch the intervention anyway. The founders immediately get the same feedback, including from trying the intervention, then pivot to coordinating more and aligning with external experts.
This seems much more analogous to what happens in the animal space, and in that context skepticism from people in the space absolutely is a good indicator. Charities aren’t for-profits, which exist in a vacuum of their own profitability. They are part of a broader ecosystem.
Hi, I am Charity Entrepreneurship (CE, now AIM) Director of Research. I wanted to quickly respond to this point.
I believe this refers to an incident that happened in 2021. CE had an ongoing relationship with an animal advocacy policy organisation, occasionally providing research to support their policy work. We received a request for some input, and over the next 24 hours we helped that policy organisation draft a note on the topic at hand. In doing so, a CE staff member copied and pasted text from a private document shared by another animal advocacy research organisation. This was plagiarism and should not have happened. I would like to note two things: firstly, this did not happen in the course of our business-as-usual research process but in a rushed piece of work that bypassed our normal review process; secondly, this report was not directly published by us, and it was not made clear to the CE staff member involved that the content was going to be made into a public report (most other work for that policy organisation was used privately), although we should of course have considered this possibility. These facts do not excuse our mistake, but they are relevant for assessing the risk that this was any more than a one-off mistake.
I was involved in responding when this issue came to light. On the day the mistake was realised, we acknowledged the mistake, apologised to the injured party, pulled all publicity for the report, and drafted an email to the policy org asking to have the person whose text was copied added as a co-author (the email was not sent until the following day, as we waited for approval from the injured party). The published report was updated. Over the next three weeks we carried out a thorough internal risk assessment, including reviewing all past reports by the same author. The other animal advocacy research organisation acknowledged they were satisfied with the steps taken. We found no cases of plagiarism in any other reports (the other research org agreed with this assessment), although one other tweak was made to a report to make an acknowledgment more clear.
FWIW, I find mildlyanonymous’ description of this event somewhat misleading in referring to multiple “reports” and claiming “CE failed to address this”.
I don’t know what this is about. I know of no case where we have ignored feedback. We are always open to receiving feedback on any of our reports from anyone at any time. I am very sorry if I or any CE staff ignored you, and I am open to hearing more about this and/or hearing about any errors you have spotted in our research. If you can share any more information, I can look into it; please contact me (I will PM you my email address; note I am about to go on a week’s leave). It is often the case that if we receive minor critical feedback after a report is published, we do not go back and edit the report but note the feedback in our Implementation Note for future Charity Entrepreneurship founders; maybe that is what happened.
Thanks for sharing this! It differs from the narrative I’ve heard elsewhere in critical ways, but I don’t really know much about this situation, and just appreciate the transparency.
I went through the old emails today, and I am confident that my description accurately captured what happened and that everything I said can be backed up.
First a meta note less directly connected to the response:
Our funding circles fund a lot of different groups, and there is no joint pot, so it’s closer to a moderated discussion about a given cause area than CE/AIM making granting calls. We are not looking for people to donate to us or our charities, and as far as I understand, OpenPhil and AWF do not have a participatory way to get involved other than just donating to their joint pot directly. This request is more aimed at people who want to put in significant personal time to making decisions independent from existing funding actors.
More connected response:
Thanks for the thoughts, and the support you have given our past charities. I can give a few quick comments on this. Our research team might also respond a bit more deeply.
1) Research quality: I think in general, our research is pretty unusual in that we are quite willing to publish research that has a fairly limited number of hours put into it. Partly, this is because our research is not aimed at external actors (e.g., convincing funders, the broader animal movement, other orgs) so much as at people already fairly convinced on founding a charity, and at a quite specific question of what would be the best org to found. We do take an approach that is more accepting of errors, particularly ones that do not affect endline decisions connected directly to founding a charity. E.g., for starting a charity on fish in a given country, we are not really concerned about the number of fish farmed unless that number is a significant determining factor in whether to found a charity in that space. We have gone back and forth on how much transparency to have on research and how much time to spend per report, and have not come to a fixed answer. We are more likely to get criticism/pushback with higher transparency + lower hours per report, but typically think this will still lead to more promising charities in the end.
2) CE’s animal charity quality: I think both our ordering and assessment of charity quality would be different from what is described here. I also think the animal welfare funds’ and Open Phil’s (both of whom have funded the majority of these projects) assessments would not match your description. However, in some ways, these are small differences, as our general estimate is that 2⁄5 charities in a given area are highly promising. It is quite a hits-based game, and that is about the number we would expect (and would rank internally) as performing really well.
2.5) Feedback on animal charities: I did a quick review of the charities that got the most positive vs. negative feedback at the time of idea recommendation from the broader animal community, relative to your rank order and relative to our internal one, and did not find a correlation. Generally, I think the space is pretty uncertain, and thus the charities that got the most positive expectations were typically those that deviated least from actions already taken in the space. I think that putting more time into the research reports (including getting more feedback) is one way to improve charity quality (at the cost of quantity), but I’m pretty skeptical it’s the best way. So far, the biggest predictive factor has not been idea strength but the founder team, so when thinking about where to spend marginal resources to improve charities, I would still lean that way (although it’s far from clear that will always be the case).
3) I would be interested in doing a survey on this to get better data. I get the impression that we are seen as pretty disconnected from the animal space (and I think that is fairly true). I think we are far more involved in, e.g., the EA space, both when it comes to more formal research and when it comes to softer social engagement. I think our charities tend to go deeper into whatever area they are focusing on than our team does, and I am pretty comfortable with that. I would not be surprised if we are both invited to and attend fewer coordination events and meetings connected to the animal space; we like to stay focused quite directly on the problems we are working on.
Thanks again for writing this up. I put some chance that these are issues that are correct and important enough to prioritize, and it’s valuable to get pushback and flags even if we end up disagreeing about the actions to take.
It would be helpful if you engaged with the plagiarism claims, because it is concerning that CE is running researcher training programs while failing to handle that well. I agree with the rest of what you say here as being tricky, but think that it is pretty bad that you publish the low confidence research publicly, and it’s led to confusion in the animal space.
+ 2.5 - I think if your ordering is significantly different, it’s probably fairly different than most people in the space, so that’s somewhat surprising/an indicator that lots of feedback isn’t reaching you all.
To be clear, I am certain that CE staff have not been invited to events in the animal welfare space due to impressions of your organization being unwilling to be cooperative.
My main view is that animal donors should seriously engage in a vetting process prior to taking large amounts of guidance on donations from CE / shouldn’t update on your research in meaningful ways. I still think CE is probably the best bet in the animal space for future new very high impact organizations in the space as well though, so it’s a tricky balance to critique CE. I’d bet that a fair number of the best giving opportunities in the animal space in 5 years will have come out of CE, but that it’ll also have come with a large amount of generally avoidable wastes of funding and talent.
Do you think there are additional steps you could/should take to make this philosophy / these limitations clearer to those who come across your reports?
I strongly support more transparency and more release of materials (including less polished work product), but I think it is essential that the would-be secondary user is well aware of the limitations. This could include (e.g.) noting the amount of time spent on the report, the intended audience and use case for the report, the amount of reliance you intend that audience to place on the report, any additional research you expect that intended audience to undertake before relying on the report, and the presence of any significant issues / weaknesses that may be of particular concern to either the intended audience or anticipated secondary users. If you specifically do not intend to correct any errors discovered after a certain time (e.g., after the idea was used or removed from recommended options), it would probably be good to state that as well.
Hi, I am Charity Entrepreneurship (CE, now AIM) Director of Research. I wanted to quickly respond to this point.
– –
Quality of our reports
I would like to push back a bit on Joey’s response here. I agree that our research is quicker, scrappier, and goes into less depth than that of other orgs, but I am not convinced that our reports have more errors or worse reasoning than the reports of other organisations (thinking of non-peer-reviewed global health and animal welfare organisations like GiveWell, OpenPhil, Animal Charity Evaluators, Rethink Priorities, Founders Pledge).
I don’t have strong evidence for thinking this. Mostly I am going off the number of errors that incubatees find in the reports. In each cohort we have ~10 potential founders digging into ~4-5 reports for a few weeks. I estimate there are on average roughly 0.8 non-trivial, non-major errors (i.e. something that would change a CEA by ~20%) and 0 major errors highlighted by the potential founders. This seems in the same order of magnitude as the number of errors GiveWell gets on scrutiny (e.g. here).
And ultimately all our reports are tested in the real world by people putting the ideas into practice. If our reports do not line up with reality in any major way, we expect to find out when founders do their own research or a charity pivots or shuts down, as MHI has done recently.
One caveat to this is that I am more confident about the reports on the ideas we do recommend than about the reports on non-recommended ideas, which receive less oversight internally (as they are less decision-relevant for founders) and less scrutiny from incubatees and from being put into action.
I note also that in this entire critique and having skimmed over the threads here no-one appears to have pointed out any actual errors in any CE report. So I find it hard to update on anything written here. (The possible exception is me, in this post, pointing to the MHI case which does seem unfortunately to have shut down in part due to an error in the initial research.)
So I think our quality of research is comparable to other orgs, but my evidence for this is weak and I have not done a thorough benchmarking. I would be interested in ways to test this. It could be a good idea for CE to run a change-our-mind contest like GiveWell’s in order to test the robustness of our research. Something for me to consider. It could also be useful (although I doubt it is worth the effort) to have some external research evaluator review our work and benchmark us against other organisations.
[EDIT: To be clear talking here about quality in terms of number of mistakes/errors. Agree our research is often shorter and as such is more willing to take shortcuts to reach conclusions.]
– –
That said I do agree that we should make it very very clear in all our reports the context of who the report is written for and why and what the reader should take from the report. We do this in the introduction section to all our reports and I will review the introduction for future reports to make sure this is absolutely clear.
I think it is quite clear that a lot of your research isn’t at the bar of those other organizations (though I think for the reasons Joey mentioned, that definitely can be okay). For example, I think in this report, collapsing 30 million species with diverse life histories into a single “Wild bug” and then taking what appear to be completely uncalibrated guesses at their life conditions, then using that to compare to other species is just well below the quality standards of other organizations in the space, even if it is a useful way to get a quick sense of things.
Thanks for the example! That makes sense and makes me wonder if part of the disagreement came from thinking about different reference classes. I agree that, in general, the research we did in our first year of operations, so 2018/2019, is well below the quality standard we expect of ourselves now, or what we expected of ourselves even in 2020. I agree it is easy to find a lot of errors (that weren’t decision-relevant) in our research from that year. That is part of the reason they are not on the website anymore.
That being said, I still broadly support our decision not to spend more time on research that year. Spending more time on it would have come with significant tradeoffs. At the time, there was no other organization whose research we could have relied on, and the alternative to the assessment you mention was either to not compare interventions across species (or reduce the comparison to a simplistic metric like “the number of animals affected”), or to spend more time on research and run the Incubation Program a year later, in which case we would have lost a year of impact and might not have started the charities we did. That would have been a big loss: for example, that year we incubated Suvita, whose impact and promise were recently recognized by GiveWell, which provided Suvita with $3.3M to scale up; and we incubated Fish Welfare Initiative (FWI) and Animal Advocacy Careers, a decision I still consider to be a good one (FWI is an ACE Recommended Charity, and even though I agree with its co-founders that their impact could be higher, I’m glad they exist). We also couldn’t simply hire more staff and do things more in-depth, because it was our first year of operation and there was not enough funding or other resources available for what was, at the time, an unproven project.
I wouldn’t want to spend more time on that, especially because one of the main principles of our research is “decision-relevance,” and the “wild bug” one-pager you mention or similar ones were not relevant. If it were, we would not have settled on something of that quality, and we would have put more time into it.
For what it is worth, I think there are things we could have done better. Specifically, we could have put more effort into communicating how little weight others should put on some of that research. We did that by stating at the top (for example, as in the wild bug one-pager you link), “these reports were 1-5 hours time-limited, depending on the animal, and thus are not fully comprehensive.” and at the time, we thought it was sufficient. But we could have stressed epistemic status even more strongly and in more places so it is clear to others that we put very little weight on it. For full transparency, we also made another mistake. We didn’t recommend working on banning/reducing bait fish as an idea at the time because, from our shallow research, it looked less promising, and later, upon researching it more in-depth, we decided to recommend it. It wouldn’t have made a difference then because there were not enough potential co-founders in year 1 to start more charities, but it was a mistake, nevertheless.
Hi mildlyanonymous. I work on the philanthropy programs at AIM. One thing to keep in mind here is that the Foundation Program and funding circles are distinct from the Incubation Program. We never tell any funder where to donate, all funding decisions are independently-made, and if you look at past funding circle updates (Meta, Mental Health), most grantees are not CE incubatees. Having worked on several funding circles, I am a huge believer that communities of grantmakers can have outsized impact compared to working alone. While I myself am a fan of CE’s animal research and incubatees, this disagreement doesn’t really have any bearing on the programs Joey was referring to in this post. All the same, thank you for your thoughts!
Thanks for sharing your thoughts! I feel like your comment would be more valuable/credible if you elaborated further on your claims. You say the ideas for the animal charities were bad, but you provide no justification, and most of them not succeeding should not update one much given charities being successful is unlikely on priors.
I’m mainly trying to convey what seemed to be a sentiment among many who worked in research in animal advocacy in response to seeing these ideas, though I agree with your second point.
As an example of this, I think people felt that Animal Ask and Healthier Hens both failed to account for why the animal space had consolidated work on a few specific asks over the last few years (because corporations weren’t sure how to prioritize across many asks, and focusing on just one at a time helped keep their attention focused). This feedback was conveyed to CE ahead of time but mostly ignored, and then became a route to failure for their early work.
At Animal Ask we did later hear some of that feedback ourselves and one of our early projects failed for similar reasons. Our programs are very group-led, as in we select our research priorities based on groups looking to pursue new campaigns. This means the majority of our projects tend to focus on policy rather than corporate work, given more groups consider new country-specific campaigns and want research to inform this decision.
In the original report from CE, they do account for the consolidation of corporate work behind a few asks. They expected the research on corporate work to be “ongoing”, “deeper”, and “more focused”. So strategically it would look more like research carried out throughout the previous corporate campaign to inform the next, with a low probability of updating any specific ask. The expectation is that it could be many years between the formation of corporate asks.
So in fact this consolidation was highlighted in the incubation program as a reason success could have so much impact: with the large amount of resources the movement devotes to these consolidated corporate asks, ensuring they are optimised is essential.
As Ren outlined, we have a couple of recent, more detailed evaluations, and we have found that the main limitations on our impact are factors only a minority of advisors in the animal space highlighted. These are constraints from other organisations’ stakeholders: either upper management (when the campaigns team had updated on our findings but there was momentum behind another campaign) or funders (particularly individual or smaller donors, who are typically less research-motivated than OPP, EAAWF, ACE, etc.).
You can see this was the main concern for CE researchers in the original report. “Organizations in the animal space are increasingly aware of the importance of research, but often there are many factors to consider, including logistical ease, momentum, and donor interest. It is possible that this research would not be the determining factor in many cases”.
My impression is that Healthier Hens wouldn’t have caused confusion for corporations dealing with other major asks like cage-free or broiler asks, because HH was planning to work directly with different targets, specifically farms and feed mills, and in Kenya to start. Do you mean it would have just been better to further support corporate cage-free and broiler campaigns (about which you’ve stated skepticism here), or another ask the movement would consolidate to focus on?
They discuss things that didn’t go well for them here: fundraising, feed testing, delays, survey response collection and (negative results in their) split-feeding trial.
(I don’t have much sense about the impact of Animal Ask, both how much impact they’re having and why. Some of their research looks useful, but I don’t know how their work is informing decisions or at what scale.)
FWIW in the early stages of Healthier Hens, I heard some of the following pieces of feedback which IMO seem significant enough that it may have been a bad decision for CE to recommend a feed fortification charity for layer hens:
Feed costs are approximately 50% of costs for farmers, so interventions that make feed even more expensive are likely to be hard to achieve
CE’s report focuses on subsidising this feed for farmers to lessen the potential risk of the above point, but I think misses the crucial factor where most animal funders don’t want to subsidise the animal agriculture industry without a clear mechanism for passing these costs over to industry, hence making fundraising quite hard (which did turn out to be true)
Following on, if the subsidisation avenue was not pursued, it’s not clear what leverage Healthier Hens (or any other feed fortification charity) would have over feed mills or farms to get them to significantly increase their costs of production. For example, in the report, CE says “Entrepreneurs may pivot based on their own research: for example, they may instead partner with certifiers to encourage them to include feed standards for calcium, phosphorus, and vitamin D3 in their standards” but again, this is a significant ask of farms (and therefore certifiers) which I think was glossed over in the report.
It’s also worth noting that the experts interviewed in this report were 1 free-range egg farmer, 1 animal nutritionist and 2 Indian animal advocates (as it was originally thought to work best in India). None of them mentioned the concerns above but the person I spoke to (involved in global corporate welfare) thought that if CE had spoken to someone with reasonable global campaigning / corporate welfare experience, these problems would have been unearthed. I’m not sure how true this is but thought it was relevant info to the above discussion.
(My overall view on the meta-comment by mildlyanonymous is that it’s too vague to be useful and hard to verify many things but the intention of reducing poor allocation of talented co-founders and scarce funding is important, hence suggesting improvements to CE’s research process does seem valuable)
Edited afterwards: I added “without a clear mechanism for passing these costs over to industry” to the second bullet point after Michael’s good point below.
I’m not sure if this really explains much or if the funders were acting rationally if it did. As one of its main interventions, SWP is currently buying and giving out electric stunners for free, which is essentially a subsidy in kind. SWP is supported by Open Phil, ACE and seems popular in the broad EA community among animal charities (I’d guess even just for the direct provision of stunners, not any legislative/corporate policy work to leverage it later), but maybe not (?) in the animal community outside of EA.
But maybe shrimp stunning looked better ex ante, given the number of shrimp it could affect per $ and better evidence supporting stunning than feed fortification for keel bone fractures. In fact, HH’s feed fortification trial actually made things worse for hens. SWP is already past a billion shrimp helped in expectation (maybe not just with stunners?). SWP had to get some evidence for the success of the intervention before scaling, but someone had to pay for that and the stunning trial.
If people are hesitant to subsidize the industry, maybe the benefits to animals vs money to industry ratio just looked much better for SWP than HH, and good enough to be worth supporting SWP stunner work but not HH.
FWIW, I think it’s worth doing more hen feed fortification trials, with different supplements or given on different schedules or doses, given the scale and severity of keel bone fractures (WFP), as well as the possibility that cage-free could be worse if and because it increases keel bone fractures.
Yeah good point re Shrimp Welfare Project! I should have said “most animal funders don’t want to subsidise the animal ag industry without a clear mechanism for passing these costs over to the industry”.
For example, in the case of SWP, my understanding is that SWP wants to get these relatively cheap stunners ($50k and only a one-off cost) for a few major producers to show both producers and retailers that it is a relatively cheap way to improve animal welfare with minimal/no impacts on productivity. Then, I believe the idea is to get retailers (e.g. like this) to commit only to sourcing from producers who stun their shrimps, thereby influencing more producers to buy these stunners out of their own pocket (and repeat until all shrimp are being stunned before slaughter).
I think the case with feed fortification with layer hens is much less obvious and less simple due to the impact of feed costs (which are significant and ongoing), so IMO it wasn’t clear to animal funders how these costs would be passed onto the industry at a later date, rather than subsidising feed fortification in perpetuity.
A smaller note is that there is also a very small number of animal funders who follow this suffering-reduction-focused theory of change so if one major funder (e.g. OP) doesn’t fund you, this can be very problematic (as in the case of Healthier Hens). Also many funders don’t act rationally, so it’s also important the research takes that into account (not convinced that funders weren’t acting rationally in this case though).
But do EAs (and major funders especially) support SWP because they expect SWP to accelerate industry adoption of stunners paid for by the industry (or by others besides SWP/animal advocates)?
Its stunner cost-effectiveness analysis and numbers of shrimp helped so far don’t reflect this possibility.
The ACE review barely discusses stunners, and only really in their section on room for more funding, where stunners account for essentially all the RFMF in 2024 and 2025, and there’s no mention of accelerated industry adoption of stunners not paid for by us.[1]
The EA Animal Welfare Fund grant just says “Purchase 4 stunners for producers committing to stun a minimum of 1.4k MT (~100 million) of shrimps/annum per stunner”.
Stunning equipment will break down over time and eventually need to be replaced. Maybe they’re assuming the companies will repair/replace the stunners at their own cost as they break down, but I imagine they expect this to look good with only a few years of impact per stunner (or didn’t take into account the fact that stunners will break down).
The written rationale of Open Phil’s most recent grant to SWP doesn’t mention the possibility, either: “Open Philanthropy recommended a grant of $2,000,000 over two years to the Shrimp Welfare Project. Focuses include installing stunners at major shrimp producers, reducing stocking density on shrimp farms in South Asia, and increasing industry awareness of shrimp welfare.”
Other than by SWP themselves, I haven’t seen ~any online discussion of this acceleration.
It’s possible the grantmakers are sensitive to the possibility of acceleration of industry adoption of stunners paid for by the industry and are granting in part based on this, but it doesn’t show up in their written rationales. They say very little about the stunner plans in general, though.
And should we have had similar expectations for feed fortification costs to eventually be passed on, and for HH to accelerate feed fortification paid for by the industry (or not us)? Eventually we can move on from cage-free asks when and where cage-free becomes the norm (or the law), say. Maybe this is complicated by the fact that many companies are international, though.
Stunners aren’t a one-off cost in general: they’ll need to be repaired and replaced eventually if we keep killing shrimp and want them stunned. Someone will have to pay for that, just like ongoing feed fortification. So the only question is whether and how much SWP and HH accelerate the industry (or others besides animal advocates) paying for the respective costs. And again, written grant rationales for SWP don’t mention this acceleration, so it’s not clear the grants depended on expected acceleration.
And HH wouldn’t be paying for all of the feed, just some supplements. I do think SWP’s stunners work looks more cost-effective ex ante than HH did, though.[2]
I think this highlights a methodological issue with ACE’s review process: it isn’t sufficiently sensitive to the details and ex ante cost-effectiveness of additional future funding. Its cost-effectiveness criterion is retrospective wrt outputs, but SWP’s future plans with additional future funding are very different from what its cost-effectiveness was assessed on, and the ACE review of its future plans with additional funding is very shallow.
EDIT: MHR’s CEA of stunners based on SWP’s CEA turned out a few times less cost-effective than historical corporate cage-free campaigns after accounting for moral weight and pain intensities and durations, so probably roughly competitive or better now, as Open Phil’s “marginal FAW funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis”.
CE’s CEA of subsidized feed fortification was 34 welfare points per dollar, assuming an overall probability of success of only 26%. The CEA for SWP assumes 100% probability of success. If we also assumed 100% for HH, HH would be at least 34/0.26 ≈ 130 welfare points per dollar conditional on success (possibly higher, because there are still costs if it fails). The difference between conventional cage and cage-free is probably around 50 or fewer welfare points per year of life by CE’s estimates (comparing USA FF laying hens (battery cages) to wild bird or FF beef cow, say). Corporate cage-free campaigns affected 54 years of life per dollar historically, so this would be <2700 welfare points/$ historically, and say <540 welfare points/$ (= 2700/5) now, so I’d guess still a few times better than HH at >130 welfare points/$ conditional on success.
SWP has some track record with stunners already, so it is reasonable to assign them a higher probability of success than HH ex ante, and this can increase the gap.
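As a sanity check, the rough arithmetic in this comparison can be written out explicitly. All numbers below are the thread’s own back-of-the-envelope estimates, not authoritative figures:

```python
# Rough replication of the welfare-points-per-dollar comparison above.
# All inputs are the thread's own estimates, not authoritative values.

# Healthier Hens (HH): CE's CEA gave 34 welfare points/$ assuming a 26%
# overall probability of success. Conditional on success, that implies:
hh_points_per_dollar = 34 / 0.26  # ~131 welfare points/$

# Corporate cage-free campaigns: ~54 years of life affected per dollar
# historically, with the cage vs cage-free gap taken as <~50 welfare
# points per year of life.
cage_free_historical = 54 * 50  # < 2700 welfare points/$

# Marginal opportunities now are taken as ~1/5 as cost-effective:
cage_free_now = cage_free_historical / 5  # < ~540 welfare points/$

print(round(hh_points_per_dollar))           # 131
print(cage_free_now / hh_points_per_dollar)  # cage-free ~4x HH, even conditional on HH success
```

So even granting HH a 100% chance of success, cage-free work still comes out a few times more cost-effective under these rough inputs.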
Obviously, I don’t speak for OP or the EA AWF, but they literally only publish 1-3 sentences per grant, so I’m not surprised at all if they don’t mention it even if it is a consideration for them. That said, I might just be projecting, because this was partially the reason why I supported giving them a grant!
I agree that stunners aren’t literally a one-off cost that you never touch again, but as you mention, I think the ratio of overall intervention cost to animals helped is significantly better for shrimp stunning, in my opinion, as well as the avenue for industry adoption being much clearer and more likely.
FYI you described the “Electric Shrimp Stunning: a Potential High-Impact Donation Opportunity” post as “SWP’s CEA of stunners,” but I have no affiliation with SWP.
Just a couple of points on the original comment about AIM:
@mildlyanonymous, I’m glad you brought up the perception of the animal movement regarding AIM. I must say, I don’t have the same negative perception as you do, but my view may be biased due to:
i) motivated reasoning on my part as an AIM incubatee, and
ii) the feedback I get from the overall movement being filtered by my interlocutors because of said affiliation
In any case, I would really invite whoever feels that AIM is ‘not collaborative with the movement’ to look again. AIM has launched or is planning to launch several organisations which are actively designed to support the movement:
To grow in Africa (AAA)
To bring in more talent into the movement (AAC)
To help orgs in the movement make better decisions (Animal Ask)
To bring in more money to a resource-strained cause area (work in progress)
If this is not the very definition of collaboration, I don’t know what is
Regarding SWP not doing what CE originally proposed we do: I’ve mentioned this openly in at least a couple of interviews (80K, HILTLS). My goal was not to demerit AIM’s research but rather to say that there is only so much one can learn from desktop research in a low-evidence space such as animal welfare, and it is the role of the founding team to explore the different permutations and see what sticks.
IMO, AIM’s reports need to lay out at least a promising intervention, do a cost-effectiveness analysis on it (among other things), and see how it compares to say, cage-free campaigns to decide whether to kill it or explore deeper
I apologise in advance for not engaging further with the comments about AIM / animal movement but we are very (human) resources constrained at SWP and the case in favour of AIM has been sufficiently established IMO
Regarding the discussion between @James Özden and @MichaelStJules, you are both right to some extent:
Our ToC indeed aims to move the Overton window in such a way that eventually high-leverage stakeholders (e.g. retailers, certifiers) feel confident to demand the use of electrical stunning beyond the capacity of SWP to fund
On the other hand, none of our funders has included this as a strict condition because:
i) it is much harder to measure, and much more importantly
ii) the intervention looks sufficiently impactful and cost-effective without having to incorporate such second-degree effects
On the point of the charities not doing CE’s originally proposed idea anymore, I want to clarify that we don’t see charities tweaking an idea as a failure but rather as the expected course of action we encourage. We are aware of the limitations of desktop research (however in-depth), and we encourage organizations to quickly update based on country visits, interactions with stakeholders, and pilot programs they run. There is just some information a researcher wouldn’t be able to get without input from someone working on the ground. For example, when Rethink Priorities was writing their report on shrimp welfare, they consulted SWP extensively to gain that perspective. Because CE charities operate in extremely neglected cause areas, there is often no other “implementer” our research team can rely on. Therefore, organizations are usually expected to change the idea as they learn in their first months of operations. I see this as a success in ingraining the values of changing one’s mind in the face of new evidence, seeking this evidence, and making good decisions on the side of co-founders with the support of their CE mentors, and we are happy when we see it happen.
There is a complex trade-off to be made when balancing the learning value from more in-depth desktop research vs. more time spent on learning as one implements, and I don’t think CE always gets it right, but the latter perspective is often misunderstood and underappreciated in the EA space.
Regarding charities specifically, in general, we expect about a 2⁄5 “hit rate” (rarely because of the broad idea being bad, more often because the implementation is challenging for one reason or another), and many people, including external charity evaluators and funders, have a different assessment of some of the charities you list. That being said, if you have any specific feedback about the incubated organizations’ strategies or ideas, please reach out to them. As you mentioned, they are open to hearing input and feedback. Similarly, if you have specific suggestions about how CE can improve its recommendations, please get in touch with our Director of Research at sam@charityentrepreneurship.com; we appreciate specific feedback and conversation about how we can improve. Thank you for your support of multiple CE charities so far!
I definitely agree that organizations should pivot as they learn about how an intervention works in practice. The errors I refer to are more of the type where a cursory glance from an animal welfare scientist could have told you your research was missing key considerations, and the charity would not have wasted time on the recommended intervention. These issues seem cheap to prevent.
Thanks for clarifying! We always have an expert view section in the report, and often consult animal science specialists, but it is possible we missed something. Could you tell me where specifically we made a mistake regarding animal science that could have changed the recommendation? I want to look into it, fact-check it, and, if it is right, avoid making this mistake in the future.
It looks like the report has been taken down, but I think the degree to which you pushed dissolved water oxygenation for fish welfare before launching Fish Welfare Initiative is an especially strong example of this. At the time I heard skepticism from many experts. You can see a reference to that report in this post. This report is another example of something that I think would not have passed any kind of rigorous external review.
Thanks! Can you tell me more about why you think improving dissolved oxygen is not a good idea? I still consider poor dissolved oxygen to be a major welfare problem for fish in the setting where the charity is expected to operate, and improving it through various means (assuming we also keep stocking density constant or decrease it) would be good for their welfare. This has been validated in the field by FWI in this assessment and studied by others, so I’m a bit surprised. Unless you are referring to specific interventions to improve dissolved oxygen, about whose cost-effectiveness I have high uncertainty.
And about the report you link, I broadly agree and have written about it below.
Hey, just chiming in here on behalf of the organization I co-founded (Fish Welfare Initiative). We went through AIM’s charity incubation program in 2019—their first formal cohort.
The following are a couple points I had:
1 - Echoing requests for evidence
As some people have already commented above, insofar as you have serious criticisms about various charities (CE or otherwise) it’d be helpful for you to provide some evidence for them.
In particular, it’d be interesting to learn more about why you think AAC is “okay”, why Animal Ask “hasn’t had much impact”, and/or why FWI “hasn’t worked very well.”
I really think I would be happy to consider these arguments, but I first want to understand them.
It’d also be helpful to know why you think the animal space, or maybe just giving in the animal space, is “very bad already”. (I know that in particular might be a lot for you to respond to though.) This brings me to my second point.
2 - Just because animal/CE charities are flawed doesn’t mean they’re not worth supporting.
One thread of your comments is one I really resonate with: The animal movement is not good enough. Our evidence is often subpar, decisions are made hastily, we don’t have the right people, etc. Unfortunately, I think this is all true.
But what should we really do differently? If, as you suggest, CE produces not super great animal charities, but it’s still (as you say) “the best bet in the animal space for future new high impact orgs”, then should we just resign ourselves to not launching and running any new animal-focused charities?
My point here is that just because something isn’t as good as we would like (e.g. IMO the best animal charities don’t have even 10% the evidence base of GiveWell’s top charities), that doesn’t mean they’re not worth doing or supporting. Sometimes I think we do ourselves a disservice by always comparing ourselves to human health/poverty alleviation charities: These human-focused orgs literally have decades or even a century more of an evidence base built up than we do. They don’t have an entrenched opposition. And they aren’t trying to change something people derive pleasure from 3 times a day.
We need to build a large and effective movement for reducing animal suffering and ending factory farming. That is going to require starting somewhere, no doubt with lots of mistakes in the early days.
Of course, I don’t mean to say that anything goes—some ideas are still certainly too dumb to start and some charities too poorly-run to continue. However, I think we need to appreciate that we’re in the very early days of animal advocacy and we need to think about our approaches as such.
3 - On taking the advice of the EA Funds and OpenPhil over CE
This seems to be an important actionable takeaway you’d like people to have:
>>I don’t think donors should take much guidance from them, compared to OpenPhil or the EA Animal Welfare Fund
Just wanted to point this out in case you’re not already aware, but these two granting bodies already heavily grant to CE-incubated animal orgs.
For instance:
FWI has received about 5 grants from the EA AW fund over the years, and 1 grant from Open Philanthropy.
Animal Ask has received at least 1 grant from the EA AW fund and 2 grants from OpenPhil.
And I believe SWP and AAC have also received money from one or both of these funders.
So it seems like either you should think that a) CE animal orgs are actually more promising than you claimed, b) the EA AW Fund and OpenPhil are actually less promising than you implied, or c) these funds are just scraping the bottom of the barrel and grant to CE orgs for lack of better options.
Fwiw, and after talking a reasonable amount with these funders, I’m fairly of the opinion that the correct answer is mostly A here.
4 - About Fish Welfare Initiative (FWI) specifically
It’s worth noting that FWI has varied a fair bit from the original idea (see the short published report here) that CE had made when we first launched. Broadly though, CE didn’t give us that certain of a direction—rather, we understood that there are serious problems with how humans raise farmed fish, dissolved oxygen is one of them, and we should do further research to design a specific intervention to help them. Of course it would have been better if there was better research or a more concrete direction for us to go in, but again: We are in the early days of the animal movement and there’s still not enough of an evidence base for most things.
I also agree with Karolina above that it’s not necessarily bad that charities pivot from the original idea (provided that they pivot to something useful).
As for how promising FWI is today, I’d be interested to hear (as I stated in Point 1 above) why you think FWI “hasn’t worked very well”. As I state in Point 2, I think we have certainly made loads of mistakes, but that we’re also having a moderate impact right now and investing in tackling a very important and very neglected problem. You can learn more specifically about all this in our last year in review, or also by seeing our current projects.
Also as mentioned in Point 3, we have received grants from OpenPhil and the EA AW Funds, and are a recommended charity by ACE. Perhaps you think that these organizations have made some mistake in recommending FWI, but then I think you’re in a position of doubt on the entire animal movement (which, to be fair, seems like that might be the position you are in). To that, I would say see my Point 2—these are the early days, and even though no org is perfect we need to start somewhere.
5 - Feel free to dm me
I think it’d be interesting to hear your response to some or all of these points publicly as other people seem to have similar questions, but if you feel uncomfortable doing that feel free to dm or email me. I think there’s a good chance we already know each other, in which case I’d be especially interested to chat more to come to some shared truth here.
Sorry again all for the novel of a comment!
I’ll say something I said to Joey earlier in this thread—I expect that the best animal charities in the future will come out of AIM, but it will come with a lot of avoidable waste of funds and talent due to the issues behind my concerns. I think AIM focusing on its skills at incubating charities, and less on what I believe are its weaknesses or threats (coordinating donors and research), would be much better for the space.
To the extent this view is both valid and widely-held, and the reports are public, it should be possible to identify at least some specific examples without compromising your anonymity. While I understand various valid reasons why you might not want to do that, I don’t think it is appropriate for us to update on a claim like this from a non-established anonymous account without some sort of support.
My goal here is not to provide this to the EA Forum, but to encourage donors to do further due diligence. That said, I mentioned a few examples of more egregious research failures in another comment.
I’ll also add that the original comment still has positive comment karma and many agree votes, and that many of the disagree votes seem to be from AIM staff and incubatees, not necessarily others in animal welfare research. I think that, at a minimum, this should be a signal for many people to take these concerns seriously.
If you had total control over all donations in the EA animal space, how would you change things compared to the status quo?
For the main point of your argument, I echo Vasco Grilo’s point that your critiques of specific charities would be more compelling with justification or sources backing up your views. For any given charity idea, I have no reason to think that the fact that somebody on the internet thinks it’s a bad idea prior to launch correlates with that idea actually being bad. Every new idea has people who are sceptical of it—that doesn’t provide much information one way or the other. I’d be more interested to see a detailed evaluation of each charity in terms of the actual impact they may have (or have not) delivered. I can only speak for my experience at Animal Ask, but a couple of recent, detailed evaluations do exist, and we invest a great deal of energy into critically evaluating our own work (and having it evaluated by others).
(As always, my views are my own, not those of my employer.)
I agree that detailed evaluations would be better than my narrative impressions. My main point is to warn people to do way more due diligence on CE. Even if the reputation is undeserved, the organization has a negative reputation among many in animal advocacy, especially in research and grant evaluation, and that is worth looking into.
I don’t really have strong views on how to allocate funds in the animal space, but I doubt it is through funding circles, which usually seem worse for charities even if they are better for donors and the space overall (e.g. the degree of dislike that charities have for the existing Farmed Animal Funders circle seems like an indicator of something important).
Firstly, I want to acknowledge that this comment has probably been pretty valuable in terms of sharing feedback for the CE team about perceptions that maybe a lot of people were unaware of, so thanks for raising some concerns that you and others might be having. I’ll also just say as a co-founder of a CE-incubated charity, I am far from impartial, but I think sharing some inside information could be helpful here.
My main response is to the first 2 comments because I have no real knowledge of the last point.
CE is setting a norm for using research or evidence (however limited) as a basis for starting a charity.
CE actually uses research & evidence to inform starting charities in the animal welfare space. This is not the norm! I think even establishing this as what you should be looking at is relatively new to the animal welfare space and should be acknowledged and praised. Currently the majority of charities started in the animal welfare space are not backed by research or significant evidence; from what I have observed, founders who think something is a good idea and are relatively charismatic usually get funding. So even though I agree that the research can be improved, and I think it’s helpful to flag this to Karolina and Joey, I think that the starting point of CE charities is a lot stronger than that of other charities in this space. So, really, I think we should commend CE for trying to establish any kind of research as the norm and basis for starting a charity. (I think the animal rights movement could be much better if this was a standard all new charities adopted.)
CE was never extremely confident in its own research when presenting it internally to incubated charities; it was merely a foundation. They also established failure-mode thinking in our impact assessments, which is another great norm.
I can only talk about cohort 1, which was FWI and AAC. In this cohort, CE presented research suggesting there was the potential for something really impactful to happen somewhere in this general space. It was then up to the co-founding team to go out and do deeper research, including getting more external feedback, to validate the research, decide whether the charity was really worth starting, and work out how to execute it best. CE also embedded into our thinking that we shouldn’t necessarily expect our charities to succeed, and that we should have clear failure points to assess whether it’s worth continuing, taking into consideration the counterfactual uses of the movement’s money. Again, I think this is a great norm to establish; many charities in the animal welfare space and in other sectors do not do this. They merely carry on without these assessments. Healthier Hens announced it was shutting down because of this, and I think that should be celebrated, not used as a signal of poor research. So if you are unhappy about new charities not collaborating enough, I think that’s on us, not CE.
I think your main point (which is a valid concern) is whether CE charities are a net good use of movement resources. To date, speaking just about AAC, we estimate adding over $2,000,000 of counterfactual value to other animal welfare organisations with a spend of just over $750,000 in under 5 years.
I agree with Haven’s point that the animal movement needs to do better and be better. But as you and others have said, I still think CE charities are some of the best in the movement. If we don’t try to create new good organisations addressing gaps in the movement, I don’t think we are going to realistically accelerate towards ending factory farming. The question is, do you know a better incubator programme to start new organisations than CE, or do you just want them to improve a bit?
SWP, Animal Ask, Healthier Hens, FWI and AAC have all been supported by either EA animal welfare fund or Open Phil or both (in the case of most of us) so I would be really surprised if there were that much difference between the alternative funding perspective you are suggesting here.
From AAC’s point of view, I would be interested to know your concerns about scalability, because I think there are countless ways we can scale; it’s more about us selecting the right one. I’d love feedback on this, so feel free to DM me from your anonymous account. We have supported over 150 organisations in bringing talent into critical positions they were struggling to hire for, with 90 candidates landing positions, and have also brought in $408,000 of counterfactual funding to other organisations. Currently, we estimate (conservatively, based on most donors’ feedback) that for every $1 we spend, $2.50 of value is added to the movement, which suggests we are net positive to the movement. We have plans to double this ratio by the end of this year.
In conclusion, of course, CE has areas to improve, as we all do. Still, I think this is a pretty harsh analysis of an organisation adding a considerable amount of value and norms to the animal advocacy movement on founding charities. I think they would add a lot of value to bringing these values and norms into the donor landscape as there is a gap and CE has a pretty good track record in doing this in other donor circles like the Meta Funding Circle etc.
Lauren
(transparently co-founder of AAC)
Could you elaborate on “being hostile”? Do they have a reputation for causing harm, or is it just about not listening to feedback?
I think “being hostile” was probably slightly too strong, though I will note that the original comment still has positive upvotes and many agree votes, yet no one else is defending this position in the comments (it is mostly CE staff and incubatees responding), which is concerning
Thank you for clarifying!
I (and others) have strongly upvoted it because (especially post-FTX[1]) it’s important to encourage people to share concerns about unethical behavior from influential people in the ecosystem, it’s not an indication of agreement.
Agree-votes do convey a lot of information, and I’m surprised that nobody else is defending this position in the comments, given 7 people agree with you.
I found one of the examples here very unpersuasive: I read this report years ago, and I distinctly remember it was very clear that it was meant to “get a quick sense of things”, only had a few hours of research behind it, and wasn’t meant to pass any kind of rigorous review. It was the first thing I read about animal welfare and it was enlightening; I’m grateful that they published it. Here is the first paragraph:
(I am not affiliated with CE, but it would be important for me to know if their research was bad)
and, less so, post-OCB, post-Leverage, post-CFAR, …
Most of these are just “people in the space knew this wouldn’t work”. Could you share more specific criticisms? As Aidan said, the biggest successes come from projects no one else would do, so without more information that seems like a very weak criticism.
I gave a few examples here, but as I mentioned in response to Aidan, I don’t think that comment makes much sense as an overall defense.
Just to note: I have a COI in commenting on this subject.
I strong-downvoted your comment, as it reads to me as making bold claims whilst providing little supporting evidence. References to “lots of people in this area” could be considered an instance of the bandwagon fallacy.
In my opinion, a strong downvote is too harsh for a plausibly good faith comment with some potentially valuable criticism, even if (initially) vague.
They elaborated on some of their concerns in the replies.
You could ask them to elaborate more if they can (without deanonymizing people without their consent) on specific issues instead of strong downvoting.
It also concerns me that I’ve seen 5 instances of this post being disagree-voted and strong-downvoted, with a CE staff member commenting right after. People obviously have a right to do that if it is CE staff downvoting and disagreeing, but it means that, outside of CE staff, this post might have fairly strong agreement from many people, which seems like an important note: it still has very positive karma, and without those votes it might have positive agreement on balance.
For what it’s worth, I have no affiliation with CE, yet I disagree with some of the empirical claims you make — I’ve never gotten the sense that CE has a bad reputation among animal advocacy researchers, nor is it clear to me that the charities you mentioned were bad ideas prior to launching.
Then again, I might just not be in the know. But that’s why I really wish this post was pointing at specific reasoning for these claims rather than just saying it’s what other people think. If it’s true that other people think it, I’d love to know why they think it! If there are factual errors in CE’s research, it seems really important to flag them publicly. You even mention that the status quo for giving in the animal space (CE excepted) is “very bad already,” which is huge if true given the amount of money at stake, and definitely worth sharing examples of what exactly has gone wrong.
I learned today that AIM reached out to at least one organisation to try to deanonymize me after I posted this. I was also told they did some amount of coordinating the responses to it. Given that and the power they hold, I won’t talk about this further, as it’s made me feel unsafe in critiquing them. This was already the reason I left these comments anonymously.