Principal — Good Structures
I previously co-founded and served as Executive Director at Wild Animal Initiative, and was the COO of Rethink Priorities from 2020 to 2024.
abrahamrowe
Yes correct—just the Insects as Food and Feed industry. Though note these estimates were from 2020 - my best guess is that there are at least 4-7x as many insects farmed by the industry today (mainly because it’s going through a lot of industrialization / scale up, and a bunch of new major factories have opened in the last few years).
Yep, I voted strongly agree after seeing that, though I wouldn’t necessarily agree with the non-footnoted version, or without all these caveats.
In the abstract I think this would be good, but I’m skeptical that there are great opportunities in the animal space that can absorb this much funding right now! This is like, doubling the EA funds going to animal welfare stuff. I think I would strongly agree with claims like:
Conditional on there being several years of capacity build up, animal welfare would use the funds more effectively.
From a pure EA lens, some animal welfare spending is many times more cost-effective than the most effective global health interventions.
The current most effective $100M spent on animal welfare is more cost-effective than the current most effective $100M spent on global health.
I think something that would be closer to 50/50 for me (or that I haven’t actually thought about much, but on its face seems closer to a midpoint):
It would be better to invest an extra $100M to spend on animal welfare in the future than spending it on global health now.
I’d strongly disagree with a claim like:
It would be better to spend an extra $100M in the next two years on animal welfare than on global health.
So I listed myself as strongly agreeing, but with all these caveats.
Thanks! This is a great point. I’ll work on getting some German-deductible options on the list for all categories for future months, but also can confirm that the pool has up to $1,500 (and potentially more) in donation swappable dollars to help navigate this right now.
Thanks! That’s a great question and something I should figure out how to handle. I’ll think about the ideal implementation of this and include something for November, but I think if it comes up for October participants:
Pledge in USD, stating in the comments the amount planned to give in the alternate currency (spot converted on the day of the pledge).
Give them amounts to give in their preferred currency using that rate.
Once donations are made and receipts are submitted, I’ll spot convert at the time they donated, and if the dollar weakened substantially relative to their original pledge, backstop it.
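As a concrete sketch of the spot-convert-and-backstop step, here is one way the math could work. The function name, the 2% threshold, the rate convention, and the exact backstop rule are all my assumptions for illustration, not confirmed Equal Hands mechanics:

```python
def backstop_amount(pledge_usd, rate_at_pledge, rate_at_donation, threshold=0.02):
    """Return the USD top-up owed to the pool, if any.

    Rates are units of the donor's currency per USD (e.g. EUR per USD).
    The donor was told to give pledge_usd * rate_at_pledge in their own
    currency; at donation time we spot convert that local amount back
    to USD and cover any substantial shortfall versus the pledge.
    """
    local_amount = pledge_usd * rate_at_pledge       # fixed at pledge time
    usd_value_now = local_amount / rate_at_donation  # spot value at donation
    shortfall = pledge_usd - usd_value_now
    # Only backstop substantial moves (2% threshold is an assumption).
    if shortfall > threshold * pledge_usd:
        return round(shortfall, 2)
    return 0.0

# e.g. $100 pledged at 0.90 EUR/USD means giving EUR 90; if the rate is
# 0.95 on donation day, EUR 90 is only worth ~$94.74, so ~$5.26 is covered.
```

This treats "substantial" as a fixed percentage of the pledge; in practice the pool might instead cap the total backstop at its $1,500 of swappable dollars.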
Nice! This is great pushback! I think most of my would-be responses are covered by other people, so I’ll add one thing just on this:
Even absent these general considerations, you can see it just by looking at the major donors we have in EA: they are generally not lottery winners or football players, they tend to be people who succeeded in entrepreneurship or investment, two fields which require accurate views about the world.
My experience doesn’t match this. I’ve probably engaged with something like ~15 donors giving >$1M in EA or adjacent fields. Doing a brief exercise in my head and thinking through everyone I could, I got to something like:
~33% inherited wealth / family business
~40% seems like they mostly “earned it” in the sense that it seems like they started a business or did a job well, climbed the ranks in a company due to their skills, etc. To be generous, I’m also including people here who were early investors in crypto, say, where they made a good but highly speculative bet at the right time.
~20% seem like they did a lot of very difficult work, but also seem to have gotten really, really lucky—e.g. grew a pre-existing major family business a lot, were roommates with Mark Zuckerberg, etc.
Obviously we don’t have the counterfactuals on these people’s lucky breaks, so it’s hard for me to guess what the world looks like where they didn’t have this lucky break, but I’d guess it’s at least at a much lower giving potential.
~7% I’m not really sure about.
So I’d guess that even trying this approach, only around 50% of major donors would pass the filter (and it seems possible luck also played a major role for many of that 50% and I just don’t know about it). I’m surprised you find the overall claim bizarre, though, because to me it often feels somewhat self-evident from interacting with people at different wealth levels within EA: the best-calibrated people often seem to be mid-level non-executives at organizations, who don’t have the information distortions that come with power, but do have deep networks, expertise, and a sense of the entire space. I don’t think ultra-wealthy people have worse views, to be clear — just that wealth and having well-calibrated, thoughtful views about the world seem unrelated (or, to the extent they are correlated, those differences stop being meaningful below the wealth of the average EA donor), and certainly a default of “cause prioritization is directly downstream of the views of the wealthiest people” is worse than many alternatives.
I strongly agree about the clunkiness of this approach though, and many of the downsides you highlight. I think in my ideal EA, there would be lots and lots of various things like this tried, and good ones would survive and iterate, and just generally EAs experiment with different models for distributing funding, so this is my humble submission to that project.
I agree! I think these donors are probably the least incentivized to do this, but also where the most value would come from. Though I’ll note that, as of writing this comment, the average is well above 10x the minimum donation.
Yeah, I agree that this seems tricky. I thought about sub-causes, but also worried they’d just make it really burdensome to participate every month.
I ended up making a Discord for participants, and added a channel where people can explain their allocation, so my hope is that this lets people with strong sub-cause prioritization make the case for it to other donors. Definitely interested in thoughts on how to improve this though, and it seems worth exploring further.
Oh interesting. Great catch, thanks! Added.
Announcing Equal Hands — an experiment in democratizing effective giving.
After some clarifying offline discussions, I want to explain my decreased confidence in the statement, “Farmed vertebrate welfare should be an EA focus”.
I think my view is slightly more complicated than this implies. Given that OpenPhil and non-EA donors are basically able to fund what seems like the entirety of the good opportunities in this space, I don’t think these groups are that talent-constrained, and it seems like the best bets (e.g. corporate campaigns) will continue to have decreasing cost-effectiveness. So new animal-focused talent should probably mostly go into earning-to-give for invertebrates/WAW, and donations should mostly go to groups there or to the EA AWF (which should in turn mostly fund invertebrates and WAW). I don’t think farmed vertebrate welfare should be the default way EAs recommend helping animals.
I mean something like directly implementing an intervention vs finance/HR/legal/back office roles, so ops just in the nonprofit sense.
Yeah, I think there are probably parts of EA that will look robustly good in the long run. Part of the reason I think EA as a whole is less likely to be positive (and more likely to be neutral or negative) is that actions in other areas of EA could impact those areas negatively. Though this could cut both in favor of and against GHD work. I think just having a positive impact is quite hard, even more so when doing a bunch of uncorrelated things, some of which have major downside risks.
I think it is pretty unlikely that FTX’s harm outweighs the good done by EA on its own. But it seems easy enough to imagine that, conditional on EA’s net benefit being barely above neutral (which seems pretty possible to me for the other reasons mentioned above, along with EA increasingly working on GCRs, which directly increases the likelihood that EA work ends up net-negative or neutral even if in expectation that shift is positive value), the scale of the stress and financial harm EA caused via FTX outweighs that remaining benefit. And then there is the brand damage to effective giving, etc.
But yeah, I agree that my original statement above seems a lot less likely than FTX just contributing to an overall EA portfolio of harm, or of work that doesn’t matter in the long run.
I don’t think it’s all net-negative — I think there are lots of worlds where EA does lots of good and bad that kind of wash out, or where the overall sign is pretty ambiguous in the long run.
Here are some ways I think it’s possible EA could end up causing a lot of harm. I don’t really think any of these are that likely on their own — I just think it’s generally easier to cause harm than to produce good, so there are lots of ways EA could accidentally fail to be overall positive, and I generally think it has an uphill climb to avoid ending up a neutral or ambiguous quirk in the ash heap of history.
The various charities don’t produce enough value to offset the harms of FTX (it seems likely to me they already have produced more, but I haven’t thought about it carefully).
Things around accidentally accelerating AI capabilities in ways that end up being harmful.
Things around accidentally accelerating various bio capabilities in ways that end up being harmful.
Enabling some specific person to enter a position of power where they end up doing a lot of harm.
X-risk from AI is overblown, the e/accs are right about the potential of AI, and lots of harm is caused by trying to slow or regulate AI development.
There is an even stronger reactionary response to some future EA effort that makes things worse in some way.
Most of the risk from AI is algorithmic bias/related things, and AI folks’ conflict with people in that field ends up being harmful for reducing it.
Using only EV for making decisions accidentally leads to a really bad world, even when every decision made was positive-EV.
EA crowds out other better effective giving efforts that could have arisen.
Two caveats on my view:
I think I’m skeptical of my own impact in ops roles, but it seems likely that senior roles are generally harder to hire for, which might mean taking one could be more impactful (if you’re good at it).
I think many other “doer” careers that aren’t ops are very impactful in expectation — in particular founding new organizations (if done well or in an important and neglected area). I also think work like being a programs staff member at a non-research org is very much in the “doer” direction, and could be higher impact than ops or many research roles.
Also, I think our views as expressed here aren’t exactly opposite — I think my work in ops has had relatively little impact ex post, but that’s slightly different than thinking ops careers won’t have impact in expectation (though I think I lean fairly heavily in that direction too, just due to the number of qualified candidates for many ops roles).
Overall, I suspect Peter and I don’t disagree a ton on any of this (though I haven’t talked with him about it), and I agree with his overall assertion (more people should consider “doer” careers over research careers); I just also think that more people should consider earning to give over any direct work.
Also, Peter hires for tons of research roles, and I hire for tons of ops roles, so maybe this is also just us having siloed perspectives on the spaces we work in?
Thanks for the questions!!
What makes you hopeful that scalable interventions are coming, and can you say more about anything you’re particularly excited about here?
The ones that seem most likely in the near future are:
Insecticide interventions like alternative crop insect management approaches, including genetic ones
Less painful insecticides
Fertility control for urban wildlife
Probably a lot more no one has considered
Things that make me think this is on the table:
I think there aren’t great alternative animal welfare interventions, but animal interventions have really good returns if you get them right because you can impact so many animals.
We’ve made some cool progress on validating welfare measures that might be cheap to measure, which could be useful for assessing the sign of interventions.
It seems generally like the academic field building project is going well, so we should expect this to accelerate.
In terms of timelines — I think this is more like 10-15 years. But part of the reason I think that’s exciting is that I used to think it would be more like 2050+ before anything like this was on the table. I’ve also just generally decreased my confidence that the problems are as difficult as I thought before (though I definitely think they are still tricky).
For insecticides, I think my view remains that we are something like 2-5 years of specific lab/field research away from plausibly having a great intervention, so it is sad that progress hasn’t been made on it, and given that this also seemed like the case a few years ago, funding the research should have been a priority earlier.
Nice, these are great points.
On some specifics:
I think the other consideration is that for really cheap proteins (corn/soy/wheat), chickens and other animals eat much less processed versions that are cheaper than the ones humans eat. But also people seem to like products made from them less. The novel plant protein inputs are a lot more expensive as far as I can tell.
Yeah, I think there is a bunch of uncertainty. My sense is that the technical hurdles to cost reduction are fairly large, and I’m not sure they’re super solvable. But I hope I’m wrong!
Yeah, this seems possible too.
Plus I expect that health and climate change pressures on meat consumption will also, more likely than not, steadily increase.
I worry these push toward worse animal welfare (less eating of cows, more eating of chicken/fish), not better.
Yeah, I agree with everything you say here RE WAW, on both how to present it and the usefulness of the net-positive or negative debate.
Nice, these are good questions, but probably don’t capture all the cruxes in my view.
1. I think this seems moderately unlikely to me? I’m not sure what would drive prices down further than where they are now, as it seems like a large portion of the cost is the proteins themselves, not production.
2. This also seems like it relies on crossing technological hurdles that are really hard.
3. I think this seems possible? But I’d put it below 50%, and if it does happen, I’d expect something more like the climate movement, where lots of people think it is important but don’t really take substantial steps to act on it.
4. I think reaching 20% vegetarian seems possible in some countries, but I’m a lot more skeptical it’ll go much higher.
I think it does seem plausible to me that there would be a meaningful reduction in the amount of meat consumed over this period in developed countries, but also expect that might come with more chicken/fish consumption that would offset animal welfare gains anyway.
I think another crux more important to my pessimism is that I don’t feel very convinced that price/taste-competitive meat alternatives will cause a significant increase in their adoption.
I meant it more literally: put $100M in an investment account to save for good future animal opportunities vs. spending it on the best global health interventions today. I’m not certain it’s actually a 50/50 item, but I was trying to find a midpoint.
I don’t really know enough about global health work to say—but I’d guess there are some novel medical things that seem plausibly able to:
Appear over the next few decades
Require a lot of cash to scale up
Be really cost-effective
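To make the invest-versus-spend-now comparison above concrete, here is a toy compounding calculation. The 5% real return and 10-year horizon are hypothetical assumptions I’ve picked for illustration, not numbers from the discussion:

```python
def future_value(principal, annual_real_return, years):
    """Compound an invested pot forward at a constant real return."""
    return principal * (1 + annual_real_return) ** years

# Hypothetical: $100M invested at a 5% real return for 10 years grows
# to roughly $163M. So waiting beats spending now only if the best
# future animal welfare opportunities are good enough that ~1.63 units
# of them outweigh 1 unit of today's best global health spending.
pot = future_value(100e6, 0.05, 10)
```

The real comparison is messier (returns are uncertain, and the value of a dollar of global health spending may also change over time), but this is the basic shape of the midpoint trade-off.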