Principal — Good Structures
I previously co-founded and served as Executive Director at Wild Animal Initiative, and was the COO of Rethink Priorities from 2020 to 2024.
abrahamrowe
Probably, but not sure! Yeah, the above is definitely ignoring cluelessness considerations, on which I don’t have any particularly strong opinion.
I don’t think this is quite what I’m referring to, but I can’t tell for sure! My quick read is that we are talking about different things (I think because I used the word utility very casually). I’m not talking about my own utility function with regard to some action, but the potential outcomes of that action for others, and I don’t know if I’m embracing risk aversion views so much as relating to their appeal.
Or maybe I’m misunderstanding, and you’re just rejecting the conclusion that there is a moral difference between taking, say, an action with +1 EV and a 20% chance of causing harm and an action with +1 EV and a 0% chance of causing harm / think I just shouldn’t care about that difference?
I think I mean something slightly different than difference-making risk aversion, but I see what you’re saying. I don’t even know if I’m arguing against EV maximization — more just trying to point out that EV alone doesn’t feel like it fully captures the value I care about (e.g. the likelihood of causing harm relative to doing nothing feels like another important thing). Specifically, it feels concerning that there are plausible circumstances where I am more likely than not to cause additional harm, yet the action has positive EV. I imagine lots of AI risk work could be like this: doing some research project has a strong chance of advancing capabilities a bit (a high probability of a little negative value), but maybe a very small chance of massively reducing risk (a low probability of tons of positive value). The EV looks good, but my median outcome will be a world that is worse than if I hadn’t done anything.
Expected value maximization hides a lot of important details.
I think a pretty underrated and forgotten part of Rethink Priorities’ CURVE sequence is the risk aversion work. I think the defenses of EV against more risk-aware models often seem to boil down to EV’s simplicity. But I think that EV actually just hides a lot of important detail, including, most importantly, that if you only care about EV maximization, you might be forced to conclude that worlds where you’re more likely to cause harm than not are preferable.
As an example, imagine that you’re considering a choice that can lead to 10 equally likely outcomes. In 6 of them, you’ll create −1 utility. In 3 of them, your impact is neutral. In 1 of them, you’ll create 7 utility. The EV of taking the action is (−6 + 0 + 7)/10 = 0.1. This is a positive number! Your expected value is positive, even though you have a 60% chance of causing harm. In expectation you’re more likely than not to cause harm, but you should also expect to increase utility a bit. This is weird.
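To make the arithmetic concrete, here’s a minimal sketch in Python (my own illustration, not from the CURVE sequence) computing both numbers for this example:

```python
# The ten equally likely outcomes from the example above:
# six create -1 utility, three are neutral, one creates +7 utility.
outcomes = [-1] * 6 + [0] * 3 + [7]

ev = sum(outcomes) / len(outcomes)                          # (-6 + 0 + 7) / 10
p_harm = sum(1 for o in outcomes if o < 0) / len(outcomes)  # share of harmful outcomes

print(f"EV: {ev:+.1f}")          # +0.1
print(f"P(harm): {p_harm:.0%}")  # 60%
```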
Scenario 1
More concretely, if I consider the following choices, which are equivalent from an EV perspective:
Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +10 utility
Option B. A 20% chance of causing a harmful outcome, but in expectation will cause +10 utility
It seems really bizarre to not prefer Option A. But if I prefer Option A, I’m just accepting risk aversion to at least some extent. But what if the numbers slip a little more?
Scenario 2
Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +9.9999 utility
Option B. A 20% chance of causing a harmful outcome, but in expectation will cause +10 utility
Do I really want to take a 20% chance on causing harm in exchange for 0.001% gain in utility caused?
Scenario 3
Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +5 utility
Option B. A 99.99999% chance of causing a harmful outcome, but in expectation will cause +10 utility
Do I really want to be exceedingly likely to cause harm, in exchange for a 100% gain in utility?
I don’t know the answers to the above scenarios, but it feels like just saying “the EV is X” without reference to the downside risk misses a massive part of the picture. It seems much better to say “the expected range of outcomes is a 20% chance of really bad stuff happening, a 70% chance of nothing happening, and a 10% chance of a really, really great outcome, which all averages out to >0”. This is meaningfully different from saying “no downside risk, and a 10% chance of a pretty good outcome, so >0 on average”.
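To see how different those two summaries really are, here’s a minimal sketch in Python, with payoff numbers I’ve made up so that both lotteries average out to the same positive EV:

```python
# Two illustrative lotteries with made-up payoffs, chosen so both
# have the same positive EV but very different downside risk.
lottery_risky = [(0.2, -10.0),  # 20% chance of really bad stuff
                 (0.7,   0.0),  # 70% chance nothing happens
                 (0.1,  30.0)]  # 10% chance of a really great outcome

lottery_safe  = [(0.9,   0.0),  # 90% chance nothing happens
                 (0.1,  10.0)]  # 10% chance of a pretty good outcome

def summarize(lottery):
    ev = sum(p * v for p, v in lottery)
    p_harm = sum(p for p, v in lottery if v < 0)
    return ev, p_harm

for name, lottery in [("risky", lottery_risky), ("safe", lottery_safe)]:
    ev, p_harm = summarize(lottery)
    print(f"{name}: EV = {ev:+.1f}, P(harm) = {p_harm:.0%}")
# Both print EV = +1.0, but P(harm) is 20% vs 0%.
```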
I think that risk aversion is pretty important, but even if it isn’t incorporated into people’s thinking at all, it really doesn’t feel like EV produces a number I can take at face value, and that makes me feel like EV isn’t actually that simple.
The place where I currently see this happening the most is naive expected value maximization in reasoning about animal welfare — I feel like I’ve seen an uptick in “I think there is a 52% chance these animals live net negative lives, so we should do major irreversible things to reduce their population”. But it’s pretty easy to imagine doing those things being harmful, or your efforts backfiring, in ways that cause net harm.
EA UK is hiring a new Director!
How did you get to 58%? That seems pretty precise so interested in the reasoning there.
This isn’t an answer to your question, but I think the underlying assumption is way too strong given available evidence.
Taking for granted that bad experiences outweigh good ones in the wild (something I’m sympathetic to also, but which definitely has not clearly been demonstrated), I think it is pretty much impossible to have any kind of confident position on whether climate change increases or decreases wild animal welfare.
Why do you think insects will end up dominating in the calculus of animals impacted by climate change? What if most animals impacted by climate change are aquatic, and not terrestrial? This seems entirely plausible. I don’t think we have any idea how climate change will impact aquatic animal populations in the very long run.
It might be in principle true that warmer climates = more insects, but what actually will end up impacting insect populations is going to be a lot more complicated: pace and nature of human development (e.g. changes in habitat destruction), weather variance over the year and across years, etc. Maybe species that are especially good at navigating high weather variance will do especially well for the next few centuries, and that causes local maxima that look very different than the theoretical effects.
It wouldn’t surprise me if total land area by biome type is way more relevant to insect populations than overall temperature. This again seems like a question where we know basically nothing about what the long-term impacts of climate change will be.
I guess my overall view is that having any kind of reasonable opinion on the impact of climate change on insect or other animal populations in the long run, beyond extremely weak priors, is basically impossible right now, and most assumptions we can make will end up being wrong in various ways.
I also think it doesn’t follow that if we think suffering in nature outweighs positive experience, we should try to minimize the number of animals. What if it is more cost-effective to improve the lives of those animals? Especially given that we are at best incredibly uncertain if suffering outweighs positive experience, it seems clearly better to explore cost-effective ways to improve welfare over reducing populations, as those interventions will be more robust no matter the overall dominance of negative vs positive experiences in the wild.
I think my view is that while I agree in principle it could be an issue, the voting has worked this way for long enough that I’d expect more evidence of entrenchment to exist. Instead, I still see controversial ideas change people’s minds on the forum pretty regularly without being downvoted to oblivion, and I see low-quality or bad-faith posts/comments get negative karma, which I think is the system working well.
Bioweapons are an existential risk
I’m interpreting this question as asking whether bioweapons are “an existential risk that we should be concerned about”, and I think the case for that is much weaker than the case that they are an existential risk in general (though I still think the answer is yes).
Vote power should scale with karma
I think that the upside of the system is high, and that EA Forum posts have been pretty effective in changing community direction in the past, so the downside risk seems low. My impression (as someone who has posted things that aren’t particularly popular at times) is that well-reasoned-but-disagreed-with posts still get lots of upvotes.
Not commenting on WAI specifically, I kind of dislike the “wild animal suffering is not very tractable” meme because it feels like it emerged before anyone even tried to figure out how tractable it was, and before basically any science happened in the space, but has just stuck, based on armchair philosophizing by non-scientists (me among them, to be clear). It sometimes feels a bit like saying curing polio is intractable before anyone ever tried to look into making a vaccine or think about what we could do to try to cure it — we’re not going to know the tractability of interventions until we actually look into them in detail, and people with the right kind of expertise to evaluate WAW’s tractability have barely started doing so.
It’s also just a massive space — given that hundreds of quadrillions of animals at least as complicated as insects live vastly different lives across the world, in thousands of kinds of biomes/ecosystems, etc., it seems pretty unreasonable to conclude that helping ~none of them is possible without at least trying to look into it for a minute.
My personal belief is that we will probably have good wild animal welfare interventions sooner than we’ll have good marginal uses for farmed animal money beyond current interventions, which suggests that the research seems pretty worth it.
I also think that wild animal welfare just remains a problem for ~everyone, given that wild animal welfare impacts are downstream from most other interventions, so solving it should be a big priority. Insofar as people think that wild animal suffering is intractable because of uncertain impacts of your intervention on other wild animals, surely that would basically just apply to anything you do in the world that impacts wild animals (which is probably basically everything). If you buy the case for wild animals mattering morally, but think that downstream effects make it impossible to act on it, most charity seems to get stuck.
Yeah, that is a great point.
Relatedly, one thing I’ve been thinking about since posting this is how relevant Bourdieu’s Distinction feels to EA (though I’m hesitant to cite continental philosophy on the EA Forum of all places!)
I think specifically, EA sometimes concentrates different types of power (e.g. financial, cultural/social, etc.) in funders, and doing that is inherently distorting. E.g. I’m thinking about things like having funders be keynote speakers at events, which elevates funders’ social position relative to other people.
My experience of the animal welfare space, say, where the deference issues don’t come up as much (though it has plenty of other funder-related issues!), is that the funders have lots of financial power, but besides two specific people, aren’t given much social/cultural power, and most of the social/cultural power is held by people who don’t distribute funding. I have also heard of things like funders being considered for speaking at conferences, and people pushing back on it a bit out of these kinds of concerns. I think maybe some more healthy skepticism about mixing power types could be helpful?
That’s too bad! I’ll give this feedback to Every.org as they are a moderately aligned nonprofit themselves, and are really receptive to feedback in my experience. FWIW, using them saves organizations a pretty massive amount of bureaucracy / paperwork / compliance-y stuff, so I hope there is a way to use them that can be beneficial for the donors.
On deference to funders
I think that the animal welfare space is especially opaque for strategic reasons. For example, most of the publicly available descriptions of corporate animal welfare strategy are, in my opinion, not particularly accurate. I think most of the actual strategy becoming public would make it significantly less effective. I don’t think it is kept secret with a deep amount of intentionality, but more like there is a shared understanding among many of the best campaigners to not share exactly how they are working outside a circle of collaborators to avoid strategies losing effectiveness.
I think outside organizations’ ability to evaluate the effectiveness of individual corporate campaigning organizations (including ACE unfortunately) is really low due to this (I think that evaluating ecosystems of organizations / the intervention as a whole is easier though).
(I don’t really want to engage much on this because I found it pretty emotionally draining last time — I’ll just leave this comment and stop here):
I think asking for feedback prior to publishing these seems really important. To be clear, I’m very sympathetic to the overall claim! I suspect that most published estimates of the impact of marginal dollars in the farmed animal space are way too high (maybe even by orders of magnitude). I also think the items you raise are important questions for Sinergia to answer!
But I think getting feedback would be really helpful for you: you cite multiple places where Sinergia claims impact, but present evidence to the contrary, such as company statements on their sites either existing prior to Sinergia’s claimed date, or not existing at all.
The experience of corporate campaigners universally is that the degree to which you should take company statements about their animal welfare commitments seriously is relatively low. It’s just a regular fact of corporate campaigning in many countries that companies have statements on their websites that they either don’t follow or don’t intend to follow. Often, companies have weaselly language that lets them get out of a commitment, e.g. “we aspire to do X by Y year,” etc.
The language in company statements matters a ton — when I ran corporate campaigns, a major US restaurant company I was campaigning on put up a verbatim version of the Better Chicken Commitment with all of the specifics removed (e.g. “reduce stocking density” instead of “reduce stocking density to X lbs/sqft”) and did nothing in their supply chain. This allowed them to tell journalists and others that we were simply lying when we said they had not made the commitment. I worry that Google Translating pages loses nuance that might matter here. Google Translating Brazilian law also seems like a huge stretch as evidence — and taking the law at face value as a non-Brazilian lawyer, rather than looking at evidence about how the law is interpreted and enforced, seems like a mistake.
Getting a commitment on the website is important, but it’s only a portion of the work. Actually getting the company to make the change is way more work.
I know nothing about the JBS case, but can tell you that there are many times where a company has a commitment, then behind the scenes the actual work is several years of getting them to honor it. It seems completely plausible, and even routine, that the actual impact credit should go to an organization working to get a company to act on an existing policy, as opposed to getting them to put up the policy in the first place. I don’t know anything about Sinergia’s claims here, but seems totally plausible that much of their impact comes from this invisible work that you wouldn’t learn about without asking them.
I suspect that in your critique, some of your claims are warranted, but others might have much more complicated stories behind them, such as an organization getting a company to actually follow through on a commitment, or getting a law to be enforced. I think that feedback would help draw out where these critiques are accurate, and where they are missing the mark.
Equal Hands — 2 Month Update
Equal Hands is an experiment in democratizing effective giving. Donors simulate pooling their resources together and voting on how to distribute them across cause areas. All votes count equally, independent of someone’s ability to give.
You can learn more about it here, and sign up to learn more or join here. If you sign up before December 16th, you can participate in our current round. As of December 7th, 2024 at 11:00pm Eastern time, 12 donors have pledged $2,915, meaning the marginal $25 donor will move ~$226 in expectation to their preferred cause areas.
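For what it’s worth, the ~$226 figure appears to just be the enlarged pool divided equally among the votes. A quick sketch of that arithmetic (assuming that is how the expectation is computed):

```python
# Pool as of December 7th, 2024: 12 donors pledging $2,915 in total.
pledged_total = 2_915
donor_count = 12

# A marginal donor gives the $25 minimum and gets one equal vote.
new_pool = pledged_total + 25
new_votes = donor_count + 1

# Money moved by that one vote, in expectation.
print(f"${new_pool / new_votes:.2f}")  # ~$226.15
```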
In Equal Hands’ first 2 months, 22 donors participated and collectively gave $7,495.01 democratically to impactful charity. Including pledges for its third month, those numbers will likely increase to at least 24 donors and $10,410.01.
Across the first two months, the gifts made by cause area, and the pseudo-counterfactual effect (i.e. the change relative to people giving their own money in line with their own votes, rather than following the democratic outcome), have been:
Animal welfare: $3,133.35, a decrease of $1,662.15
Global health: $1,694.85, a decrease of $54.15
Global catastrophic risks: $2,093.91, an increase of $1,520.16
EA community building: $319.38, an increase of $179.63
Climate change: $253.52, an increase of $16.52
Interestingly, the primary impact has been money being reallocated from animal welfare to global catastrophic risks. From the very little data that we have, this primarily appears to be because animal welfare-motivated donors are much more likely to pledge large amounts to democratic giving, while GCR-motivated donors are more likely to sign up (or are a larger population in general), but are more likely to give smaller amounts.
I’m not sure why exactly this is! The motivation should be the same for small donors regardless of cause area — in expectation, the average vote has moved over $200 to each donor’s preferred causes across both of the first two months, so I would expect it to be motivating for donors from various backgrounds. But maybe GCR-motivated donors are more drawn to this kind of reasoning.
GCR donors haven’t had as high retention over the first three months of signups, so the third month currently looks like it might be a bit different — funding is primarily flowing out of animal welfare, and going to a mix of global health and GCRs.
The total administrative time for me to operate Equal Hands has been around 45 minutes per month. I think it will remain below 1 hour per month with up to 100 donors, which is somewhat below what I expected when I started this project.
We’d love to see more people join! I think this project works best by having a larger number of donors, especially people interested in giving above the minimum of $25. If you want to learn more or sign up, you can do so here!
Nice! And yeah, I shouldn’t have said downstream. I mean something like, (almost) every intervention has wild animal welfare considerations (because many things end up impacting wild animals), so if you buy that wild animal welfare matters, the complexity of solving WAW problems isn’t just a problem for WAI — it’s a problem for everyone.
I have seen this before, and wondered if it is conflation with the Humane Society of the United States (which is often just called the Humane Society). Also, many local animal shelters are named “Humane Society”. I’d guess this phrase has very high recognition in the US.
I think this is true as a response in certain cases, but many philanthropic interventions probably aren’t tried enough times to get the sample size and lots of communities are small. It’s pretty easy to imagine a situation like:
You and a handful of other people make some positive EV bets.
The median outcome from doing this is that the world is worse, and all of the attempts at these bets end up neutral or negative.
The positive EV is never realized and the world is worse on average, despite both the individuals and the ecosystem being +EV.
It seems like this response would imply you should only do EV maximization if your movement is large (or that its impact is reliably predictable if the movement is large).
But I do think this is a fair point overall — though you could imagine a large system of interventions with the same features I describe that would have the same issues as a whole.
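To illustrate the small-community version of this, here’s a quick Monte Carlo sketch (my own illustration, reusing the 60%/30%/10% distribution from the quick take above): five people each make one +0.1 EV bet, and the median total outcome still comes out negative.

```python
import random
import statistics

random.seed(0)

# Each bet reuses the earlier example distribution:
# 60% chance of -1, 30% chance of 0, 10% chance of +7 (EV = +0.1 per bet).
def one_bet():
    return random.choices([-1, 0, 7], weights=[6, 3, 1])[0]

# A small community: five people each make one positive-EV bet.
def community_total(n_bets=5):
    return sum(one_bet() for _ in range(n_bets))

totals = [community_total() for _ in range(100_000)]

print(f"mean total:   {statistics.mean(totals):+.2f}")   # ~ +0.5: positive EV overall
print(f"median total: {statistics.median(totals):+.1f}") # ~ -2.0: the typical world is worse
```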