www.jimbuhler.site
Also on LessWrong and Substack, with different essays.
Jim Buhler
Are you setting aside wild animals?
this seems to me to imply a greater concern for anthropogenic harm than non-anthropogenic harm. Is that what you meant?
Oh no sorry, increased WAW welfare compared to the “natural” situation counts as impact too.
What I’m saying is: say you help 1 million wild animals out of many or 1 million farmed animals out of fewer. You can’t say the former is better because there are more wild animals. It doesn’t matter how many there are. What matters is how many you help and how much. And there is an asymmetry here: farmed animals are probably 100% helped if humans are disempowered, since the problem is totally fixed, whereas, even in the best-case scenario, empowered humans will be nowhere near totally fixing wild animal suffering. This asymmetry may compensate for the fact that there are many more wild animals to help.

Humans increasing or decreasing the number might be the largest impact
As in (D) is more plausible than (C) (in my typology)? I’d agree. Anyway, my argument holds independently of what people find more likely between (C) and (D).
For example, the regeneration of forest is actively opposed in much of Central Europe, because people have cultural ideas about what the landscape should look like. So there’s a tension there between environmentalists and traditionalists, and I wouldn’t say that the environmentalists are winning.
Oh I didn’t know that, thanks. There is, of course, still the question of the marginal impact WAW advocates would have in such debates, but helpful example!
I wasn’t thinking about promoting/opposing restoration but about influencing how it is done (without necessarily taking a stance on whether no restoration would be better). And I could very well imagine WAI wanting to advise decision-makers on how to conduct restoration.
I think present and future WAW advocates would fiercely disagree about what ecosystems might be net good/bad, and any intervention aimed at making greening more likely would be highly controversial.
Interventions aimed at holding off on restoration, at least tentatively, would be far less controversial, though. And in that case, yes, I doubt that WAW advocates “leveraging conservative valuing of traditional landscapes to oppose it” would successfully prevent any restoration project. Whatever the incentive for restoration is, it seems far stronger than the incentive to please the few detractors who do not want the landscape restored.
Interesting, thanks!
An intervention doesn’t even need to be framed around WAW either—you could just fund an organization to lobby for desert greening (for example) in a particular area, and they could leverage whatever arguments they’ve got.
That’s good only assuming WAW in the ecosystem you create is net positive tho, right?
I was imagining more like:
- some restoration interventions are happening and will keep happening anyway.
- let’s influence those and push for ecosystems with less suffering.
But I just find it hard to make a difference there. For social/political reasons, yes. Not necessarily because people would be against the idea, but just because there’s no/little incentive for the relevant actors (in the restoration process) to do what we’d want there. Why would they bother? I also feel like WAI would have discussed this more if this were tractable? Haven’t thought about this much tho.
[On my Substack], most of my readers are not as familiar with EA discourse
This surprised me. Where do they come from?
I like the idea of “Promoting High-Welfare Ecosystem States”! I’m surprised you put it in the “promising near-term intervention” box, though. Did you get the chance to talk to WAW scientists about this?
Collaborations between ecological restoration actors and WAW scientists seem acutely scarce at the moment.[1] If someone in a position similar to yours and mine wanted to non-trivially influence restoration projects, even assuming they 100% knew what to recommend, this unfortunately feels a bit intractable to me. Do you see reasons for optimism? Is there any relevant work I’m missing? :)
- ^
All I’m aware of is:
Capozzelli et al. (2020) arguing that restoration ecology and welfare science “could enjoy a productive union” and a few discussions of collaboration that emerged after that.
This 2025 grant from WAI that may “inform freshwater systems restoration strategies with a welfare perspective”.
Animal Ethics (2026) asking for WAW to be accounted for when deciding whether/how to do rewilding.
- ^
I think there are plenty of crucial sign-flipping considerations pointing both ways (sec. 1 of my post), and that our takes certainly fail to account for some of them, in ways that likely make these takes irrelevant.
And even if someone’s evaluation somehow does not omit a single crucial consideration, they have to make opaque judgment calls on how to weigh up the conflicting pieces of (theoretical and empirical) evidence. I see little reason to believe such judgment calls would do better than chance.
Clarification on what my “0% Agree” means: I confidently disagree that we should believe it’d go well for animals (sec. 1 of my post), but I don’t think we should believe the opposite either. I think our cause prio should not rely on any assumption on this question (sec. 2 of my post).
If farmed animals matter more, the upside could be that AGI enables us to substitute farmed animals completely (cultivated meat, etc.).
Nitpick, but it seems unfair to consider this an upside rather than the mere absence of a downside, since the relevant counterfactual scenario in expectation (absent AI safety work) is a misaligned AI that takes over and probably ends animal farming as it kills or disempowers humans.
AI safety cannot take the credit for a potential future reduction or end of farmed animal suffering if it preserves humanity, without which animal farming would not exist to begin with.
Interesting. I was particularly curious about why you think the ocean fertilization effect will not be as strong as you had originally estimated, if you have readings to recommend there too.
Rewilding projects would increase the total amount of suffering by expanding and intensifying landscapes where such suffering is endemic.
I mean, only if the rewilding projects increase the overall number of (welfare-range-adjusted) wild animals. This would certainly be the case for rewilding projects that introduce (more) life in dead-ish zones. But the rewilding examples you happen to discuss (and also commonly discussed by others, especially outside of EA)[1] are not of this type. They’re about introducing species into a pre-existing ecosystem, and I guess you would agree that these projects don’t clearly “expand and intensify” nature, overall.
It is actually not clear to me how serious/common rewilding projects of the former kind are, how worried WAW advocates should be, and what to do about them.
Nice post! :)
Do you believe that human welfare dominates the welfare of the wild animals that you think are sentient? (I wonder why wildlife conservation isn’t your priority if you assume WAW is positive.)
what is not yet published is that it is looking like the ocean fertilization effect will not be as strong as we had originally estimated.
Has this been published since then? Would love to read this :)
“On the margin, it is better for animals to work on the transition to AGI going well, than directly working on AI for animal welfare”
I’m worried everyone will just agree that this seems unlikely. That’s a very high bar.
“AGI which doesn’t cause human extinction or disempowerment will value animal welfare”
I think we don’t care about whether it “values animal welfare”. We care about what happens to animals. There are many very plausible worlds where these two are uncorrelated (just like in ours, where people have never valued AW as highly as they do now, and yet it has never been as bad for farmed animals, especially the smaller ones).
“Without extra animal-focused work, even aligned superintelligence would be bad for non-human animals”
That’s my favorite version, but I’m worried it invites everyone to just agree on “we should have some extra animal-focused work, anyway” and not red-team each other deeply enough.
So here’s a minimal version I propose: AI safety work that helps humans also helps other animals, to some extent.
(The “to some extent” is optional. I added it to invite people to think about whether AIS helps other animals at all, and not just all agree over the uncontroversial and boring claim that “AIS helps humans more than animals”.)
I like this minimal formulation because:
- it seems impossible to misinterpret.
- it makes clear that the more we lean yes on this, the more AI safety work that helps humans is overall robustly positive, all else equal (e.g., it’d be robust to uncertainty about moral weights, about the expected size of different populations in the future, or about the sign of x-risk reduction). And I think the answer to this question is a crux for some people for (not) supporting AIS work. I feel like none of the versions you propose (or the original version) quite captures this as much as I’d like.
Thanks for asking us, Toby! Looking forward to this debate week :)
I assume the most important reason is that it is something that most people close to them do. Likewise, I think most people prioritise animals with a higher probability of sentience like chickens instead of shrimps because it is what most people close to them do.
Interesting. I think there’s something to this analogy, though ofc the social pressure to put your seatbelt on is far higher than that to prioritize chickens over shrimp.
I guess [their motivation] has little to do with the actual probability of sentience of the animals in question.
Yeah, maybe they just rationalize their motivations with moral weight arguments while their real drive is something else (see Simler & Hanson 2018). And highlighting potential biases we have might be helpful. On the other hand, you may wanna mainly stick to red-teaming the importance of p(sentience) as a potential crux (by, e.g., red-teaming Clatterbuck and Fischer), anyway, if that’s the reason people give (even if it might not be their real motivation deep down). I generally find this to be the most productive. People rarely update just based on noticing or being reminded of a bias they may have.
I guess most people see voting as fulfilling their duty to improve society.
That also seems part of the picture, yeah! And notice that this bolsters my broader point that it might not be about EV max and that there might be no inconsistency between voting and being difference-making risk-averse.
I wonder to what extent people donate to interventions targeting animals which are more likely to be sentient to boost the probability of increasing welfare. People routinely take actions which are super unlikely to actually matter
This position many animal advocates hold (even if only implicitly) was indeed rationalized/explained with difference-making risk aversion by Clatterbuck and Fischer (2025). And in this case, p(sentience), and moral weights more broadly, indeed seem important, actually.
I think it’s very plausible people are inconsistent in how difference-making risk averse they are for different things. However, let me play devil’s advocate:
Seatbelts. One could argue this is just a habit they don’t bother questioning, not risk-neutral EV max and getting mugged by small probabilities.
Voting. I would be surprised if many of the people who prioritize chickens because of risk aversion do vote. If they do, I agree this seems inconsistent. But, fwiw, if they were forced to pick a lane, I think most would drop voting and not their difference-making risk aversion.
(Tangential but I guess from the above that you think the following is not another example where MNB is sensitive to the individuation of normative views, and I’d like to understand why. Nw at all if you don’t have the time to reply, tho.)
Antonia found an intervention that reduces overall animal suffering in the near term, but she’s not sure which is true between:
L) the long-term effects dominate, but she doesn’t know what they overall imply, and she can’t ignore them (so she’s clueless).
N) neartermism thanks to bracketing out the long-term effects (so she should intervene).
Brian comes along and says he agrees with the above and subdivides L this way:
L) the long-term effects dominate, but he doesn’t know what they overall imply, and he can’t ignore them (so he’s clueless).
L1) same, but he trusts his longtermist best guess that the intervention is bad, assuming pure negative utilitarianism (so he should not intervene).
L2) same but assuming negative-leaning utilitarianism (so he should not intervene).
L3) he trusts his longtermist best guess that the intervention is good, assuming classical utilitarianism (so he should intervene).
N) neartermism thanks to bracketing out the long-term effects (so he should intervene).
Antonia shares Brian’s above best guesses and normative uncertainty. They both totally agree. The only difference is that Brian specified normative sub-views.
Now, say Nuutti joins the party, agrees with these two, but recategorizes things this way:
L1) stubborn precise EV despite imprecision arguments + negative utilitarianism (we should not intervene)
O) all other plausible normative views (in sum, we’re clueless)
The MNB sceptic would say that Antonia grouping L1-3 together to form L is just as arbitrary as Nuutti grouping L2, L3, and N together to form O.[1]
Is your response: The former seems less arbitrary because:
- L1-3 share key epistemic principles and/or decision theory that make L an actual normative view (even though the moral theory part is imprecise). In contrast, L2, L3, and N have nothing in common, normatively, that justifies grouping them against L1. It’d be too arbitrary to consider N + L2 + L3 as a normative view.
- Normative views seem to be the most legitimate units to bracket over (e.g., more legit than empirical views). Making a comprehensive case for/against this would be nice, but I give some reasons for it in this section.
- ^
With the consequentist-bracketing version of the individuation problem I present here, the bracketer can appeal to an “only value locations that have been identified can be bracketed in” principle. This saves them if this principle is sound. Here, it doesn’t save them: the normative theories have been identified in both cases.
The idea that the unpleasantness of pain increases superlinearly with its intensity (i.e. an 8⁄10 on the pain scale is more than twice as bad as a 4⁄10).
Yeah… I wish we would just say that the 4 is actually lower than 4 and directly track what you mean by “unpleasantness” with these scores, since this is what we care about. But that’s not how people use the /10 scale, unfortunately. And that’s understandable. If they were, they would seldom say that they’re suffering above a 1⁄10.[1]
And yes. When researchers/people assign welfare ranges, they think they’re tracking “unpleasantness”, but I also suspect they are actually tracking what you mean by “intensity” to a large extent, which may lead to very misguided cross-species welfare tradeoffs. I am extremely skeptical of the following counter-view you describe:
If a researcher judges an animal to be at 10% of its capacity, they simply mean 1⁄10 as bad as its worst state — there’s no question about whether 100% is “really” 10x worse, because that’s just what the numbers mean by construction.
Maybe that’s what they mean, but I doubt that their estimate is not deeply biased by the “unpleasantness”/”intensity” confusion.
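To make the stakes of this confusion concrete, here is a toy sketch. The quadratic mapping is purely an assumption for illustration (nothing from pain research pins down the exponent); the point is only how much the two readings of the scale diverge.

```python
# Toy sketch (assumed quadratic mapping, purely illustrative): if
# "unpleasantness" grows superlinearly with reported pain "intensity",
# ratios read off the /10 scale understate how much worse high scores are.

def unpleasantness(intensity, exponent=2):
    """Map a 0-10 intensity score to unpleasantness, superlinearly."""
    return intensity ** exponent

# On a linear reading of the scale, an 8/10 is twice as bad as a 4/10.
# On this (assumed) quadratic reading, it is four times as bad.
print(unpleasantness(8) / unpleasantness(4))  # 4.0

# An animal judged at "10% of its capacity" on the intensity reading
# would sit at only 1% of its worst possible unpleasantness here.
print(unpleasantness(1) / unpleasantness(10))  # 0.01
```

If welfare-range estimates are made on the intensity reading while we care about unpleasantness, mild states get overweighted and severe states underweighted by roughly these factors, whatever the true exponent turns out to be.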
To be clear, though, I don’t want people to take away that we should care less about insects and shrimp. There are so many other considerations. If anything, this should make us less confident in precise-ish moral weight estimates (and maybe look for projects robust to this uncertainty).

That’s a very important problem you raise! Thank you for this. :)
Great points from you here and from @Mia Fernyhough in another thread! What about in countries where animal advocacy is (almost) nonexistent and where the counterfactual is probably not cage-free, but no change at all? Curious what the two of you (and others) think. I know this does not address all the limitations you raise, but maybe the most crucial ones?
Oh good, I have no objection then. Well played.