www.jimbuhler.site
Also on LessWrong and Substack, with different essays.
Jim Buhler
I just had a naive illumination. Say that sentience first appeared in two different simple creatures, independently, at the same time:
Dolores: She’s just like her non-sentient siblings, except that she feels unnecessarily severe pain if she’s about to die of starvation, although not severe in a way that would impair her ability to do what is necessary not to starve (otherwise, she would die, and it’s her non-sentient siblings who would spread their genes).
Mildred: Same, except that her starving pain is milder, and that’s enough to motivate her to lexically prioritize solving this problem, just like Dolores.
Judging by what you’ve written in the post and comments, you could give two different arguments for why Dolores would have lower fitness than Mildred:
1. Dolores’s pain would override everything else (e.g., she might be so focused on not starving that she forgets about drinking).
But this applies just as much to Mildred, no? No matter how mild her pain is, it will also override everything if it’s the only thing she feels. If she feels some pain when starving and nothing while thirsty, she might forget about drinking just the same. In pure isolation, how bad the pain is changes absolutely nothing in terms of fitness here, no?
2. Dolores needs a more demanding biology than Mildred in order to feel something worse.
But how would we know this? Why would subjectively worse mean more demanding energy-wise? Why couldn’t it just as well be the subtler, less bad affects that are more demanding?
What am I misunderstanding/missing?
it seems to me to be very unreasonable to be confident that simpler brains most likely have much smaller welfare ranges
I agree, and I absolutely did not mean to defend this. What I defend is that, in the absence of a good argument based on welfare ranges and not p(sentience), we don’t know if the welfare range of simpler animals is below or above the bar above which their welfare would dominate over that of more complex animals (not that it is below!).
But you disagree with my a priori agnosticism because you think we should (roughly) stick to some precise-ish prior welfare ranges in the absence of significant evidence pointing one way or the other, correct? (And this prior would give simpler animals enough weight for them to likely dominate.) This would explain your disagreement with what you quote.[1] I was implicitly assuming that our prior should be an agnostic, imprecise one that offers no action-guidance on its own.
[1] If that’s not where the disagreement is, I don’t see how “a presumption of a reasonable probability of a welfare range that is not too small and no significant evidence against it” does not count as “evidence of a welfare range that is not too insignificant.” Maybe you’re just worried my imprecise phrasing will, while technically correct, lead readers to set the bar too high?
Discussions of (p)sentience of small animals miss the point
Curious what motivated you to spend time assessing the impact of bird-safe glass on arthropods, specifically, then. Were you hoping to find out that bird effects dominated but found and shared the opposite unsatisfying results? Or maybe you think “here’s another example showing how indirect effects on tiny animals may dominate” and that this will convince some people to also prioritize (i) and (ii)? (People who were not convinced by your previous largely-overlapping posts but might be by this one?)
Is there any project you think may not impact arthropods and/or soil animals much more than whatever animals are targeted? I feel like exploring this would be far more insightful at this stage.
Most animals are wild animals, so the answer to this question should focus on them.
Even granting that the overwhelming majority are wild animals, this doesn’t necessarily imply we should focus on them. We have to factor in the welfare difference between the two (welfare ranges and quality of life in practice).
Oh good, I have no objection then. Well played.
Are you setting aside wild animals?
this seems to me to imply a greater concern for anthropogenic harm than non-anthropogenic harm. Is that what you meant?
Oh no sorry, increased WAW welfare compared to the “natural” situation counts as impact too.
What I’m saying is: say you help 1 million wild animals out of many, or 1 million farmed animals out of fewer. You can’t say the former is better because there are more wild animals. It doesn’t matter how many there are. What matters is how many you help and how much. And there is an asymmetry here: farmed animals are probably 100% helped if humans are disempowered (the problem is totally fixed), whereas, even in the best-case scenario, empowered humans will be nowhere near totally fixing wild animal suffering. This asymmetry may compensate for the fact that there are many more wild animals to help.

Humans increasing or decreasing the number might be the largest impact
As in (D) is more plausible than (C) (in my typology)? I’d agree. Anyway, my argument holds independently of what people find more likely between (C) and (D).
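The “1 million out of many vs. 1 million out of fewer” point above can be sketched numerically. This is only a toy model with entirely hypothetical numbers, meant to show the structure of the argument: what drives the comparison is animals actually helped times how fully their problem is fixed, not total population size.

```python
# Toy model of the asymmetry above. All numbers are hypothetical,
# chosen only to illustrate the structure of the argument.

def impact(animals_helped, fraction_of_problem_fixed, welfare_gain_per_animal):
    """Expected impact = animals helped x how fully their problem is fixed
    x welfare gain per fully-helped animal. Population size never enters."""
    return animals_helped * fraction_of_problem_fixed * welfare_gain_per_animal

# Farmed animals: fewer in total, but the problem can be ~100% fixed
# (animal farming simply ends).
farmed = impact(animals_helped=1_000_000,
                fraction_of_problem_fixed=1.0,
                welfare_gain_per_animal=1.0)

# Wild animals: far more in total, but even the best case fixes only a
# small (hypothetical) fraction of wild-animal suffering.
wild = impact(animals_helped=1_000_000,
              fraction_of_problem_fixed=0.05,
              welfare_gain_per_animal=1.0)

print(farmed, wild)
```

Under these made-up inputs the farmed-animal intervention dominates despite wild animals vastly outnumbering farmed ones; flip the hypothetical fractions and the conclusion flips, which is the point: the population counts alone settle nothing.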
For example, the regeneration of forest is actively opposed in much of Central Europe, because people have cultural ideas about what the landscape should look like. So there’s a tension there between environmentalists and traditionalists, and I wouldn’t say that the environmentalists are winning.
Oh, I didn’t know that, thanks. There is, of course, still the question of the marginal impact WAW advocates would have in such debates, but helpful example!
I wasn’t thinking about promoting/opposing restoration but about influencing how it is done (without necessarily taking a stance on whether no restoration would be better). And I could very well imagine WAI wanting to advise decision-makers on how to conduct restoration.
I think present and future WAW advocates would fiercely disagree about what ecosystems might be net good/bad, and any intervention aimed at making greening more likely would be highly controversial.
Interventions aimed at, at least tentatively, holding off on restoring would be far less controversial, though. And in that case, yes, I doubt that WAW advocates “leveraging conservative valuing of traditional landscapes to oppose it” would successfully prevent any restoration project. Whatever the incentive for restoration is, it seems far stronger than the incentive to please the few detractors who do not want the landscape restored.
Interesting, thanks!
An intervention doesn’t even need to be framed around WAW either—you could just fund an organization to lobby for desert greening (for example) in a particular area, and they could leverage whatever arguments they’ve got.
That’s good only assuming WAW in the ecosystem you create is net positive tho, right?
I was imagining more like:
1. some restoration interventions are happening and will keep happening anyway;
2. let’s influence those and push for ecosystems with less suffering.
But I just find it hard to make a difference there. For social/political reasons, yes. Not necessarily because people would be against the idea, but just because there’s no/little incentive for the relevant actors (in the restoration process) to do what we’d want there. Why would they bother? I also feel like WAI would have discussed this more if this were tractable? Haven’t thought about this much tho.
[On my Substack], most of my readers are not as familiar with EA discourse
This surprised me. Where do they come from?
AI Safety and Cross-Species Robustness: A brief critical review
I like the idea of “Promoting High-Welfare Ecosystem States”! I’m surprised you put it in the “promising near-term intervention” box, though. Did you get the chance to talk to WAW scientists about this?
Collaborations between ecological restoration actors and WAW scientists seem acutely scarce at the moment.[1] If someone in a position similar to yours and mine wanted to non-trivially influence restoration projects, even assuming they 100% knew what to recommend, this unfortunately feels a bit intractable to me. Do you see reasons for optimism? Is there any relevant work I’m missing? :)
[1] All I’m aware of is:
Capozzelli et al. (2020), arguing that restoration ecology and welfare science “could enjoy a productive union,” and a few discussions of collaboration that emerged after that.
This 2025 grant from WAI that may “inform freshwater systems restoration strategies with a welfare perspective”.
Animal Ethics (2026), asking for WAW to be accounted for when deciding whether/how to do rewilding.
I think there are plenty of crucial sign-flipping considerations pointing both ways (sec. 1 of my post), and that our takes certainly fail to account for some of them, in ways that likely make these takes irrelevant.
And even if someone’s evaluation somehow does not omit a single crucial consideration, they have to make opaque judgment calls on how to weigh up the conflicting pieces of (theoretical and empirical) evidence. I see little reason to believe such judgment calls would do better than chance.
Clarification on what my “0% Agree” means: I confidently disagree that we should believe it’d go well for animals (sec. 1 of my post), but I don’t think we should believe the opposite either. I think our cause prio should not rely on any assumption on this question (sec. 2 of my post).
If farmed animals matter more, the upside could be that AGI enables us to substitute farmed animals completely (cultivated meat, etc.).
Nitpick, but it seems unfair to consider this an upside rather than the mere absence of a downside, since the relevant counterfactual scenario, in expectation (if no AI safety work), is a misaligned AI that takes over and probably ends animal farming as it kills or disempowers humans.
AI safety cannot take the credit for a potential future reduction or end of farmed animal suffering if it preserves humanity, without which animal farming would not exist to begin with.
Interesting. I was particularly curious about why you think the ocean fertilization effect will not be as strong as you had originally estimated, if you have readings to recommend there too.
Rewilding projects would increase the total amount of suffering by expanding and intensifying landscapes where such suffering is endemic.
I mean, only if the rewilding projects increase the overall number of (welfare-range-adjusted) wild animals. This would certainly be the case for rewilding projects that introduce (more) life in dead-ish zones. But the rewilding examples you happen to discuss (and also commonly discussed by others, especially outside of EA)[1] are not of this type. They’re about introducing species into a pre-existing ecosystem, and I guess you would agree that these projects don’t clearly “expand and intensify” nature, overall.
It is actually not clear to me how serious/common rewilding projects of the former kind are, how worried WAW advocates should be, and what to do about them.
Nice post! :)
I very much agree that pretty much whatever our prior should be, the available evidence does not justify substantially updating away from it. I’m just uncertain about what the prior should be (see below).
Yeah, agreed that’s the crux! :) I think you are applying a principle of indifference (POI) across welfare subjects (or have a significant credence in such a move, at least).[1] While I actually also have sympathy for something of the sort,[2] it is widely criticized in the literature on cluelessness and decision-making under uncertainty. Here’s a list of challenges and possible responses taken from a rough paper draft of mine on this exact topic:
1. Uncertainty about how to individuate welfare subjects within a “welfare-containing” entity, or within a bigger welfare-containing entity that this one is part of → an important instance of the problem of the many. Research could help us non-arbitrarily individuate (see, e.g., Gottlieb 2022; Fischer et al. 2022; McIntyre forthcoming), but this research may face very similar challenges to that on moral weights (on how much we can update away from whatever our prior is) and may not bring us far.
But maybe biting the bullet and accepting some arbitrariness here is the least bad option we’ve got?
2. Why apply POI at the level of welfare subjects or brains rather than at the level of, e.g., cells?
Maybe persons (i.e., welfare subjects) are themselves what is morally relevant rather than their experience moments (see, e.g., Bader 2022), but we’d need a solution to the non-identity problem, as well as an argument for why following our intuition is fine here but not with moral weights.
3. Why endorse any form of POI to start with? In a complex cluelessness context like the one we’re in when estimating moral weights,[3] the plausibility of POI is infamously contested (see, e.g., this, that, and refs therein). Hence, maybe we can’t use POI to justify a precise prior. Maybe we should favor an imprecise one such as each non-human species = (0, X), where X = 1 or a bit higher, which would lead to agnosticism about whether many of the interspecies tradeoffs we make are justified.
However, to the extent that people want to reject such agnosticism (for whatever reason), even as an uninformed prior, they have to pick a precise-ish alternative prior. In this case, “everyone counts for (~)one” may be more advisable than the other options (a wager on the possibility that we can apply POI).
A tl;dr from Claude that I like: ignorance about X’s welfare range doesn’t automatically justify treating X’s welfare as if it equals human welfare; it might just justify suspending judgment. The move from “we don’t know the ratio” to “assume the ratio is 1” needs much more justification.
[1] See also Dickens and Shepherd et al. (2023), who endorse this move.
[2] Especially as an alternative to defaulting to our intuitions or to “invertebrates don’t matter at all until proven otherwise”.
[3] One could nitpick that there’s technically no complex cluelessness if we’re truly uninformed and ignore the (conflicting) evidence. But in that case, sure, maybe we can start with POI, but then we update towards agnosticism once we consider the evidence, so the POI argument for giving everyone the same moral weight wouldn’t work.
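The precise-vs-imprecise prior contrast above can be made concrete with a minimal sketch. All numbers here are hypothetical, invented purely to show the structure: under a precise “ratio = 1” prior, an interspecies tradeoff gets a determinate sign, while under an imprecise interval prior the sign flips across the interval, i.e., the prior on its own yields no verdict.

```python
# Toy contrast between a precise prior on moral weight (ratio = 1) and an
# imprecise interval prior. All numbers are hypothetical illustrations.

def net_value(moral_weight, humans_helped=10, animals_harmed=100):
    """Value of a hypothetical intervention that helps some humans at the
    expense of some non-human animals, given a moral-weight ratio
    (animal : human) applied to the animals' welfare."""
    return humans_helped - moral_weight * animals_harmed

# Precise POI-style prior: every welfare subject counts for one.
precise = net_value(moral_weight=1.0)

# Imprecise prior: moral weight anywhere in (0, 1]. Evaluate near the
# endpoints of the interval.
low = net_value(moral_weight=0.001)   # tradeoff looks good here
high = net_value(moral_weight=1.0)    # tradeoff looks bad here

# The sign flips across the interval, so the imprecise prior alone
# delivers no verdict (agnosticism), while the precise prior does.
print(precise, low, high)
```

The design choice being illustrated: which prior you commit to does the real work, since the same (hypothetical) evidence yields a verdict under the precise prior and suspension of judgment under the imprecise one.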