Jim Buhler
Chicken reforms and veg advocacy may contribute to the Small Animal Replacement Problem—it’s not just environmental strategies
Thanks! In this other comment, I started wondering whether the main crux (for people not worrying that much about SARP) was the temporary setback view or animal advocates just believing they don’t contribute to SARP, and you’re providing some more reasons to believe it’s the latter.
Robin Hanson has also questioned whether farmland used to grow crops for animal feed would be ‘re-wilded’ - at least some of it will be used for development, which will actually reduce wild animal numbers. In any case, whether or not wild animals have net-negative lives is incredibly uncertain.
@JBentham Sorry for jumping in so late, but where has he done that? Do you have a link? :)
Nice post! I recently wrote Sentientia, Reversomelas, and the Small Animal Replacement Problem which sort of nuances your nuance by arguing that it’s far from obvious something like moral circle expansion will eventually offset the problem. :) (although I discovered your post only now, after writing mine).
I’d be curious to have your thoughts!
Just to clarify, I really didn’t mean to argue about whether strategy X is contributing to SARP. All I’m saying is “many people i) believe what they do somewhat contributes to SARP but they ii) think it’s just a temporary setback and it’s fine—and (I claim) it’s not obvious they’re right about (ii)”.
You seem to think they might not be right about (i), which is of course also relevant but my impression is that the crux for most people is (ii) and not (i). They generally don’t seem to care about how much what they do might be contributing to SARP. As long as this improves people’s values from their perspective, they generally think it offsets their (potential) contribution to SARP anyway. (See e.g. here and here.)
EDIT: Actually, I’ve just spent some more time looking into every mention of SARP on the EA Forum and it is almost exclusively mentioned in discussions of meat taxes and environmental strategies. There seems to be a meme that SARP is just a reason to avoid helping animals with environmentalist strategies, as if it were obvious that other strategies—e.g., promoting plant-based food, chicken welfare reforms, moral advocacy—did not contribute to SARP (here and here are rare exceptions). So maybe the question of what exact strategies contribute to SARP is more cruxy than I thought. Maybe most animal advocates think they’re not contributing to SARP anyway and haven’t thought that much about (ii).
Thanks for the comment, Jo :)
“[...] advocacy towards considering the suffering of mammals farmed for their meat, seems to be contributing to the growth of the farming of smaller animals” seems like a core claim, and yet there’s no link or footnote for tentative evidence.
Interesting. I didn’t expect this to be controversial. This was just an example anyway. I didn’t mean to argue about what strategies do and do not contribute to SARP. That’s a whole other discussion and is kinda irrelevant to the point of my post. (Although, obviously, the more we think the strategies people use contribute to SARP, the more my point matters in practice.)[1]
While I think that the size of animals that will be farmed in the future matters a lot, I think that the factors that will determine that are neither the way current vegans talk about animals, nor the choices we make in welfare campaigns during this decade.
What do you think those factors are, then? And do you think the work of people trying to help animals (EA-inspired people in particular) does not affect these factors in any non-chaotic way (such that there is no need to worry about contributing to SARP)?
[1] Fwiw, I just found this interesting video where Matt Ball somewhat suggests that promoting veganism hurts animals overall because of SARP (and he completely ignores animals smaller than chickens). (EDIT: no, I misinterpreted him. He just thinks promoting veganism doesn’t work. This has nothing to do with SARP.)
Sentientia, Reversomelas, and the Small Animal Replacement Problem
Note that there are likely more comprehensive analyses [of the impact of vegetarianism on wild-animal suffering] now.
Do you happen to be aware of any?
No deadline? Can I register the day before? Or do you expect to potentially reach full capacity at some point before that?
Any sense of what portion of insects we should expect to be farmed for human consumption vs for feeding farmed fish vs for farmed chickens vs for pets vs other things? Sorry if I missed the answer to this question when I read the full report. (Thanks a lot for writing it!)
Nice, I see. I’ll go read that in more detail. Thanks for taking the time to clarify your view in this thread. Glad we identified the crux. :)
Oh ok, so our disagreement is on whether concern for the long-term future needs to be selected for in order for evolution to “directly” (in the same sense you used it earlier) influence longtermists’ beliefs on the value of X-risk reduction and making the future bigger, right?
I think the claim that pessimistic longtermism is evolutionarily selected for, because it would cause people to care more about their own families and kin than about far-off generations
Wait sorry, what? No, it would cause people to work on making the future smaller or reduce s-risks or something. Pessimistic longtermists are still longtermists. They do care about far-off generations. They just think it’s ideally better if they don’t exist.[1]
Having clarified that, do you really not find optimistic longtermism more evolutionarily adaptive than pessimistic longtermism? (Let’s forget about agnosticism, here, for simplicity). I mean, the former says “save humanity and increase population size” and the latter says the exact opposite. I find it hard not to think the former favors survival and reproduction more than the latter, all else equal, such that it is more likely to be selected for.
Is it just that we had different definitions of pessimistic longtermism in mind? (I should have been clearer, sorry.)
[1] And btw, this is not necessarily due to them making different moral assumptions than optimistic longtermists. The disagreement might be purely empirical.
I’m not sure why you think non-longtermist beliefs are irrelevant.
Nice. That’s what makes us misunderstand each other, I think. (This is crucial to my point.)
Many people have no beliefs about what actions are good or bad for the long-term future (they are clueless or just don’t care anyway). But some people have beliefs about this, most of whom believe X-risk reduction is good in the very long run. The most fundamental question I raise is: Where do the beliefs of the latter type of people come from? Why do they hold them instead of holding that X-risk reduction is bad in the very long run, or being agnostic on this particular question?[1] Is it because X-risk reduction is in fact good in the long term (i.e., these people have the capacity to make judgment calls that track the truth on this question) or because of something else?
And then my post considers the potential evolutionary pressure towards optimism vis-a-vis the long-term future of humanity as a candidate for “something else”.
So I’m not saying optimistic longtermism is more evolutionarily debunkable than, e.g., partial altruism towards your loved ones. I’m saying it is more evolutionarily debunkable than not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on how to feel about the long-term future of humanity). Actually, I’m not even really saying that; I merely think it, and this is why I chose to discuss an EDA against optimistic longtermism, specifically.
So if you want to disagree with me, you have to argue that:
A) Not optimistic longtermism is at least just as evolutionarily debunkable as optimistic longtermism, and/or
B) Optimistic longtermism is better explained by the possibility that our judgment calls vis-a-vis the long-term value of X-risk reduction track the truth than by something else.
Does that make sense?
[1] So I’m interested in optimistic longtermism vs. not optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on the long-term value of X-risk reduction). Beliefs that the long-term future doesn’t matter or something are irrelevant, here.
Oh interesting.
> I don’t think there’s any neutral way to establish whose starting points are more intrinsically credible.
So do I have any good reason to favor my starting points (/judgment calls) over yours, then? Whether to keep mine or to adopt yours becomes an arbitrary choice, no?
Imagine you and I have laid out all the possible considerations for and against reducing X-risks and still disagree (because we make different opaque judgment calls when weighing these considerations against one another). Then, do you agree that we have nothing left to discuss other than whether any of our judgment calls correlate with the truth?
(This, on its own, doesn’t prove anything about whether EDAs can ever help us; I’m just trying to pin down which assumption I’m making that you don’t or vice versa).
Re (1): I mean, say we know the reason why Alice is a pro-natalist is 100% due to the mere fact that this belief was evolutionarily advantageous for her ancestors (and 0% due to good philosophical reasoning). This would discredit her belief, right? This wouldn’t mean pro-natalism is incorrect. It would just mean that if it is correct, it is for reasons that have nothing to do with what led Alice to endorse it. She just happened to luckily be “right for the wrong reasons”. Do you at least agree with this in this particular contrived example or do you think that evolutionary pressures cannot ever be a reason to question our beliefs?
(Waiting for your answer on this before potentially responding to the rest as I think this will help us pin down the crux.)
Ah nice, thanks for these points, Cody.
I’d be interested to see if you could defend the claim that pro-natalist beliefs have been selected for in human evolutionary history.
I mean… it’s quite easy. There were people who, for some reason, were optimistic regarding the long-term future of humanity and they had more children than others (and maybe a stronger survival drive), all else equal. The claim that there exists such a selection effect seems trivially true. The real question is how strong it is relative to, e.g., a potential indirect selection toward truth-tracking longtermist beliefs. I.e., the EDA argument against optimistic longtermism seems trivially valid. The question is how strong it is relative to other arguments. (And I’d really like for my potential paper to make progress on this, yeah!)
(Hopefully, the above also addresses your second bullet point.)
Now, you give potential reasons to believe the EDA is weak (thanks for that!):
I’ve seen people reason themselves into and out of pro-natalist and anti-natalist stances, often using mathematical reasoning. I haven’t seen any reason to believe that the pro-natalists’ reasoning in particular is succumbing to evolutionary pressure.
You can’t reason yourself into or out of something like optimistic longtermism just using math. You need to make so many subjective judgment calls. And the fact that you can reason yourself out of a belief does not mean that there weren’t evolutionary pressures toward this belief. It does mean the evolutionary pressure was at least not overwhelmingly strong, fair. But I don’t think anyone was contesting that. You could say this about absolutely all evolutionary pressures on normative and empirical beliefs. I don’t think there is any that is so strong that we can’t reason ourselves out of it. But this doesn’t mean those beliefs can’t have suspicious origins.
On person-affecting beliefs: The vast majority of people holding these are not longtermists to begin with. What we should be wondering is “to the extent that we have intuitions about what is best for the long term (and care about this), where do these intuitions come from?”. Non-longtermist beliefs are irrelevant, here. Hopefully, this also addresses your last bullet point.
Thanks for engaging with this, Richard!
To be clear: you’re arguing that we should be agnostic (and, more strongly, take others to also be utterly clueless) about whether it would be good or bad for everyone to die?
I think I am making a much weaker claim than this. While I suggest that the EDA argument I raise is valid, I do not argue that it is strong to the point where optimistic longtermism is unwarranted. Also, the argument itself does not say what people should believe if they do not endorse optimistic longtermism (an alternative to cluelessness is pessimistic longtermism—I do not say anything about which one is the most appropriate alternative to optimistic longtermism if the EDA argument is strong enough). Sorry if my writing was unclear.
whether it would be good or bad for everyone to die
Maybe a nitpick, but I find this choice of words quite unfair as it implicitly appeals to commonsensical intuitions that seem to have nothing to do with longtermism (to implicitly back your opinion that we know X-risk reduction is good from a longtermist perspective). You do something very similar multiple times in It’s Not Wise to be Clueless.
If you think that, in general, justified belief is incompatible with “judgment calls”
I didn’t say that. I said that we ought to wonder whether these judgment calls are reliable, a claim you seem to agree with when you write:
It’s OK—indeed, essential—to make judgment calls, and we should simply try to exercise better rather than worse judgment.
Now, you seem much more convinced than me that our judgment calls with regard to the long-term value of X-risk reduction come from a reliable source (such as an evolutionary pressure selecting correct longtermist beliefs, whether directly or indirectly) rather than from evolutionary pressures towards pro-natalist beliefs. In It’s Not Wise to be Clueless, the justification you provide for something in this vicinity[1] is that we ought to start with the prior that something like X-risk reduction is good, for reasons similar to those for which we should start with the prior that the sun will rise tomorrow. But I think Jesse quite accurately pointed out the disanalogy and the problem with your argument in his comment. Do you have another argument and/or an objection to Jesse’s reply that you are happy to share?
[1] EDIT: actually, not sure this is related. You don’t seem to argue that our judgment calls are truth-tracking. You argue that there is a rational requirement to start with a certain prior (i.e., you implicitly suggest that all rational agents should agree with you on X-risk reduction without having to make judgment calls, in fact).
Not really, at least not with this specific post. I just wanted to learn things by getting people’s thoughts on SARP and the temporary setback view. Maybe I also very marginally made people update a bit towards “SARP might be a bigger deal than I thought” and “animal macrostrategy is complex and important”, and that seems cool, but this wasn’t the goal.
I like your questions. They got me thinking a lot. :)