How much current animal suffering does longtermism let us ignore?

Some thoughts on whether/why it makes sense to work on animal welfare, given longtermist arguments. TLDR:

  1. We should only deprioritize the current suffering of billions of farmed animals if we would similarly deprioritize comparable treatment of millions of humans; and,

  2. We should double-check that our arguments aren’t distorted by status quo bias, especially power imbalances in our favor.

This post consists of six arguments:

  1. If millions of people were being kept in battery cages, how much energy should we redirect away from longtermism to work on that?

  2. Power is exploited, and absolute power is exploited absolutely

  3. Sacrificing others makes sense

  4. Does longtermism mean ignoring current suffering until the heat death of the universe?

  5. Animals are part of longtermism

  6. None of this refutes longtermism

Plus some context and caveats at the bottom.

A. If millions of people were being kept in battery cages, how much energy should we redirect away from longtermism to work on that?

Despite some limitations, I find this analogy compelling. Come on, picture it. Check out some images of battery cages and picture millions of humans kept in the equivalent for 100% of their adult lives, and suppose with some work we could free them: would you stick to your longtermist guns?

Three possible answers:

  1. Yes, the suffering of both the chickens and the humans is outweighed by longtermist concerns (the importance of improving our long-term future).

  2. No, the suffering of the humans is unacceptable, because it differs from the suffering of the chickens in key ways.

  3. No, neither is acceptable: longtermism notwithstanding, we should allocate significant resources to combating both.

I lean towards (3) myself, but I can see a case for (1): I just think if you’re going to embrace (1), you should picture the caged-humans analogy so you fully appreciate the tradeoff involved. I’m less sympathetic to (2) because it feels like suspicious convergence: “That theoretical bad thing would definitely make me change my behavior, but this actual bad thing isn’t actually so bad” (see section B below). Still, one could sketch some plausibly relevant differences between the caged chickens and the caged humans, eg:

  1. “Millions of people” are subbing here for “billions of hens”, implying something like a 1:1,000 suffering ratio (1 caged chicken = 0.001 caged humans; see the rough sketch after this list): this ratio is of course debatable based on sentience, self-awareness, etc. Still, 0.001 is a pretty tiny factor (raw neuron ratio would put 1 chicken closer to 0.002-0.005 humans) and again uncertainty does some of the work for us (the argument works even if it’s only quite plausible that chicken suffering matters). There is a school of thought that we can be 99+% confident that a billion chickens trapped on broken legs for years don’t outweigh a single human bruising her shin; I find this view ridiculous.

  2. Maybe caging creatures that are “like us” differs in important ways from caging creatures that are “unlike us”. Like, maybe allowing the caging of humans makes it more likely future humans will be caged too, making it (somehow?) of more interest to longtermists than the chickens case. (But again, see section B.)

  3. A lot of longtermism involves the idea that humans (or AIs), unlike hens, will play a special role in determining the future (I find this reasonable). Maybe this makes caging humans worse.
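To make the implied exchange rate in point 1 concrete, here’s a minimal back-of-the-envelope sketch. The hen count and moral-weight values are illustrative assumptions (the post only commits to “billions” of hens and the rough 0.001-0.005 range above), not data:

```python
# Back-of-the-envelope sketch of the exchange rate the analogy relies on.
# All numbers are illustrative placeholders, not claims beyond "billions of
# hens" and the 0.001-0.005 moral-weight range discussed above.

caged_hens = 5e9  # assumed: "billions" of hens in battery cages worldwide

# Candidate moral weights (1 chicken = w humans), spanning the range above:
weights = {
    "1:1,000 ratio": 0.001,
    "raw neuron-count ratio (low)": 0.002,
    "raw neuron-count ratio (high)": 0.005,
}

for label, w in weights.items():
    human_equivalents = caged_hens * w
    print(f"{label}: ~{human_equivalents / 1e6:.0f} million caged-human equivalents")

# Even the smallest weight leaves millions of human-equivalents in cages,
# which is the point of the "millions of people in battery cages" analogy.
```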

B. Power is exploited, and absolute power is exploited absolutely

A general principle I find useful: when group A is exploiting group B, group A tends to come up with rationalizations, when in fact the exploitation is often just a straight-up result of a power imbalance. I sometimes picture a conversation with a time traveler from a future advanced civilization, not too familiar with ours:

TT: So what’s this “gestation crate” thing? And “chick maceration”, what does that even mean?

Us: Oh, well, that’s all part of how our food production industry works.

TT: *stares blankly*

Or maybe not: maybe TT thinks it’s fine, because her civilization has determined that factory farming is actually justifiable. After all, I’m not from the future. But to me it seems quite likely that she thinks it’s barbarically inhumane whereas we broadly think it’s OK, or at least spend a lot more energy getting worked up about Will Smith or mask mandates or whatnot. Why do we think it’s OK? Two main reasons:

  1. Status quo bias: this is how things have been all our lives (though not for long before them); it’s normal.

  2. Self-interest: we like (cheap widely available) meat, so we’re motivated to accept the system that gives it to us.

In particular, we’re motivated to come up with reasons why this status quo is reasonable (the chickens don’t suffer that much, don’t value things like freedom that we value, are physically incapable of suffering, etc). If factory farming didn’t exist and there were a proposal to suddenly implement it in its current form, we might find these arguments a lot less convincing; but that’s not our situation (though see octopus farming).

In general, when group A has extra power (physical, intellectual, technological, military) relative to group B, for group A to end up pushing around group B is normal. It’s what will happen unless there’s some sort of explicit countereffort to prevent it. And the more extreme and lasting the power imbalance, the more extreme yet normalized the exploitation becomes.

I view many historical forms of exploitation through this lens: slavery, colonialism, military conquests, patriarchy, class and caste hierarchies, etc. To me this list is encouraging! A lot of these exploitations are much reduced from their peak, thanks at least in part to some exploiters themselves rallying against them, or finding them harder and harder to defend.

So the takeaway for me is not, “exploitation is inevitable.” The main takeaway is, when we observe what looks naively like exploitation, and hear (or make) ostensibly rational defenses of that apparent exploitation, we should check those arguments carefully for steps whose flimsiness may be masked by 1. status quo bias or 2. self-interest.

(Another conclusion I would not draw is “Self-interested arguments can be dismissed.” Otherwise someone advocating for plant rights or rock rights would have me trapped. Who knows, maybe it will turn out we should be taking plant/rock welfare seriously: but the fact that that conclusion would be inconvenient for us is not enough to prove it correct.)

C. Sacrificing others makes sense

My main take on self-sacrifice is a common one: it’s no substitute for results. People will take cold showers for a year to help fight climate change, when a cheque to a high-impact climate org probably does much more to reduce atmospheric carbon.

That said (and this is not a fully fleshed-out thought), there is something suspicious about moral theorizing without sacrifice: especially theorizing about large sacrifices of others. There is a caricature of moral philosophers, debating the vast suffering of other species and peoples, concluding it’s not worth doing much about, finishing their nice meal and heading off to a comfortable bed. (See also: the timeless final scene from Dr Strangelove.) When we edge too close to this caricature (and I certainly live something near it myself) I start to miss the social workers and grassroots activists and cold-shower-takers.

Again, the fact that a conclusion is convenient is not sufficient grounds to dismiss it. But it does mean we should scrutinize it extra closely. And the conclusion that the ongoing (at least plausible) suffering of billions of other creatures, inflicted for our species’ benefit, is less pressing than relatively theoretical future suffering, is convenient enough to be worth double-checking.

I’ve seen elements of both extremes in the EA community: “endless fun debates over nice dinners” at one end, intense guilt-driven overwork leading to burnout at the other. We’ll just have to keep watching out for both extremes.

D. Does longtermism mean ignoring current suffering until the heat death of the universe?

If current suffering is outweighed by the importance of quadrillions of potential future lives, then in, say, a century, won’t that still be true? There’s a crude inductive argument that the future will always outweigh the present, in which case we could end up like Aesop’s miser, always saving for the future until eventually we die.

Of course reality might not be so crude. Eg, many have argued that we live at an especially “hingey” time (especially given AI timelines), perhaps (eg if we survive) to be followed by a “long reflection” during or after which we might finally get to take a breather and enjoy the present (and finally deal with large-scale suffering?).

But it’s not really clear to me that in 100 or 1,000 years the future won’t still loom large, especially if technological progress continues at any pace at all. So perhaps, like the ageing miser, or like a longevity researcher taking some time to eat well and exercise, we should allocate some of our resources towards improving the present, while also giving the future its due.

E. Animals are part of longtermism

The simplest longtermist arguments for animal work are that 1. many versions of the far future include vast numbers of animals (abrahamrowe), and 2. how they fare in the future may critically depend on values that will be “locked in” in upcoming generations (eg, before we extend factory farming to the stars: Jacy, Fai). Maybe! But the great thing about longtermist arguments is you only need a maybe. Anyway, lots of other people have written about this so I won’t here: those above plus Tobias_Baumann, MichaelA, and others.

F. None of this refutes longtermism

I probably sound like an anti-longtermism partisan animal advocate so far, but I actually take longtermist arguments seriously. Eg, some things I believe, for what it’s worth:

  1. All future potential lives matter, in total, much more than all current lives. (But I’d argue improving current lives is much more tractable, on a per-life basis, so tractability is an offsetting factor. See also the question of how often longtermism actually diverges from short-termism in practice, and good old Pascal’s mugging.)

  2. Giving future lives proper attention requires turning our attention away from some current suffering. It’s just a question of where we draw the line.

  3. One human life matters much more than one chicken life.

  4. There are powerful biases against longtermism—above all proximity bias.

I’m not here to argue that longtermism is wrong. My argument is just that we need to watch out for the pro-longtermism biases I laid out above—biases we should, y’know, overcome…

Notes

  1. About me: I’ve been a part-time co-organizer of Effective Altruism NYC for several years, and I’m on the board of The Humane League, but I’m speaking only for myself here. I’m not an expert on any of this: after a conversation, an EA pal I respect encouraged me to write up my views.

  2. I’m sure many of these arguments have been made and rebutted elsewhere: kindly just link them below.

  3. Some of these arguments could be applied more broadly, eg to global (human) health work rather than animal welfare. Extrapolate away!

  4. A major motivation for this post is the piles of money and attention getting allocated to longtermism these days. Whatever we conclude, kicking the tires of longtermist arguments has never been higher-stakes than it is now.

  5. Battery cages are just one example: eg, broiler chickens (farmed for meat not eggs) are even more numerous and arguably have worse lives, above all because of the Frankensteinian way they’ve been bred to grow much larger and faster than their bodies can healthily support. I used battery cages because it’s easier to picture yourself in a coffin-sized cage than bred to quadruple your natural weight.