Sadly, it looks like the debate week will end without many of the stronger[1] arguments for Global Health being raised, at least at the post level. I don’t have time to write them all up, and in many cases they would be better written by someone with more expertise, but one issue is firmly in my comfort zone:
To the extent that we discuss this issue rarely, it really ought to be worth someone’s time to write up these supposed strong arguments. To the extent that they haven’t been, even after a well-publicised week of discussion, I will believe it more likely that they don’t exist.
@AGB 🔸 would you be willing to provide brief sketches of some of these stronger arguments for global health which weren’t covered during the Debate Week? Like Nathan, I’ve spent a ton of time discussing this issue with other EAs, and I haven’t heard any arguments I’d consider strong for prioritizing global health which weren’t mentioned during Debate Week.
First, I want to flag that what I said was at the post level, and that I then defined stronger as:
the points I’m most likely to hear and give most weight to when discussing this with longtime EAs in person
You said:
I haven’t heard any arguments I’d consider strong for prioritizing global health which weren’t mentioned during Debate Week
So I can give examples of what I was referring to, but to be clear we’re talking somewhat at cross purposes here:
I would not expect you to consider them strong.
You are not alone here of course, and I suspect this fact also helps to answer Nathan’s confusion for why nobody wrote them up.
Even my post, which has been decently received all things considered, I don’t consider an actually good use of time; I did it more in order to sleep better at night.
They were often mentioned at the comment level.
With that in mind, I would say that the most common argument I hear from longtime EAs is variants of ‘animals don’t count at all’. Sometimes it’s framed as ‘almost certainly aren’t sentient’ or ‘count as ~nothing compared to a child’s life’. You can see this from Jeff Kaufman, and Eliezer Yudkowsky, and it’s one I hear a decent amount from EAs closer to me as well.
If you’ve discussed this a ton I assume you have heard this too, and just aren’t thinking of the things people say here as strong arguments? Which is fine and all, I’m not trying to argue from authority, at least not at this time. My intended observation was ‘lots of EAs think a thing that is highly relevant to the debate week, none of them wrote it up for the debate week’.
I think that observation holds, though if you still disagree I’m curious why.
(Just wanted to say that your story of earning to give has been an inspiration! Your episode with 80k encouraged me to originally enter quant trading.)
Given your clarification, I agree that your observation holds. I too would have loved to hear someone defend the view that “animals don’t count at all”. I think it’s somewhat common among rationalists, although the only well-known EA-adjacent individuals I know who hold it are Jeff, Yud, and Zvi Mowshowitz. Holden Karnofsky seems to have believed it once, but later changed his mind.[1]
As @JackM pointed out, Jeff didn’t really justify his view in his comment thread. I’ve never read Zvi justify that view anywhere either. I’ve heard two main justifications for the view, either of which would be sufficient to prioritize global health:
Overwhelming Hierarchicalism
Solely by virtue of our shared species, helping humans may be lexicographically preferential to helping animals, or perhaps their preferences should be given an enormous multiplier.
I use the term “overwhelming” because depending on which animal welfare BOTEC is used, if we use constant moral weights relative to humans, you’d need a 100x to 1000x multiplier for the math to work out in favor of global health. (This comparison is coherent to me because I accept Michael St. Jules’ argument that we should resolve the two envelopes problem by weighing in the units we know, but I acknowledge that this line of reasoning may not appeal to you if you don’t endorse that resolution.)
I personally find overwhelming hierarchicalism (or any form of hierarchicalism) to be deeply dubious. I write more about it here, but I simply see it as a convenient way to avoid confronting ethical problems without having the backing of any sound theoretical justification. I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans. There’s just no prior for why that would be the case.
Denial of animal consciousness
Yud and maybe some others seem to believe that animals are most likely not conscious. As before, they’d have to be really certain that animals aren’t conscious to endorse global health here. Even if there’s a 10% chance that chickens are conscious, given the outsize cost-effectiveness of corporate campaigns if they are, I think they’d still merit a significant fraction of EA funding. (Probably still more than they’re currently receiving.)
I think it’s fair to start with a very strong prior that at least chickens and pigs are probably conscious. Pigs have brains with all of the same high-level substructures, which are affected the same way by drugs/painkillers/social interaction as humans’ are, and they act in all of the same ways that humans would when confronted with situations of suffering and terror. It would be really surprising a priori if what was going on was merely a simulacrum of suffering with no actual consciousness behind it. Indeed, the vast majority of people seem to agree that these animals are conscious and deserve at least some moral concern. I certainly remember being able to feel pain as a child, and I was probably less intelligent than an adult pig during some of that.
Apart from that purely intuitive prior, while I’m not a consciousness expert at all, the New York Declaration on Animal Consciousness says that “there is strong scientific support for attributions of conscious experience to other mammals and to birds”. Rethink Priorities’ and Luke Muehlhauser’s work for Open Phil corroborate that. So Yud’s view is also at odds with much of the scientific community and other EAs who have investigated this.
All of this is why I feel that Yud’s Facebook post needed to meet a very high burden of proof to convince me. Instead, he just kept explaining what his model (a higher-order theory of consciousness) implies without actually justifying the model itself. He asserted some deeply unorthodox and unintuitive ideas (like pigs not being conscious), admitted no moral uncertainty, and made no real attempt to justify them. So I didn’t find anything about his Facebook post convincing.
Conclusion
To me, the strongest reason to believe that animals don’t count at all is because smart and well-meaning people like Jeff, Yud, and Zvi believe it. I haven’t read anything remotely convincing that justifies that view on the merits. That’s why I didn’t even mention these arguments in my follow-up post for Debate Week.
Trying to be charitable, I think the main reasons why nobody defended that view during Debate Week were:
They didn’t have the mental bandwidth to deal with an audience I suspect would be hostile. Overwhelming hierarchicalism is very much against the spirit of radical empathy in EA.
They may have felt like most EAs don’t share the basic intuitions underlying their views, so they’d be talking to a wall. The idea that pigs aren’t conscious might seem very intuitive to Eliezer. To me, and I suspect to most people, it seems wild. I could be convinced, but I’d need to see way more justification than I’ve seen.
In 2017, Holden’s personal reflections “indicate against the idea that e.g. chickens merit moral concern”. In 2018, Holden stated that “there is presently no evidence base that could reasonably justify high confidence [that] nonhuman animals are not ‘conscious’ in a morally relevant way”.
Strong downvoted because I find this statement repugnant “I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans.”
Why go there? You don’t do yourself or animal welfare proponents any favors. Make the argument in a less provocative way.
We know conclusively that human experience is the same. On the animal front there are very many datapoints (mind complexity, brain size, behavior) which are priors that at least push us towards some kind of hierarchicalism.
I have a lot of respect for most pro-animal arguments, but why go this way?
Hierarchicalism, as Ariel presents it, is based solely on species membership, where humans are prioritized simply because they are humans. See here (bold emphasis mine):
So, the argument you’re making about mind complexity and behavior goes beyond the species-based hierarchicalism Ariel refers to:
We know conclusively that human experience is the same. On the animal front there are very many datapoints (mind complexity, brain size, behavior) which are priors that at least push us towards some kind of hierarchicalism.
While I understand the discomfort with the Aryan vs. non-Aryan analogy, striking analogies like this can sometimes help expose problematic reasoning. I feel like it’s a common approach in moral philosophy. But, I recognize that these comparisons are emotionally charged, and it’s important to use them carefully to avoid alienating others.
I feel bad that my comment made you (and a few others, judging by your comment’s agreevotes) feel bad.
As JackM points out, that snarky comment wasn’t addressing views which give very low moral weights to animals due to characteristics like mind complexity, brain size, and behavior, which can and should be incorporated into welfare ranges. Instead, it was specifically addressing overwhelming hierarchicalism, a view that assigns overwhelmingly lower moral weight based solely on species.
My statement was intended to draw a provocative analogy: There’s no theoretical reason why one’s ethical system should lexicographically prefer one race/gender/species over another, based solely on that characteristic. In my experience, people who have this view on species say things like “we have the right to exploit animals because we’re stronger than them”, or “exploiting animals is the natural order”, which could have come straight out of Mein Kampf. Drawing a provocative analogy can (sometimes) force a person to grapple with the cognitive dissonance from holding such a position.
While hierarchicalism is common among the general public, highly engaged EAs generally don’t even argue for hierarchicalism because it’s just such a dubious view. I wouldn’t write something like this about virtually any other argument for prioritizing global health, including ripple effects, neuron count weighting, denying that animals are conscious, or concerns about optics.
“There’s no theoretical reason why one’s ethical system should lexicographically prefer one race/gender/species over another, based solely on that characteristic. In my experience, people who have this view on species say things like ‘we have the right to exploit animals because we’re stronger than them’, or ‘exploiting animals is the natural order’.” I completely agree with this (although I think it’s probably a straw man; I can’t see anyone here arguing those things).
I just think it’s a really bad idea to compare almost any argument (including non-animal-related ones) with Nazi Germany and that thought-world. I think it’s possible to provoke without going this way.
1) It’s insensitive to the people groups that were involved in that horrific period of time. 2) It distracts from the argument itself (like it has here, although that’s kind of on me). 3) It brings potential unnecessary negative PR issues for EA, as it gives unnecessary ammunition for hit pieces.
It’s the style, not the substance, that I’m strongly against here.
I’m surprised this is the argument you went for. FWIW I think the strongest argument might be that global health wins due to ripple effects and is better for the long-term future (but I still think this argument fails).
On animals just “not counting”—I’ve been very frustrated with both Jeff Kaufman and Eliezer Yudkowsky on this.
Jeff because he doesn’t seem to have provided any justification (from what I’ve seen) for the claim that animals don’t have relevant experiences that make them moral patients. He simply asserts this as his view. It’s not even an argument, let alone a strong one.
Eliezer has at least defended his view in a Facebook thread which unfortunately I don’t think exists anymore as all links I can find go to some weird page in another language. I just remember two things that really didn’t impress me:
I wish I remembered this better, but he made some sort of assertion that animals don’t have relevant experiences because they do not have a sense of self. Firstly, there is some experimental evidence that animals have a sense of self. But also, I remember David Pearce replying that there are occasions when humans lose a sense of self temporarily but can still have intense negative experiences in that moment e.g. extreme debilitating fright. This was a very interesting point that Eliezer never even replied to which seemed a bit suspect. (Apologies if I’m misremembering anything here).
He didn’t respond to the argument that, under uncertainty, we should give animals the benefit of the doubt so as to avoid potentially committing a grave moral catastrophe. This may not be relevant to the question of animal welfare vs global health, but it is relevant to, for example, the choice to go vegan. His dismissal of this argument also seemed a bit suspect to me.
I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger. See the new 80,000 Hours article on factory farming which does a nice job summarizing the key points.
Jeff because he doesn’t seem to have provided any justification (from what I’ve seen) for the claim that animals don’t have relevant experiences that make them moral patients. He simply asserts this as his view. It’s not even an argument, let alone a strong one.
I agree I haven’t given an argument on this. At various times people have asked what my view is (ex: we’re talking here about something prompted by my completing a survey prompt) and I’ve given that.
Explaining why I have this view would be a big investment in time: I have a bundle of intuitions and thoughts that put me here, but converting that into a cleanly argued blog post would be a lot more work than I would normally do for fun and I don’t expect this to be fun.
This is especially the case because if I did a good job at this I might end up primarily known for being an anti-animal advocate, and since I think my views on animals are much less important than many of my other views, I wouldn’t see this as at all a good thing. I also expect that, again, conditional on doing a good job of this, I would need to spend a lot of time as a representative of this position: responding to the best counter-arguments, evaluating new information as it comes up, people wanting me to participate in debates, animal advocates thinking that changing my mind is really very important for making progress toward their goals. These are similarly not where I want to put my time and energy, either for altruistic reasons or for personal enjoyment.
The normal thing to do would be to stop here: I’ve said what my view is, and explained why I’ve never put the effort into a careful case for that position. But I’m more committed to transparency than I am to the above, so I’m going to take about 10 minutes (I have 14 minutes before my kids wake up) to very quickly sketch the main things going into my view. Please read this keeping in mind that it is something I am sharing to be helpful, and I’m not claiming it’s fully argued.
The key question for me is whether, in a given system, there’s anyone inside to experience anything.
I think extremely small collections of neurons (ex: nematodes) can register pain, in the sense of updating on inputs to generate less of some output. But I draw a distinction between pain and suffering, where the latter requires experience. And I think it’s very unlikely nematodes experience anything.
I don’t think this basic pleasure or pain matters morally, and you can’t make something extremely morally good by maximizing the number of happy neurons per cubic centimeter.
I’m pretty sure that most adult humans do experience things, because I do and I can talk to other humans about this.
I think it is pretty unlikely that very young children, in their first few months, have this kind of inner experience.
I don’t find most things that people give as examples for animal consciousness to be very convincing, because you can often make quite a simple system that displays these features.
While some of my views above could imply that some humans matter more morally than others, I think it would be extremely destructive to act that way. Lots and lots of bad history there. I treat all people as morally equal.
The arguments for extending this to people as a class don’t seem to me to justify extending this to all creatures as a class.
I also think there are things that matter beyond experienced joy and suffering (preference satisfaction, etc), and I’m even less convinced that animals have these.
Eliezer’s view is reasonably close to mine, in places where I’ve seen him argue it.
(I’m not going to be engaging with object level arguments on this issue—I’m not trying to become an anti-animal advocate.)
This is especially the case because if I did a good job at this I might end up primarily known for being an anti-animal advocate
I’d be interested to know how likely you think it is that you could do a “good job”. You say you have a “bundle of intuitions and thoughts” which doesn’t seem like much to me.
I’m also very surprised you put yourself at the far end of the spectrum in favor of global health > animal welfare based on a “bundle of intuitions and thoughts” on what is ultimately a very difficult and important question.[1] In your original comment you say “This isn’t as deeply a considered view as I’d like”. Were you saying you haven’t considered deeply enough or that the general community hasn’t?
And thanks for the sketch of your reasoning but ultimately I don’t think it’s very helpful without some justification for claims like the following:
I think it is pretty unlikely that very young children, in their first few months, have this kind of inner experience.
I also put myself at the far end of the spectrum in the other direction, so I feel I should say something about that. I think arguments for animal sentience/moral patienthood are pretty strong (e.g. see here for a summary), and I would not say I’m relying on intuition. I’m not, of course, sure that animals are moral patients, but even if you put a small probability on it, the vast numbers of animals being treated poorly can justifiably lead to a strong view that resources for animal welfare are better in expectation than resources for global health. Ultimately, for this argument not to work, I think you probably need to be very confident that animals aren’t moral patients, to counteract the vast numbers of animals that can be helped.
I’d be interested to know how likely you think it is that you could do a “good job”.
I do think I could do a good job, yes. While I’ve been thinking about these problems off and on for over a decade I’ve never dedicated actual serious time here, and in the past when I’ve put that kind of time into work I’ve been proud of what I’ve been able to do.
You say you have a “bundle of intuitions and thoughts” which doesn’t seem like much to me.
What I meant by that is that I don’t have my overall views organized into a form optimized for explaining to others. I’m not asking other people to assume that because I’ve inscrutably come to this conclusion I’m correct or that they should defer to me in any way. But I’d also be dishonest if I didn’t accurately report my views.
In your original comment you say “This isn’t as deeply a considered view as I’d like”. Were you saying you haven’t considered deeply enough or that the general community hasn’t?
Primarily the former. If someone in the general community had put a lot of time into looking at this question from a perspective similar to my own, and I felt like their work addressed my questions, that would certainly help. But given that no one has, and that I’m instead forming my own view, I would prefer to have put more work into that view.
To clarify, when I asked if you could do a good job I meant can you put together a convincing argument that might give some people like me pause for thought (maybe this is indeed how you understood me).
If you think you can, I would strongly encourage you to do so. As per another comment of mine, tens of millions of dollars goes towards animal welfare within EA each year. If this money is effectively getting burned it is very useful for the community to know. Also, there is no convincing argument that animals are not moral patients on this forum (or indeed anywhere else) that I am aware of, so your view is exceedingly neglected. I think you could really do a whole lot of good if you do have a great argument up your sleeve.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views—sorry I don’t really buy it. For example, I don’t think Luke Muehlhauser has been forced into becoming a pro-animal advocate, in the way you hypothesise that you would, after writing his piece. This just seems like too convenient an excuse, sorry.
Of course you’re not under any obligation to write anything (well...perhaps some would argue you are, but I’ll concede you’re not). But if I thought I had a great argument up my sleeve, mostly ignored by the community, which, if true, would mean we were effectively burning tens of millions of dollars a year, I know I’d write it up.
Ah, thank you for clarifying! That is a much stronger sense of “doing a good job” than I was going for. I was trying to point at something like, successfully writing up my views in a way that felt like a solid contribution to the discourse. Explaining what I thought, why I thought it, and why I didn’t find the standard counter arguments convincing. I think this would probably take me about two months of full-time work, so a pretty substantial opportunity cost.
I think I could do this well enough to become the main person people pointed at when they wanted to give an example of a “don’t value animals” EA (which would probably be negative for my other work), but even major success here would probably only result in convincing <5% of animal-focused EAs to change what they were working on. And much less than that for money, since most of the EA money is from OP, which funds animal work as part of an explicit process of worldview diversification.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views—sorry I don’t really buy it.
I would be primarily known as an anti-animal advocate if I wrote something like this, even if I didn’t want to be.
On whether I would need to put my time into continuing to defend the position, I agree that I strictly wouldn’t have to, but I think that given my temperament and interaction style I wouldn’t actually be able to avoid this. So I need to think of this as if I am allocating a larger amount of time than what it would take to write up the argument.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views—sorry I don’t really buy it.
OK so he says he would primarily be “known” as an anti-animal advocate not “become” one.
This is especially the case because if I did a good job at this I might end up primarily known for being an anti-animal advocate
But he then also says the following (bold emphasis mine):
I also expect that, again, conditional on doing a good job of this, I would need to spend a lot of time as a representative of this position: responding to the best counter-arguments, evaluating new information as it comes up, people wanting me to participate in debates, animal advocates thinking that changing my mind is really very important for making progress toward their goals. These are similarly not where I want to put my time and energy, either for altruistic reasons or for personal enjoyment.
I’m struggling to see how what I said isn’t accurate. Maybe Jeff should have said “I would feel compelled to” rather than “I would need to”.
To my eyes “be known as an anti-animal advocate” is a much lower bar than “be an anti-animal advocate.”
For example I think some people will (still!) consider me an “anti-climate change advocate” (or “anti-anti-climate change advocate?”) due to a fairly short post I wrote 5+ years ago. I would, from their perspective, take actions consistent with that view (eg I’d be willing to defend my position if challenged, describe ways in which I’ve updated, etc). Moreover, it is not implausible that from their perspective, this is the most important thing I do (since they don’t interact with me at other times, and/or they might think my other actions are useless in either direction).
However, by my lights (and I expect by the lights of e.g. the median EA Forum reader) this would be a bad characterization. I don’t view arguing against climate change interventions as an important aspect of my life, nor do I consider my views on the matter particularly outside the academic consensus.
Hence the distinction between “known as” vs “become.”
It’s the only part of my comment that argues Jeff was effectively saying he would have to “be” an anti-animal advocate, which is exactly what you’re arguing against.
So I guess my best reply is just to point you back to that...
I guess I still don’t think of “I would need to spend a lot of time as a representative of this position” as being an anti-animal advocate. I spend a lot of time disagreeing with people on many different issues and yet I’d consider myself an advocate for only a tiny minority of them.
Put another way, I view the time spent as just one of the costs of being known as an anti-animal advocate, rather than being one.
Rats and pigs seem to be able to discriminate anxiety from its absence generalizably across causes with a learned behaviour, like pressing a lever when they would apparently feel anxious.[1] In other words, it seems like they can be taught to tell us what they’re feeling in ways unnatural and non-instinctive to them. To me, the difference between this and human language is mostly just a matter of degree, i.e. we form more associations and form them more easily, and we do recursion.
Graziano (2020, pdf), an illusionist and the inventor of Attention Schema Theory, also takes endogenous/top-down/voluntary attention control to be evidence of having a model (schema) of one’s own attention.[2] Then, according to Nieder (2022), there is good evidence for the voluntary/top-down control of attention (and working memory) at least across mammals and birds, and some suggestive evidence for it in some fish.
And I would expect these to happen in fairly preserved neural structures across mammals, at least, including humans.
I also discuss desires and preferences in other animals more here and here.
Carey and Fry (1995) showed that pigs generalize the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). Many more such experiments were performed on rats, as discussed in Sánchez-Suárez, 2016, summarized in Table 2 on pages 63 and 64 and discussed further across chapter 3.
Rats could discriminate between the injection of the anxiety-inducing drug PTZ and saline injection, including at subconvulsive doses. Various experiments with rats and PTZ have effectively ruled out convulsions as the discriminant, further supporting that it’s the anxiety itself that they’re discriminating, because they could discriminate PTZ from control without generalizing between PTZ and non-anxiogenic drugs, and with the discrimination blocked by anxiolytics and not nonanxiolytic anticonvulsants.
Rats further generalized between various pairs of anxiety(-like) states, like those induced by PTZ, drug withdrawal, predator exposure, ethanol hangover, “jet lag”, defeat by a rival male, high doses of stimulants like bemegride and cocaine, and movement restraint.
But could such results merely reflect a “blindsight-like” guessing: a mere discrimination response that need not reflect underlying awareness? After all, as we have seen for S.P.U.D. subjects, decerebrated pigeons can use colored lights as DSs (128), and humans can use subliminal visual stimuli as DSs [e.g., (121)]. We think several refinements could reduce this risk.
I would expect that checking which brain systems are involved and what their typical functions are could provide further evidence. The case for other mammals would be strongest, given more preserved functions across them, including humans.
Any creature that can endogenously direct attention must have some kind of attention schema, and good control of attention has been demonstrated in a range of animals including mammals and birds (e.g., Desimone & Duncan, 1995; Knudsen, 2018; Moore & Zirnsak, 2017). My guess is that most mammals and birds have some version of an attention schema that serves an essentially similar function, and contains some of the same information, as ours does. Just as other animals must have a body schema or be condemned to a flailing uncontrolled body, they must have an attention schema or be condemned to an attention system that is purely at the mercy of every new sparkling, bottom-up pull on attention. To control attention endogenously implies an effective controller, which implies a control model.
Thanks for taking the time to lay out your view clearly here, and for explaining why you don’t spend a lot of time on the topic (which I respect).
If I understand correctly, the difference in consideration you make between humans and animals seems to boil down to “I can talk to humans, and they can tell me that they have an inner experience, while animals cannot (same for small children)”.
While nobody disputes that, I find it weird that your conclusion is not “I’m very uncertain about other systems”, but “other systems that cannot tell me directly about their inner experience (very small children, animals) probably don’t have any relevant inner experience”. I’m not sure how you got to that conclusion. At the very least, this would justify extreme uncertainty.
Personally, I think that the fact that animals display a lot of behaviour similar to humans in similar situations should be a significant update toward thinking they have some kind of experience. For instance, a pig is screaming and trying to escape when it is castrated, just as humans would do (we have to observe behaviours).
We can probably build robots that can do the same thing, but that just means we’re good at mimicking other life forms (for instance, we can also build LLMs which tell us they are conscious, and we don’t use that to think humans are not sentient).
If I understand correctly, the difference in consideration you make between humans and animals seems to boil down to “I can talk to humans, and they can tell me that they have an inner experience, while animals cannot (same for small children)”.
I don’t think this is what Jeff believes, though I guess his literal words are consistent with this interpretation.
Eliezer has at least defended his view in a Facebook thread which unfortunately I don’t think exists anymore as all links I can find go to some weird page in another language.
The discussion is archived here and the original Facebook post here.
Thank you! Links in articles such as this just weren’t working.
This is the relevant David Pearce comment I was referring to which Yudkowsky just ignored despite continuing to respond to less challenging comments:
Some errors are potentially ethically catastrophic. This is one of them. Many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Thus in orgasm, for instance, much of the neocortex effectively shuts down. Or compare a mounting sense of panic. As an intense feeling of panic becomes uncontrollable, are we to theorise that the experience somehow ceases to be unpleasant as the capacity for reflective self-awareness is lost? “Blind” panic induced by e.g. a sense of suffocation, or fleeing a fire in a crowded cinema (etc), is one of the most unpleasant experiences anyone can undergo, regardless of race or species. Also, compare microelectrode neural studies of awake subjects probing different brain regions; stimulating various regions of the “primitive” limbic system elicits the most intense experiences. And compare dreams – not least, nightmares – many of which are emotionally intense and characterised precisely by the lack of reflectivity or critical meta-cognitive capacity that we enjoy in waking life.
Anyone who cares about sentience-friendly intelligence should not harm our fellow subjects of experience. Shutting down factory farms and slaughterhouses will eliminate one of the world’s worst forms of severe and readily avoidable suffering.
I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger.
I’m confused how this works, could you elaborate?
My usual causal chain linking these would be ‘argument is weak’ → ‘~nobody believes it’ → ‘nobody posts it’.
The middle step fails here. Do you have something else in mind?
FWIW, I thought these two comments were reasonable guesses at what may be going on here.
I’m not sure the middle step does actually fail in the EA community. Do you have evidence that it does? Is there some survey evidence for significant numbers of EAs not believing animals are moral patients?
If there is a significant number of people that think they have strong arguments for animals not counting, they should definitely post these and potentially redirect a great deal of altruistic funding towards global health.
Anyway, another possible causal chain might be:
‘argument is weak but some people intuitively believe it in part because they want it to be true’ → ‘there is no strong post that can really be written’ → ‘nobody posts it’
Maybe you can ask Jeff Kaufman why he has never provided any actual argument for this (I do apologize if he has and I have just missed it!).
Ah, gotcha, I guess that works. No, I don’t have anything I would consider strong evidence, I just know it’s come up more than anything else in my few dozen conversations over the years. I suppose I assumed it was coming up for others as well.
they should definitely post these and potentially redirect a great deal of altruistic funding towards global health
FWIW this seems wrong, not least because as was correctly pointed out many times there just isn’t a lot of money in the AW space. I’m pretty sure GHD has far better places to fundraise from.
To the extent I have spoken to people (not Jeff, and not that much) about why they don’t engage more on this, I thought the two comments I linked to in my last comment had a lot of overlap with the responses.
FWIW this seems wrong, not least because as was correctly pointed out many times there just isn’t a lot of money in the AW space. I’m pretty sure GHD has far better places to fundraise from.
This is bizarre to me. This post suggests that between $30 and $40 million goes towards animal welfare each year (and it could be more now as that post was written four years ago). If animals are not moral patients, this money is as good as getting burned. If we actually were burning this amount of money every year, I’d imagine some people would make it their overwhelming mission to ensure we don’t (which would likely involve at least a few forum posts).
Assuming it costs $5,000 to save a human life, redirecting that money could save up to 8,000 human lives every year. Doesn’t seem too bad to me. I’m not claiming posts arguing against animal moral patienthood could lead to redirecting all the money, but the idea that no one is bothering to make the arguments because there’s just no point doesn’t stack up to me.
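The back-of-envelope above can be sketched in a couple of lines (the dollar figures are the thread’s rough estimates, not verified numbers):

```python
# Rough BOTEC using the figures quoted in this thread (both are assumptions):
# ~$30-40M/year of EA funding to animal welfare, ~$5,000 to save a human life.
animal_welfare_budget = 40_000_000  # upper end of the $30-40M estimate
cost_per_life_saved = 5_000         # commonly cited GiveWell-style figure

lives_saved_if_redirected = animal_welfare_budget // cost_per_life_saved
print(lives_saved_if_redirected)  # 8000
```

So "up to 8,000 lives per year" is just the upper-end budget estimate divided by the cost-per-life figure; the lower-end $30M estimate would give 6,000.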
For the record, I have a few places I think EA is burning >$30m per year, not that AW is actually one of them. Most EAs I speak to seem to have similarly-sized bugbears? Though unsurprisingly they don’t agree about where the money is getting burned.
So from where I stand I don’t recognise your guess of how people respond to that situation. A few things I believe that might help explain the difference:
1. Most of the money is directed by people who don’t read the forum, or otherwise have a fairly low opinion of it.
2. Posting on the forum is ‘not for the faint of heart’.
3. On the occasions I have dug into past forum prioritisation posts that were well-received, I generally find them seriously flawed or otherwise uncompelling, so I have no particular reason to be sad about (1).
4. People are often aware that there’s an ‘other side’ that strongly disagrees with them and will push back hard, so they correctly choose not to waste our collective resources in a mud-slinging match.
I don’t expect to have capacity to engage further here, but if further discussion suggests that one of the above is a particularly surprising claim, I may consider writing it up in more detail in future.
Most EAs I speak to seem to have similarly-sized bugbears?
Maybe I don’t speak to enough EAs, which is possible. Obviously many EAs think our overall allocation isn’t optimal, but I wasn’t aware that many EAs think we are giving tens of millions of dollars to interventions/areas that do NO good in expectation (which is what I mean by “burning money”).
Maybe the burning money point is a bit of a red herring though if the amount you’re burning is relatively small and more good can be done by redirecting other funds, even if they are currently doing some good. I concede this point.
To be honest you might be right overall that people who don’t think our funding allocation is perfect tend not to write on the forum about it. Perhaps they are just focusing on doing the most good by acting within their preferred cause area. I’d love to see more discussion of where marginal funding should go though. And FWIW one example of a post that does cover this and was very well-received was Ariel’s on the topic of animal welfare vs global health.
As a small note, I don’t think the “believe it because they want it to be true” point is really an argument either way. To state the obvious, animal welfare researchers need sentience to be true, otherwise all the work they are doing is worth a lot less.
So I don’t think the “want it to be true” argument stands really at all. Motivations are very strong on both sides, and from a “realpolitik” kind of perspective, there’s so much more riding on this from animal researchers than there is for people like Yud and Zvi.
On the other hand, the “very few people believe animals aren’t moral patients and haven’t made great arguments for it” point for me stands very strong.
Animal welfare researchers need sentience to be true, otherwise all the work they are doing is worth a lot less.
That is fair, but there are several additional reasons why most people would want it to be true that animals are not moral patients:
They can continue to eat them guilt-free and animals are tasty.
People can give to global health uncertainty-free and get “fuzzies” from saving human lives with pretty high confidence (I think we naturally get more fuzzies by helping people of our own species).
We wouldn’t as a human species then be committing a grave moral atrocity which would be a massive relief.
There aren’t really similar arguments for wanting animals to be moral patients (other than “I work on animal welfare”) but I would be interested if I’m missing any relevant ones.
To the extent that we discuss this issue rarely it really ought to be worth someone’s time to write up these supposed strong arguments. To the extent that they haven’t, even after a well publicised week of discussion I will believe it more likely they don’t exist.
@AGB 🔸 would you be willing to provide brief sketches of some of these stronger arguments for global health which weren’t covered during the Debate Week? Like Nathan, I’ve spent a ton of time discussing this issue with other EAs, and I haven’t heard any arguments I’d consider strong for prioritizing global health which weren’t mentioned during Debate Week.
First, want to flag that what I said was at the post level and then defined stronger as:
the points I’m most likely to hear and give most weight to when discussing this with longtime EAs in person
You said:
I haven’t heard any arguments I’d consider strong for prioritizing global health which weren’t mentioned during Debate Week
So I can give examples of what I was referring to, but to be clear we’re talking somewhat at cross purposes here:
I would not expect you to consider them strong.
You are not alone here of course, and I suspect this fact also helps to answer Nathan’s confusion for why nobody wrote them up.
Even my post, which has been decently-received all things considered, I don’t consider an actual good use of time, I more did it in order to sleep better at night.
They often were mentioned at comment-level.
With that in mind, I would say that the most common argument I hear from longtime EAs is variants of ‘animals don’t count at all’. Sometimes it’s framed as ‘almost certainly aren’t sentient’ or ‘count as ~nothing compared to a child’s life’. You can see this from Jeff Kaufman, and Eliezer Yudkowsky, and it’s one I hear a decent amount from EAs closer to me as well.
If you’ve discussed this a ton I assume you have heard this too, and just aren’t thinking of the things people say here as strong arguments? Which is fine and all, I’m not trying to argue from authority, at least not at this time. My intended observation was ‘lots of EAs think a thing that is highly relevant to the debate week, none of them wrote it up for the debate week’.
I think that observation holds, though if you still disagree I’m curious why.
(Just wanted to say that your story of earning to give has been an inspiration! Your episode with 80k encouraged me to originally enter quant trading.)
Given your clarification, I agree that your observation holds. I too would have loved to hear someone defend the view that “animals don’t count at all”. I think it’s somewhat common among rationalists, although the only well-known EA-adjacent individuals I know who hold it are Jeff, Yud, and Zvi Mowshowitz. Holden Karnofsky seems to have believed it once, but later changed his mind.[1]
As @JackM pointed out, Jeff didn’t really justify his view in his comment thread. I’ve never read Zvi justify that view anywhere either. I’ve heard two main justifications for the view, either of which would be sufficient to prioritize global health:
Overwhelming Hierarchicalism
Solely by virtue of our shared species, helping humans may be lexicographically preferential to helping animals, or perhaps their preferences should be given an enormous multiplier.
I use the term “overwhelming” because depending on which animal welfare BOTEC is used, if we use constant moral weights relative to humans, you’d need a 100x to 1000x multiplier for the math to work out in favor of global health. (This comparison is coherent to me because I accept Michael St. Jules’ argument that we should resolve the two envelopes problem by weighing in the units we know, but I acknowledge that this line of reasoning may not appeal to you if you don’t endorse that resolution.)
I personally find overwhelming hierarchicalism (or any form of hierarchicalism) to be deeply dubious. I write more about it here, but I simply see it as a convenient way to avoid confronting ethical problems without having the backing of any sound theoretical justification. I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans. There’s just no prior for why that would be the case.
Denial of animal consciousness
Yud and maybe some others seem to believe that animals are most likely not conscious. As before, they’d have to be really certain that animals aren’t conscious to endorse global health here. Even if there’s a 10% chance that chickens are conscious, given the outsize cost-effectiveness of corporate campaigns if they are, I think they’d still merit a significant fraction of EA funding. (Probably still more than they’re currently receiving.)
I think it’s fair to start with a very strong prior that at least chickens and pigs are probably conscious. Pigs have brains with all of the same high-level substructures, which are affected the same way by drugs/painkillers/social interaction as humans’ are, and act in all of the same ways that humans would when confronted with situations of suffering and terror. It would be really surprising a priori if what was going on was merely a simulacrum of suffering with no actual consciousness behind it. Indeed, the vast majority of people seem to agree that these animals are conscious and deserve at least some moral concern. I certainly remember being able to feel pain as a child, and I was probably less intelligent than an adult pig during some of that.
Apart from that purely intuitive prior, while I’m not a consciousness expert at all, the New York Declaration on Animal Consciousness says that “there is strong scientific support for attributions of conscious experience to other mammals and to birds”. Rethink Priorities’ and Luke Muehlhauser’s work for Open Phil corroborate that. So Yud’s view is also at odds with much of the scientific community and other EAs who have investigated this.
All of this is why I feel like Yud’s Facebook post needed a very high burden of proof to be convincing to me. Instead, it seems like he just kept explaining what his model (a higher-order theory of consciousness) believes without actually justifying his model. He also didn’t admit any moral uncertainty about his model. He asserted some deeply unorthodox and unintuitive ideas (like pigs not being conscious), admitted no uncertainty, and didn’t make any attempt to justify them. So I didn’t find anything about his Facebook post convincing.
Conclusion
To me, the strongest reason to believe that animals don’t count at all is because smart and well-meaning people like Jeff, Yud, and Zvi believe it. I haven’t read anything remotely convincing that justifies that view on the merits. That’s why I didn’t even mention these arguments in my follow-up post for Debate Week.
Trying to be charitable, I think the main reasons why nobody defended that view during Debate Week were:
They didn’t have the mental bandwidth to be willing to deal with an audience I suspect would be hostile. Overwhelming hierarchicalism is very much against the spirit of radical empathy in EA.
They may have felt like most EAs don’t share the basic intuitions underlying their views, so they’d be talking to a wall. The idea that pigs aren’t conscious might seem very intuitive to Eliezer. To me, and I suspect to most people, it seems wild. I could be convinced, but I’d need to see way more justification than I’ve seen.
in 2017, Holden’s personal reflections “indicate against the idea that e.g. chickens merit moral concern”. In 2018, Holden stated that “there is presently no evidence base that could reasonably justify high confidence [that] nonhuman animals are not ‘conscious’ in a morally relevant way”.
Strong downvoted because I find this statement repugnant “I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans.”
Why go there? You don’t do yourself or animal welfare proponents any favors. Make the argument in a less provocative way.
We know conclusively that human experience is the same. On the animal front there are very many datapoints (mind complexity, brain size, behavior) which are priors that at least push us towards some kind of hierarchicalism.
I have a lot of respect for most pro-animal arguments, but why go this way?
Hierarchicalism, as Ariel presents it, is based solely on species membership, where humans are prioritized simply because they are humans. See here (bold emphasis mine):
So, the argument you’re making about mind complexity and behavior goes beyond the species-based hierarchicalism Ariel refers to:
While I understand the discomfort with the Aryan vs. non-Aryan analogy, striking analogies like this can sometimes help expose problematic reasoning. I feel like it’s a common approach in moral philosophy. But, I recognize that these comparisons are emotionally charged, and it’s important to use them carefully to avoid alienating others.
(I didn’t downvote your comment, by the way.)
I feel bad that my comment made you (and a few others, judging by your comment’s agreevotes) feel bad.
As JackM points out, that snarky comment wasn’t addressing views which give very low moral weights to animals due to characteristics like mind complexity, brain size, and behavior, which can and should be incorporated into welfare ranges. Instead, it was specifically addressing overwhelming hierarchicalism, which is a view which assigns overwhelmingly lower moral weight based solely on species.
My statement was intended to draw a provocative analogy: There’s no theoretical reason why one’s ethical system should lexicographically prefer one race/gender/species over another, based solely on that characteristic. In my experience, people who have this view on species say things like “we have the right to exploit animals because we’re stronger than them”, or “exploiting animals is the natural order”, which could have come straight out of Mein Kampf. Drawing a provocative analogy can (sometimes) force a person to grapple with the cognitive dissonance from holding such a position.
While hierarchicalism is common among the general public, highly engaged EAs generally don’t even argue for hierarchicalism because it’s just such a dubious view. I wouldn’t write something like this about virtually any other argument for prioritizing global health, including ripple effects, neuron count weighting, denying that animals are conscious, or concerns about optics.
“There’s no theoretical reason why one’s ethical system should lexicographically prefer one race/gender/species over another, based solely on that characteristic. In my experience, people who have this view on species say things like ‘we have the right to exploit animals because we’re stronger than them’, or ‘exploiting animals is the natural order’.” I completely agree with this (although I think it’s probably a straw man; I can’t see anyone here arguing those things).
I just think it’s a really bad idea to compare almost any argument (including non-animal related ones) with Nazi Germany and that thought-world. I think it’s possible to provoke without going this way.
1) It’s insensitive to the people groups that were involved in that horrific period of time.
2) It distracts from the argument itself (like it has here, although that’s kind of on me).
3) It brings potential unnecessary negative PR issues for EA, as it gives unnecessary ammunition for hit pieces.
It’s the style, not the substance, here that I’m strongly against.
I’m surprised this is the argument you went for. FWIW I think the strongest argument might be that global health wins due to ripple effects and is better for the long-term future (but I still think this argument fails).
On animals just “not counting”—I’ve been very frustrated with both Jeff Kaufman and Eliezer Yudkowsky on this.
Jeff because he doesn’t seem to have provided any justification (from what I’ve seen) for the claim that animals don’t have relevant experiences that make them moral patients. He simply asserts this as his view. It’s not even an argument, let alone a strong one.
Eliezer has at least defended his view in a Facebook thread which unfortunately I don’t think exists anymore as all links I can find go to some weird page in another language. I just remember two things that really didn’t impress me:
I wish I remembered this better, but he made some sort of assertion that animals don’t have relevant experiences because they do not have a sense of self. Firstly, there is some experimental evidence that animals have a sense of self. But also, I remember David Pearce replying that there are occasions when humans lose a sense of self temporarily but can still have intense negative experiences in that moment e.g. extreme debilitating fright. This was a very interesting point that Eliezer never even replied to which seemed a bit suspect. (Apologies if I’m misremembering anything here).
He didn’t respond to the argument that, under uncertainty, we should give animals the benefit of the doubt so as to avoid potentially committing a grave moral catastrophe. This may not be relevant to the question of animal welfare vs global health, but it is relevant to, for example, the choice to go vegan. His dismissal of this argument also seemed a bit suspect to me.
I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger. See the new 80,000 Hours article on factory farming which does a nice job summarizing the key points.
I agree I haven’t given an argument on this. At various times people have asked what my view is (ex: we’re talking here about something prompted by my completing a survey prompt) and I’ve given that.
Explaining why I have this view would be a big investment in time: I have a bundle of intuitions and thoughts that put me here, but converting that into a cleanly argued blog post would be a lot more work than I would normally do for fun and I don’t expect this to be fun.
This is especially the case because if I did a good job at this I might end up primarily known for being an anti-animal advocate, and since I think my views on animals are much less important than many of my other views, I wouldn’t see this as at all a good thing. I also expect that, again, conditional on doing a good job of this, I would need to spend a lot of time as a representative of this position: responding to the best counter arguments, evaluating new information as it comes up, people wanting me to participate in debates, animal advocates thinking that changing my mind is really very important for making progress toward their goals. These are similarly not where I want to put my time and energy, either for altruistic reasons or personal enjoyment.
The normal thing to do would be to stop here: I’ve said what my view is, and explained why I’ve never put the effort into a careful case for that position. But I’m more committed to transparency than I am to the above, so I’m going to take about 10 minutes (I have 14 minutes before my kids wake up) to very quickly sketch the main things going into my view. Please read this keeping in mind that it is something I am sharing to be helpful, and I’m not claiming it’s fully argued.
The key question for me is whether, in a given system, there’s anyone inside to experience anything.
I think extremely small collections of neurons (ex: nematodes) can receive pain, in the sense of updating on inputs to generate less of some output. But I draw a distinction between pain and suffering, where the latter requires experience. And I think it’s very unlikely nematodes experience anything.
I don’t think this basic pleasure or pain matters, and you can’t make something extremely morally good by maximizing the number of happy neurons per cubic centimeter.
I’m pretty sure that most adult humans do experience things, because I do and I can talk to other humans about this.
I think it is pretty unlikely that very young children, in their first few months, have this kind of inner experience.
I don’t find most things that people give as examples for animal consciousness to be very convincing, because you can often make quite a simple system that displays these features.
While some of my views above could imply that some humans are morally more valuable than others, I think it would be extremely destructive to act that way. Lots and lots of bad history there. I treat all people as morally equal.
The arguments for extending this to people as a class don’t seem to me to justify extending this to all creatures as a class.
I also think there are things that matter beyond experienced joy and suffering (preference satisfaction, etc), and I’m even less convinced that animals have these.
Eliezer’s view is reasonably close to mine, in places where I’ve seen him argue it.
(I’m not going to be engaging with object level arguments on this issue—I’m not trying to become an anti-animal advocate.)
Thanks for your response.
I’d be interested to know how likely you think it is that you could do a “good job”. You say you have a “bundle of intuitions and thoughts” which doesn’t seem like much to me.
I’m also very surprised you put yourself at the far end of the spectrum in favor of global health > animal welfare based on a “bundle of intuitions and thoughts” on what is ultimately a very difficult and important question.[1] In your original comment you say “This isn’t as deeply a considered view as I’d like”. Were you saying you haven’t considered deeply enough or that the general community hasn’t?
And thanks for the sketch of your reasoning but ultimately I don’t think it’s very helpful without some justification for claims like the following:
I also put myself at the far end of the spectrum in the other direction so I feel I should say something about that. I think arguments for animal sentience/moral patienthood are pretty strong (e.g. see here for a summary) and I would not say I’m relying on intuition. I’m not of course sure that animals are moral patients, but even if you put a small probability on it, the vast numbers of animals being treated poorly can justifiably lead to a strong view that resources for animal welfare are better in expectation than resources for global health. Ultimately for this argument not to work based on believing animals aren’t moral patients, I think you probably need to be very confident of this to counteract the vast numbers of animals that can be helped.
I do think I could do a good job, yes. While I’ve been thinking about these problems off and on for over a decade I’ve never dedicated actual serious time here, and in the past when I’ve put that kind of time into work I’ve been proud of what I’ve been able to do.
What I meant by that is that I don’t have my overall views organized into a form optimized for explaining to others. I’m not asking other people to assume that because I’ve inscrutably come to this conclusion I’m correct or that they should defer to me in any way. But I’d also be dishonest if I didn’t accurately report my views.
Primarily the former. If someone in the general community had put a lot of time into looking at this question from a perspective similar to my own, and I felt like their work addressed my questions, that would certainly help; but given that no one has, and I’m instead forming my own view, I would prefer to have put more work into that view.
To clarify, when I asked if you could do a good job I meant can you put together a convincing argument that might give some people like me pause for thought (maybe this is indeed how you understood me).
If you think you can, I would strongly encourage you to do so. As per another comment of mine, tens of millions of dollars goes towards animal welfare within EA each year. If this money is effectively getting burned it is very useful for the community to know. Also, there is no convincing argument that animals are not moral patients on this forum (or indeed anywhere else) that I am aware of, so your view is exceedingly neglected. I think you could really do a whole lot of good if you do have a great argument up your sleeve.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views—sorry I don’t really buy it. For example, I don’t think Luke Muehlhauser has been forced into becoming a pro-animal advocate, in the way you hypothesise that you would, after writing his piece. This just seems like too convenient an excuse, sorry.
Of course you’re not under any obligation to write anything (well...perhaps some would argue you are, but I’ll concede you’re not). But if I thought I had a great argument up my sleeve, mostly ignored by the community, which, if true, would mean we were effectively burning tens of millions of dollars a year, I know I’d write it up.
Ah, thank you for clarifying! That is a much stronger sense of “doing a good job” than I was going for. I was trying to point at something like, successfully writing up my views in a way that felt like a solid contribution to the discourse. Explaining what I thought, why I thought it, and why I didn’t find the standard counter arguments convincing. I think this would probably take me about two months of full-time work, so a pretty substantial opportunity cost.
I think I could do this well enough to become the main person people pointed at when they wanted to give an example of a “don’t value animals” EA (which would probably be negative for my other work), but even major success here would probably only result in convincing <5% of animal-focused EAs to change what they were working on. And much less than that for money, since most of the EA money is from OP, which funds animal work as part of an explicit process of worldview diversification.
I would be primarily known as an anti-animal advocate if I wrote something like this, even if I didn’t want to be.
On whether I would need to put my time into continuing to defend the position, I agree that I strictly wouldn’t have to, but I think that given my temperament and interaction style I wouldn’t actually be able to avoid this. So I need to think of this as if I am allocating a larger amount of time than what it would take to write up the argument.
I don’t think this is what Jeff said.
OK so he says he would primarily be “known” as an anti-animal advocate not “become” one.
But he then also says the following (bold emphasis mine):
I’m struggling to see how what I said isn’t accurate. Maybe Jeff should have said “I would feel compelled to” rather than “I would need to”.
To my eyes “be known as an anti-animal advocate” is a much lower bar than “be an anti-animal advocate.”
For example I think some people will (still!) consider me an “anti-climate change advocate” (or “anti-anti-climate change advocate?”) due to a fairly short post I wrote 5+ years ago. I would, from their perspective, take actions consistent with that view (eg I’d be willing to defend my position if challenged, describe ways in which I’ve updated, etc). Moreover, it is not implausible that from their perspective, this is the most important thing I do (since they don’t interact with me at other times, and/or they might think my other actions are useless in either direction).
However, by my lights (and I expect by the lights of e.g. the median EA Forum reader) this would be a bad characterization. I don’t view arguing against climate change interventions as an important aspect of my life, nor do I believe my views on the matter as particularly outside of academic consensus.
Hence the distinction between “known as” vs “become.”
You seem to have ignored the bit I put in bold in my previous comment.
I don’t think there is or ought to be an expectation to respond to every subpart of a comment in a reply
It’s the only part of my comment that argues Jeff was effectively saying he would have to “be” an animal advocate, which is exactly what you’re arguing against.
So I guess my best reply is just to point you back to that...
Oh well, was nice chatting.
I guess I still don’t think of “I would need to spend a lot of time as a representative of this position” as being an anti-animal advocate. I spend a lot of time disagreeing with people on many different issues and yet I’d consider myself an advocate for only a tiny minority of them.
Put another way, I view the time spent as just one of the costs of being known as an anti-animal advocate, rather than being one.
What do you think of the following evidence?
Rats and pigs seem to be able to discriminate anxiety from its absence generalizably across causes with a learned behaviour, like pressing a lever when they would apparently feel anxious.[1] In other words, it seems like they can be taught to tell us what they’re feeling in ways unnatural and non-instinctive to them. To me, the difference between this and human language is mostly just a matter of degree, i.e. we form more associations and form them more easily, and we do recursion.
Graziano (2020, pdf), an illusionist and the inventor of Attention Schema Theory, also takes endogenous/top-down/voluntary attention control to be evidence of having a model (schema) of one’s own attention.[2] Then, according to Nieder (2022), there is good evidence for the voluntary/top-down control of attention (and working memory) at least across mammals and birds, and some suggestive evidence for it in some fish.
And I would expect these to happen in fairly preserved neural structures across mammals, at least, including humans.
I also discuss desires and preferences in other animals more here and here.
Carey and Fry (1995) showed that pigs generalize a learned discrimination between drug-induced anxiety and its absence to anxiety and non-anxiety in general, in this case by pressing one lever repeatedly when anxious and alternating between two levers when not (the levers gave food rewards, but only when pressed according to the condition). Many more such experiments were performed on rats, as discussed in Sánchez-Suárez, 2016, summarized in Table 2 on pages 63 and 64 and discussed further across chapter 3.
Rats could discriminate the injection of the anxiety-inducing drug PTZ from saline injection, including at subconvulsive doses. Various experiments with rats and PTZ have effectively ruled out convulsions as the cue: rats could discriminate PTZ from control without generalizing between PTZ and non-anxiogenic drugs, and the discrimination was blocked by anxiolytics but not by non-anxiolytic anticonvulsants. This further supports that it’s the anxiety itself they’re discriminating.
Rats further generalized between various pairs of anxiety(-like) states, like those induced by PTZ, drug withdrawal, predator exposure, ethanol hangover, “jet lag”, defeat by a rival male, high doses of stimulants like bemegride and cocaine, and movement restraint.
However, Mason and Lavery (2022) caution:
I would expect that checking which brain systems are involved and what their typical functions are could provide further evidence. The case for other mammals would be strongest, given more preserved functions across them, including humans.
Graziano (2020, pdf):
Thanks for taking the time to expose your view clearly here, and explaining why you do not spend a lot of time on the topic (which I respect).
If I understand correctly, the difference in consideration you make between humans and animals seems to boil down to “I can talk to humans, and they can tell me that they have an inner experience, while animals cannot (same for small children)”.
While nobody disputes that, I find it weird that your conclusion is not “I’m very uncertain about other systems”, but “other systems that cannot tell me directly about their inner experience (very small children, animals) probably don’t have any relevant inner experience”. I’m not sure how you got to that conclusion. At the very least, this would justify extreme uncertainty.
Personally, I think the fact that animals display a lot of behaviour similar to humans’ in similar situations should be a significant update toward thinking they have some kind of experience. For instance, a pig screams and tries to escape when it is castrated, just as a human would (behaviour is all we have to observe).
We can probably build robots that can do the same thing, but that just means we’re good at mimicking other life forms (for instance, we can also build LLMs which tell us they are conscious, and we don’t use that to think humans are not sentient).
I don’t think this is what Jeff believes, though I guess his literal words are consistent with this interpretation.
The discussion is archived here and the original Facebook post here.
It was also discussed again three years ago here.
Thank you! Links in articles such as this just weren’t working.
This is the relevant David Pearce comment I was referring to which Yudkowsky just ignored despite continuing to respond to less challenging comments:
I’m confused how this works, could you elaborate?
My usual causal chain linking these would be ‘argument is weak’ → ‘~nobody believes it’ → ‘nobody posts it’.
The middle step fails here. Do you have something else in mind?
FWIW, I thought these two comments were reasonable guesses at what may be going on here.
I’m not sure the middle step does actually fail in the EA community. Do you have evidence that it does? Is there some survey evidence for significant numbers of EAs not believing animals are moral patients?
If there is a significant number of people that think they have strong arguments for animals not counting, they should definitely post these and potentially redirect a great deal of altruistic funding towards global health.
Anyway, another possible causal chain might be:
‘argument is weak but some people intuitively believe it in part because they want it to be true’ → ‘there is no strong post that can really be written’ → ‘nobody posts it’
Maybe you can ask Jeff Kaufman why he has never provided any actual argument for this (I do apologize if he has and I have just missed it!).
Ah, gotcha, I guess that works. No, I don’t have anything I would consider strong evidence, I just know it’s come up more than anything else in my few dozen conversations over the years. I suppose I assumed it was coming up for others as well.
FWIW this seems wrong, not least because as was correctly pointed out many times there just isn’t a lot of money in the AW space. I’m pretty sure GHD has far better places to fundraise from.
To the extent I have spoken to people (not Jeff, and not that much) about why they don’t engage more on this, I thought the two comments I linked to in my last comment had a lot of overlap with the responses.
This is bizarre to me. This post suggests that between $30 and $40 million goes towards animal welfare each year (and it could be more now, as that post was written four years ago). If animals are not moral patients, this money is as good as being burned. If we actually were burning this amount of money every year, I’d imagine some people would make it their overwhelming mission to ensure we don’t (which would likely involve at least a few forum posts).
Assuming it costs $5,000 to save a human life, redirecting that money could save up to 8,000 human lives every year. Doesn’t seem too bad to me. I’m not claiming posts arguing against animal moral patienthood could lead to redirecting all the money, but the idea that no one is bothering to make the arguments because there’s just no point doesn’t stack up to me.
For the record, there are a few places where I think EA is burning >$30m per year, and AW isn’t actually one of them. Most EAs I speak to seem to have similarly-sized bugbears, though unsurprisingly they don’t agree about where the money is getting burned.
So from where I stand I don’t recognise your guess of how people respond to that situation. A few things I believe that might help explain the difference:
1. Most of the money is directed by people who don’t read, or otherwise have a fairly low opinion of, the forum.
2. Posting on the forum is ‘not for the faint of heart’.
3. On the occasions I have dug into past forum prioritisation posts that were well-received, I have generally found them seriously flawed or otherwise uncompelling, so I have no particular reason to be sad about (1).
4. People are often aware that there’s an ‘other side’ that strongly disagrees with their disagreement and will push back hard, so they correctly choose not to waste our collective resources in a mud-slinging match.
I don’t expect to have capacity to engage further here, but if further discussion suggests that one of the above is a particularly surprising claim, I may consider writing it up in more detail in future.
Maybe I don’t speak to enough EAs, which is possible. Obviously many EAs think our overall allocation isn’t optimal, but I wasn’t aware that many EAs think we are giving tens of millions of dollars to interventions/areas that do NO good in expectation (which is what I mean by “burning money”).
Maybe the burning money point is a bit of a red herring though if the amount you’re burning is relatively small and more good can be done by redirecting other funds, even if they are currently doing some good. I concede this point.
To be honest you might be right overall that people who don’t think our funding allocation is perfect tend not to write on the forum about it. Perhaps they are just focusing on doing the most good by acting within their preferred cause area. I’d love to see more discussion of where marginal funding should go though. And FWIW one example of a post that does cover this and was very well-received was Ariel’s on the topic of animal welfare vs global health.
As a small note, I don’t think the “believe it because they want it to be true” point is really an argument either way. To state the obvious, animal welfare researchers need animal sentience to be true, otherwise all the work they are doing is worth a lot less.
So I don’t think the “want it to be true” argument really stands at all. Motivations are very strong on both sides, and from a “realpolitik” kind of perspective, there’s much more riding on this for animal researchers than there is for people like Yud and Zvi.
On the other hand, the point that “very few people believe animals aren’t moral patients, and they haven’t made great arguments for it” still stands very strongly for me.
That is fair, but there are several additional reasons why most people would want it to be true that animals are not moral patients:
1. They can continue to eat them guilt-free, and animals are tasty.
2. People can give to global health uncertainty-free and get “fuzzies” from saving human lives with pretty high confidence (I think we naturally get more fuzzies from helping members of our own species).
3. We as a human species wouldn’t then be committing a grave moral atrocity, which would be a massive relief.
There aren’t really similar arguments for wanting animals to be moral patients (other than “I work on animal welfare”), but I would be interested to hear if I’m missing any relevant ones.