@AGB would you be willing to provide brief sketches of some of these stronger arguments for global health which weren't covered during the Debate Week? Like Nathan, I've spent a ton of time discussing this issue with other EAs, and I haven't heard any arguments I'd consider strong for prioritizing global health which weren't mentioned during Debate Week.
First, want to flag that what I said was at the post level and then defined stronger as:
the points I'm most likely to hear and give most weight to when discussing this with longtime EAs in person
You said:
I haven't heard any arguments I'd consider strong for prioritizing global health which weren't mentioned during Debate Week
So I can give examples of what I was referring to, but to be clear we're talking somewhat at cross purposes here:
I would not expect you to consider them strong.
You are not alone here of course, and I suspect this fact also helps to answer Nathan's confusion about why nobody wrote them up.
Even my post, which has been decently received all things considered, I don't consider an actually good use of time; I did it more in order to sleep better at night.
They often were mentioned at comment-level.
With that in mind, I would say that the most common argument I hear from longtime EAs is variants of "animals don't count at all". Sometimes it's framed as "almost certainly aren't sentient" or "count as ~nothing compared to a child's life". You can see this from Jeff Kaufman, and Eliezer Yudkowsky, and it's one I hear a decent amount from EAs closer to me as well.
If you've discussed this a ton I assume you have heard this too, and just aren't thinking of the things people say here as strong arguments? Which is fine and all, I'm not trying to argue from authority, at least not at this time. My intended observation was "lots of EAs think a thing that is highly relevant to the debate week, none of them wrote it up for the debate week".
I think that observation holds, though if you still disagree I'm curious why.
(Just wanted to say that your story of earning to give has been an inspiration! Your episode with 80k encouraged me to originally enter quant trading.)
Given your clarification, I agree that your observation holds. I too would have loved to hear someone defend the view that "animals don't count at all". I think it's somewhat common among rationalists, although the only well-known EA-adjacent individuals I know who hold it are Jeff, Yud, and Zvi Mowshowitz. Holden Karnofsky seems to have believed it once, but later changed his mind.[1]
As @JackM pointed out, Jeff didn't really justify his view in his comment thread. I've never read Zvi justify that view anywhere either. I've heard two main justifications for the view, either of which would be sufficient to prioritize global health:
Overwhelming Hierarchicalism
Solely by virtue of our shared species, helping humans may be lexicographically preferred to helping animals, or perhaps their preferences should be given an enormous multiplier.
I use the term "overwhelming" because depending on which animal welfare BOTEC is used, if we use constant moral weights relative to humans, you'd need a 100x to 1000x multiplier for the math to work out in favor of global health. (This comparison is coherent to me because I accept Michael St. Jules' argument that we should resolve the two envelopes problem by weighing in the units we know, but I acknowledge that this line of reasoning may not appeal to you if you don't endorse that resolution.)
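To make the arithmetic behind that multiplier explicit, here is a minimal sketch of the kind of break-even calculation being referred to. All of the numbers and variable names are illustrative placeholders, not figures from any particular BOTEC mentioned in this thread:

```python
# Minimal sketch of the "break-even species multiplier" idea.
# The numbers below are illustrative placeholders, not estimates from any
# specific analysis referenced in this thread.

human_value_per_dollar = 1.0  # benchmark: value per dollar of marginal global health spending

# Hypothetical animal-welfare value per dollar when animals are given
# constant moral weights relative to humans (i.e. no species discount).
animal_value_per_dollar = 500.0

# An "overwhelming hierarchicalist" divides animal welfare by a species
# multiplier m. Global health only comes out ahead once m exceeds this ratio.
break_even_multiplier = animal_value_per_dollar / human_value_per_dollar

print(f"Global health wins only if the species multiplier exceeds ~{break_even_multiplier:.0f}x")
```

With cost-effectiveness ratios in the hundreds, the required multiplier lands in the 100x to 1000x band described above.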
I personally find overwhelming hierarchicalism (or any form of hierarchicalism) to be deeply dubious. I write more about it here, but I simply see it as a convenient way to avoid confronting ethical problems without having the backing of any sound theoretical justification. I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans. There's just no prior for why that would be the case.
Denial of animal consciousness
Yud and maybe some others seem to believe that animals are most likely not conscious. As before, they'd have to be really certain that animals aren't conscious to endorse global health here. Even if there's a 10% chance that chickens are conscious, given the outsize cost-effectiveness of corporate campaigns if they are, I think they'd still merit a significant fraction of EA funding. (Probably still more than they're currently receiving.)
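As a hedged illustration of that expected-value point (the 10% comes from the paragraph above; the cost-effectiveness conditional on consciousness is left symbolic rather than taken from any particular estimate):

$$\mathbb{E}[\text{value per dollar}] = P(\text{chickens conscious}) \cdot V_{\text{if conscious}} = 0.1 \cdot V_{\text{if conscious}},$$

so corporate campaigns would only need to be more than about ten times as cost-effective as the global health benchmark, conditional on chicken consciousness, for the expected value to favor them.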
I think it's fair to start with a very strong prior that at least chickens and pigs are probably conscious. Pigs have brains with all of the same high-level substructures, which are affected the same way by drugs/painkillers/social interaction as humans' are, and they act in all of the same ways that humans would when confronted with situations of suffering and terror. It would be really surprising a priori if what was going on was merely a simulacrum of suffering with no actual consciousness behind it. Indeed, the vast majority of people seem to agree that these animals are conscious and deserve at least some moral concern. I certainly remember being able to feel pain as a child, and I was probably less intelligent than an adult pig during some of that.
Apart from that purely intuitive prior, while I'm not a consciousness expert at all, the New York Declaration on Animal Consciousness says that "there is strong scientific support for attributions of conscious experience to other mammals and to birds". Rethink Priorities' and Luke Muehlhauser's work for Open Phil corroborate that. So Yud's view is also at odds with much of the scientific community and other EAs who have investigated this.
All of this is why I feel like Yud's Facebook post needed a very high burden of proof to be convincing to me. Instead, it seems like he just kept explaining what his model (a higher-order theory of consciousness) believes without actually justifying his model. He also didn't admit any moral uncertainty about his model. He asserted some deeply unorthodox and unintuitive ideas (like pigs not being conscious), admitted no uncertainty, and didn't make any attempt to justify them. So I didn't find anything about his Facebook post convincing.
Conclusion
To me, the strongest reason to believe that animals don't count at all is because smart and well-meaning people like Jeff, Yud, and Zvi believe it. I haven't read anything remotely convincing that justifies that view on the merits. That's why I didn't even mention these arguments in my follow-up post for Debate Week.
Trying to be charitable, I think the main reasons why nobody defended that view during Debate Week were:
They didn't have the mental bandwidth to be willing to deal with an audience I suspect would be hostile. Overwhelming hierarchicalism is very much against the spirit of radical empathy in EA.
They may have felt like most EAs don't share the basic intuitions underlying their views, so they'd be talking to a wall. The idea that pigs aren't conscious might seem very intuitive to Eliezer. To me, and I suspect to most people, it seems wild. I could be convinced, but I'd need to see way more justification than I've seen.
In 2017, Holden's personal reflections "indicate against the idea that e.g. chickens merit moral concern". In 2018, Holden stated that "there is presently no evidence base that could reasonably justify high confidence [that] nonhuman animals are not 'conscious' in a morally relevant way".
Strong downvoted because I find this statement repugnant: "I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans."
Why go there? You don't do yourself or animal welfare proponents any favors. Make the argument in a less provocative way.
We know conclusively that human experience is the same. On the animal front there are very many datapoints (mind complexity, brain size, behavior) which are priors that at least push us towards some kind of hierarchicalism.
I have a lot of respect for most pro-animal arguments, but why go this way?
Hierarchicalism, as Ariel presents it, is based solely on species membership, where humans are prioritized simply because they are humans. See here (bold emphasis mine):
So, the argument you're making about mind complexity and behavior goes beyond the species-based hierarchicalism Ariel refers to:
We know conclusively that human experience is the same. On the animal front there are very many datapoints (mind complexity, brain size, behavior) which are priors that at least push us towards some kind of hierarchicalism.
While I understand the discomfort with the Aryan vs. non-Aryan analogy, striking analogies like this can sometimes help expose problematic reasoning. I feel like it's a common approach in moral philosophy. But I recognize that these comparisons are emotionally charged, and it's important to use them carefully to avoid alienating others.
I feel bad that my comment made you (and a few others, judging by your comment's agreevotes) feel bad.
As JackM points out, that snarky comment wasn't addressing views which give very low moral weights to animals due to characteristics like mind complexity, brain size, and behavior, which can and should be incorporated into welfare ranges. Instead, it was specifically addressing overwhelming hierarchicalism, which is a view which assigns overwhelmingly lower moral weight based solely on species.
My statement was intended to draw a provocative analogy: There's no theoretical reason why one's ethical system should lexicographically prefer one race/gender/species over another, based solely on that characteristic. In my experience, people who have this view on species say things like "we have the right to exploit animals because we're stronger than them", or "exploiting animals is the natural order", which could have come straight out of Mein Kampf. Drawing a provocative analogy can (sometimes) force a person to grapple with the cognitive dissonance from holding such a position.
While hierarchicalism is common among the general public, highly engaged EAs generally don't even argue for hierarchicalism because it's just such a dubious view. I wouldn't write something like this about virtually any other argument for prioritizing global health, including ripple effects, neuron count weighting, denying that animals are conscious, or concerns about optics.
"There's no theoretical reason why one's ethical system should lexicographically prefer one race/gender/species over another, based solely on that characteristic. In my experience, people who have this view on species say things like 'we have the right to exploit animals because we're stronger than them', or 'exploiting animals is the natural order'" - I completely agree with this (although I think it's probably a straw man; I can't see anyone here arguing those things).
I just think it's a really bad idea to compare almost any argument (including non-animal-related ones) with Nazi Germany and that thought-world. I think it's possible to provoke without going this way.
1) Insensitive to the people groups that were involved in that horrific period of time.
2) Distracts from the argument itself (like it has here, although that's kind of on me).
3) Brings potential unnecessary negative PR issues for EA, as it gives unnecessary ammunition for hit pieces.
It's the style, not the substance, that I'm strongly against here.
I'm surprised this is the argument you went for. FWIW I think the strongest argument might be that global health wins due to ripple effects and is better for the long-term future (but I still think this argument fails).
On animals just "not counting": I've been very frustrated with both Jeff Kaufman and Eliezer Yudkowsky on this.
Jeff because he doesn't seem to have provided any justification (from what I've seen) for the claim that animals don't have relevant experiences that make them moral patients. He simply asserts this as his view. It's not even an argument, let alone a strong one.
Eliezer has at least defended his view in a Facebook thread which unfortunately I don't think exists anymore, as all links I can find go to some weird page in another language. I just remember two things that really didn't impress me:
I wish I remembered this better, but he made some sort of assertion that animals don't have relevant experiences because they do not have a sense of self. Firstly, there is some experimental evidence that animals have a sense of self. But also, I remember David Pearce replying that there are occasions when humans lose a sense of self temporarily but can still have intense negative experiences in that moment, e.g. extreme debilitating fright. This was a very interesting point that Eliezer never even replied to, which seemed a bit suspect. (Apologies if I'm misremembering anything here.)
He didn't respond to the argument that, under uncertainty, we should give animals the benefit of the doubt so as to avoid potentially committing a grave moral catastrophe. This may not be relevant to the question of animal welfare vs global health, but it is relevant to, for example, the choice to go vegan. His dismissal of this argument also seemed a bit suspect to me.
I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger. See the new 80,000 Hours article on factory farming which does a nice job summarizing the key points.
Jeff because he doesn't seem to have provided any justification (from what I've seen) for the claim that animals don't have relevant experiences that make them moral patients. He simply asserts this as his view. It's not even an argument, let alone a strong one.
I agree I haven't given an argument on this. At various times people have asked what my view is (ex: we're talking here about something prompted by my completing a survey prompt) and I've given that.
Explaining why I have this view would be a big investment in time: I have a bundle of intuitions and thoughts that put me here, but converting that into a cleanly argued blog post would be a lot more work than I would normally do for fun, and I don't expect this to be fun.
This is especially the case because if I did a good job at this I might end up primarily known for being an anti-animal advocate, and since I think my views on animals are much less important than many of my other views, I wouldn't see this as at all a good thing. I also expect that, again, conditional on doing a good job of this, I would need to spend a lot of time as a representative of this position: responding to the best counter arguments, evaluating new information as it comes up, people wanting me to participate in debates, animal advocates thinking that changing my mind is really very important for making progress toward their goals. These are similarly not where I want to put my time and energy, either for altruistic reasons or personal enjoyment.
The normal thing to do would be to stop here: I've said what my view is, and explained why I've never put the effort into a careful case for that position. But I'm more committed to transparency than I am to the above, so I'm going to take about 10 minutes (I have 14 minutes before my kids wake up) to very quickly sketch the main things going into my view. Please read this keeping in mind that it is something I am sharing to be helpful, and I'm not claiming it's fully argued.
The key question for me is whether, in a given system, there's anyone inside to experience anything.
I think extremely small collections of neurons (ex: nematodes) can receive pain, in the sense of updating on inputs to generate less of some output. But I draw a distinction between pain and suffering, where the latter requires experience. And I think it's very unlikely nematodes experience anything.
I don't think this basic pleasure or pain matters, and I don't think you can make something extremely morally good by maximizing the number of happy neurons per cubic centimeter.
I'm pretty sure that most adult humans do experience things, because I do and I can talk to other humans about this.
I think it is pretty unlikely that very young children, in their first few months, have this kind of inner experience.
I don't find most things that people give as examples of animal consciousness to be very convincing, because you can often make quite a simple system that displays these features.
While some of my views above could imply that some humans are more morally valuable than others, I think it would be extremely destructive to act that way. Lots and lots of bad history there. I treat all people as morally equal.
The arguments for extending this to people as a class don't seem to me to justify extending this to all creatures as a class.
I also think there are things that matter beyond experienced joy and suffering (preference satisfaction, etc.), and I'm even less convinced that animals have these.
Eliezer's view is reasonably close to mine, in places where I've seen him argue it.
(I'm not going to be engaging with object-level arguments on this issue: I'm not trying to become an anti-animal advocate.)
This is especially the case because if I did a good job at this I might end up primarily known for being an anti-animal advocate
I'd be interested to know how likely you think it is that you could do a "good job". You say you have a "bundle of intuitions and thoughts" which doesn't seem like much to me.
I'm also very surprised you put yourself at the far end of the spectrum in favor of global health > animal welfare based on a "bundle of intuitions and thoughts" on what is ultimately a very difficult and important question.[1] In your original comment you say "This isn't as deeply a considered view as I'd like". Were you saying you haven't considered deeply enough or that the general community hasn't?
And thanks for the sketch of your reasoning, but ultimately I don't think it's very helpful without some justification for claims like the following:
I think it is pretty unlikely that very young children, in their first few months, have this kind of inner experience.
I also put myself at the far end of the spectrum in the other direction so I feel I should say something about that. I think arguments for animal sentience/moral patienthood are pretty strong (e.g. see here for a summary) and I would not say I'm relying on intuition. I'm not of course sure that animals are moral patients, but even if you put a small probability on it, the vast numbers of animals being treated poorly can justifiably lead to a strong view that resources for animal welfare are better in expectation than resources for global health. Ultimately, for this argument not to work based on believing animals aren't moral patients, I think you probably need to be very confident of this to counteract the vast numbers of animals that can be helped.
I'd be interested to know how likely you think it is that you could do a "good job".
I do think I could do a good job, yes. While I've been thinking about these problems off and on for over a decade, I've never dedicated actual serious time here, and in the past when I've put that kind of time into work I've been proud of what I've been able to do.
You say you have a "bundle of intuitions and thoughts" which doesn't seem like much to me.
What I meant by that is that I don't have my overall views organized into a form optimized for explaining to others. I'm not asking other people to assume that because I've inscrutably come to this conclusion I'm correct or that they should defer to me in any way. But I'd also be dishonest if I didn't accurately report my views.
In your original comment you say "This isn't as deeply a considered view as I'd like". Were you saying you haven't considered deeply enough or that the general community hasn't?
Primarily the former. If someone in the general community had put a lot of time into looking at this question from a perspective similar to my own, and I felt like their work addressed my questions, that would certainly help; but given that no one has and I'm instead forming my own view, I would prefer to have put more work into that view.
To clarify, when I asked if you could do a good job I meant can you put together a convincing argument that might give some people like me pause for thought (maybe this is indeed how you understood me).
If you think you can, I would strongly encourage you to do so. As per another comment of mine, tens of millions of dollars goes towards animal welfare within EA each year. If this money is effectively getting burned it is very useful for the community to know. Also, there is no convincing argument that animals are not moral patients on this forum (or indeed anywhere else) that I am aware of, so your view is exceedingly neglected. I think you could really do a whole lot of good if you do have a great argument up your sleeve.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views - sorry, I don't really buy it. For example, I don't think Luke Muehlhauser has been forced into becoming a pro-animal advocate, in the way you hypothesise that you would, after writing his piece. This just seems like too convenient an excuse, sorry.
Of course you're not under any obligation to write anything (well... perhaps some would argue you are, but I'll concede you're not). But if I thought I had a great argument up my sleeve, mostly ignored by the community, which, if true, would mean we were effectively burning tens of millions of dollars a year, I know I'd write it up.
Ah, thank you for clarifying! That is a much stronger sense of "doing a good job" than I was going for. I was trying to point at something like, successfully writing up my views in a way that felt like a solid contribution to the discourse. Explaining what I thought, why I thought it, and why I didn't find the standard counter arguments convincing. I think this would probably take me about two months of full-time work, so a pretty substantial opportunity cost.
I think I could do this well enough to become the main person people pointed at when they wanted to give an example of a "don't value animals" EA (which would probably be negative for my other work), but even major success here would probably only result in convincing <5% of animal-focused EAs to change what they were working on. And much less than that for money, since most of the EA money is from OP, which funds animal work as part of an explicit process of worldview diversification.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views - sorry, I don't really buy it.
I would be primarily known as an anti-animal advocate if I wrote something like this, even if I didn't want to be.
On whether I would need to put my time into continuing to defend the position, I agree that I strictly wouldn't have to, but I think that given my temperament and interaction style I wouldn't actually be able to avoid this. So I need to think of this as if I am allocating a larger amount of time than what it would take to write up the argument.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views - sorry, I don't really buy it.
OK so he says he would primarily be "known" as an anti-animal advocate, not "become" one.
This is especially the case because if I did a good job at this I might end up primarily known for being an anti-animal advocate
But he then also says the following (bold emphasis mine):
I also expect that, again, conditional on doing a good job of this, I would need to spend a lot of time as a representative of this position: responding to the best counter arguments, evaluating new information as it comes up, people wanting me to participate in debates, animal advocates thinking that changing my mind is really very important for making progress toward their goals. These are similarly not where I want to put my time and energy, either for altruistic reasons or personal enjoyment.
I'm struggling to see how what I said isn't accurate. Maybe Jeff should have said "I would feel compelled to" rather than "I would need to".
To my eyes, "be known as an anti-animal advocate" is a much lower bar than "be an anti-animal advocate."
For example, I think some people will (still!) consider me an "anti-climate change advocate" (or "anti-anti-climate change advocate?") due to a fairly short post I wrote 5+ years ago. I would, from their perspective, take actions consistent with that view (e.g. I'd be willing to defend my position if challenged, describe ways in which I've updated, etc.). Moreover, it is not implausible that from their perspective, this is the most important thing I do (since they don't interact with me at other times, and/or they might think my other actions are useless in either direction).
However, by my lights (and I expect by the lights of e.g. the median EA Forum reader) this would be a bad characterization. I don't view arguing against climate change interventions as an important aspect of my life, nor do I believe my views on the matter are particularly outside of academic consensus.
Hence the distinction between "known as" vs "become."
It's the only part of my comment that argues Jeff was effectively saying he would have to "be" an anti-animal advocate, which is exactly what you're arguing against.
So I guess my best reply is just to point you back to that...
I guess I still don't think of "I would need to spend a lot of time as a representative of this position" as being an anti-animal advocate. I spend a lot of time disagreeing with people on many different issues, and yet I'd consider myself an advocate for only a tiny minority of them.
Put another way, I view the time spent as just one of the costs of being known as an anti-animal advocate, rather than being one.
Rats and pigs seem to be able to discriminate anxiety from its absence generalizably across causes with a learned behaviour, like pressing a lever when they would apparently feel anxious.[1] In other words, it seems like they can be taught to tell us what they're feeling in ways unnatural and non-instinctive to them. To me, the difference between this and human language is mostly just a matter of degree, i.e. we form more associations and form them more easily, and we do recursion.
Graziano (2020, pdf), an illusionist and the inventor of Attention Schema Theory, also takes endogenous/top-down/voluntary attention control to be evidence of having a model (schema) of one's own attention.[2] Then, according to Nieder (2022), there is good evidence for the voluntary/top-down control of attention (and working memory) at least across mammals and birds, and some suggestive evidence for it in some fish.
And I would expect these to happen in fairly preserved neural structures across mammals, at least, including humans.
I also discuss desires and preferences in other animals more here and here.
Carey and Fry (1995) showed that pigs generalize the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). Many more such experiments were performed on rats, as discussed in Sánchez-Suárez, 2016, summarized in Table 2 on pages 63 and 64 and discussed further across chapter 3.
Rats could discriminate between the injection of the anxiety-inducing drug PTZ and saline injection, including at subconvulsive doses. Various experiments with rats and PTZ have effectively ruled out convulsions as the discriminant, further supporting that it's the anxiety itself that they're discriminating, because they could discriminate PTZ from control without generalizing between PTZ and non-anxiogenic drugs, and with the discrimination blocked by anxiolytics and not nonanxiolytic anticonvulsants.
Rats further generalized between various pairs of anxiety(-like) states, like those induced by PTZ, drug withdrawal, predator exposure, ethanol hangover, "jet lag", defeat by a rival male, high doses of stimulants like bemegride and cocaine, and movement restraint.
But could such results merely reflect a "blindsight-like" guessing: a mere discrimination response that need not reflect underlying awareness? After all, as we have seen for S.P.U.D. subjects, decerebrated pigeons can use colored lights as DSs (128), and humans can use subliminal visual stimuli as DSs [e.g., (121)]. We think several refinements could reduce this risk.
I would expect that checking which brain systems are involved and what their typical functions are could provide further evidence. The case for other mammals would be strongest, given more preserved functions across them, including humans.
Any creature that can endogenously direct attention must have some kind of attention schema, and good control of attention has been demonstrated in a range of animals including mammals and birds (e.g., Desimone & Duncan, 1995; Knudsen, 2018; Moore & Zirnsak, 2017). My guess is that most mammals and birds have some version of an attention schema that serves an essentially similar function, and contains some of the same information, as ours does. Just as other animals must have a body schema or be condemned to a flailing uncontrolled body, they must have an attention schema or be condemned to an attention system that is purely at the mercy of every new sparkling, bottom-up pull on attention. To control attention endogenously implies an effective controller, which implies a control model.
Thanks for taking the time to lay out your view clearly here, and for explaining why you do not spend a lot of time on the topic (which I respect).
If I understand correctly, the difference in consideration you make between humans and animals seems to boil down to "I can talk to humans, and they can tell me that they have an inner experience, while animals cannot (same for small children)".
While nobody disputes that, I find it weird that your conclusion is not "I'm very uncertain about other systems", but "other systems that cannot tell me directly about their inner experience (very small children, animals) probably don't have any relevant inner experience". I'm not sure how you got to that conclusion. At the very least, this would justify extreme uncertainty.
Personally, I think that the fact that animals display a lot of behaviour similar to humans in similar situations should be a significant update toward thinking they have some kind of experience. For instance, a pig screams and tries to escape when it is castrated, just as a human would (we can only observe behaviours).
We can probably build robots that can do the same thing, but that just means we're good at mimicking other life forms (for instance, we can also build LLMs which tell us they are conscious, and we don't use that to think humans are not sentient).
If I understand correctly, the difference in consideration you make between humans and animals seems to boil down to "I can talk to humans, and they can tell me that they have an inner experience, while animals cannot (same for small children)".
I don't think this is what Jeff believes, though I guess his literal words are consistent with this interpretation.
Eliezer has at least defended his view in a Facebook thread which unfortunately I don't think exists anymore, as all links I can find go to some weird page in another language.
The discussion is archived here and the original Facebook post here.
Thank you! Links in articles such as this just weren't working.
This is the relevant David Pearce comment I was referring to, which Yudkowsky just ignored despite continuing to respond to less challenging comments:
Some errors are potentially ethically catastrophic. This is one of them. Many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Thus in orgasm, for instance, much of the neocortex effectively shuts down. Or compare a mounting sense of panic. As an intense feeling of panic becomes uncontrollable, are we to theorise that the experience somehow ceases to be unpleasant as the capacity for reflective self-awareness is lost? "Blind" panic induced by e.g. a sense of suffocation, or fleeing a fire in a crowded cinema (etc), is one of the most unpleasant experiences anyone can undergo, regardless of race or species. Also, compare microelectrode neural studies of awake subjects probing different brain regions; stimulating various regions of the "primitive" limbic system elicits the most intense experiences. And compare dreams - not least, nightmares - many of which are emotionally intense and characterised precisely by the lack of reflectivity or critical meta-cognitive capacity that we enjoy in waking life.
Anyone who cares about sentience-friendly intelligence should not harm our fellow subjects of experience. Shutting down factory farms and slaughterhouses will eliminate one of the world's worst forms of severe and readily avoidable suffering.
I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger.
I'm confused how this works, could you elaborate?
My usual causal chain linking these would be "argument is weak" → "~nobody believes it" → "nobody posts it".
The middle step fails here. Do you have something else in mind?
FWIW, I thought these two comments were reasonable guesses at what may be going on here.
I'm not sure the middle step does actually fail in the EA community. Do you have evidence that it does? Is there some survey evidence for significant numbers of EAs not believing animals are moral patients?
If there is a significant number of people that think they have strong arguments for animals not counting, they should definitely post these and potentially redirect a great deal of altruistic funding towards global health.
Anyway, another possible causal chain might be:
"argument is weak but some people intuitively believe it in part because they want it to be true" → "there is no strong post that can really be written" → "nobody posts it"
Maybe you can ask Jeff Kaufman why he has never provided any actual argument for this (I do apologize if he has and I have just missed it!).
Ah, gotcha, I guess that works. No, I don't have anything I would consider strong evidence, I just know it's come up more than anything else in my few dozen conversations over the years. I suppose I assumed it was coming up for others as well.
they should definitely post these and potentially redirect a great deal of altruistic funding towards global health
FWIW this seems wrong, not least because as was correctly pointed out many times there just isn't a lot of money in the AW space. I'm pretty sure GHD has far better places to fundraise from.
To the extent I have spoken to people (not Jeff, and not that much) about why they don't engage more on this, I thought the two comments I linked to in my last comment had a lot of overlap with the responses.
FWIW this seems wrong, not least because as was correctly pointed out many times there just isn't a lot of money in the AW space. I'm pretty sure GHD has far better places to fundraise from.
This is bizarre to me. This post suggests that between $30 and $40 million goes towards animal welfare each year (and it could be more now as that post was written four years ago). If animals are not moral patients, this money is as good as getting burned. If we actually were burning this amount of money every year, I'd imagine some people would make it their overwhelming mission to ensure we don't (which would likely involve at least a few forum posts).
Assuming it costs $5,000 to save a human life, redirecting that money could save up to 8,000 human lives every year. Doesn't seem too bad to me. I'm not claiming posts arguing against animal moral patienthood could lead to redirecting all the money, but the idea that no one is bothering to make the arguments because there's just no point doesn't stack up to me.
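For concreteness, the arithmetic behind that figure, using the top of the $30-40 million range quoted above and the assumed $5,000 cost per life saved:

$$\frac{\$40{,}000{,}000 \text{ per year}}{\$5{,}000 \text{ per life}} = 8{,}000 \text{ lives per year (an upper bound)}$$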
For the record, I have a few places I think EA is burning >$30m per year, not that AW is actually one of them. Most EAs I speak to seem to have similarly-sized bugbears? Though unsurprisingly they don't agree about where the money is getting burned...
So from where I stand I don't recognise your guess of how people respond to that situation. A few things I believe that might help explain the difference:
1. Most of the money is directed by people who don't read or otherwise have a fairly low opinion of the forum.
2. Posting on the forum is "not for the faint of heart".
3. On the occasion that I have dug into past forum prioritisation posts that were well-received, I generally find them seriously flawed or otherwise uncompelling. I have no particular reason to be sad about (1).
4. People are often aware that there's an "other side" that strongly disagrees with their disagreement and will push back hard, so they correctly choose not to waste our collective resources in a mud-slinging match.
I don't expect to have capacity to engage further here, but if further discussion suggests that one of the above is a particularly surprising claim, I may consider writing it up in more detail in future.
Most EAs I speak to seem to have similarly-sized bugbears?
Maybe I don't speak to enough EAs, which is possible. Obviously many EAs think our overall allocation isn't optimal, but I wasn't aware that many EAs think we are giving tens of millions of dollars to interventions/areas that do NO good in expectation (which is what I mean by "burning money").
Maybe the burning money point is a bit of a red herring though, if the amount you're burning is relatively small and more good can be done by redirecting other funds, even if they are currently doing some good. I concede this point.
To be honest, you might be right overall that people who don't think our funding allocation is perfect tend not to write on the forum about it. Perhaps they are just focusing on doing the most good by acting within their preferred cause area. I'd love to see more discussion of where marginal funding should go though. And FWIW one example of a post that does cover this and was very well-received was Ariel's on the topic of animal welfare vs global health.
As a small note, I don't think the "believe it because they want it to be true" point is really an argument either way. To state the obvious, animal welfare researchers need sentience to be true, otherwise all the work they are doing is worth a lot less.
So I don't think the "want it to be true" argument stands really at all. Motivations are very strong on both sides, and from a "realpolitik" kind of perspective, there's so much more riding on this from animal researchers than there is for people like Yud and Zvi.
On the other hand, the "very few people believe animals aren't moral patients and haven't made great arguments for it" point for me stands very strong.
Animal welfare researchers need sentience to be true, otherwise all the work they are doing is worth a lot less.
That is fair, but there are several additional reasons why most people would want it to be true that animals are not moral patients:
They can continue to eat them guilt-free and animals are tasty.
People can give to global health uncertainty-free and get "fuzzies" from saving human lives with pretty high confidence (I think we naturally get more fuzzies by helping people of our own species).
We wouldn't as a human species then be committing a grave moral atrocity, which would be a massive relief.
There aren't really similar arguments for wanting animals to be moral patients (other than "I work on animal welfare"), but I would be interested if I'm missing any relevant ones.
@AGB šø would you be willing to provide brief sketches of some of these stronger arguments for global health which werenāt covered during the Debate Week? Like Nathan, Iāve spent a ton of time discussing this issue with other EAs, and I havenāt heard any arguments Iād consider strong for prioritizing global health which werenāt mentioned during Debate Week.
First, want to flag that what I said was at the post level and then defined stronger as:
You said:
So I can give examples of what I was referring to, but to be clear weāre talking somewhat at cross purposes here:
I would not expect you to consider them strong.
You are not alone here of course, and I suspect this fact also helps to answer Nathanās confusion for why nobody wrote them up.
Even my post, which has been decently-received all things considered, I donāt consider an actual good use of time, I more did it in order to sleep better at night.
They often were mentioned at comment-level.
With that in mind, I would say that the most common argument I hear from longtime EAs is variants of āanimals donāt count at allā. Sometimes itās framed as āalmost certainly arenāt sentientā or ācount as ~nothing compared to a childās lifeā. You can see this from Jeff Kaufman, and Eliezer Yudkowsky, and itās one I hear a decent amount from EAs closer to me as well.
If youāve discussed this a ton I assume you have heard this too, and just arenāt thinking of the things people say here as strong arguments? Which is fine and all, Iām not trying to argue from authority, at least not at this time. My intended observation was ālots of EAs think a thing that is highly relevant to the debate week, none of them wrote it up for the debate weekā.
I think that observation holds, though if you still disagree Iām curious why.
(Just wanted to say that your story of earning to give has been an inspiration! Your episode with 80k encouraged me to originally enter quant trading.)
Given your clarification, I agree that your observation holds. I too would have loved to hear someone defend the view that āanimals donāt count at allā. I think itās somewhat common among rationalists, although the only well-known EA-adjacent individuals I know who hold it are Jeff, Yud, and Zvi Mowshowitz. Holden Karnofsky seems to have believed it once, but later changed his mind.[1]
As @JackM pointed out, Jeff didnāt really justify his view in his comment thread. Iāve never read Zvi justify that view anywhere either. Iāve heard two main justifications for the view, either of which would be sufficient to prioritize global health:
Overwhelming Hierarchicalism
Solely by virtue of our shared species, helping humans may be lexicographically preferential to helping animals, or perhaps their preferences should be given an enormous multiplier.
I use the term āoverwhelmingā because depending on which animal welfare BOTEC is used, if we use constant moral weights relative to humans, youād need a 100x to 1000x multiplier for the math to work out in favor of global health. (This comparison is coherent to me because I accept Michael St. Julesā argument that we should resolve the two envelopes problem by weighing in the units we know, but I acknowledge that this line of reasoning may not appeal to you if you donāt endorse that resolution.)
I personally find overwhelming hierarchicalism (or any form of hierarchicalism) to be deeply dubious. I write more about it here, but I simply see it as a convenient way to avoid confronting ethical problems without having the backing of any sound theoretical justification. I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans. Thereās just no prior for why that would be the case.
Denial of animal consciousness
Yud and maybe some others seem to believe that animals are most likely not conscious. As before, theyād have to be really certain that animals arenāt conscious to endorse global health here. Even if thereās a 10% chance that chickens are conscious, given the outsize cost-effectiveness of corporate campaigns if they are, I think theyād still merit a significant fraction of EA funding. (Probably still more than theyāre currently receiving.)
I think itās fair to start with a very strong prior that at least chickens and pigs are probably conscious. Pigs have brains with all of the same high-level substructures, which are affected the same way by drugs/āpainkillers/āsocial interaction as humansā are, and act all of the same ways that humans act would when confronted with situations of suffering and terror. It would be really surprising a priori if what was going on was merely a simulacrum of suffering with no actual consciousness behind it. Indeed, the vast majority of people seem to agree that these animals are conscious and deserve at least some moral concern. I certainly remember being able to feel pain as a child, and I was probably less intelligent than an adult pig during some of that.
Apart from that purely intuitive prior, while Iām not a consciousness expert at all, the New York Declaration on Animal Consciousness says that āthere is strong scientific support for attributions of conscious experience to other mammals and to birdsā. Rethink Prioritiesā and Luke Muehlhauserās work for Open Phil corroborate that. So Yudās view is also at odds with much of the scientific community and other EAs who have investigated this.
All of this is why I feel like Yudās Facebook post needed a very high burden of proof to be convincing to me. Instead, it seems like he just kept explaining what his model (a higher-order theory of consciousness) believes without actually justifying his model. He also didnāt admit any moral uncertainty about his model. He asserted some deeply unorthodox and unintuitive ideas (like pigs not being conscious), admitted no uncertainty, and didnāt make any attempt to justify them. So I didnāt find anything about his Facebook post convincing.
Conclusion
To me, the strongest reason to believe that animals donāt count at all is because smart and well-meaning people like Jeff, Yud, and Zvi believe it. I havenāt read anything remotely convincing that justifies that view on the merits. Thatās why I didnāt even mention these arguments in my follow-up post for Debate Week.
Trying to be charitable, I think the main reasons why nobody defended that view during Debate Week were:
They didnāt have the mental bandwidth to be willing to deal with an audience I suspect would be hostile. Overwhelming hierarchicalism is very much against the spirit of radical empathy in EA.
They may have felt like most EAs donāt share the basic intuitions underlying their views, so theyād be talking to a wall. The idea that pigs arenāt conscious might seem very intuitive to Eliezer. To me, and I suspect to most people, it seems wild. I could be convinced, but Iād need to see way more justification than Iāve seen.
in 2017, Holdenās personal reflections āindicate against the idea that e.g. chickens merit moral concernā. In 2018, Holden stated that āthere is presently no evidence base that could reasonably justify high confidence [that] nonhuman animals are not āconsciousā in a morally relevant wayā.
Strong downvoted because I find this statement repugnant āI put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans.ā
Why go there? You donāt do yourself or animal welfare proponents any favors. Make the argument in a less provocative way.
We know conclusively that human experience is the same. On the animal front there are very many datapoints (mind complexity, brain size, behavior) which are priors that at least push us towards some kind of heirachialism.
I have a lot of respect for most pro-animal arguments, but why go this way?
Hierarchicalism, as Ariel presents it, is based solely on species membership, where humans are prioritized simply because they are humans. See here (bold emphasis mine):
So, the argument youāre making about mind complexity and behavior goes beyond the species-based hierarchicalism Ariel refers to:
While I understand the discomfort with the Aryan vs. non-Aryan analogy, striking analogies like this can sometimes help expose problematic reasoning. I feel like itās a common approach in moral philosophy. But, I recognize that these comparisons are emotionally charged, and itās important to use them carefully to avoid alienating others.
(I didnāt downvote your comment, by the way.)
I feel bad that my comment made you (and a few others, judging by your commentās agreevotes) feel bad.
As JackM points out, that snarky comment wasnāt addressing views which give very low moral weights to animals due to characteristics like mind complexity, brain size, and behavior, which can and should be incorporated into welfare ranges. Instead, it was specifically addressing overwhelming hierarchicalism, which is a view which assigns overwhelmingly lower moral weight based solely on species.
My statement was intended to draw a provocative analogy: Thereās no theoretical reason why oneās ethical system should lexicographically prefer one race/āgender/āspecies over another, based solely on that characteristic. In my experience, people who have this view on species say things like āwe have the right to exploit animals because weāre stronger than themā, or āexploiting animals is the natural orderā, which could have come straight out of Mein Kampf. Drawing a provocative analogy can (sometimes) force a person to grapple with the cognitive dissonance from holding such a position.
While hierarchicalism is common among the general public, highly engaged EAs generally donāt even argue for hierarchicalism because itās just such a dubious view. I wouldnāt write something like this about virtually any other argument for prioritizing global health, including ripple effects, neuron count weighting, denying that animals are conscious, or concerns about optics.
āThereās no theoretical reason why oneās ethical system should lexicographically prefer one race/āgender/āspecies over another, based solely on that characteristic. In my experience, people who have this view on species say things like āwe have the right to exploit animals because weāre stronger than themā, or āexploiting animals is the natural orderā I completely agree with this (although I think its probably a straw man, I canāt see anyone here arguing those things).
I just think its a really bad idea to compare almost most argument (including non-animal related ones) with Nazi Germany and that thought-world. I think its possible to provoke without going this way.
1) Insensitive to the people groups that were involved in that horrific period of time
2) Distracts the argument itself (like it has here, although thatās kind of on me)
2) Brings potential unnecessary negative PR issues with EA, as it gives unnecessary ammunition for hit pieces.
Its the style not the substance here Iām strongly against.
Iām surprised this is the argument you went for. FWIW I think the strongest argument might be that global health wins due to ripple effects and is better for the long-term future (but I still think this argument fails).
On animals just ānot countingāāIāve been very frustrated with both Jeff Kauffman and Eliezer Yudkowsky on this.
Jeff because he doesnāt seem to have provided any justification (from what Iāve seen) for the claim that animals donāt have relevant experiences that make them moral patients. He simply asserts this as his view. Itās not even an argument, let alone a strong one.
Eliezer has at least defended his view in a Facebook thread which unfortunately I donāt think exists anymore as all links I can find go to some weird page in another language. I just remember two things that really didnāt impress me:
I wish I remembered this better, but he made some sort of assertion that animals donāt have relevant experiences because they do not have a sense of self. Firstly, there is some experimental evidence that animals have a sense of self. But also, I remember David Pearce replying that there are occasions when humans lose a sense of self temporarily but can still have intense negative experiences in that moment e.g. extreme debilitating fright. This was a very interesting point that Eliezer never even replied to which seemed a bit suspect. (Apologies if Iām misremembering anything here).
He didnāt respond to the argument that, under uncertainty, we should give animals the benefit of the doubt so as to avoid potentially committing a grave moral catastrophe. This may not be relevant to the question of animal welfare vs global health, but it is relevant to, for example, the choice to go vegan. His dismissal of this argument also seemed a bit suspect to me.
I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger. See the new 80,000 Hours article on factory farming which does a nice job summarizing the key points.
I agree I havenāt given an argument on this. At various times people have asked what my view is (ex: weāre taking here about something prompted by my completing a survey prompt) and Iāve given that.
Explaining why I have this view would be a big investment in time: I have a bundle of intuitions and thoughts that put me here, but converting that into a cleanly argued blog post would be a lot more work than I would normally do for fun and I donāt expect this to be fun.
This is especially the case because If I did a good job at this I might end up primarily known for being an anti-animal advocate, and since I think my views on animals are much less important than many of my other views, I wouldnāt see this as at all a good thing. I also expect that, again, conditional on doing a good job of this, I would need to spend a lot of time as a representative of this position: responding to the best counter arguments, evaluating new information as it comes up, people wanting me to participate in debates, animal advocates thinking that changing my mind is really very important for making progress toward their goals. These are similarly not where I want to put my time and energy, either for altruistic reasons personal enjoyment.
The normal thing to do would be to stop here: Iāve said what my view is, and explained why Iāve never put the effort into a careful case for that position. But Iām more committed to transparency than I am to the above, so Iām going to take about 10 minutes (I have 14 minutes before my kids wake up) to very quickly sketch the main things going into my view. Please read this keeping in mind that it is something I am sharing to be helpful, and Iām not claiming itās fully argued.
The key question for me is whether, in a given system, thereās anyone inside to experience anything.
I think extremely small collections of neurons (ex: nematodes) can receive pain, in the sense of updating on inputs to generate less of some output. But I draw a distinction between pain and suffering, where the latter requires experience. And I think itās very unlikely nematodes experience anything.
I donāt think this basic pleasure or pain matters, and if you canāt make something extremely morally good by maximizing the number of happy neurons per cubic centimeter.
Iām pretty sure that most adult humans do experience things, because I do and I can talk to other humans about this.
I think it is pretty unlikely that very young children, in their first few months, have this kind of inner experience.
I donāt find most things that people give as examples for animal consciousness to be very convincing, because you can often make quite a simple system that displays these features.
While some of my views above could imply that some humans are more valuable come up morally than others, I think it would be extremely destructive to act that way. Lots and lots of bad history there. I treat all people as morally equal.
The arguments for extending this to people as a class donāt seem to me to justify extending this to all creatures as a class.
I also think there are things that matter beyond experienced joy and suffering (preference satisfaction, etc), and Iām even less convinced that animals have these.
Eliezerās view is reasonably close to mine, in places where Iāve seen him argue it.
(Iām not going to be engaging with object level arguments on this issueāIām not trying to become an anti-animal advocate.)
Thanks for your response.
I'd be interested to know how likely you think it is that you could do a "good job". You say you have a "bundle of intuitions and thoughts" which doesn't seem like much to me.
I'm also very surprised you put yourself at the far end of the spectrum in favor of global health > animal welfare based on a "bundle of intuitions and thoughts" on what is ultimately a very difficult and important question.[1] In your original comment you say "This isn't as deeply a considered view as I'd like". Were you saying you haven't considered deeply enough or that the general community hasn't?
And thanks for the sketch of your reasoning but ultimately I don't think it's very helpful without some justification for claims like the following:
I also put myself at the far end of the spectrum in the other direction so I feel I should say something about that. I think arguments for animal sentience/moral patienthood are pretty strong (e.g. see here for a summary) and I would not say I'm relying on intuition. I'm not of course sure that animals are moral patients, but even if you put a small probability on it, the vast numbers of animals being treated poorly can justifiably lead to a strong view that resources for animal welfare are better in expectation than resources for global health. Ultimately, for this argument not to work based on believing animals aren't moral patients, I think you probably need to be very confident of this to counteract the vast numbers of animals that can be helped.
I do think I could do a good job, yes. While I've been thinking about these problems off and on for over a decade I've never dedicated actual serious time here, and in the past when I've put that kind of time into work I've been proud of what I've been able to do.
What I meant by that is that I don't have my overall views organized into a form optimized for explaining to others. I'm not asking other people to assume that because I've inscrutably come to this conclusion I'm correct or that they should defer to me in any way. But I'd also be dishonest if I didn't accurately report my views.
Primarily the former. While if someone in the general community had put a lot of time into looking at this question from a perspective similar to my own, and I felt like their work addressed my questions, that would certainly help, given that no one has and I'm instead forming my own view, I would prefer to have put more work into that view.
To clarify, when I asked if you could do a good job I meant can you put together a convincing argument that might give some people like me pause for thought (maybe this is indeed how you understood me).
If you think you can, I would strongly encourage you to do so. As per another comment of mine, tens of millions of dollars goes towards animal welfare within EA each year. If this money is effectively getting burned it is very useful for the community to know. Also, there is no convincing argument that animals are not moral patients on this forum (or indeed anywhere else) that I am aware of, so your view is exceedingly neglected. I think you could really do a whole lot of good if you do have a great argument up your sleeve.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views: sorry, I don't really buy it. For example, I don't think Luke Muehlhauser has been forced into becoming a pro-animal advocate, in the way you hypothesise that you would, after writing his piece. This just seems like too convenient an excuse, sorry.
Of course you're not under any obligation to write anything (well...perhaps some would argue you are, but I'll concede you're not). But if I thought I had a great argument up my sleeve, mostly ignored by the community, which, if true, would mean we were effectively burning tens of millions of dollars a year, I know I'd write it up.
Ah, thank you for clarifying! That is a much stronger sense of "doing a good job" than I was going for. I was trying to point at something like, successfully writing up my views in a way that felt like a solid contribution to the discourse. Explaining what I thought, why I thought it, and why I didn't find the standard counter arguments convincing. I think this would probably take me about two months of full-time work, so a pretty substantial opportunity cost.
I think I could do this well enough to become the main person people pointed at when they wanted to give an example of a "don't value animals" EA (which would probably be negative for my other work), but even major success here would probably only result in convincing <5% of animal-focused EAs to change what they were working on. And much less than that for money, since most of the EA money is from OP, which funds animal work as part of an explicit process of worldview diversification.
I would be primarily known as an anti-animal advocate if I wrote something like this, even if I didn't want to be.
On whether I would need to put my time into continuing to defend the position, I agree that I strictly wouldn't have to, but I think that given my temperament and interaction style I wouldn't actually be able to avoid this. So I need to think of this as if I am allocating a larger amount of time than what it would take to write up the argument.
I don't think this is what Jeff said.
OK so he says he would primarily be "known" as an anti-animal advocate not "become" one.
But he then also says the following (bold emphasis mine):
I'm struggling to see how what I said isn't accurate. Maybe Jeff should have said "I would feel compelled to" rather than "I would need to".
To my eyes "be known as an anti-animal advocate" is a much lower bar than "be an anti-animal advocate."
For example I think some people will (still!) consider me an "anti-climate change advocate" (or "anti-anti-climate change advocate?") due to a fairly short post I wrote 5+ years ago. I would, from their perspective, take actions consistent with that view (eg I'd be willing to defend my position if challenged, describe ways in which I've updated, etc). Moreover, it is not implausible that from their perspective, this is the most important thing I do (since they don't interact with me at other times, and/or they might think my other actions are useless in either direction).
However, by my lights (and I expect by the lights of e.g. the median EA Forum reader) this would be a bad characterization. I don't view arguing against climate change interventions as an important aspect of my life, nor do I consider my views on the matter particularly outside of academic consensus.
Hence the distinction between "known as" vs "become."
You seem to have ignored the bit I put in bold in my previous comment.
I don't think there is or ought to be an expectation to respond to every subpart of a comment in a reply.
It's the only part of my comment that argues Jeff was effectively saying he would have to "be" an animal advocate, which is exactly what you're arguing against.
So I guess my best reply is just to point you back to that...
Oh well, was nice chatting.
I guess I still don't think of "I would need to spend a lot of time as a representative of this position" as being an anti-animal advocate. I spend a lot of time disagreeing with people on many different issues and yet I'd consider myself an advocate for only a tiny minority of them.
Put another way, I view the time spent as just one of the costs of being known as an anti-animal advocate, rather than being one.
What do you think of the following evidence?
Rats and pigs seem to be able to discriminate anxiety from its absence generalizably across causes with a learned behaviour, like pressing a lever when they would apparently feel anxious.[1] In other words, it seems like they can be taught to tell us what they're feeling in ways unnatural and non-instinctive to them. To me, the difference between this and human language is mostly just a matter of degree, i.e. we form more associations and form them more easily, and we do recursion.
Graziano (2020, pdf), an illusionist and the inventor of Attention Schema Theory, also takes endogenous/top-down/voluntary attention control to be evidence of having a model (schema) of one's own attention.[2] Then, according to Nieder (2022), there is good evidence for the voluntary/top-down control of attention (and working memory) at least across mammals and birds, and some suggestive evidence for it in some fish.
And I would expect these to happen in fairly preserved neural structures across mammals, at least, including humans.
I also discuss desires and preferences in other animals more here and here.
Carey and Fry (1995) showed that pigs generalize the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). Many more such experiments were performed on rats, as discussed in Sánchez-Suárez, 2016, summarized in Table 2 on pages 63 and 64 and discussed further across chapter 3.
Rats could discriminate between the injection of the anxiety-inducing drug PTZ and saline injection, including at subconvulsive doses. Various experiments with rats and PTZ have effectively ruled out convulsions as the discriminant, further supporting that it's the anxiety itself that they're discriminating, because they could discriminate PTZ from control without generalizing between PTZ and non-anxiogenic drugs, and with the discrimination blocked by anxiolytics and not nonanxiolytic anticonvulsants.
Rats further generalized between various pairs of anxiety(-like) states, like those induced by PTZ, drug withdrawal, predator exposure, ethanol hangover, "jet lag", defeat by a rival male, high doses of stimulants like bemegride and cocaine, and movement restraint.
However, Mason and Lavery (2022) caution:
I would expect that checking which brain systems are involved and what their typical functions are could provide further evidence. The case for other mammals would be strongest, given more preserved functions across them, including humans.
Graziano (2020, pdf):
Thanks for taking the time to lay out your view clearly here, and for explaining why you do not spend a lot of time on the topic (which I respect).
If I understand correctly, the difference in consideration you make between humans and animals seems to boil down to "I can talk to humans, and they can tell me that they have an inner experience, while animals cannot (same for small children)".
While nobody disputes that, I find it weird that your conclusion is not "I'm very uncertain about other systems", but "other systems that cannot tell me directly about their inner experience (very small children, animals) probably don't have any relevant inner experience". I'm not sure how you got to that conclusion. At the very least, this would justify extreme uncertainty.
Personally, I think that the fact that animals display a lot of behaviour similar to humans in similar situations should be a significant update toward thinking they have some kind of experience. For instance, a pig screams and tries to escape when it is castrated, just as a human would (all we can go on is observed behaviour).
We can probably build robots that can do the same thing, but that just means we're good at mimicking other life forms (for instance, we can also build LLMs which tell us they are conscious, and we don't use that to think humans are not sentient).
I don't think this is what Jeff believes, though I guess his literal words are consistent with this interpretation.
The discussion is archived here and the original Facebook post here.
It was also discussed again three years ago here.
Thank you! Links in articles such as this just weren't working.
This is the relevant David Pearce comment I was referring to which Yudkowsky just ignored despite continuing to respond to less challenging comments:
I'm confused how this works, could you elaborate?
My usual causal chain linking these would be "argument is weak" → "~nobody believes it" → "nobody posts it".
The middle step fails here. Do you have something else in mind?
FWIW, I thought these two comments were reasonable guesses at what may be going on here.
I'm not sure the middle step does actually fail in the EA community. Do you have evidence that it does? Is there some survey evidence for significant numbers of EAs not believing animals are moral patients?
If there is a significant number of people that think they have strong arguments for animals not counting, they should definitely post these and potentially redirect a great deal of altruistic funding towards global health.
Anyway, another possible causal chain might be:
"argument is weak but some people intuitively believe it in part because they want it to be true" → "there is no strong post that can really be written" → "nobody posts it"
Maybe you can ask Jeff Kaufman why he has never provided any actual argument for this (I do apologize if he has and I have just missed it!).
Ah, gotcha, I guess that works. No, I don't have anything I would consider strong evidence, I just know it's come up more than anything else in my few dozen conversations over the years. I suppose I assumed it was coming up for others as well.
FWIW this seems wrong, not least because as was correctly pointed out many times there just isn't a lot of money in the AW space. I'm pretty sure GHD has far better places to fundraise from.
To the extent I have spoken to people (not Jeff, and not that much) about why they don't engage more on this, I thought the two comments I linked to in my last comment had a lot of overlap with the responses.
This is bizarre to me. This post suggests that between $30 and $40 million goes towards animal welfare each year (and it could be more now as that post was written four years ago). If animals are not moral patients, this money is as good as getting burned. If we actually were burning this amount of money every year, I'd imagine some people would make it their overwhelming mission to ensure we don't (which would likely involve at least a few forum posts).
Assuming it costs $5,000 to save a human life, redirecting that money could save up to 8,000 human lives every year. Doesn't seem too bad to me. I'm not claiming posts arguing against animal moral patienthood could lead to redirecting all the money, but the idea that no one is bothering to make the arguments because there's just no point doesn't stack up to me.
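(As a quick sanity check on that figure, taking the $40 million upper bound from the post cited above and the assumed $5,000 cost per life saved, both rough inputs rather than precise estimates:

$$
\frac{\$40{,}000{,}000 \text{ per year}}{\$5{,}000 \text{ per life}} = 8{,}000 \text{ lives per year}
$$

which is where the "up to 8,000" comes from.)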
For the record, I have a few places I think EA is burning >$30m per year, not that AW is actually one of them. Most EAs I speak to seem to have similarly-sized bugbears? Though unsurprisingly they don't agree about where the money is getting burned.
So from where I stand I don't recognise your guess of how people respond to that situation. A few things I believe that might help explain the difference:
1. Most of the money is directed by people who don't read the forum or otherwise have a fairly low opinion of it.
2. Posting on the forum is "not for the faint of heart".
3. On the occasion that I have dug into past forum prioritisation posts that were well-received, I generally find them seriously flawed or otherwise uncompelling. I have no particular reason to be sad about (1).
4. People are often aware that there's an "other side" that strongly disagrees with their disagreement and will push back hard, so they correctly choose not to waste our collective resources in a mud-slinging match.
I don't expect to have capacity to engage further here, but if further discussion suggests that one of the above is a particularly surprising claim, I may consider writing it up in more detail in future.
Maybe I don't speak to enough EAs, which is possible. Obviously many EAs think our overall allocation isn't optimal, but I wasn't aware that many EAs think we are giving tens of millions of dollars to interventions/areas that do NO good in expectation (which is what I mean by "burning money").
Maybe the burning money point is a bit of a red herring though if the amount you're burning is relatively small and more good can be done by redirecting other funds, even if they are currently doing some good. I concede this point.
To be honest you might be right overall that people who don't think our funding allocation is perfect tend not to write on the forum about it. Perhaps they are just focusing on doing the most good by acting within their preferred cause area. I'd love to see more discussion of where marginal funding should go though. And FWIW one example of a post that does cover this and was very well-received was Ariel's on the topic of animal welfare vs global health.
As a small note, I don't think the "believe it because they want it to be true" point is really an argument either way. To state the obvious, animal welfare researchers need sentience to be true, otherwise all the work they are doing is worth a lot less.
So I don't think the "want it to be true" argument stands really at all. Motivations are very strong on both sides, and from a "realpolitik" kind of perspective, there's so much more riding on this for animal researchers than there is for people like Yud and Zvi.
On the other hand, the "very few people believe animals aren't moral patients and haven't made great arguments for it" point, for me, stands very strong.
That is fair, but there are several additional reasons why most people would want it to be the case that animals are not moral patients:
They can continue to eat them guilt-free and animals are tasty.
People can give to global health uncertainty-free and get "fuzzies" from saving human lives with pretty high confidence (I think we naturally get more fuzzies by helping people of our own species).
We wouldn't, as a human species, then be committing a grave moral atrocity, which would be a massive relief.
There aren't really similar arguments for wanting animals to be moral patients (other than "I work on animal welfare") but I would be interested if I'm missing any relevant ones.