Scale of the welfare of various animal populations
Summary
I Fermi-estimated the scale of the welfare of various animal populations from the relative intensity of their experiences, moral weight, and population size.
Based on my results, I would be very surprised if the scale of the welfare of:
Wild animals ended up being smaller than that of farmed animals.
Farmed animals turned out to be smaller than that of humans.
Introduction
If it is worth doing, it is worth doing with made-up statistics?
Methods
I Fermi-estimated the scale of the welfare of various animal populations from the absolute value of the expected total hedonistic utility (ETHU). I computed this as the product of the following factors (see the sketch below):
Intensity of the mean experience as a fraction of the median welfare range.
Median welfare range.
Population size.
The data and calculations are here.
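To make the structure of the estimate explicit, here is a minimal sketch of the calculation (Python), using as an illustration the values for humans and farmed chickens from the results table further down:

```python
# Minimal sketch of the scale-of-welfare calculation described above.
# The three factors are multiplied, and the result is normalised by the human value.
def ethu_scale(intensity_fraction, welfare_range, population):
    """Absolute value of the expected total hedonistic utility (arbitrary units)."""
    return intensity_fraction * welfare_range * population

humans = ethu_scale(6.67e-6, 1.00, 7.91e9)      # reference population
chickens = ethu_scale(12.9e-6, 0.332, 2.14e10)  # farmed chickens

print(chickens / humans)  # roughly 1.74, matching the farmed chickens row of the table
```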
Intensity of experience
I calculated the intensity of the mean experience of farmed animals as a fraction of their median welfare range from that of broilers in a reformed scenario[1], assuming (see the sketch after this list):
The time they experience each level of pain defined here (search for "definitions") is given by these data (search for "pain-tracks") from the Welfare Footprint Project (WFP).
The welfare range is symmetric around the neutral point, and excruciating pain corresponds to the worst possible experience.
Excruciating pain is 1 k times as bad as disabling pain[2].
Disabling pain is 100 times as bad as hurtful pain, which together with the above implies excruciating pain is 100 k times as bad as hurtful pain.
Hurtful pain is 10 times as bad as annoying pain, which together with the above implies excruciating pain is 1 M times as bad as annoying pain.
Their lifespan is 42 days, in agreement with section "Conventional and Reformed Scenarios" of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso.
They sleep 8 h each day, and have a neutral experience during that time.
Being awake is as good as hurtful pain is bad. This means being awake while in hurtful pain is neutral, thus accounting for positive experiences.
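To illustrate how the assumptions above combine, here is a rough sketch (Python). The pain durations are placeholders rather than the WFP pain-track figures, so only the structure of the calculation is meant to be informative:

```python
# Sketch of the mean-experience intensity for broilers in a reformed scenario.
# Pain intensities follow the ratios assumed above, with excruciating pain set to 1
# (the worst possible experience). The pain durations are PLACEHOLDERS, not the
# Welfare Footprint Project's pain-track data.
INTENSITY = {"annoying": 1e-6, "hurtful": 1e-5, "disabling": 1e-3, "excruciating": 1.0}

lifespan_h = 42 * 24           # 42-day lifespan
sleep_h = 42 * 8               # 8 h of neutral sleep per day
awake_h = lifespan_h - sleep_h

pain_h = {"annoying": 100, "hurtful": 50, "disabling": 5, "excruciating": 0.01}  # placeholders

positive = awake_h * INTENSITY["hurtful"]  # being awake is as good as hurtful pain is bad
negative = sum(pain_h[level] * INTENSITY[level] for level in pain_h)

mean_intensity = abs(positive - negative) / lifespan_h
print(mean_intensity)  # with the actual pain-track data, the post obtains 12.9 μ
```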
Ideally, I would have used empirical data for the animal populations besides farmed chickens too. However, I do not think they are readily available, so I had to make some assumptions.
For the intensity of the mean experience of humans as a fraction of their median welfare range, I assumed we:
Sleep 8 h each day, and have a neutral experience during that time.
Find being awake as good as hurtful pain is bad. This means being awake while in hurtful pain is neutral, thus accounting for positive experiences.
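As a check, the human figure in the results table seems to follow directly from these two assumptions: being awake 16 h out of 24 at an intensity of 10^-5 (the hurtful-pain level, with excruciating pain as the unit) gives 16/24*10^-5 ≈ 6.67*10^-6, i.e. the 6.67 μ in the table.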
For the intensity of the mean experience of wild animals as a fraction of their median welfare range, I used the same value as for humans. However, whereas I think humans have positive lives (see here), I am very uncertain about wild animals (see this preprint from Heather Browning and Walter Veit).
Median welfare range
I defined the median welfare range from Rethink Priorities' estimates for mature individuals[3], provided here by Bob Fischer[4]. For the populations I studied that comprise animals of multiple species, I used the welfare ranges of:
For wild mammals, pigs.
For farmed fish, salmon.
For wild fish, salmon.
For farmed insects, silkworms.
For wild terrestrial arthropods, silkworms.
For farmed crayfish, crabs and lobsters, mean between crayfish and crabs.
For farmed shrimps and prawns, shrimps.
For wild marine arthropods, silkworms.
For nematodes, silkworms multiplied by 0.1.
Population size
I defined the population size from:
For humans, these data from Our World in Data (OWID) (for 2021).
For wild mammals, the mean of the lower and upper bounds provided in section 3.1.5.2 of Carlier 2020.
For farmed chickens and pigs, these data from OWID (for 2014).
For farmed fish, the midpoint estimate of this analysis from Kelly Anthis and Jacy Anthis (for 2019).
For wild fish, the mean between the mean of the lower and upper bounds provided in section 3.1.5.5 of Carlier 2020, and the order of magnitude given in Table S1 of Bar-On 2018.
For farmed insects raised for food and feed, the mean of the lower and upper bounds provided here by Abraham Rowe (in the 2nd point of the section "Key Findings").
For farmed crayfish, crabs and lobsters, and farmed shrimps and prawns, the product between the means of the lower and upper bounds for:
The number of individuals killed per year provided here by fishcount.org (for 2017).
The time in years farmed shrimps spend in grow-out ponds, developing from juvenile until market size, according to this Wikipedia page.
For wild terrestrial and marine arthropods, and nematodes, the orders of magnitude from Table S1 of Bar-On 2018.
Results
The results are presented in the table below by ascending absolute value of ETHU as a fraction of that of humans, i.e. increasing scale of welfare.
| Population | Intensity of the mean experience as a fraction of the median welfare range | Median welfare range | Intensity of the mean experience as a fraction of that of humans | Population size | Absolute value of ETHU as a fraction of that of humans |
|---|---|---|---|---|---|
| Farmed insects raised for food and feed | 12.9 μ | 2.00 m | 3.87 m | 8.65E10 | 0.0423 |
| Farmed pigs | 12.9 μ | 0.515 | 1.00 | 9.86E8 | 0.124 |
| Farmed crayfish, crabs and lobsters | 12.9 μ | 0.0305 | 0.0590 | 2.21E10 | 0.165 |
| Humans | 6.67 μ | 1.00 | 1.00 | 7.91E9 | 1.00 |
| Farmed shrimps and prawns | 12.9 μ | 0.0310 | 0.0599 | 1.39E11 | 1.05 |
| Farmed fish | 12.9 μ | 0.0560 | 0.108 | 1.11E11 | 1.52 |
| Farmed chickens | 12.9 μ | 0.332 | 0.642 | 2.14E10 | 1.74 |
| Farmed animals analysed here | 12.9 μ | 0.0362 | 0.0700 | 1.36E12 | 4.64 |
| Wild mammals | 6.67 μ | 0.515 | 0.515 | 6.75E11 | 43.9 |
| Wild fish | 6.67 μ | 0.0560 | 0.0560 | 6.20E14 | 4.39 k |
| Wild terrestrial arthropods | 6.67 μ | 2.00 m | 2.00 m | 1.00E18 | 253 k |
| Wild marine arthropods | 6.67 μ | 2.00 m | 2.00 m | 1.00E20 | 25.3 M |
| Nematodes | 6.67 μ | 0.200 m | 0.200 m | 1.00E21 | 25.3 M |
| Wild animals analysed here | 6.67 μ | 0.365 m | 0.365 m | 1.10E21 | 50.8 M |
Discussion
According to my results:
Wild animal welfare dominates farmed animal welfare:
The scale of the welfare of each of the 5 populations of wild animals exceeds that of each of the 6 populations of farmed animals.
The scale of the welfare of the 5 populations of wild animals is 10.9 M (= 50.8*10^6/4.64) times that of the 6 populations of farmed animals. This did not surprise me given the sheer numbers of wild animals.
There is a meat-eater problem. The combined importance of the 6 populations of farmed animals I analysed is 4.64 times as large as that of humans. Consequently, a smaller human population will tend to increase welfare in the nearterm if we ignore the effects on wild animals. However, these dominate, and can be positive or negative, so I have no idea what the overall nearterm effect of changing the size of the human population is. For similar reasons, I think it is very hard to say whether GiveWell's top charities are beneficial or harmful.
The intensity of the mean experience of farmed chickens, estimated from data for broilers in a reformed scenario, is 64.2% of that of humans. Intuitively, I would guess the ratio to be higher, but I believe I am biased towards overweighting the time in disabling and excruciating pain. This is indeed super bad, but does not last long.
The order of the scale of welfare among wild animals roughly matches what I estimated here based on the total number of neurons, with arthropods and nematodes being the major drivers. The welfare of nematodes has a greater scale here, but my guess for the moral weight of nematodes has quite low resilience.
Among the populations of farmed animals and humans, the welfare of insects raised for food and feed has the smallest scale. I actually expected it to be larger, but I think I was overestimating their population size.
The specific ordering of the various animal populations by scale of welfare I got is not robust given the high uncertainty of my results. However, I would be very surprised if the scale of the welfare of:
Wild animals ended up being smaller than that of farmed animals.
Farmed animals turned out to be smaller than that of humans.
I would say any scope-sensitive ethic will lead to these conclusions, not just expectational total hedonistic utilitarianism.
Thanks for writing this!
You might be able to make some informed guesses or do some informative sensitivity analysis about net welfare in wild animals, given your pain intensity ratios. I think it's reasonable to assume that animals don't experience any goods as intensely good (as valuable per moment) as excruciating pain is intensely bad. Pleasures as intense as disabling pain may also be rare, but that could be an assumption to vary.
Based on your ratios and total utilitarian assumption, 1 second of excruciating pain outweighs 11.5 days of annoying pain or 1.15 days of hurtful pain, or 11.5 days of goods as intense as annoying pain or 1.15 days of goods as intense as hurtful pain, on average.
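A quick sketch of that conversion, assuming the ratios used in the post:

```python
# Converting 1 second of excruciating pain into equivalent durations of milder pain,
# using the post's ratios: excruciating = 1e6 x annoying = 1e5 x hurtful.
seconds_per_day = 24 * 60 * 60

annoying_days = 1e6 / seconds_per_day  # about 11.6 days of annoying pain
hurtful_days = 1e5 / seconds_per_day   # about 1.16 days of hurtful pain
print(annoying_days, hurtful_days)
```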
Just quickly Googling for the most populous groups I'm aware of, mites, springtails and nematodes live a few weeks at most and copepods up to around a year. There might be other similarly populous groups of aquatic arthropods I'm missing that you should include, but I think mites and springtails capture terrestrial arthropods by moral weight. I think those animals will dominate your calculations, the way you're doing them. And their deaths could involve intense pain and perhaps only a very small share live more than a week. However, it's not obvious these animals can experience very intense suffering at all, even conditional on their sentience, but this probability could be another sensitivity analysis parameter.
(FWIW, I'd be inclined to exclude nematodes, though. Including them feels like a mugging to me and possibly dominated by panpsychism.)
Ants may live up to a few years and are very populous, and I could imagine them having relatively good lives on symmetric ethical views, as eusocial insects investing heavily in their young. But they're orders of magnitude less populous than mites and springtails.
Although this group seems likely to be outweighed in expectation, for wild vertebrates (or at least birds and mammals?), sepsis seems to be one of the worst natural ways to die, with 2 hours of excruciating pain and further time at lower intensities in farmed chickens (https://welfarefootprint.org/research-projects/cumulative-pain-and-wild-animal-welfare-assessments/). With your ratios, this is the equivalent of more than 200 years of annoying pain or 20 years of hurtful pain, much longer than the vast majority of wild vertebrates (by population and perhaps species) live. I don't know how common sepsis is, though. Finding out how common sepsis is in the most populous groups of vertebrates could have high value of information for wild vertebrate welfare.
Given the examples of cognitive abilities of nematodes mentioned here, I don't see them as a mugging. For example, here's a quote from that link:
It's not obvious to me why one would draw a line between mites/springtails and nematodes, rather than between ants and mites/springtails, between small fish and ants, etc.
With only 302 neurons, probably only a minority of which actually generate valenced experiences, if they're sentient at all, I might have to worry about random particle interactions in the walls generating suffering.
Nematodes also seem like very minimal RL agents that would be pretty easy to program. The fear-like behaviour seems interesting, but still plausibly easy to program.
I don't actually know much about mites or springtails, but my ignorance counts in their favour, as does them being more closely related to and sharing more brain structures (e.g. mushroom bodies) with arthropods with more complex behaviours that seem like better evidence for sentience (spiders for mites, and insects for springtails).
I see a huge gap between the optimized and organized rhythm of 302 neurons acting in concert with the rest of the body, on the one hand, and roughly random particle movements on the other hand. I think there's even a big gap between the optimized behavior of a bacterium versus the unoptimized behavior of individual particles (except insofar as we see particles themselves as optimizing for a lowest-energy configuration, etc).
If it's true that individual biological neurons are like two-layer neural networks, then 302 biological neurons would be like thousands (or more?) of artificial neurons. Perhaps we could build a neural-network RL agent to mimic the learning abilities of C. elegans, but that would likely leave out lots of other cool stuff that those 302 neurons are doing that we haven't discovered yet. Our RL neural network might be like trying to replace the complex nutrition of real foods with synthetic calories and a multivitamin.
Even if we had an artificial neural network that could mimic all the cognitive abilities of C. elegans, I think the biological organism would still seem more sentient because it would have a body and would interact with a real, complex environment, which would make the abstract symbol manipulations of its brain feel more grounded and meaningful. Hooking up the artificial brain to a small robot body would feel closer to matching C. elegans in terms of sentience, but by that point, it's plausible to me that the robot itself would warrant nontrivial moral concern.
What I have in mind is specifically that these random particle movements could sometimes temporarily simulate valence-generating systems by chance, even if only for a fraction of a second. I discussed this more here, and in the comments.
My impression across various animal species (mostly mammals, birds and a few insect species) is that 10-30% of neurons are in the sensory-associative structures (based on data here), and even fewer could be used to generate conscious valence (on the right inputs, say), maybe even a fraction of the neurons that ever generate conscious valence. So it seems that around 50 out of the 302 neurons would be enough to simulate, and maybe even a few times less. Maybe this would be overgeneralizing to nematodes, though.
I did have something like this in mind, but was probably thinking something like biological neurons are 10x more expressive than artificial ones, based on the comments here. Even if that's not more likely than not, a non-tiny chance of at most around 10x could be enough, and even a tiny chance could get us a wager for panpsychism.
I suppose an artificial neuron could also be much more complex than a few particles, but I can also imagine that could not be the case. And invertebrate neuron potentials are often graded rather than spiking, which could make a difference in how many particles are needed.
I'd be willing to buy something like this. In my view, a real C. elegans brain separated from the body and receiving misleading inputs should have valence as intense as C. elegans with a body, on the right kinds of inputs. On views other than hedonism, maybe a body makes an important difference, and all else equal, I'd expect having a body and interacting with the real world to just mean greater (more positive and less negative) welfare overall, basically for experience machine reasons.
I see. :) I think counterfactual robustness is important, so maybe I'm less worried about that than you? Apart from gerrymandered interpretations, I assume that even 50 nematode neurons are vanishingly rare in particle movements?
In your post on counterfactual robustness, you mention as an example that if we eliminated the unused neural pathways during torture of you, you would still scream out in pain, so it seems like the unused pathways shouldn't matter for valenced experience. But I would say that whether those unused pathways are present determines how much we should see a "you" as being there to begin with. There might still be sound waves coming from your mouth, but if they're created just by some particles knocking into each other in random ways rather than as part of a robust, organized system, I don't think there's much of a "you" who is actually screaming.
For the same reason, I'm wary of trying to eliminate too much context as unimportant to valence and whittling the neurons down to just a small set. I think the larger context is what turns some seemingly meaningless signal transmission into something that we can see holistically as more than the sum of its parts.
As an analogy, suppose we're trying to find the mountain in a drawing. I could draw just a triangle shape like ^ and say that's the mountain, and everything else is non-mountain stuff. But just seeing a ^ shape in isolation doesn't mean much. We have to add some foreground objects, the sky, etc. as well before it starts to actually look like a mountain. I think a similar thing applies to valence generation in brains. The surrounding neural machinery is what makes a series of neural firings meaningful rather than just being some seemingly arbitrary signals being passed along.
This point about context mattering is also why I have an intuition that a body and real environment contribute something to the total sentience of a brain, although I'm not sure how much they matter, especially if the brain is complex and already creates a lot of the important context within itself based on the relations between the different brain parts. One way to see why a body and environment could matter a little bit is if we think of them as the "extended mind" of the nervous system, doing extra computations that aren't being done by the neurons themselves.
What do you think of the models of consciousness, with far fewer than 300 neurons, described in Herzog 2007?
I think the ways the theories are assumed to work in that paper are all implausible accounts of consciousness, and, at least for GWT, not how GWT is intended to be interpreted. See https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we#Neural_correlate_theories_of_consciousness_____explanatory_theories_of_consciousness
I now lean towards illusionism, and something like Attention Schema Theory. I don't think illusionism rules out panpsychism, but I'd say it's much less likely under illusionism. I can share some papers that I found most convincing. Luke Muehlhauser's report on consciousness also supports illusionism.
By "illusionism" do you have in mind something like a higher-order view according to which noticing one's own awareness (or having a sufficiently complex model of one's attention, as in attention schema theory) is the crucial part of consciousness? I think that doesn't necessarily follow from pure illusionism itself.
As I mention here, we could take illusionism to show that the distinction between "conscious" and "unconscious" processing is more shallow and trivial than we might have thought. For example, adding a model of one's attention to a brain seems like a fairly small change that doesn't require much additional computing power. Why should we give so much weight to such a small computational task, compared against the much larger and more sophisticated computations already occurring in a brain without such a model?
As an analogy, suppose I have a cuckoo clock that's running. Then I draw a schematic diagram illustrating the parts of the clock and how they fit together (a model of the clock). Why should I say that the full clock that lives in the real world is unimportant, but when I draw a little picture of it, it suddenly starts to matter?
I think noticing your own awareness, a self-model and a model of your own attention are each logically independent of (neither necessary nor sufficient for) consciousness. I interpret AST as claiming that illusions of conscious experience, specific ways information is processed that would lead to inferences like the kind we make about consciousness (possibly when connected to appropriate inference-making systems, even if not normally connected), are what make something conscious, and, in practice in animals, these illusions happen with the attention model and are unlikely to happen elsewhere. From Graziano, 2020:
I would also go a bit further to claim that it's "rich" illusions, not "sparse" illusions, that matter here. Shabasson, 2021 gives a nice summary of Kammerer, 2019, where this distinction is made:
The example rich optical illusion given is the Müller-Lyer illusion. It doesn't matter if you just measured the lines to show they have the same length: once you look at the original illusion again (at least without extra markings or rulers to make it obvious that they are the same length), one line will still look longer than the other.
On a practical and more theory-neutral or theory-light approach, we can also distinguish between conscious and unconscious perception in humans, e.g. with blindsight and other responses to things outside awareness. Of course, it's possible the "unconscious" perception is actually conscious, just not accessible to the higher-order conscious process (conscious awareness/attention), but there doesn't seem to be much reason to believe it's conscious at all. Furthermore, the generation of consciousness illusions below awareness seems more costly compared to only generating them at the level of which we are aware, because most of the illusions would be filtered out of awareness and have little impact on behaviour, so there should be evolutionary pressure against that. Then, we have little reason to believe capacities that are sometimes realized unconsciously in humans indicate consciousness in other animals.
RP's invertebrate sentience research gave little weight to capacities that (sometimes) operate unconsciously in humans. Conscious vs unconscious perception is discussed more by Birch, 2020. He proposes the facilitation hypothesis:
and three candidate abilities: trace conditioning, rapid reversal learning and cross-modal learning. The idea would be to "find out whether the identified cluster of putatively consciousness-linked abilities is selectively switched on and off under masking in the same way it is in humans."
Apparently some rich optical illusions can occur unconsciously while others occur consciously, though (Chen et al., 2018). So, maybe there is some conscious but inaccessible perception, although this is confusing, and I'm not sure about the relationship between these kinds of illusions and illusionism as a theory. Furthermore, I'm still skeptical of inaccessible conscious valence in particular, since valence seems pretty holistic, context-dependent and late in any animal's processing to me. Mason and Lavery, 2022 discuss some refinements to experiments to distinguish conscious and unconscious valence.
I do concede that there could be an important line-drawing or trivial instantiation problem for what counts as having a consciousness illusion, or valence illusion, in particular.
Thanks for the detailed explanation! I haven't read any of the papers you linked to (just most of the summaries right now), so my comments may be misguided.
My general feeling is that simplified models of other things, including sometimes models that are resistant to change, are fairly ubiquitous in the world. For example, imagine an alert on your computer that says "Warning: RAM usage is above 90%" (so that you can avoid going up to 100% of RAM, which would slow your computer to a crawl). This alert would be an extremely simple "model" of the total amount of "attention" that your computer's memory is devoting to various things. Suppose your computer's actual RAM usage drops below 90%, but the notification still shows. You click an "x" on the notification to close it, but then a second later, the computer erroneously pops up the notification again. You restart your computer, hoping that will solve it, but the bogus notification returns, even though you can see that your computer's RAM usage is only 38%. Like the Müller-Lyer illusion, this buggy notification is resistant to correction.
Maybe your view is that the relevant models and things being modeled should meet various specific criteria, so that we won't see trivial instances of them throughout information-processing systems? I'm sympathetic to that view, since I intuitively don't care much about simplified models of things unless those things are pretty similar to what happens in animal brains. I think there will be a spectrum from highly parochial views that have lots of criteria, to highly cosmopolitan views that have few criteria and therefore will see consciousness in many more places.
Even if we define consciousness as "specific ways information is processed that would lead to inferences like the kind we make about consciousness", there's a question of whether that should be the only thing we care about morally. We intuitively care about the illusions that we can see using the parts of our brains that can generate high-level, verbal thoughts, because those illusions are the things visible to those parts of our brains. We don't intuitively care about other processes (even other schematic models elsewhere in our nervous systems) that our high-level thoughts can't see. But most people also don't care much about infants dying of diseases in Africa most of the time for the same reason: out of sight, out of mind. It's not clear to me how much this bias to care about what's visible should withstand moral reflection.
If its being conscious (whatever that means exactly) wouldn't be visible to our high-level thoughts, there's also no reason to believe it's not conscious. :)
The generation of a very specific type of attention schema other than the one we introspect upon using high-level thoughts might be unlikely. But the generation of simplified summaries of things for use by other parts of the nervous system seems fairly ubiquitous. For example, our face-recognition brain region might do lots of detailed processing of a face, determine that it's Jennifer Aniston, and then send a summary message "this is Jennifer Aniston" to other parts of the brain so that they can react accordingly. Our fight-or-flight system does processing of possible threats, and when a threat is detected, it sends warning signals to other brain regions and triggers release of adrenaline, which is a very simplified "model" that's distributed throughout the body via the blood. These simplified representations of complex things have huge impact on behavior (just like the high-level attention schema does), which is why evolution created them.
I assume you agree, and our disagreement is probably just about how many criteria a simplified model has to meet before it counts as being relevant to consciousness? For example, the message saying "this is Jennifer Aniston" is a simplified model of a face, not a simplified model of attention, so it wouldn't lead to illusion about one's own conscious experience? If so, that makes sense, but when looking at these things from the outside as a neuroscientist would, it seems kind of weird to me to say that a simplified model of attention that can give rise to certain consciousness-related illusions is extremely important, while a simplified model of something else that could give rise to other illusions would be completely unimportant. Is it really the consciousness illusion itself that matters, or does the organism actually care about avoiding harm and seeking rewards, and the illusion is just the thing that we latch our caring energy onto? (Sorry if this is rambling and confused, and feel no need to answer these questions. At some point we get into the apparent absurdity of why we attach value to some physical processes rather than other ones at all.)
I'm not committed to only illusions related to attention mattering or indicating consciousness. I suspect the illusion of body ownership is an illusion that indicates consciousness of some kind, like with the rubber hand illusion, or, in rodents, the rubber tail illusion. I can imagine illusions related to various components of experiences (e.g. redness, sound, each sense), and the ones that should matter terminally to us would be the ones related to valence and desires/preferences, basically illusions that things actually matter to the system with those illusions.
I suspect that recognizing faces doesn't require any illusion that would indicate consciousness. Still, I'm not sure what counts as an illusion, and I could imagine it being the case that there are very simple illusions everywhere.
I think illusionism is the only theory (or set of theories) that's on the right track to actually (dis)solving the hard problem, by explaining why we have the beliefs we do about consciousness, and I'm pessimistic about all other approaches.
Thanks. :)
I plausibly agree with your last paragraph, but I think illusionism as a way to (dis)solve the hard problem can be consistent with lots of different moral views about which brain processes we consider sentient. Some people take the approach I think you're proposing, in which we have stricter criteria regarding what it takes for a mind to be sentient than we might have had before learning about illusionism. Others might feel that illusionism shows that the distinction between "conscious" and "unconscious" is less fundamental than we assumed and that therefore more things should count as sentient than we previously thought. (Susan Blackmore is one illusionist who concludes from illusionism that there's less of a distinction between conscious and unconscious than we naively think, although I don't know how this affects her moral circle.)
It's not clear to me whether an illusion that "this rubber hand is part of my body" is more relevant to consciousness than a judgment that "this face is Jennifer Aniston". I guess we'd have to propose detailed criteria for which judgments are relevant to consciousness and have better understandings of what these judgments look like in the brain.
I agree that such illusions seem important. :) But it's plausible to me that it's also at least somewhat important if something matters to the system, even if there's no high-level illusion saying so. For example, a nematode clearly cares about avoiding bodily damage, even if its nervous system doesn't contain any nontrivial representation that "I care about avoiding pain". I think adding that higher-level representation increases the sentience of the brain, but it seems weird to say that without the higher-level representation, the brain doesn't matter at all. I guess without that higher-level representation, it's harder to imagine ourselves in the nematode's place, because whenever we think about the badness of pain, we're doing so using that higher level.
I'm not sure where to draw lines, but illusions of "this is bad!" (evaluative) or "get this to stop!" (imperative) could be enough, rather than something like "I care about avoiding pain", and I doubt nematodes have those illusions, too. It's not clear responses to noxious stimuli, including learning or being put into a pessimistic or fearful-like state, actually indicate illusions of evaluations or imperatives. But it's also not clear what would.
You could imagine a switch between hardcoded exploratory and defensive modes of NPCs or simple non-flexible robots or systems triggered by some simple event. I don't think such modes would indicate moral value on their own. Some neurotransmitters may have a similar effect in simple animals, but on a continuum between exploratory and defensive behaviours and not centralized on one switch, but distributed across multiple switches, by affecting the responsiveness of neurons. Even a representation of positive or negative value, like used in RL policy updates (e.g. subtracting the average unshifted reward from the current reward), doesn't necessarily indicate any illusion of valence. Stitching the modes and rewards together in one system doesn't change this.
I think a simple reward/punishment signal can be an extremely basic neural representation that "this is good/bad", and activation of escape muscles can be an extremely basic representation of an imperative to avoid something. I agree that these things seem almost completely unimportant in the simplest systems (I think nematodes aren't the simplest systems), but I also don't see any sharp dividing lines between the simplest systems and ourselves, just degrees of complexity and extra machinery. It's like the difference between a :-| emoticon and the Mona Lisa. The Mona Lisa has lots of extra detail and refinement, but there's a continuum of possible drawings in between them and no specific point where something qualitatively different occurs.
That's my current best guess of how to think about sentience relative to my moral intuitions. If there turns out to be a major conceptual breakthrough in neuroscience that points to some processing that's qualitatively different in complex brains relative to nematodes or NPCs, I might shift my view, although I find it hard to not extend a tiny bit of empathy toward the simpler systems anyway, because they do have preferences and basic neural representations. If we were to discover that consciousness is a special substance/etc. that only exists at all in certain minds, then it's easier for me to understand saying that nematodes or NPCs have literally zero amounts of it.
I'll lay out how I'm thinking about it now after looking more into this and illusionism over the past few days.
I would consider three groups of moral interpretations of illusionism, which can be further divided:
A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of introspective (monitoring and/or modelling) and belief-forming process in the right way to generate a belief that something matters[1].
A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of belief-forming process (with no further introspective processes) in the right way to generate a belief that something matters[1].
A system/process is conscious in a morally relevant way if and only if it generates a belief that something matters[1].
I'm now tentatively most sympathetic to something like 3, although I was previously endorsing something like 2 in this thread. 1 and 2 seem plausibly trivial, so that anything matters in any way if you put all the work into the introspective and/or belief-forming processes, although maybe the actual responses of the original system/process can help break symmetries, or you can have enough restrictions on the connected introspective and/or belief-forming processes. Frankish explicitly endorses something like 1. I think Graziano endorses something like 2 or 3, and I think Humphrey endorses something like 3. Their views of course differ further in their details besides just 1, 2 and 3, especially on what counts as the right kind of introspection or belief.
There may be accounts of beliefs according to which "a reward/punishment signal" (and/or its effects), "activation of escape muscles" or even the responses of electrons to electric fields count as beliefs that something matters. However, I suspect those and what nematodes do aren't beliefs (of mattering) under some accounts of beliefs I'm pretty sympathetic to. For example, maybe responses need to be modelled or represented by other processes to generate beliefs of mattering, but nematodes don't model or represent their own responses.[2] Or, maybe even reflection on or the manipulation of some model or representation is required. So, I can imagine nematodes not mattering at all under some moral/normative views (combined with empirical views that nematodes don't meet the given moral bar set by a moral view), but mattering on others.
Some other, less important details are in the rest of the comment.
Furthermore, even on an account of belief, to what degree something is a belief at all[3] could come in more than 2 degrees, so nematodes may have beliefs but to a lesser degree than more cognitively sophisticated animals, and I think that we should deal with that like moral uncertainty, too.
For moral uncertainty, you could use a moral parliament or diversification approach (like this) or whatever, as you're aware. How I might tentatively deal with non-binary degrees to which something is a belief (and vagueness generally) is to have a probability distribution over binary precisified views with different sharp cutoffs for what counts as a belief, and apply some diversification approach to moral uncertainty over it.[4] Somewhat more explicitly, suppose I think, on some vague account of belief, the degree to which nematodes have beliefs (of things mattering) is 0.1, on a scale from 0 to 1, holding constant some empirical beliefs about what nematodes can do physically. On that account of belief and those empirical views, with a uniform distribution for the cutoff over different precisified versions, I'd treat nematodes as having beliefs (of things mattering) with probability 10% and as if the account of belief is binary. This 10% is a matter of moral uncertainty that I wouldn't take expected values over, but instead diversify across.
Nematodes may turn out to be dominated by other considerations in practice on those views, maybe by suffering in fundamental physics, in random particle movements or in the far future. I might give relatively low weight to the views where nematodes matter but random particle movements don't, because I don't care much about counterfactual robustness. Maybe >90% to I don't care at all about it, and pretty much statistically independently of the rest of the normative views in my distributions over normative views. However, I could have been overconfident in the inference that random particle movements will generate beliefs of mattering with a cutoff including nematodes and without counterfactual robustness.
and/or perhaps general beliefs about consciousness and its qualities like reddishness, classic qualia, the Cartesian theatre, etc.
On the other hand, maybe a response is already a model or representation of itself, and that counts, but this seems like a degenerate account of beliefs; a belief is generally not about itself, unless it explicitly self-references, which mere responses don't seem to do. Plus, self-referencing propositions can lead to contradictions, so can be problematic in general, and we might want to be careful about them. Again on the other hand, though, maybe responses can be chained trivially, e.g. neural activity is the response and muscle activation is the "belief" about neural activity. Or, generally, one cell can represent a cell it's connected to. There's still a question of whether it's representing a response that would indicate that something matters, e.g. an aversive response.
Not to what degree something matters according to that belief, i.e. strength or intensity, or to what degree it is believed, i.e. degree of confidence, or the number of beliefs or times that belief is generated (simultaneously or otherwise).
I'd guess there are other ways to deal with nonbinary truth degrees, though.
Ah, welfare range estimates may already be supposed to capture the probability that an animal can experience intense suffering, like excruciating pain.
I included nematodes because they are still animals, and I think seriously attempting to estimate (as opposed to guessing as I did) their moral weight would be quite valuable. From my results, the scale of welfare of an animal group tends to increase as the moral weight decreases (assuming the same intensity of the mean experience as a fraction of that of the worst possible experience). If the moral weight of nematodes turned out to be so small that the scale of their welfare was much smaller than that of wild arthropods, we would have some evidence, although a very weak one, that the scale of the welfare of populations of beings less sophisticated than nematodes[1] would also be smaller.
I suppose there is very little data relevant to assessing the moral weight of nematodes. However, it still seems worthwhile for e.g. Rethink Priorities to do a very shallow analysis.
From Table S1 of Bar-On 2017, bacteria (10^30), fungi (10^27), archaea (10^29), protists (10^27), and viruses (10^31).
Thanks for the comments, Michael!
I definitely agree there are lots of potential improvements. In general, Rethink Priorities' Moral Weight Project made a great contribution towards quantifying the moral weight of different species, but it is worth having in mind there could be significant variation of the intensity of the mean experience (relative to the moral weight) across species and farming environments too.
Great post! Some points:
Insect farming is a relatively new and "rapidly growing" industry, which may help explain why the insect welfare scale was the lowest.
One piece of good news regarding the meat-eater problem is that humans seem to reduce wild invertebrate populations, and that there are many ethical arguments pointing to these invertebrates living net negative lives. There's some reason to believe that this dominates our (horrific) treatment of farmed animals.
Hi Ariel,
Great to know you liked the post!
Yes, nice point! In any case, the scale of the welfare of farmed insects being lower does not mean we should not try to mitigate it. One should also have tractability and neglectedness in mind, and these may well be higher earlier. So current efforts may well be especially cost-effective.
I think net change in forest area is a major driver for the impact of humans on terrestrial arthropods. So, since humans have historically caused deforestation, and deforested areas have fewer terrestrial arthropods, I can see why humans have decreased the population of terrestrial arthropods. However, forest area is now increasing in many countries. From OWID:
So I think there is not a clear answer.
From reading Brian Tomasik's (great!) posts, I also got the impression wild animals have net negative lives. Meanwhile, I have become essentially agnostic. From here:
Is that just a guess, or has someone said that explicitly? I also get the vague impression that forests have higher productivity than grasslands/etc., but that's not obvious, and I'd be curious to see more investigation of whether/when forests do have higher productivity. (This includes both primary productivity and productivity in terms of invertebrate life.)
Thanks for commenting, Brian!
It is a guess informed by your (great!) analysis here, where you assumed the median density of arthropods in rainforests to be 1.53 (= 2.3/1.5) times that in Cerrado, although with high uncertainty as you noticed. However, I did not mean that increasing forest area would necessarily lead to more arthropods. I just meant that the change in forest area due to human activities could be the main factor for the net change in the total welfare of arthropods. I am uncertain about the sign of the correlation because I am not only uncertain about which biomes have a greater density of arthropods, but also about the sign of the welfare of arthropods.
I have also illustrated here that the change in forest area might be the driver for the nearterm cost-effectiveness of GiveWell's top charities.
Thanks. :) I'm uncertain how accurate or robust the 2.3/1.5 comparison was, but you're right to cite that. And you're right that human land-use changes (including changes to forest area) likely have big effects of some kind on total arthropod welfare.
Makes sense. I have almost no uncertainty about that because I measure welfare in a suffering-focused way, according to which extreme pain is vastly more important than positive experiences. I suspect that a lot of variation in opinions on this question comes down to how suffering-focused or happiness-focused one's values are, rather than empirical disagreements, though it's also true that we lack a lot of empirical information about how invertebrates perceive and value various good and bad events.
It seems that most people agree that factory-farmed pigs and battery-cage hens have net negative welfare, so I guess there could be some possible empirical information that would persuade most people to take one or the other side of the issue. However, there's disagreement about whether, e.g., factory-farmed beef and dairy cows have net negative or positive welfare. That seems to mostly be a difference in moral values.