You might be able to make some informed guesses or do some informative sensitivity analysis about net welfare in wild animals, given your pain intensity ratios. I think it’s reasonable to assume that animals don’t experience any goods as intensely good (as valuable per moment) as excruciating pain is intensely bad. Pleasures as intense as disabling pain may also be rare, but that could be an assumption to vary.
Based on your ratios and total utilitarian assumptions, 1 second of excruciating pain outweighs 11.5 days of annoying pain or 1.15 days of hurtful pain (or, equivalently, 11.5 days of goods as intense as annoying pain or 1.15 days of goods as intense as hurtful pain), on average.
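To make the trade-off arithmetic explicit, here's a minimal sketch. The intensity weights are hypothetical, chosen only so that 1 second of excruciating pain equals 11.5 days of annoying pain and hurtful pain is 10x annoying (which the 11.5 vs 1.15 day figures imply); they aren't taken from any source data.

```python
# Hypothetical intensity weights implied by the ratios above:
# annoying = 1, hurtful = 10x annoying, and 1 second of excruciating
# pain = 11.5 days of annoying pain.
SECONDS_PER_DAY = 86_400
weights = {
    "annoying": 1,
    "hurtful": 10,                           # 10x annoying
    "excruciating": 11.5 * SECONDS_PER_DAY,  # 993,600x annoying
}

def equivalent_days(seconds, intensity, target="annoying"):
    """Days of `target`-intensity experience matching `seconds` at `intensity`."""
    return seconds * weights[intensity] / weights[target] / SECONDS_PER_DAY

print(equivalent_days(1, "excruciating"))             # 11.5 days of annoying pain
print(equivalent_days(1, "excruciating", "hurtful"))  # 1.15 days of hurtful pain
```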
Just quickly Googling for the most populous groups I’m aware of, mites, springtails and nematodes live a few weeks at most, and copepods up to around a year. There might be other similarly populous groups of aquatic arthropods I’m missing that you should include, but I think mites and springtails capture most of the moral weight among terrestrial arthropods. I think those animals will dominate your calculations, the way you’re doing them. And their deaths could involve intense pain, and perhaps only a very small share live more than a week. However, it’s not obvious these animals can experience very intense suffering at all, even conditional on their sentience; this probability could be another sensitivity analysis parameter.
(FWIW, I’d be inclined to exclude nematodes, though. Including them feels like a mugging to me and possibly dominated by panpsychism.)
Ants may live up to a few years and are very populous, and I could imagine them having relatively good lives on symmetric ethical views, as eusocial insects investing heavily in their young. But they’re orders of magnitude less populous than mites and springtails.
Although this group seems likely to be outweighed in expectation, for wild vertebrates (or at least birds and mammals?), sepsis seems to be one of the worst natural ways to die, with 2 hours of excruciating pain and further time at lower intensities in farmed chickens (https://welfarefootprint.org/research-projects/cumulative-pain-and-wild-animal-welfare-assessments/ ). With your ratios, this is the equivalent of more than 200 years of annoying pain or 20 years of hurtful pain, much longer than the vast majority of wild vertebrates (by population, and perhaps by species) live. I don’t know how common sepsis is, though. Finding out how common sepsis is in the most populous groups of vertebrates could have high value of information for wild vertebrate welfare.
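A rough check of that sepsis figure, using the same assumed ratios (1 second of excruciating pain = 11.5 days of annoying pain, hurtful = 10x annoying); the numbers here are illustrative conversions, not additional data from the Welfare Footprint source:

```python
# Convert 2 hours of excruciating pain into equivalent years of
# annoying and hurtful pain under the assumed intensity ratios.
SECONDS_PER_DAY = 86_400
annoying_days_per_excruciating_second = 11.5  # assumed ratio from the discussion

sepsis_seconds = 2 * 3600
annoying_days = sepsis_seconds * annoying_days_per_excruciating_second  # 82,800 days
print(annoying_days / 365)       # ~227 years of annoying pain
print(annoying_days / 10 / 365)  # ~22.7 years of hurtful pain (10x annoying)
```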
Given the examples of cognitive abilities of nematodes mentioned here, I don’t see them as a mugging. For example, here’s a quote from that link:
The deterministic development of the worm’s nervous system would seem to limit its usefulness as a model to study behavioral plasticity, but time and again the worm has demonstrated its extreme sensitivity to experience
It’s not obvious to me why one would draw a line between mites/springtails and nematodes, rather than between ants and mites/springtails, between small fish and ants, etc.
With only 302 neurons, probably only a minority of which actually generate valenced experiences, if they’re sentient at all, I might have to worry about random particle interactions in the walls generating suffering.
Nematodes also seem like very minimal RL agents that would be pretty easy to program. The fear-like behaviour seems interesting, but still plausibly easy to program.
I don’t actually know much about mites or springtails, but my ignorance counts in their favour, as does them being more closely related to and sharing more brain structures (e.g. mushroom bodies) with arthropods with more complex behaviours that seem like better evidence for sentience (spiders for mites, and insects for springtails).
I see a huge gap between the optimized and organized rhythm of 302 neurons acting in concert with the rest of the body, on the one hand, and roughly random particle movements on the other hand. I think there’s even a big gap between the optimized behavior of a bacterium versus the unoptimized behavior of individual particles (except insofar as we see particles themselves as optimizing for a lowest-energy configuration, etc).
If it’s true that individual biological neurons are like two-layer neural networks, then 302 biological neurons would be like thousands (or more?) of artificial neurons. Perhaps we could build a neural-network RL agent to mimic the learning abilities of C. elegans, but that would likely leave out lots of other cool stuff that those 302 neurons are doing that we haven’t discovered yet. Our RL neural network might be like trying to replace the complex nutrition of real foods with synthetic calories and a multivitamin.
Even if we had an artificial neural network that could mimic all the cognitive abilities of C. elegans, I think the biological organism would still seem more sentient because it would have a body and would interact with a real, complex environment, which would make the abstract symbol manipulations of its brain feel more grounded and meaningful. Hooking up the artificial brain to a small robot body would feel closer to matching C. elegans in terms of sentience, but by that point, it’s plausible to me that the robot itself would warrant nontrivial moral concern.
What I have in mind is specifically that these random particle movements could sometimes temporarily simulate valence-generating systems by chance, even if only for a fraction of a second. I discussed this more here, and in the comments.
My impression across various animal species (mostly mammals, birds and a few insect species) is that 10-30% of neurons are in the sensory-associative structures (based on data here), and even fewer would be needed to generate conscious valence (on the right inputs, say), maybe only a fraction of the neurons that ever generate conscious valence. So it seems that around 50 out of the 302 neurons would be enough to simulate, and maybe even a few times fewer. Maybe this would be overgeneralizing to nematodes, though.
If it’s true that individual biological neurons are like two-layer neural networks, then 302 biological neurons would be like thousands (or more?) of artificial neurons.
I did have something like this in mind, but was probably thinking something like biological neurons are 10x more expressive than artificial ones, based on the comments here. Even if that’s not more likely than not, a non-tiny chance of at most around 10x could be enough, and even a tiny chance could get us a wager for panpsychism.
I suppose an artificial neuron could also be much more complex than a few particles, but I can also imagine that not being the case. And invertebrate neuron potentials are often graded rather than spiking, which could make a difference in how many particles are needed.
Even if we had an artificial neural network that could mimic all the cognitive abilities of C. elegans, I think the biological organism would still seem more sentient because it would have a body and would interact with a real, complex environment, which would make the abstract symbol manipulations of its brain feel more grounded and meaningful. Hooking up the artificial brain to a small robot body would feel closer to matching C. elegans in terms of sentience, but by that point, it’s plausible to me that the robot itself would warrant nontrivial moral concern.
I’d be willing to buy something like this. In my view, a real C. elegans brain separated from the body and receiving misleading inputs should have valence as intense as C. elegans with a body, on the right kinds of inputs. On views other than hedonism, maybe a body makes an important difference, and all else equal, I’d expect having a body and interacting with the real world to just mean greater (more positive and less negative) welfare overall, basically for experience machine reasons.
these random particle movements could sometimes temporarily simulate valence-generating systems by chance, even if only for a fraction of a second
I see. :) I think counterfactual robustness is important, so maybe I’m less worried about that than you? Apart from gerrymandered interpretations, I assume that even 50 nematode neurons are vanishingly rare in particle movements?
In your post on counterfactual robustness, you mention as an example that if we eliminated the unused neural pathways during torture of you, you would still scream out in pain, so it seems like the unused pathways shouldn’t matter for valenced experience. But I would say that whether those unused pathways are present determines how much we should see a “you” as being there to begin with. There might still be sound waves coming from your mouth, but if they’re created just by some particles knocking into each other in random ways rather than as part of a robust, organized system, I don’t think there’s much of a “you” who is actually screaming.
For the same reason, I’m wary of trying to eliminate too much context as unimportant to valence and whittling the neurons down to just a small set. I think the larger context is what turns some seemingly meaningless signal transmission into something that we can see holistically as more than the sum of its parts.
As an analogy, suppose we’re trying to find the mountain in a drawing. I could draw just a triangle shape like ^ and say that’s the mountain, and everything else is non-mountain stuff. But just seeing a ^ shape in isolation doesn’t mean much. We have to add some foreground objects, the sky, etc as well before it starts to actually look like a mountain. I think a similar thing applies to valence generation in brains. The surrounding neural machinery is what makes a series of neural firings meaningful rather than just being some seemingly arbitrary signals being passed along.
This point about context mattering is also why I have an intuition that a body and real environment contribute something to the total sentience of a brain, although I’m not sure how much they matter, especially if the brain is complex and already creates a lot of the important context within itself based on the relations between the different brain parts. One way to see why a body and environment could matter a little bit is if we think of them as the “extended mind” of the nervous system, doing extra computations that aren’t being done by the neurons themselves.
I now lean towards illusionism, and something like Attention Schema Theory. I don’t think illusionism rules out panpsychism, but I’d say it’s much less likely under illusionism. I can share some papers that I found most convincing. Luke Muehlhauser’s report on consciousness also supports illusionism.
By “illusionism” do you have in mind something like a higher-order view according to which noticing one’s own awareness (or having a sufficiently complex model of one’s attention, as in attention schema theory) is the crucial part of consciousness? I think that doesn’t necessarily follow from pure illusionism itself.
As I mention here, we could take illusionism to show that the distinction between “conscious” and “unconscious” processing is more shallow and trivial than we might have thought. For example, adding a model of one’s attention to a brain seems like a fairly small change that doesn’t require much additional computing power. Why should we give so much weight to such a small computational task, compared against the much larger and more sophisticated computations already occurring in a brain without such a model?
As an analogy, suppose I have a cuckoo clock that’s running. Then I draw a schematic diagram illustrating the parts of the clock and how they fit together (a model of the clock). Why should I say that the full clock that lives in the real world is unimportant, but when I draw a little picture of it, it suddenly starts to matter?
I think noticing your own awareness, a self-model and a model of your own attention are each logically independent of (neither necessary nor sufficient for) consciousness. I interpret AST as claiming that illusions of conscious experience (specific ways information is processed that would lead to inferences like the kind we make about consciousness, possibly when connected to appropriate inference-making systems, even if not normally connected) are what make something conscious, and that, in practice in animals, these illusions happen with the attention model and are unlikely to happen elsewhere. From Graziano, 2020:
Suppose the machine has a much richer model of attention. Somehow, attention is depicted by the model as a Moray eel darting around the world. Maybe the machine already had need for a depiction of Moray eels, and it coapted that model for monitoring its own attention. Now we plug in the speech engine. Does the machine claim to have consciousness? No. It claims to have an external Moray eel.
Suppose the machine has no attention, and no attention schema either. But it does have a self-model, and the self-model richly depicts a subtle, powerful, nonphysical essence, with all the properties we humans attribute to consciousness. Now we plug in the speech engine. Does the machine claim to have consciousness? Yes. The machine knows only what it knows. It is constrained by its own internal information.
AST does not posit that having an attention schema makes one conscious. Instead, first, having an automatic self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. Second, the reason why such a self-model evolved in the brains of complex animals, is that it serves the useful role of modeling attention.
I would also go a bit further to claim that it’s “rich” illusions, not “sparse” illusions, that matter here. Shabasson, 2021 gives a nice summary of Kammerer, 2019, where this distinction is made:
According to Kammerer, the illusion of phenomenal consciousness must be a rich illusion because of its strength. It persists regardless of what an agent might come to believe about the reality (or unreality) of phenomenal consciousness. By contrast, a sparse illusion such as the headless woman illusion quickly loses its grip on us once we come to believe it is an illusion and understand how it is generated. Kammerer criticizes Dennett’s and Graziano’s theories for being sparse-illusion views (2019c: 6–8).
The example rich optical illusion given is the Müller–Lyer illusion. It doesn’t matter if you just measured the lines to show they have the same length: once you look at the original illusion again (at least without extra markings or rulers to make it obvious that they are the same length), one line will still look longer than the other.
On a practical and more theory-neutral or theory-light approach, we can also distinguish between conscious and unconscious perception in humans, e.g. with blindsight and other responses to things outside awareness. Of course, it’s possible the “unconscious” perception is actually conscious, just not accessible to the higher-order conscious process (conscious awareness/attention), but there doesn’t seem to be much reason to believe it’s conscious at all. Furthermore, the generation of consciousness illusions below awareness seems more costly compared to only generating them at the level of which we are aware, because most of the illusions would be filtered out of awareness and have little impact on behaviour, so there should be evolutionary pressure against that. Then, we have little reason to believe capacities that are sometimes realized unconsciously in humans indicate consciousness in other animals.
Phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus.
and three candidate abilities: trace conditioning, rapid reversal learning and cross-modal learning. The idea would be to “find out whether the identified cluster of putatively consciousness-linked abilities is selectively switched on and off under masking in the same way it is in humans.”
Apparently some rich optical illusions can occur unconsciously while others occur consciously, though (Chen et al., 2018). So, maybe there is some conscious but inaccessible perception, although this is confusing, and I’m not sure about the relationship between these kinds of illusions and illusionism as a theory. Furthermore, I’m still skeptical of inaccessible conscious valence in particular, since valence seems pretty holistic, context-dependent and late in any animal’s processing to me. Mason and Lavery, 2022 discuss some refinements to experiments to distinguish conscious and unconscious valence.
I do concede that there could be an important line-drawing or trivial instantiation problem for what counts as having a consciousness illusion, or valence illusion, in particular.
Thanks for the detailed explanation! I haven’t read any of the papers you linked to (just most of the summaries right now), so my comments may be misguided.
My general feeling is that simplified models of other things, including sometimes models that are resistant to change, are fairly ubiquitous in the world. For example, imagine an alert on your computer that says “Warning: RAM usage is above 90%” (so that you can avoid going up to 100% of RAM, which would slow your computer to a crawl). This alert would be an extremely simple “model” of the total amount of “attention” that your computer’s memory is devoting to various things. Suppose your computer’s actual RAM usage drops below 90%, but the notification still shows. You click an “x” on the notification to close it, but then a second later, the computer erroneously pops up the notification again. You restart your computer, hoping that will solve it, but the bogus notification returns, even though you can see that your computer’s RAM usage is only 38%. Like the Müller-Lyer illusion, this buggy notification is resistant to correction.
Maybe your view is that the relevant models and things being modeled should meet various specific criteria, so that we won’t see trivial instances of them throughout information-processing systems? I’m sympathetic to that view, since I intuitively don’t care much about simplified models of things unless those things are pretty similar to what happens in animal brains. I think there will be a spectrum from highly parochial views that have lots of criteria, to highly cosmopolitan views that have few criteria and therefore will see consciousness in many more places.
Even if we define consciousness as “specific ways information is processed that would lead to inferences like the kind we make about consciousness”, there’s a question of whether that should be the only thing we care about morally. We intuitively care about the illusions that we can see using the parts of our brains that can generate high-level, verbal thoughts, because those illusions are the things visible to those parts of our brains. We don’t intuitively care about other processes (even other schematic models elsewhere in our nervous systems) that our high-level thoughts can’t see. But most people also don’t care much about infants dying of diseases in Africa most of the time for the same reason: out of sight, out of mind. It’s not clear to me how much this bias to care about what’s visible should withstand moral reflection.
but there doesn’t seem to be much reason to believe it’s conscious at all
If its being conscious (whatever that means exactly) wouldn’t be visible to our high-level thoughts, there’s also no reason to believe it’s not conscious. :)
Furthermore, the generation of consciousness illusions below awareness seems more costly compared to only generating them at the level of which we are aware, because most of the illusions would be filtered out of awareness and have little impact on behaviour, so there should be evolutionary pressure against that.
The generation of a very specific type of attention schema other than the one we introspect upon using high-level thoughts might be unlikely. But the generation of simplified summaries of things for use by other parts of the nervous system seems fairly ubiquitous. For example, our face-recognition brain region might do lots of detailed processing of a face, determine that it’s Jennifer Aniston, and then send a summary message “this is Jennifer Aniston” to other parts of the brain so that they can react accordingly. Our fight-or-flight system does processing of possible threats, and when a threat is detected, it sends warning signals to other brain regions and triggers release of adrenaline, which is a very simplified “model” that’s distributed throughout the body via the blood. These simplified representations of complex things have huge impact on behavior (just like the high-level attention schema does), which is why evolution created them.
I assume you agree, and our disagreement is probably just about how many criteria a simplified model has to meet before it counts as being relevant to consciousness? For example, the message saying “this is Jennifer Aniston” is a simplified model of a face, not a simplified model of attention, so it wouldn’t lead to illusion about one’s own conscious experience? If so, that makes sense, but when looking at these things from the outside as a neuroscientist would, it seems kind of weird to me to say that a simplified model of attention that can give rise to certain consciousness-related illusions is extremely important, while a simplified model of something else that could give rise to other illusions would be completely unimportant. Is it really the consciousness illusion itself that matters, or does the organism actually care about avoiding harm and seeking rewards, and the illusion is just the thing that we latch our caring energy onto? (Sorry if this is rambling and confused, and feel no need to answer these questions. At some point we get into the apparent absurdity of why we attach value to some physical processes rather than other ones at all.)
I’m not committed to only illusions related to attention mattering or indicating consciousness. I suspect the illusion of body ownership is an illusion that indicates consciousness of some kind, like with the rubber hand illusion, or, in rodents, the rubber tail illusion. I can imagine illusions related to various components of experiences (e.g. redness, sound, each sense), and the ones that should matter terminally to us would be the ones related to valence and desires/preferences, basically illusions that things actually matter to the system with those illusions.
I suspect that recognizing faces doesn’t require any illusion that would indicate consciousness. Still, I’m not sure what counts as an illusion, and I could imagine it being the case that there are very simple illusions everywhere.
I think illusionism is the only theory (or set of theories) that’s on the right track to actually (dis)solving the hard problem, by explaining why we have the beliefs we do about consciousness, and I’m pessimistic about all other approaches.
I plausibly agree with your last paragraph, but I think illusionism as a way to (dis)solve the hard problem can be consistent with lots of different moral views about which brain processes we consider sentient. Some people take the approach I think you’re proposing, in which we have stricter criteria regarding what it takes for a mind to be sentient than we might have had before learning about illusionism. Others might feel that illusionism shows that the distinction between “conscious” and “unconscious” is less fundamental than we assumed and that therefore more things should count as sentient than we previously thought. (Susan Blackmore is one illusionist who concludes from illusionism that there’s less of a distinction between conscious and unconscious than we naively think, although I don’t know how this affects her moral circle.)
It’s not clear to me whether an illusion that “this rubber hand is part of my body” is more relevant to consciousness than a judgment that “this face is Jennifer Aniston”. I guess we’d have to propose detailed criteria for which judgments are relevant to consciousness and have better understandings of what these judgments look like in the brain.
illusions that things actually matter to the system with those illusions
I agree that such illusions seem important. :) But it’s plausible to me that it’s also at least somewhat important if something matters to the system, even if there’s no high-level illusion saying so. For example, a nematode clearly cares about avoiding bodily damage, even if its nervous system doesn’t contain any nontrivial representation that “I care about avoiding pain”. I think adding that higher-level representation increases the sentience of the brain, but it seems weird to say that without the higher-level representation, the brain doesn’t matter at all. I guess without that higher-level representation, it’s harder to imagine ourselves in the nematode’s place, because whenever we think about the badness of pain, we’re doing so using that higher level.
I’m not sure where to draw lines, but illusions of “this is bad!” (evaluative) or “get this to stop!” (imperative) could be enough, rather than something like “I care about avoiding pain”, and I doubt nematodes have those illusions, too. It’s not clear responses to noxious stimuli, including learning or being put into a pessimistic or fearful-like state, actually indicate illusions of evaluations or imperatives. But it’s also not clear what would.
You could imagine a switch between hardcoded exploratory and defensive modes of NPCs or simple non-flexible robots or systems, triggered by some simple event. I don’t think such modes would indicate moral value on their own. Some neurotransmitters may have a similar effect in simple animals, but along a continuum between exploratory and defensive behaviours, not centralized in one switch but distributed across multiple switches, by affecting the responsiveness of neurons. Even a representation of positive or negative value, like that used in RL policy updates (e.g. subtracting the average unshifted reward from the current reward), doesn’t necessarily indicate any illusion of valence. Stitching the modes and rewards together in one system doesn’t change this.
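The average-reward baseline mentioned above can be sketched in a few lines: the agent tracks a running mean of its rewards and updates a policy parameter in proportion to (reward - mean). The sign of that signal is a bare "representation of positive or negative value" with no obvious illusion of valence attached. Everything here (the step sizes, the single scalar "policy") is illustrative.

```python
# Minimal average-reward baseline: the update signal is the current
# reward minus a running mean of past rewards. A positive/negative
# sign here "represents value" without anything illusion-like.
import random

random.seed(0)
avg_reward = 0.0
alpha = 0.1          # step size for the running average
preference = 0.0     # a single scalar policy parameter, for illustration

for _ in range(1000):
    reward = random.gauss(1.0, 0.5)
    advantage = reward - avg_reward      # positive = better than usual
    preference += 0.01 * advantage       # crude policy update
    avg_reward += alpha * (reward - avg_reward)

print(round(avg_reward, 1))  # the running average settles near the mean reward, ~1.0
```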
I think a simple reward/punishment signal can be an extremely basic neural representation that “this is good/bad”, and activation of escape muscles can be an extremely basic representation of an imperative to avoid something. I agree that these things seem almost completely unimportant in the simplest systems (I think nematodes aren’t the simplest systems), but I also don’t see any sharp dividing lines between the simplest systems and ourselves, just degrees of complexity and extra machinery. It’s like the difference between a :-| emoticon and the Mona Lisa. The Mona Lisa has lots of extra detail and refinement, but there’s a continuum of possible drawings in between them and no specific point where something qualitatively different occurs.
That’s my current best guess of how to think about sentience relative to my moral intuitions. If there turns out to be a major conceptual breakthrough in neuroscience that points to some processing that’s qualitatively different in complex brains relative to nematodes or NPCs, I might shift my view—although I find it hard to not extend a tiny bit of empathy toward the simpler systems anyway, because they do have preferences and basic neural representations. If we were to discover that consciousness is a special substance/etc that only exists at all in certain minds, then it’s easier for me to understand saying that nematodes or NPCs have literally zero amounts of it.
I’ll lay out how I’m thinking about it now after looking more into this and illusionism over the past few days.
I would consider three groups of moral interpretations of illusionism, which can be further divided:
1. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of introspective (monitoring and/or modelling) and belief-forming process in the right way to generate a belief that something matters[1].
2. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of belief-forming process (with no further introspective processes) in the right way to generate a belief that something matters[1].
3. A system/process is conscious in a morally relevant way if and only if it generates a belief that something matters[1].
I’m now tentatively most sympathetic to something like 3, although I was previously endorsing something like 2 in this thread. 1 and 2 seem plausibly trivial, in that anything could be made to matter in some way if you put all the work into the introspective and/or belief-forming processes, although maybe the actual responses of the original system/process can help break symmetries, or you can have enough restrictions on the connected introspective and/or belief-forming processes. Frankish explicitly endorses something like 1. I think Graziano endorses something like 2 or 3, and I think Humphrey endorses something like 3. Their views of course differ further in their details besides just 1, 2 and 3, especially on what counts as the right kind of introspection or belief.
There may be accounts of beliefs according to which “a reward/punishment signal” (and/or its effects), “activation of escape muscles” or even the responses of electrons to electric fields count as beliefs that something matters. However, I suspect those and what nematodes do aren’t beliefs (of mattering) under some accounts of beliefs I’m pretty sympathetic to. For example, maybe responses need to be modelled or represented by other processes to generate beliefs of mattering, but nematodes don’t model or represent their own responses.[2] Or, maybe even reflection on or the manipulation of some model or representation is required. So, I can imagine nematodes not mattering at all under some moral/normative views (combined with empirical views that nematodes don’t meet the given moral bar set by a moral view), but mattering on others.
Some other but less important details in the rest of the comment.
Furthermore, even on an account of belief, to what degree something is a belief at all[3] could come in more than 2 degrees, so nematodes may have beliefs but to a lesser degree than more cognitively sophisticated animals, and I think that we should deal with that like moral uncertainty, too.
For moral uncertainty, you could use a moral parliament or diversification approach (like this) or whatever, as you’re aware. How I might tentatively deal with non-binary degrees to which something is a belief (and vagueness generally) is to have a probability distribution over binary precisified views with different sharp cutoffs for what counts as a belief, and apply some diversification approach to moral uncertainty over it.[4] Somewhat more explicitly, suppose I think, on some vague account of belief, the degree to which nematodes have beliefs (of things mattering) is 0.1, on a scale from 0 to 1, holding constant some empirical beliefs about what nematodes can do physically. On that account of belief and those empirical views, with a uniform distribution for the cutoff over different precisified versions, I’d treat nematodes as having beliefs (of things mattering) with probability 10% and as if the account of belief is binary. This 10% is a matter of moral uncertainty that I wouldn’t take expected values over, but instead diversify across.
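The precisification move above can be made concrete: a vague degree-of-belief score gets converted into a probability of (binary) belief by integrating over a uniform distribution of sharp cutoffs. The 0.1 degree and resulting 10% are the hypothetical numbers from the text.

```python
# A vague "degree to which X has beliefs" becomes a probability by
# asking: under what fraction of precisified views (sharp cutoffs,
# uniformly distributed) does X's degree clear the cutoff?
def prob_counts_as_belief(degree, cutoffs):
    """Fraction of sharp cutoffs under which `degree` qualifies as belief."""
    return sum(degree >= c for c in cutoffs) / len(cutoffs)

# 1,000 evenly spaced cutoffs in (0, 1] approximate a uniform distribution.
cutoffs = [(i + 1) / 1000 for i in range(1000)]
print(prob_counts_as_belief(0.1, cutoffs))  # 0.1: counts on ~10% of precisified views
```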
Nematodes may turn out to be dominated by other considerations in practice on those views, maybe by suffering in fundamental physics, in random particle movements or in the far future. I might give relatively low weight to the views where nematodes matter but random particle movements don’t, because I don’t care much about counterfactual robustness. Maybe I’d give >90% weight to not caring about it at all, roughly statistically independent of the rest of the normative views in my distribution over normative views. However, I could have been overconfident in the inference that random particle movements will generate beliefs of mattering with a cutoff including nematodes and without counterfactual robustness.
On the other hand, maybe a response is already a model or representation of itself, and that counts, but this seems like a degenerate account of beliefs; a belief is generally not about itself, unless it explicitly self-references, which mere responses don’t seem to do. Plus, self-referencing propositions can lead to contradictions, so can be problematic in general, and we might want to be careful about them. Again on the other hand, though, maybe responses can be chained trivially, e.g. neural activity is the response and muscle activation is the “belief” about neural activity. Or, generally, one cell can represent a cell it’s connected to. There’s still a question of whether it’s representing a response that would indicate that something matters, e.g. an aversive response.
Not to what degree something matters according to that belief, i.e. strength or intensity, or to what degree it is believed, i.e. degree of confidence, or the number of beliefs or times that belief is generated (simultaneously or otherwise).
Ah, welfare range estimates may already be supposed to capture the probability that an animal can experience intense suffering, like excruciating pain.
(FWIW, I’d be inclined to exclude nematodes, though. Including them feels like a mugging to me and possibly dominated by panpsychism.)
I included nematodes because they are still animals, and I think seriously attempting to estimate (as opposed to guessing, as I did) their moral weight would be quite valuable. From my results, the scale of welfare of an animal group tends to increase as the moral weight decreases (assuming the same intensity of the mean experience as a fraction of that of the worst possible experience). If the moral weight of nematodes turned out to be so small that the scale of their welfare was much smaller than that of wild arthropods, we would have some evidence, although very weak evidence, that the scale of the welfare of populations of beings less sophisticated than nematodes[1] would also be smaller.
I suppose there is very little data relevant to assessing the moral weight of nematodes. However, it still seems worthwhile for e.g. Rethink Priorities to do a very shallow analysis.
I definitely agree there are lots of potential improvements. In general, Rethink Priorities’ Moral Weight Project made a great contribution towards quantifying the moral weight of different species, but it is worth keeping in mind that there could be significant variation in the intensity of the mean experience (relative to the moral weight) across species and farming environments too.
Thanks for writing this!
Given the examples of cognitive abilities of nematodes mentioned here, I don’t see them as a mugging. For example, here’s a quote from that link:
It’s not obvious to me why one would draw a line between mites/springtails and nematodes, rather than between ants and mites/springtails, between small fish and ants, etc.
With only 302 neurons, probably only a minority of which actually generate valenced experiences, if they’re sentient at all, I might have to worry about random particle interactions in the walls generating suffering.
Nematodes also seem like very minimal RL agents that would be pretty easy to program. The fear-like behaviour seems interesting, but still plausibly easy to program.
I don’t actually know much about mites or springtails, but my ignorance counts in their favour, as does them being more closely related to and sharing more brain structures (e.g. mushroom bodies) with arthropods with more complex behaviours that seem like better evidence for sentience (spiders for mites, and insects for springtails).
I see a huge gap between the optimized and organized rhythm of 302 neurons acting in concert with the rest of the body, on the one hand, and roughly random particle movements on the other hand. I think there’s even a big gap between the optimized behavior of a bacterium versus the unoptimized behavior of individual particles (except insofar as we see particles themselves as optimizing for a lowest-energy configuration, etc).
If it’s true that individual biological neurons are like two-layer neural networks, then 302 biological neurons would be like thousands (or more?) of artificial neurons. Perhaps we could build a neural-network RL agent to mimic the learning abilities of C. elegans, but that would likely leave out lots of other cool stuff that those 302 neurons are doing that we haven’t discovered yet. Our RL neural network might be like trying to replace the complex nutrition of real foods with synthetic calories and a multivitamin.
Even if we had an artificial neural network that could mimic all the cognitive abilities of C. elegans, I think the biological organism would still seem more sentient because it would have a body and would interact with a real, complex environment, which would make the abstract symbol manipulations of its brain feel more grounded and meaningful. Hooking up the artificial brain to a small robot body would feel closer to matching C. elegans in terms of sentience, but by that point, it’s plausible to me that the robot itself would warrant nontrivial moral concern.
What I have in mind is specifically that these random particle movements could sometimes temporarily simulate valence-generating systems by chance, even if only for a fraction of a second. I discussed this more here, and in the comments.
My impression across various animal species (mostly mammals, birds and a few insect species) is that 10-30% of neurons are in the sensory-associative structures (based on data here), and even fewer could be used to generate conscious valence (given the right inputs, say), perhaps only a fraction of the neurons that ever generate conscious valence. So it seems that around 50 of the 302 neurons would be enough to simulate, and maybe even a few times fewer. This might be overgeneralizing to nematodes, though.
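As a back-of-the-envelope check on the numbers above (only the 302-neuron total and the 10-30% range come from the comment; the rest is just arithmetic):

```python
total_neurons = 302  # C. elegans hermaphrodite connectome
# Apply the 10-30% sensory-associative range mentioned above
estimates = {frac: round(total_neurons * frac) for frac in (0.10, 0.30)}
for frac, n in estimates.items():
    print(f"{frac:.0%} sensory-associative: ~{n} neurons")
# The ~50-neuron figure in the comment sits inside this ~30-91 range.
```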
I did have something like this in mind, but was probably thinking something like biological neurons are 10x more expressive than artificial ones, based on the comments here. Even if that’s not more likely than not, a non-tiny chance of at most around 10x could be enough, and even a tiny chance could get us a wager for panpsychism.
I suppose an artificial neuron could also be much more complex than a few particles, but I can also imagine that not being the case. And invertebrate neuron potentials are often graded rather than spiking, which could make a difference in how many particles are needed.
I’d be willing to buy something like this. In my view, a real C. elegans brain separated from the body and receiving misleading inputs should have valence as intense as a C. elegans with a body, given the right kinds of inputs. On views other than hedonism, maybe a body makes an important difference, and all else equal, I’d expect having a body and interacting with the real world to just mean greater (more positive and less negative) welfare overall, basically for experience machine reasons.
I see. :) I think counterfactual robustness is important, so maybe I’m less worried about that than you? Apart from gerrymandered interpretations, I assume that even 50 nematode neurons are vanishingly rare in particle movements?
In your post on counterfactual robustness, you mention as an example that if we eliminated the unused neural pathways during torture of you, you would still scream out in pain, so it seems like the unused pathways shouldn’t matter for valenced experience. But I would say that whether those unused pathways are present determines how much we should see a “you” as being there to begin with. There might still be sound waves coming from your mouth, but if they’re created just by some particles knocking into each other in random ways rather than as part of a robust, organized system, I don’t think there’s much of a “you” who is actually screaming.
For the same reason, I’m wary of trying to eliminate too much context as unimportant to valence and whittling the neurons down to just a small set. I think the larger context is what turns some seemingly meaningless signal transmission into something that we can see holistically as more than the sum of its parts.
As an analogy, suppose we’re trying to find the mountain in a drawing. I could draw just a triangle shape like “^” and say that’s the mountain, and everything else is non-mountain stuff. But just seeing a “^” shape in isolation doesn’t mean much. We have to add some foreground objects, the sky, etc. as well before it starts to actually look like a mountain. I think a similar thing applies to valence generation in brains. The surrounding neural machinery is what makes a series of neural firings meaningful rather than just being some seemingly arbitrary signals being passed along.
This point about context mattering is also why I have an intuition that a body and real environment contribute something to the total sentience of a brain, although I’m not sure how much they matter, especially if the brain is complex and already creates a lot of the important context within itself based on the relations between the different brain parts. One way to see why a body and environment could matter a little bit is if we think of them as the “extended mind” of the nervous system, doing extra computations that aren’t being done by the neurons themselves.
What do you think of the models of consciousness with far fewer than 300 neurons described in Herzog 2007?
I think the ways the theories are assumed to work in that paper are all implausible accounts of consciousness, and, at least for GWT, not how GWT is intended to be interpreted. See https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we#Neural_correlate_theories_of_consciousness_____explanatory_theories_of_consciousness
I now lean towards illusionism, and something like Attention Schema Theory. I don’t think illusionism rules out panpsychism, but I’d say it’s much less likely under illusionism. I can share some papers that I found most convincing. Luke Muehlhauser’s report on consciousness also supports illusionism.
By “illusionism” do you have in mind something like a higher-order view according to which noticing one’s own awareness (or having a sufficiently complex model of one’s attention, as in attention schema theory) is the crucial part of consciousness? I think that doesn’t necessarily follow from pure illusionism itself.
As I mention here, we could take illusionism to show that the distinction between “conscious” and “unconscious” processing is more shallow and trivial than we might have thought. For example, adding a model of one’s attention to a brain seems like a fairly small change that doesn’t require much additional computing power. Why should we give so much weight to such a small computational task, compared against the much larger and more sophisticated computations already occurring in a brain without such a model?
As an analogy, suppose I have a cuckoo clock that’s running. Then I draw a schematic diagram illustrating the parts of the clock and how they fit together (a model of the clock). Why should I say that the full clock that lives in the real world is unimportant, but when I draw a little picture of it, it suddenly starts to matter?
I think noticing your own awareness, a self-model and a model of your own attention are each logically independent of (neither necessary nor sufficient for) consciousness. I interpret AST as claiming that illusions of conscious experience, specific ways information is processed that would lead to inferences like the kind we make about consciousness (possibly when connected to appropriate inference-making systems, even if not normally connected), are what make something conscious, and, in practice in animals, these illusions happen with the attention model and are unlikely to happen elsewhere. From Graziano, 2020:
I would also go a bit further to claim that it’s “rich” illusions, not “sparse” illusions, that matter here. Shabasson, 2021 gives a nice summary of Kammerer, 2019, where this distinction is made:
The example rich optical illusion given is the Müller–Lyer illusion. It doesn’t matter if you just measured the lines to show they have the same length: once you look at the original illusion again (at least without extra markings or rulers to make it obvious that they are the same length), one line will still look longer than the other.
On a practical and more theory-neutral or theory-light approach, we can also distinguish between conscious and unconscious perception in humans, e.g. with blindsight and other responses to things outside awareness. Of course, it’s possible the “unconscious” perception is actually conscious, just not accessible to the higher-order conscious process (conscious awareness/attention), but there doesn’t seem to be much reason to believe it’s conscious at all. Furthermore, generating consciousness illusions below awareness seems more costly than generating them only at the level we are aware of, because most of the illusions would be filtered out of awareness and have little impact on behaviour, so there should be evolutionary pressure against it. We then have little reason to believe that capacities sometimes realized unconsciously in humans indicate consciousness in other animals.
RP’s invertebrate sentience research gave little weight to capacities that (sometimes) operate unconsciously in humans. Conscious vs unconscious perception is discussed more by Birch, 2020. He proposes the facilitation hypothesis:
and three candidate abilities: trace conditioning, rapid reversal learning and cross-modal learning. The idea would be to “find out whether the identified cluster of putatively consciousness-linked abilities is selectively switched on and off under masking in the same way it is in humans.”
Apparently some rich optical illusions can occur unconsciously while others occur consciously, though (Chen et al., 2018). So, maybe there is some conscious but inaccessible perception, although this is confusing, and I’m not sure about the relationship between these kinds of illusions and illusionism as a theory. Furthermore, I’m still skeptical of inaccessible conscious valence in particular, since valence seems pretty holistic, context-dependent and late in any animal’s processing to me. Mason and Lavery, 2022 discuss some refinements to experiments to distinguish conscious and unconscious valence.
I do concede that there could be an important line-drawing or trivial instantiation problem for what counts as having a consciousness illusion, or valence illusion, in particular.
Thanks for the detailed explanation! I haven’t read any of the papers you linked to (just most of the summaries right now), so my comments may be misguided.
My general feeling is that simplified models of other things, including sometimes models that are resistant to change, are fairly ubiquitous in the world. For example, imagine an alert on your computer that says “Warning: RAM usage is above 90%” (so that you can avoid going up to 100% of RAM, which would slow your computer to a crawl). This alert would be an extremely simple “model” of the total amount of “attention” that your computer’s memory is devoting to various things. Suppose your computer’s actual RAM usage drops below 90%, but the notification still shows. You click an “x” on the notification to close it, but then a second later, the computer erroneously pops up the notification again. You restart your computer, hoping that will solve it, but the bogus notification returns, even though you can see that your computer’s RAM usage is only 38%. Like the Müller-Lyer illusion, this buggy notification is resistant to correction.
Maybe your view is that the relevant models and things being modeled should meet various specific criteria, so that we won’t see trivial instances of them throughout information-processing systems? I’m sympathetic to that view, since I intuitively don’t care much about simplified models of things unless those things are pretty similar to what happens in animal brains. I think there will be a spectrum from highly parochial views that have lots of criteria, to highly cosmopolitan views that have few criteria and therefore will see consciousness in many more places.
Even if we define consciousness as “specific ways information is processed that would lead to inferences like the kind we make about consciousness”, there’s a question of whether that should be the only thing we care about morally. We intuitively care about the illusions that we can see using the parts of our brains that can generate high-level, verbal thoughts, because those illusions are the things visible to those parts of our brains. We don’t intuitively care about other processes (even other schematic models elsewhere in our nervous systems) that our high-level thoughts can’t see. But most people also don’t care much about infants dying of diseases in Africa most of the time for the same reason: out of sight, out of mind. It’s not clear to me how much this bias to care about what’s visible should withstand moral reflection.
If its being conscious (whatever that means exactly) wouldn’t be visible to our high-level thoughts, there’s also no reason to believe it’s not conscious. :)
The generation of a very specific type of attention schema other than the one we introspect upon using high-level thoughts might be unlikely. But the generation of simplified summaries of things for use by other parts of the nervous system seems fairly ubiquitous. For example, our face-recognition brain region might do lots of detailed processing of a face, determine that it’s Jennifer Aniston, and then send a summary message “this is Jennifer Aniston” to other parts of the brain so that they can react accordingly. Our fight-or-flight system does processing of possible threats, and when a threat is detected, it sends warning signals to other brain regions and triggers release of adrenaline, which is a very simplified “model” that’s distributed throughout the body via the blood. These simplified representations of complex things have huge impact on behavior (just like the high-level attention schema does), which is why evolution created them.
I assume you agree, and our disagreement is probably just about how many criteria a simplified model has to meet before it counts as being relevant to consciousness? For example, the message saying “this is Jennifer Aniston” is a simplified model of a face, not a simplified model of attention, so it wouldn’t lead to illusion about one’s own conscious experience? If so, that makes sense, but when looking at these things from the outside as a neuroscientist would, it seems kind of weird to me to say that a simplified model of attention that can give rise to certain consciousness-related illusions is extremely important, while a simplified model of something else that could give rise to other illusions would be completely unimportant. Is it really the consciousness illusion itself that matters, or does the organism actually care about avoiding harm and seeking rewards, and the illusion is just the thing that we latch our caring energy onto? (Sorry if this is rambling and confused, and feel no need to answer these questions. At some point we get into the apparent absurdity of why we attach value to some physical processes rather than other ones at all.)
I’m not committed to only illusions related to attention mattering or indicating consciousness. I suspect the illusion of body ownership is an illusion that indicates consciousness of some kind, like with the rubber hand illusion, or, in rodents, the rubber tail illusion. I can imagine illusions related to various components of experiences (e.g. redness, sound, each sense), and the ones that should matter terminally to us would be the ones related to valence and desires/preferences, basically illusions that things actually matter to the system with those illusions.
I suspect that recognizing faces doesn’t require any illusion that would indicate consciousness. Still, I’m not sure what counts as an illusion, and I could imagine it being the case that there are very simple illusions everywhere.
I think illusionism is the only theory (or set of theories) that’s on the right track to actually (dis)solving the hard problem, by explaining why we have the beliefs we do about consciousness, and I’m pessimistic about all other approaches.
Thanks. :)
I plausibly agree with your last paragraph, but I think illusionism as a way to (dis)solve the hard problem can be consistent with lots of different moral views about which brain processes we consider sentient. Some people take the approach I think you’re proposing, in which we have stricter criteria regarding what it takes for a mind to be sentient than we might have had before learning about illusionism. Others might feel that illusionism shows that the distinction between “conscious” and “unconscious” is less fundamental than we assumed and that therefore more things should count as sentient than we previously thought. (Susan Blackmore is one illusionist who concludes from illusionism that there’s less of a distinction between conscious and unconscious than we naively think, although I don’t know how this affects her moral circle.)
It’s not clear to me whether an illusion that “this rubber hand is part of my body” is more relevant to consciousness than a judgment that “this face is Jennifer Aniston”. I guess we’d have to propose detailed criteria for which judgments are relevant to consciousness and have better understandings of what these judgments look like in the brain.
I agree that such illusions seem important. :) But it’s plausible to me that it’s also at least somewhat important if something matters to the system, even if there’s no high-level illusion saying so. For example, a nematode clearly cares about avoiding bodily damage, even if its nervous system doesn’t contain any nontrivial representation that “I care about avoiding pain”. I think adding that higher-level representation increases the sentience of the brain, but it seems weird to say that without the higher-level representation, the brain doesn’t matter at all. I guess without that higher-level representation, it’s harder to imagine ourselves in the nematode’s place, because whenever we think about the badness of pain, we’re doing so using that higher level.
I’m not sure where to draw lines, but illusions of “this is bad!” (evaluative) or “get this to stop!” (imperative) could be enough, rather than something like “I care about avoiding pain”, and I doubt nematodes have those illusions, too. It’s not clear responses to noxious stimuli, including learning or being put into a pessimistic or fearful-like state, actually indicate illusions of evaluations or imperatives. But it’s also not clear what would.
You could imagine a switch between hardcoded exploratory and defensive modes of NPCs or simple non-flexible robots or systems, triggered by some simple event. I don’t think such modes would indicate moral value on their own. Some neurotransmitters may have a similar effect in simple animals, but on a continuum between exploratory and defensive behaviours, and not centralized in one switch but distributed across multiple switches, by affecting the responsiveness of neurons. Even a representation of positive or negative value, like those used in RL policy updates (e.g. subtracting the average unshifted reward from the current reward), doesn’t necessarily indicate any illusion of valence. Stitching the modes and rewards together in one system doesn’t change this.
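The value representation mentioned above (subtracting a running average reward from the current reward) can be written in a few lines. This is a generic average-reward baseline sketch, not anything from the comment itself; the function name and step size are mine:

```python
def update(avg_reward: float, reward: float, step_size: float = 0.1):
    """Return an advantage-like value signal and the updated running average."""
    advantage = reward - avg_reward      # positive/negative "value" representation
    avg_reward += step_size * advantage  # running-average baseline update
    return advantage, avg_reward

avg = 0.0
for r in [1.0, 1.0, -1.0]:
    adv, avg = update(avg, r)
    print(f"reward={r:+.1f} advantage={adv:+.2f} baseline={avg:+.3f}")
```

The point of the sketch is that nothing here looks like an illusion of valence: it is just a subtraction and a moving average, the kind of signal a thermostat-like system could compute.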
I think a simple reward/punishment signal can be an extremely basic neural representation that “this is good/bad”, and activation of escape muscles can be an extremely basic representation of an imperative to avoid something. I agree that these things seem almost completely unimportant in the simplest systems (I think nematodes aren’t the simplest systems), but I also don’t see any sharp dividing lines between the simplest systems and ourselves, just degrees of complexity and extra machinery. It’s like the difference between a :-| emoticon and the Mona Lisa. The Mona Lisa has lots of extra detail and refinement, but there’s a continuum of possible drawings in between them and no specific point where something qualitatively different occurs.
That’s my current best guess of how to think about sentience relative to my moral intuitions. If there turns out to be a major conceptual breakthrough in neuroscience that points to some processing that’s qualitatively different in complex brains relative to nematodes or NPCs, I might shift my view—although I find it hard to not extend a tiny bit of empathy toward the simpler systems anyway, because they do have preferences and basic neural representations. If we were to discover that consciousness is a special substance/etc that only exists at all in certain minds, then it’s easier for me to understand saying that nematodes or NPCs have literally zero amounts of it.
I’ll lay out how I’m thinking about it now after looking more into this and illusionism over the past few days.
I would consider three groups of moral interpretations of illusionism, which can be further divided:
1. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of introspective (monitoring and/or modelling) and belief-forming process in the right way to generate a belief that something matters[1].
2. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of belief-forming process (with no further introspective processes) in the right way to generate a belief that something matters[1].
3. A system/process is conscious in a morally relevant way if and only if it generates a belief that something matters[1].
I’m now tentatively most sympathetic to something like 3, although I was previously endorsing something like 2 in this thread. 1 and 2 seem plausibly trivial, so that anything could be made to matter in some way if you put enough work into the introspective and/or belief-forming processes, although maybe the actual responses of the original system/process can help break symmetries, or you could have enough restrictions on the connected introspective and/or belief-forming processes. Frankish explicitly endorses something like 1. I think Graziano endorses something like 2 or 3, and I think Humphrey endorses something like 3. Their views of course differ further in their details besides just 1, 2 and 3, especially on what counts as the right kind of introspection or belief.
There may be accounts of beliefs according to which “a reward/punishment signal” (and/or its effects), “activation of escape muscles” or even the responses of electrons to electric fields count as beliefs that something matters. However, I suspect those and what nematodes do aren’t beliefs (of mattering) under some accounts of beliefs I’m pretty sympathetic to. For example, maybe responses need to be modelled or represented by other processes to generate beliefs of mattering, but nematodes don’t model or represent their own responses.[2] Or, maybe even reflection on or the manipulation of some model or representation is required. So, I can imagine nematodes not mattering at all under some moral/normative views (combined with empirical views that nematodes don’t meet the given moral bar set by a moral view), but mattering on others.
Some other but less important details in the rest of the comment.
Furthermore, even on an account of belief, to what degree something is a belief at all[3] could come in more than 2 degrees, so nematodes may have beliefs but to a lesser degree than more cognitively sophisticated animals, and I think that we should deal with that like moral uncertainty, too.
For moral uncertainty, you could use a moral parliament or diversification approach (like this) or whatever, as you’re aware. How I might tentatively deal with non-binary degrees to which something is a belief (and vagueness generally) is to have a probability distribution over binary precisified views with different sharp cutoffs for what counts as a belief, and apply some diversification approach to moral uncertainty over it.[4] Somewhat more explicitly, suppose I think, on some vague account of belief, the degree to which nematodes have beliefs (of things mattering) is 0.1, on a scale from 0 to 1, holding constant some empirical beliefs about what nematodes can do physically. On that account of belief and those empirical views, with a uniform distribution for the cutoff over different precisified versions, I’d treat nematodes as having beliefs (of things mattering) with probability 10% and as if the account of belief is binary. This 10% is a matter of moral uncertainty that I wouldn’t take expected values over, but instead diversify across.
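The uniform-cutoff reasoning above can be checked numerically. This is a toy sketch: the function name, sample count and seed are mine, and only the degree of 0.1 and the uniform distribution over sharp cutoffs come from the paragraph above:

```python
import random

def prob_counts_as_belief(degree: float, n_samples: int = 200_000, seed: int = 0) -> float:
    """Share of precisified views (sharp cutoffs drawn uniformly from [0, 1])
    under which a vague degree of belief exceeds the cutoff."""
    rng = random.Random(seed)
    return sum(degree > rng.random() for _ in range(n_samples)) / n_samples

# With a uniform cutoff, this probability equals the degree itself:
# a degree of 0.1 means ~10% of precisified views count it as a belief.
print(prob_counts_as_belief(0.1))
```

With a non-uniform distribution over cutoffs (e.g. concentrated near 1 for strict accounts of belief), the resulting probability would no longer equal the degree, which is one way to vary this as a sensitivity parameter.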
Nematodes may turn out to be dominated by other considerations in practice on those views, maybe by suffering in fundamental physics, in random particle movements or in the far future. I might give relatively low weight to the views where nematodes matter but random particle movements don’t, because I don’t care much about counterfactual robustness. Maybe I’d give >90% weight to not caring about it at all, pretty much statistically independently of the rest of the normative views in my distributions over normative views. However, I could have been overconfident in the inference that random particle movements will generate beliefs of mattering, given a cutoff that includes nematodes and no requirement of counterfactual robustness.
and/or perhaps general beliefs about consciousness and its qualities, like reddishness, classic qualia, the Cartesian theatre, etc.
On the other hand, maybe a response is already a model or representation of itself, and that counts, but this seems like a degenerate account of beliefs; a belief is generally not about itself, unless it explicitly self-references, which mere responses don't seem to do. Plus, self-referencing propositions can lead to contradictions, so they can be problematic in general, and we might want to be careful about them. Then again, maybe responses can be chained trivially, e.g. neural activity is the response and muscle activation is the "belief" about the neural activity. Or, generally, one cell can represent a cell it's connected to. There's still a question of whether it's representing a response that would indicate that something matters, e.g. an aversive response.
Not to what degree something matters according to that belief, i.e. strength or intensity, or to what degree it is believed, i.e. degree of confidence, or the number of beliefs or times that belief is generated (simultaneously or otherwise).
I’d guess there are other ways to deal with nonbinary truth degrees, though.
Ah, welfare range estimates may already be supposed to capture the probability that an animal can experience intense suffering, like excruciating pain.
I included nematodes because they are still animals, and I think seriously attempting to estimate (as opposed to guessing, as I did) their moral weight would be quite valuable. From my results, the scale of welfare of an animal group tends to increase as the moral weight decreases (assuming the same intensity of the mean experience as a fraction of that of the worst possible experience). If the moral weight of nematodes turned out to be so small that the scale of their welfare was much smaller than that of wild arthropods, we would have some evidence, albeit very weak evidence, that the scale of the welfare of populations of beings less sophisticated than nematodes[1] would also be smaller.
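The hypothetical described can be illustrated with a toy calculation. All numbers below are made up for illustration (they are not the post's actual estimates): welfare scale is taken to be population times moral weight times mean experience intensity, with intensity held constant across groups as assumed in the comment.

```python
# Toy illustration (hypothetical numbers, not the post's estimates):
# scale of welfare ~ population * moral weight * mean experience intensity
# (intensity as a fraction of the worst possible experience, held constant).
groups = {
    # name: (population, moral weight relative to humans) -- assumed values
    "wild arthropods": (1e19, 1e-4),
    "nematodes": (1e21, 1e-7),
}
intensity = 0.01  # assumed mean intensity fraction, the same for both groups

scales = {name: pop * weight * intensity for name, (pop, weight) in groups.items()}
for name, scale in scales.items():
    print(f"{name}: {scale:.2e}")
# In this hypothetical, the nematodes' 100x larger population does not offset
# their 1000x smaller moral weight, so their welfare scale comes out smaller.
```

If the true numbers looked like this, the smaller nematode scale would be the (weak) evidence described about even less sophisticated beings.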
I suppose there is very little data relevant to assessing the moral weight of nematodes. However, it still seems worthwhile for e.g. Rethink Priorities to do a very shallow analysis.
From Table S1 of Bar-On 2017, bacteria (10^30), fungi (10^27), archaea (10^29), protists (10^27), and viruses (10^31).
Thanks for the comments, Michael!
I definitely agree there are lots of potential improvements. In general, Rethink Priorities' Moral Weight Project made a great contribution towards quantifying the moral weight of different species, but it is worth keeping in mind that there could be significant variation in the intensity of the mean experience (relative to the moral weight) across species and farming environments too.