"It's clear that at least some insects, such as fruit flies and bees, have valenced states. Entomologists test for the presence of these states using cognitive bias tests, which involve training animals to associate one stimulus (like the color red) with a reward and another stimulus (like the color blue) with something aversive. Then, the animals are presented with an ambiguous stimulus (like the color purple). Relative to baseline, bees rewarded before encountering the ambiguous stimulus are more likely to approach it, whereas bees given something aversive are more wary."
How is it 'clear' from this that insects have 'sentience' or 'valenced states'?
Several similarly long stretches are made here:
Maggots and fruit flies reacting to or avoiding painful stimuli = ?evidence for sentience
Ants using tools = ?evidence for sentience
Bees showing 'play behaviour' (rolling wooden balls around for some reason) = ?evidence for sentience
These results attest that reward/punishment pathways exist. Do they tell us anything else?
What would be evidence for sentience in your view?
You could ask the same question about worms, mites or nematodes.
I think the reductio ad absurdum (that if any of these things matter even slightly, then all human moral concerns become completely irrelevant) means we need a high bar for believing this.
I think it's worthwhile distinguishing between the demandingness objection as an argument against insect/worm/mite/nematodes' interests mattering, and as an argument against them being sentient. I think you can make the first case but not the second.
There's a distinction in theory, but in practice the vague definition of 'sentience' is so tied to moral relevance that I don't think you can argue for one without also arguing for the other.
The question 'does a worm feel pain' isn't really asking 'does the worm have nociceptors and some degree of integration of those nociceptive signals that causes learning and behavioural changes'. It's really asking, at the core, 'does a worm "feel pain" in a way that's morally important?'
Hi Henry,
Consider these 2 scenarios:
1. A human has their hand cut, and reacts vigorously.
2. A human has their hand cut, and has no reaction at all.
I do not know for sure whether pain was experienced in the 1st scenario. I can only feel my own pain. However, the 1st scenario is much more likely than the 2nd under the hypothesis that pain was experienced than under the hypothesis that no pain was experienced. So, from Bayes' rule[1], I should strongly update towards thinking that pain was experienced, and therefore towards the human being sentient.
More broadly, one should update towards believing that a being is sentient if it shares properties that are indicators of sentience in humans, such as reacting to damage to body parts.
P(pain | vigorous reaction)/P(no pain | vigorous reaction) = [P(vigorous reaction | pain)/P(vigorous reaction | no pain)] * [P(pain)/P(no pain)].
I think this is a misapplication of Bayes' rule.
What matters is not that the 1st scenario is much more likely than the 2nd under the hypothesis that pain is experienced (it clearly is). The relevant question is whether the 1st scenario is much more likely under the hypothesis that pain is experienced than under the hypothesis that pain is not experienced (its relation to the second scenario is irrelevant, a red herring). And whether this is actually the case is much less clear.
This is what your footnote equation says too, so I'm not disagreeing with that, but I think the way you presented the argument in the text hides this, and might lead someone to misunderstand what it is they are being asked to judge as 'much more likely'.
You can make an evolutionary argument for why we would expect an animal to react 'vigorously' to sustaining damage, and it is not clear why this evolutionary explanation requires the pain to be 'experienced'. So someone could make an argument that the likelihood of scenario 1 is high under both hypotheses, in which case it should only cause a small change in your priors.
I thought the post was really interesting, thank you for sharing it! It has updated me towards thinking that there's a higher chance insects might be sentient. But I think things are still a lot more complicated than suggested by this reply.
Would be interested to hear from those who've disagreed with this, since I think I'm just pointing out a mathematical mistake? Interested to be corrected if I've got something wrong.
Perhaps it would help to give some example numbers. Suppose someone assigns, for an insect:
P(react vigorously given pain experienced) = 1
P(react vigorously given no pain experienced) = 0.5
(These numbers seem defensible to me)
This gives you a Bayes factor of 2, when updating your probability that pain is experienced after seeing evidence that insects react vigorously to some negative stimulus. This is not a 'strong' update.
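To make that concrete, here is a minimal sketch in Python of the odds-form update using the example numbers above (the helper name `update_odds` and the example priors are my own, not from the thread):

```python
def update_odds(prior_p, p_react_given_pain, p_react_given_no_pain):
    """Odds-form Bayes update: posterior odds = Bayes factor * prior odds."""
    bayes_factor = p_react_given_pain / p_react_given_no_pain  # = 2 with the numbers above
    prior_odds = prior_p / (1 - prior_p)
    posterior_odds = bayes_factor * prior_odds
    # Convert posterior odds back to a probability.
    return posterior_odds / (1 + posterior_odds)

# P(react vigorously | pain) = 1, P(react vigorously | no pain) = 0.5
for prior in (0.1, 0.3, 0.5):
    posterior = update_odds(prior, 1.0, 0.5)
    print(f"prior {prior:.1f} -> posterior {posterior:.3f}")
```

With a Bayes factor of only 2, a 10% prior moves to about 18% and a 50% prior to about 67%: a real shift, but not the kind of jump 'strongly update' suggests.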
That's a verbose way of saying: 'looks like it feels pain, probably feels pain'. Invoking Bayes' Theorem gives the argument a false depth.
Being unnecessarily verbose comes across very negatively in EA communication; it's important to avoid it.