Toby, I really appreciate your detailed and thoughtful feedback.
As you point out, we don’t yet have a way to assign a mathematical equivalence among intensity categories, such as saying one Pain intensity is 10x or 1000x as painful as another. But I believe minds somehow navigate these comparisons, most probably heuristically and roughly, as they decide whether an expected (or actual) level of Pain outweighs that of another source of Pain, guiding their behavior accordingly (as I gather, this reflects the von Neumann-Morgenstern utility theorem’s principle of deriving preferences from risk trade-offs—thanks for bringing it up). That is indeed the whole biological point of having intensities of affective states: to better steer behaviors in the Benthamian direction of minimizing Pain and maximizing Pleasure (which should, overall, ultimately maximize the organism’s net reproduction).
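To make the risk trade-off idea concrete, here is a minimal sketch of my own (purely an illustration, not a method from our work) of how an indifference point in a lottery would pin down a within-individual intensity ratio under vNM assumptions:

```python
# Illustrative only: deriving a relative Pain intensity from a
# von Neumann-Morgenstern-style risk trade-off. If a mind is indifferent
# between Pain B for certain and a lottery giving Pain A with probability p
# (and nothing otherwise), then u(B) = p * u(A), so A is 1/p times as bad.

def intensity_ratio(p_indifference: float) -> float:
    """Return how many times worse Pain A is than Pain B, given the
    indifference probability p at which the lottery is accepted."""
    if not 0.0 < p_indifference <= 1.0:
        raise ValueError("indifference probability must lie in (0, 1]")
    return 1.0 / p_indifference

# Indifference at p = 0.1 would imply A is 10x as painful as B.
ratio = intensity_ratio(0.1)
```

Note that this only fixes ratios within one individual’s utility function, up to an undetermined scale factor.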
Moving to your example, I believe those equivalences among Pain intensities (and their trade-offs with Pleasure intensities) will one day be described (we discuss some possibilities in this other post on potential equivalence methods), but for now, the ‘13x’ figure in your example isn’t something we think we can estimate yet. It is possible, though, for practical purposes, to compare the time spent at each level of affect—which is the assumption behind the Cumulative Pain and Cumulative Pleasure metrics—and this has proven useful and insightful for comparing welfare across conditions.
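As a rough sketch of what the time-based comparison looks like (the category names and durations below are invented for illustration, not our actual data or implementation):

```python
from collections import defaultdict

# Illustrative sketch of the Cumulative Pain idea: record the time spent
# at each intensity level and compare conditions level by level, without
# assuming any cross-level weighting.

def cumulative_pain(episodes):
    """Sum the time spent at each intensity level from (level, hours) episodes."""
    totals = defaultdict(float)
    for level, hours in episodes:
        totals[level] += hours
    return dict(totals)

# Hypothetical condition: two 'hurtful' episodes and one 'disabling' episode.
hen_cage = cumulative_pain([("hurtful", 100.0), ("hurtful", 20.0),
                            ("disabling", 10.0)])
```

Conditions are then compared per intensity level rather than collapsed into a single number, which is what lets the metric avoid the unresolved equivalence problem.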
You also raise a valid concern about our assumption that higher intensity Pain corresponds to greater ‘signal strength’ and requires ‘additional processing units.’ Note that this is only a working hypothesis. But the idea finds some support in neurobiological studies of vertebrates, where increasing Pain intensity has been found to correlate with greater activation of nociceptive pathways and broader neural engagement (in fact also demanding more energy, adding a possible physical component on top of the possible biological one). We extrapolate from this to suggest that, in general, more intense Pain might require more neural resources—hence the mention of ‘processing units.’ Let me share, for what it’s worth, my personal belief on this point: I don’t think there are significant differences between the levels of Pain that a primate and a mouse can feel (despite their differences in brain size, and therefore in ‘processing units’), because the differences lie in cognitive brain systems, not in the affective systems that process Pain (the affective-cognitive brain divide; see Panksepp et al., 2017, for an engaging debate on this topic). But I do believe that primitive sentient organisms (such as annelids), despite being able to experience affective states, cannot experience Pain (and Pleasure) at the levels we (or a mouse or a bird) can. Although the great biological wonder of the sentience threshold has been crossed in these organisms, the processing power of their affective part (or parts) is still rudimentary. In fact, my personal bet is that primitive sentient organisms are LiLr (as per our classification in this piece), and that higher ranges and Resolution evolved as processing power kept increasing during the first millions of years after the onset of sentience (let me also share my review of a book that explores the onset of sentience).
And yet, you’re absolutely right that this is not proven: an organism with a simpler nervous system might represent intense Pain with less processing energy, perhaps through a different mechanism, such as a binary ‘on/off’ response rather than graded signals (this would be the HiLr scenario, where primitive sentient organisms might experience intense Pain without distinguishing many states).
Regarding your concerns about the ethical relevance of defining Pain units in terms of signal strength or processing units, note that we’re not proposing that neural metrics (e.g., energy use) should define Pain intensity (not least because biological mechanisms—let alone neurological ones—rarely work in linear ways). Rather, we’re just exploring whether such physiological correlates can shed light on how affective states, including Pain, evolved in primitive sentient organisms.
I really appreciate you taking the time to write such a detailed reply to my comment, thank you! And for sharing additional reading material on these questions. What you say makes a lot of sense. I think this research is really important and fascinating work, and I’m excited to hear about what progress you can make.
I understand you might not have time to engage in further back-and-forth on this, but I just wanted to elaborate a bit on the one part of my comment that I think is maybe not directly answered by your reply. This is the issue around how we can compare the utility functions of different individuals, even in principle.
Suppose we know everything there is to know about how the brain of a fly works. We can figure out what they are thinking, and how these thoughts impact their behaviour. Maybe you are right that one day we might be able to infer from this that ‘fly experience A’ is roughly 10x more painful than ‘fly experience B’ (based purely on how it influences their choices among trade-offs—without relying on any assumptions around signal strength). But once we have determined the utility function for each separate individual (up to an undetermined constant factor), this still leaves open the problem of how to compare utility functions between different individuals, e.g. how to compare ‘fly experience A’ to ‘human experience C’. The vN-M approach does not tell you how to do that.
With humans, you can fairly easily get around this limitation in practice. You just need to find one reference experience that you expect to be similar for everyone (which could be as simple as something like stubbing your toe) and then with that one bridge between individual utility functions established, a standard ‘unit’ of utility, all other comparisons follow. I can even imagine doing something like this to compare the welfare of humans with other non-human mammals or birds.
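The bridging idea can be sketched like this (all names and utility numbers are made up purely for illustration):

```python
# Toy sketch of the 'reference experience' bridge: rescale each individual's
# vNM utilities so that a shared reference pain (here, a stubbed toe) counts
# as exactly one unit, making the rescaled values comparable across people.

def normalize(utilities: dict, reference: str) -> dict:
    """Rescale utilities so the reference experience equals -1 (one pain unit)."""
    scale = abs(utilities[reference])
    return {exp: u / scale for exp, u in utilities.items()}

# Two individuals whose raw vNM utilities are on arbitrary, unrelated scales.
alice = {"stubbed_toe": -2.0, "migraine": -30.0}
bob = {"stubbed_toe": -5.0, "migraine": -60.0}

a = normalize(alice, "stubbed_toe")  # migraine becomes -15 reference units
b = normalize(bob, "stubbed_toe")    # migraine becomes -12 reference units
```

The whole scheme rests on the assumption that the reference experience really is equally bad for everyone, which is exactly what becomes hard to defend across distant species.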
But by the time we consider insects, I find it hard to imagine how we would go about picking the ‘reference’ experience that would allow us to make the inter-species welfare comparisons we want to make. I’m still not convinced that the methods outlined here will ultimately enable us to do that. It’s not just that the ‘signal strength’ assumption is ‘unproven’; it’s that I’m struggling to wrap my head around what a proof of that assumption would even look like. The correlation between pain intensity and signal strength in vertebrates is presumably based on within-individual comparisons, not between-individual comparisons, and assuming that a correlation in the first type of comparison implies a correlation in the second still seems like a big extrapolation to me.
Hi Toby, thank you for your kind words. I might take some time to answer, but I’m happy to continue this back-and-forth (and please feel free to challenge or push on any point you disagree with).
I believe the problem we face is practical in nature: we currently lack direct access to the affective states of animals, and our indirect methods become increasingly unreliable as we move further away from humans on the evolutionary tree. For instance, inferring the affective capacity of a reptile is challenging, let alone that of an arthropod or annelid. But when you mention the caveat “even in principle,” I feel much more optimistic. I do believe that, in principle, how affect varies can be projected onto a universal scale—so universal that it could even compare affective experiences across sentient beings on other planets or in digital minds that have developed hedonic capacity.
Despite the variety of qualitative aspects (e.g., whether Pain stems from psychological or physical origins, or signals an unfulfilled need, a threat, damaged tissue, or a desire), the goodness or badness of a feeling—its ‘utility’—should be expressible along a single dimension of real numbers, with positive values for Pleasure, negative values for Pain, and zero as a neutral point. Researchers like Michael Mendl and Elizabeth Paul have explored similar ideas using dimensional models of affect, suggesting that valence and arousal might offer a way to compare experiences across species, which supports the idea of a universal scale—though they also note the empirical gaps we still face.
So, I see this challenge as a technical and scientific issue, not an epistemological one. In other words, I’m optimistic that one day we’ll be able to say that a Pain value of, let’s say, −2.456, represents the same amount of suffering for a human, a fish, or a fly—provided they have the neurological capacity to experience this range of intensities. I recognize this is a bold claim, and given the current lack of empirical data, it’s highly speculative—perhaps even philosophical. But this is my provisional opinion, open to change, of course! :)