“However, perhaps my largest surprise wasn’t an update toward or against a particular type of animal; rather, it was based on the extent of conditioned learning behavior that is more or less exhibited by all taxa we considered, including single-celled organisms and animal bodies detached from brain communication, such as the lower body of a mouse with a severed spine. While one could take this as weak evidence of widespread sentience, it updated me toward thinking many of these behaviors aren’t very impressive, and they were thus largely disregarded in contemplating the positive case for sentience.”
Marcus, is there any chance you could elaborate on why you leaned one way on this vs the other? I don’t have a clear sense of what I should take away from that, so I’d be curious what your reasoning was.
...
I’d also be interested in all of your thoughts on what exactly a percentage probability of valenced experience (or whatever the morally relevant mind-stuff should be called) is—obviously, they aren’t that close to the fact of whether or not these organisms have valenced experience (which, unless the world is very strange, should be 1 or 0 for all things).
It seems more like they are statements about how you’d make a bet, or something like “confidence in the approach * results from the approach”, or something else about the approach and prioritization. I’m curious how you were defining these probabilities to yourselves, and how the definitions would affect their usefulness in cost-effectiveness analyses. E.g., if we were doing a cost-effectiveness estimate and treating these as confidence * results, I might weight my confidence in this method higher than my intuitions, but still include other approaches like intuition in my estimate, because that theoretically gives me a more accurate model of my current knowledge. But with a different definition I might just use these numbers directly.
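To make the "confidence in the approach * results from the approach" reading concrete, here is a minimal sketch in Python. Everything in it (the approach names, weights, credences, and welfare numbers) is a hypothetical placeholder of mine, not anything from the post:

```python
# Hypothetical sketch of the "confidence in the approach * results from the
# approach" reading: weight the credence each approach outputs by how much
# trust you place in that approach, then feed the combined number into a toy
# cost-effectiveness step. All numbers are invented for illustration.

approaches = {
    # approach name: (weight on the approach, credence it outputs)
    "feature-based method": (0.7, 0.30),
    "raw intuition":        (0.3, 0.10),
}

# Weighted combination used as the working probability of sentience
# (weights are assumed to sum to 1).
p_sentient = sum(weight * credence for weight, credence in approaches.values())

# Toy cost-effectiveness step: expected welfare gain per dollar, supposing the
# intervention only helps if the organisms are in fact sentient.
welfare_gain_if_sentient_per_dollar = 2.0  # hypothetical
expected_gain_per_dollar = p_sentient * welfare_gain_if_sentient_per_dollar

print(p_sentient, expected_gain_per_dollar)  # ~0.24, ~0.48
```

Under a different definition of the published numbers (e.g. as all-things-considered credences that already fold in every approach), one would skip the weighting step and plug them in directly, which is the distinction I'm trying to get at.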
I’d also be interested in all of your thoughts on what exactly a percentage probability of valenced experience (or whatever the morally relevant mind-stuff should be called) is—obviously, they aren’t that close to the true probabilities that these organisms have valenced experience (which, unless the world is very strange, should be 1 or 0 for all things).
I may be an odd person to answer this question, as I chose not to offer probability estimates, but I’ll respond anyway.
I agree that sentience, at least as we’ve defined it, is an all-or-nothing phenomenon (which is a common view in philosophy but not as common in neuroscience). As I understand them, the probabilities we discuss are credences, sometimes called subjective probabilities or degrees of belief, in the proposition “x is sentient.” Credence 1 (or 100%) represents certainty that the proposition is true and credence 0 (or 0%) represents certainty that the proposition is false. Since there are very few propositions one should be absolutely certain about, the appropriate credences will fall between 0 and 1. The betting analysis of credence is common, though there are some well-known problems.
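As a rough gloss on the betting analysis (my own illustration, not part of the report): having credence $c$ in a proposition $S$ amounts, roughly, to regarding $c$ dollars as the fair price for a ticket that pays 1 dollar if $S$ is true and nothing otherwise, since the ticket's expected payoff is

$$
c \cdot 1 + (1 - c) \cdot 0 = c.
$$

(Standard worries about this analysis include risk aversion and the diminishing value of money, which is part of why it is only an approximate gloss.)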
Thinking of these probabilities as credences is neutral on the question of the best way to develop and refine these credences. Someone might base her/his credences entirely on intuition; another person might completely disregard her/his intuitions. This post details what we take to be the best available methodology to investigate invertebrate sentience.
I agree that sentience, at least as we’ve defined it, is an all-or-nothing phenomenon (which is a common view in philosophy but not as common in neuroscience).
What do you think of the argument that there may be cases where it’s unclear whether the term is appropriate or not? So there would be a grey area where there is a “sort of” sentience. I’ve talked to some people who think that this grey area might be taxonomically large, including most invertebrates.
Hey Max, good question. I think we need to clearly separate our metaphysics from our epistemology in this area. If an entity is sentient if and only if there is something it is like to be that entity, then it’s hard to see how sentience could come in degrees. (There are closely related phenomena that might come in degrees—like the intensity of experience or the grain of sensory input—but those phenomena are distinct from sentience.) There are certainly going to be cases where it’s difficult to know if an entity is sentient, but our uncertainty doesn’t imply that the entity is only partially sentient. I think it’s plausible that this area of epistemic indeterminacy could remain quite large even with all the empirical facts in hand.
However, there are some theories of mind on which it looks like there could be cases of metaphysical indeterminacy. If a certain type of reductive physicalism is true, and sentience doesn’t reduce to any one feature of the brain but is instead a cluster concept, and the features that constitute the concept aren’t coextensive, then there could be cases in which it’s indeterminate whether an entity is sentient, even with all the empirical and philosophical facts in hand. (Technically, the fact that it can be metaphysically indeterminate that an entity possesses a property doesn’t entail that the property comes in degrees, but it’s a natural extension.)
Thanks Jason!
That makes sense—I understood that you all were expressing credences. I think my comment wasn’t written very clearly. I’m interested in what process you all took to reach these credences, and what you think the appropriate use of them would be. Would these numbers be the numbers you’d use in a cost-effectiveness analysis? Or a starting point for deciding how to weigh further evidence? I know credences are a bit fuzzy as a general concept, but I guess I’d love thoughts on the appropriate use of these numbers (other than the answer that we shouldn’t use them or should only use them very carefully).
I meant to highlight a case where I downgraded a belief, in a scenario in which there are multiple ways to update on a piece of evidence.
To take an extreme example for purposes of clarification, suppose you begin with a theory of sentience (or a set of theories) which suggests behavior X is possibly indicative of sentience. Then, you discover behavior X is possessed by entities you believe are not sentient, say, rocks. There are multiple options here as to how to reconcile these beliefs. You could update towards thinking rocks are sentient, or you could downgrade your belief that behavior X is possibly indicative of sentience.* In the instance I outlined, I took the latter fork.
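Here is a minimal numerical sketch of that latter fork, with all numbers invented purely for illustration (my gloss, not the report's method). If you antecedently think rocks are not sentient, then seeing behavior X in a rock is better explained by X being generic, non-mental physics than by X tracking sentience, so Bayes' rule lowers your credence that X indicates sentience rather than raising your credence that rocks are sentient:

```python
# Hypothetical Bayesian sketch of "downgrading the indicator" when behavior X
# shows up in an entity you are confident is not sentient (e.g. a rock).
# All numbers are invented for illustration.

prior_H = 0.6           # prior credence in H: "behavior X is indicative of sentience"
p_E_given_H = 0.05      # chance a presumably non-sentient rock would show X if H were true
p_E_given_not_H = 0.40  # chance a rock would show X if X is just generic, non-mental behavior

# Bayes' rule: P(H | a rock shows X)
p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
posterior_H = (p_E_given_H * prior_H) / p_E

print(round(posterior_H, 3))  # ~0.158: credence that X indicates sentience drops
```

The other fork would amount to raising the likelihood you assign to rocks showing X because they are sentient; which fork is right depends on how firmly each of the starting beliefs is held.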
As to the details of what I learned, the vast bulk of it is in the table itself, in the notes for various learning attributes across taxa. The specific examples I mentioned, along with similar learning behaviors being possible in certain plants and protists, are what made me update downward on the importance of these learning behaviors as indicators of sentience. For example, it seems classical conditioning, sensitization, and habituation are possible in protists and/or plants.
*Of course, these are not strictly the only options in this type of scenario. It could be, for example, that behavior X is a necessary precondition of behavior Y which you strongly (perhaps independently but perhaps not) think is indicative of sentience. So, you might think, the absence of behavior X would really be evidence against sentience, while its presence alone in a creature might not be relevant to determining sentience.
So, you might think, the absence of behavior X would really be evidence against sentience, while its presence alone in a creature might not be relevant to determining sentience.
I might be wrong about this or might be misunderstanding you, but I believe that, in any case where the absence of X is evidence against Y, the presence of X has to be evidence for Y. (Equivalently, whenever the presence of X is evidence for Y, the absence of X has to be evidence against Y.)
This does go against the common statement that “Absence of evidence is not evidence of absence.” But we can understand that statement as having a very large kernel of truth, in that it is often the case that absence of evidence is only extremely weak evidence of absence. It depends on how likely it would be that we’d see the evidence if the hypothesis were true.
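A minimal way to see why the symmetry must hold (this is just standard probability, not anything specific to the post): by the law of total probability,

$$
P(Y) = P(Y \mid X)\,P(X) + P(Y \mid \lnot X)\,P(\lnot X),
$$

so, assuming $0 < P(X) < 1$, $P(Y)$ is a weighted average of $P(Y \mid X)$ and $P(Y \mid \lnot X)$. If learning not-X lowers the probability of Y, i.e. $P(Y \mid \lnot X) < P(Y)$, the average can only come out to $P(Y)$ if $P(Y \mid X) > P(Y)$, i.e. learning X raises it. How strong each direction of evidence is then depends on the likelihoods, which is the "how likely we'd be to see the evidence" point above.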
For an extreme example, let’s say that an entity not being made of molecules would count as very strong evidence against that entity being sentient. But we also expect a huge number of other entities to be made of molecules without being sentient, and thus the fact that a given entity is made of molecules is extraordinarily weak evidence—arguably negligible for many purposes—that the entity is sentient. But it’s still some evidence. If we were trying to bet on whether entity A (made of molecules) or entity B (may or may not be made of molecules; might be just a single atom or quark or whatever) is more likely to be sentient, we have reason to go with entity A.
This seems to sort of mirror the possibility you describe (though here we’re not talking about behaviours), because being made of molecules is a necessary precondition for a huge number of what we’d take to be “indicators of sentience”, but by itself is far from enough. Which does mean the presence of X is extremely weak evidence of sentience, but it’s still some evidence, relative to a state in which we don’t know whether X is true or not.
(I’m aware this is a bit of a tangent, and one that’s coming fairly late. The post as a whole was very interesting, by the way—thanks to everyone who contributed to it.)