I’d also be interested in all of your thoughts on what exactly a percentage probability of valenced experience (or whatever the morally relevant mind-stuff should be called) amounts to. Obviously, these aren’t close to the true probabilities that these organisms have valenced experience, which, unless the world is very strange, should be 1 or 0 for all things.
I may be an odd person to answer this question, as I chose not to offer probability estimates, but I’ll respond anyway.
I agree that sentience, at least as we’ve defined it, is an all-or-nothing phenomenon (which is a common view in philosophy but not as common in neuroscience). As I understand them, the probabilities we discuss are credences, sometimes called subjective probabilities or degrees of belief, in the proposition “x is sentient.” Credence 1 (or 100%) represents certainty that the proposition is true, and credence 0 (or 0%) represents certainty that the proposition is false. Since there are very few propositions one should be absolutely certain about, the appropriate credences will fall strictly between 0 and 1. The betting analysis of credence is common, though it faces some well-known problems.
Thinking of these probabilities as credences is neutral on the question of the best way to develop and refine these credences. Someone might base her/his credences entirely on intuition; another person might completely disregard her/his intuitions. This post details what we take to be the best available methodology to investigate invertebrate sentience.
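For concreteness, here is a minimal sketch of one natural use of a credence: as a weight in an expected-value calculation. This is purely illustrative (the function name and the numbers are hypothetical, not estimates of ours), but it shows the sense in which a credence of 0.5 treats a bet on the proposition at even odds as fair.

```python
# Illustrative only: weighting welfare at stake by a credence in sentience.
# A credence is a degree of belief in "x is sentient", between 0 and 1.

def expected_welfare_at_stake(credence: float, welfare_if_sentient: float) -> float:
    """Return the welfare at stake weighted by the credence that x is sentient."""
    if not 0.0 <= credence <= 1.0:
        raise ValueError("a credence must lie in [0, 1]")
    return credence * welfare_if_sentient

# Hypothetical numbers: credence 0.5, welfare of 10 units if sentient.
print(expected_welfare_at_stake(0.5, 10.0))  # 5.0
```

Nothing in this sketch settles whether such weighting is the right way to use these numbers; it only makes explicit what treating a credence as a subjective probability would license.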
I agree that sentience, at least as we’ve defined it, is an all-or-nothing phenomenon (which is a common view in philosophy but not as common in neuroscience).
What do you think of the argument that there may be cases where it’s unclear whether the term is appropriate or not? There would then be a grey area of “sort of” sentience. I’ve talked to some people who think that this grey area might be taxonomically large, including most invertebrates.
Hey Max, good question. I think we need to clearly separate our metaphysics from our epistemology in this area. If an entity is sentient if and only if there is something it is like to be that entity, then it’s hard to see how sentience could come in degrees. (There are closely related phenomena that might come in degrees—like the intensity of experience or the grain of sensory input—but those phenomena are distinct from sentience.) There are certainly going to be cases where it’s difficult to know if an entity is sentient, but our uncertainty doesn’t imply that the entity is only partially sentient. I think it’s plausible that this area of epistemic indeterminacy could remain quite large even with all the empirical facts in hand.
However, there are some theories of mind on which there could be cases of genuine metaphysical indeterminacy. If a certain type of reductive physicalism is true, and sentience doesn’t reduce to any one feature of the brain but is instead a cluster concept, and the features that constitute the concept aren’t coextensive, then there could be cases in which it is indeterminate whether an entity is sentient even with all the empirical and philosophical facts in hand. (Technically, the fact that it can be metaphysically indeterminate whether an entity possesses a property doesn’t entail that the property comes in degrees, but it’s a natural extension.)
That makes sense, and I understood that you all were expressing credences; I think my comment wasn’t written very clearly. I’m interested in the process you all took to reach these credences, and in what you think their appropriate use would be. Would these be the numbers you’d use in a cost-effectiveness analysis, or a starting point for deciding how to weigh further evidence? I know credences are a somewhat fuzzy concept in general, but I’d love your thoughts on the appropriate use of these numbers (beyond “don’t use them” or “use them only very carefully”).
Thanks Jason!