I meant to highlight a case where I downgraded my belief in a scenario in which there are multiple ways to update on a piece of evidence.
To take an extreme example for purposes of clarification, suppose you begin with a theory of sentience (or a set of theories) which suggests behavior X is possibly indicative of sentience. Then, you discover behavior X is possessed by entities you believe are not sentient, say, rocks. There are multiple options here as to how to reconcile these beliefs. You could update towards thinking rocks are sentient, or you could downgrade your belief that behavior X is possibly indicative of sentience.* In the instance I outlined, I took the latter fork.
As to the details of what I learned, the vast bulk of it is in the table itself, in the notes for various learning attributes across taxa. The specific examples I mentioned, along with similar learning behaviors being possible in certain plants and protists, are what made me update negatively towards the importance of these learning behaviors as indicative of sentience. For example, it seems classical conditioning, sensitization, and habituation are possible in protists and/or plants.
*Of course, these are not strictly the only options in this type of scenario. It could be, for example, that behavior X is a necessary precondition of behavior Y which you strongly (perhaps independently but perhaps not) think is indicative of sentience. So, you might think, the absence of behavior X would really be evidence against sentience, while its presence alone in a creature might not be relevant to determining sentience.
So, you might think, the absence of behavior X would really be evidence against sentience, while its presence alone in a creature might not be relevant to determining sentience.
I might be wrong about this or might be misunderstanding you, but I believe that, in any case where the absence of X is evidence against Y, the presence of X has to be evidence for Y. (Equivalently, whenever the presence of X is evidence for Y, the absence of X has to be evidence against Y.)
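This symmetry follows from the law of total probability: P(Y) is a weighted average of P(Y | X) and P(Y | not X), so it must lie strictly between them whenever they differ. A minimal numeric check, with arbitrary made-up probabilities chosen purely for illustration:

```python
# Check: if presence of X raises P(Y) above baseline, absence of X must lower it.
# All numbers here are arbitrary, for illustration only.

p_x = 0.3              # P(X)
p_y_given_x = 0.9      # P(Y | X)
p_y_given_not_x = 0.2  # P(Y | not X)

# Law of total probability: P(Y) is a weighted average of the two conditionals.
p_y = p_x * p_y_given_x + (1 - p_x) * p_y_given_not_x

print(p_y)  # 0.41, strictly between 0.2 and 0.9
assert p_y_given_not_x < p_y < p_y_given_x
```

Since the baseline P(Y) is pinned between the two conditionals, "X is evidence for Y" and "not-X is evidence against Y" can only ever come as a pair.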
This does go against the common statement that “Absence of evidence is not evidence of absence.” But we can understand that statement as having a very large kernel of truth, in that it is often the case that absence of evidence is only extremely weak evidence of absence. It depends on how likely it would be that we’d see the evidence if the hypothesis was true.
For an extreme example, let’s say that an entity not being made of molecules would count as very strong evidence against that entity being sentient. But we also expect a huge number of other entities to be made of molecules without being sentient, and thus the fact that a given entity is made of molecules is extraordinarily weak evidence—arguably negligible for many purposes—that the entity is sentient. But it’s still some evidence. If we were trying to bet on whether entity A (made of molecules) or entity B (may or may not be molecules; might be just a single atom or quark or whatever) is more likely to be sentient, we have reason to go with entity A.
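A quick Bayes-rule sketch of how weak this kind of evidence can be, using hypothetical likelihoods I’ve invented for the molecules example (every sentient entity is made of molecules; nearly every non-sentient one is too):

```python
# Hypothetical numbers illustrating "necessary but extremely weak" evidence.
prior_sentient = 0.01
p_molecules_given_sentient = 1.0    # being made of molecules is necessary for sentience
p_molecules_given_not = 0.999       # but almost everything non-sentient is molecular too

# Bayes' rule: P(sentient | made of molecules)
numerator = prior_sentient * p_molecules_given_sentient
evidence = numerator + (1 - prior_sentient) * p_molecules_given_not
posterior = numerator / evidence

print(round(posterior, 6))  # 0.01001, barely above the 0.01 prior
assert posterior > prior_sentient
```

The posterior nudges up only because the likelihood ratio (1.0 vs 0.999) barely favors sentience; it is still “some evidence,” just vanishingly little.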
This seems to sort of mirror the possibility you describe (though here we’re not talking about behaviors), because being made of molecules is a necessary precondition for a huge number of what we’d take to be “indicators of sentience”, but by itself is far from enough. Which does mean evidence of X is extremely weak evidence of sentience, but it’s still some evidence, relative to a state in which we don’t know whether X is true or not.
(I’m aware this is a bit of a tangent, and one that’s coming fairly late. The post as a whole was very interesting, by the way—thanks to everyone who contributed to it.)