Thanks for your comment, Dennis. One worry here is that you might be holding work on animal minds to an impossible standard. Yes, no one has a way to detect qualia directly, but surely we can make some inferences across the species boundary. Minimally, it seems very plausible that Neanderthals were sentient—and they, of course, were not Homo sapiens. What makes that so plausible? Well, all the usual evidence: behavioral similarities, neurophysiological similarities, the lack of a plausible evolutionary story about why sentience would only have emerged after Neanderthals, etc.
Admittedly, plausibility decreases as phylogenetic distance grows (though the rate of change is up for debate). Still, our epistemic situation seems to differ in degree, not in kind, when we consider stags—and, I submit, stag beetles.
One way to appreciate the value of the evidence that you’re criticizing is to imagine it away. Suppose that Bateson and Bradshaw had not found “the measurable quantities denoted… by the words ‘stress’ and ‘agony’ (such as enzyme levels in the bloodstream).” Surely it would have been less reasonable to believe that stags suffer in those circumstances. But if it would have been less reasonable without that evidence, it can be more reasonable with it.
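In credence terms (a framing that the replies below explicitly reject, but one that makes the inference explicit), this is just the law of total probability: if the absence of the evidence would lower your credence that stags suffer, its presence must raise it. A minimal sketch, with H standing for “stags suffer in those circumstances” and E for the Bateson and Bradshaw findings:

```latex
% P(H) is a weighted average of P(H | E) and P(H | not-E), assuming 0 < P(E) < 1:
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)
% Hence, if the absence of the evidence would lower the credence,
% its presence must raise it:
P(H \mid \neg E) < P(H) \;\Longrightarrow\; P(H \mid E) > P(H)
```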
Hi Bob,
To me, the outcome of the experiment wouldn’t matter either way: I wouldn’t suddenly accept the experiment just because its outcome corroborated my view. (At least I like to think I’d be too rational to do that.) The methodological issues of using science to sidestep a philosophical problem, and of assuming the conclusion, remain.
When it comes to Neanderthals, I’m no expert. But when it comes to present-day animals, I haven’t found many behavioral similarities between them and humans. On the contrary, having studied animals a bit, I repeatedly find them to behave utterly differently from humans. And on the rare occasions they do behave similarly, it’s when humans are not being critical but are enacting automated routines, as when sleepwalking. That’s when humans are pretty comparable to animals.
I’ve documented extensive evidence of animals behaving as though they are not sentient but robotic. I also address the arguments from phylogenetic proximity and neurophysiological similarities here and here, respectively – along with all the other commonly raised objections and questions.
My current view is that animals lack a critical ability and that sentience stems from this ability alone. Following Deutsch, I believe this critical ability is a binary matter, not a matter of degrees. So an organism either has it or it doesn’t – a stag wouldn’t have more of it than a stag beetle. I don’t think either one has any of it. That said, with the right programming, both could be made sentient (though that would presumably be highly immoral).
The good news here is that, if I am right, maybe animal suffering is a non-issue after all, which means a whole host of ethical problems just kinda… resolve on their own.
Yes: if you’re right! But that’s an awfully big bet. As you might expect, it isn’t one I’m prepared to make. And I’m not sure it’s one you should be prepared to make either, as your credence in this view would need to be extremely high to justify it.
In any case, thank you for the detailed reply. I have a much better understanding of our disagreement because of it.
We have different epistemologies. I don’t use credences or justifications for ideas. I hold my views about animals because I’m not aware of any criticisms I haven’t addressed. In other words, there are no rational reasons to drop those views. Until there are, I tentatively hold them to be true.
See also https://www.daviddeutsch.org.uk/2014/08/simple-refutation-of-the-bayesian-philosophy-of-science/
Let’s say you faced a situation where you could either (a) improve the welfare of 1 human, or (b) potentially improve (conditional on sentience) the welfare of each of X animals, to the same extent as the human, where these are animals you currently believe are not sentient.
Does your epistemology imply that, no matter how large X was, you would never choose (b) until you found a “rational reason to drop your views”? And do you admit there is a possibility that you will find such a reason in the future, including the possibility that credences turn out to be a superior way of representing beliefs?
Yes to both questions (ignoring footnotes such as whether it’s one’s responsibility to improve anyone’s ‘welfare’ or what that even means, and whether epistemology is about beliefs or “representing” them and whatever that might mean – your questions are based on a rather math-y way of looking at things that I disagree with but am entertaining just to play devil’s advocate against my own views).
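For what it is worth, the “math-y way of looking at things” behind the two questions can be spelled out in expected-value terms. This is only an illustrative formalization, not something either commenter wrote; p, w, and X are assumed symbols:

```latex
% p: credence that the animals are sentient (0 < p <= 1)
% w: size of the welfare improvement for one individual
% Option (a) delivers w for sure; option (b) delivers w to each of X animals,
% but only if they are in fact sentient:
\mathrm{EV}(a) = w, \qquad \mathrm{EV}(b) = p \, X \, w
% On this framing, (b) wins for any nonzero p once X is large enough:
\mathrm{EV}(b) > \mathrm{EV}(a) \iff X > \frac{1}{p}
```

Rejecting credences altogether, as the reply above does, is precisely a rejection of this kind of calculation.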
There’s also Pascal’s Wager.
The problem with Pascal’s Wager is that it ignores reversed scenarios that would offset it: e.g. there could just as well be a god that would punish you for believing in God without having good evidence.
I don’t think this would be applicable to our scenario. Whether we choose to help the human or the animals, there will always be uncertainty about the (long-term) effects of our intervention, but the intervention would ideally be researched well enough for us to have confidence that its expected value is robustly positive.
There are many problems with Pascal’s Wager. The problem I was thinking of is that, by imagining the punishment for not believing in god to be arbitrarily severe, one can compensate for even the smallest ‘chance’ of his existence.
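In symbols (my gloss on this point; ε, U, and C are placeholders, not anything from the original exchange):

```latex
% epsilon: the smallest 'chance' one assigns to the god's existence (epsilon > 0)
% U: the severity of the imagined punishment for non-belief
% C: the finite cost of believing, or of acting on the wager
% However small epsilon is, a sufficiently severe U makes the wager 'pay':
\varepsilon \, U > C \quad \text{whenever} \quad U > \frac{C}{\varepsilon}
```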
We could arbitrarily apply that ‘logic’ to anything. For example, I don’t think rocks can suffer. But maybe I’m wrong. Maybe there’s a ‘small chance’ they do suffer anytime I step on them. And I step on many rocks every day – so many that even the smallest chance would warrant more care.
Maybe video-game characters can suffer. I’m pretty sure they can’t, but I can’t be 100% sure. Many people play GTA every day. So much potential suffering! Maybe we should all stop playing GTA. Maybe the government should outlaw any game that has any amount of violence…
And so on.
Sure, there is a small chance, but the question is: what can we do about it, and will the opportunity cost be justifiable? And for the same reason that Pascal’s Wager fails, we can’t just arbitrarily say “doing this may reduce suffering” and think it justifies the action, since the reversal “doing this may increase suffering” plausibly offsets it.
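The offsetting point can be put in the same expected-value idiom (again only an illustrative framing; p, q, B, and H are assumed symbols):

```latex
% p: chance the action reduces suffering by amount B
% q: chance the action increases suffering by amount H (the reversed scenario)
% With both probabilities tiny and unanchored by evidence, the sign of the
% expected value is indeterminate, so the wager-style argument does not get going:
\mathrm{EV} = p\,B - q\,H
```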