[T]hat’s an awfully big bet. [Y]our credence in this view would need to be extremely high to justify it.
We have different epistemologies. I don’t use credences or justifications for ideas. I hold my views about animals because I’m not aware of any criticisms I haven’t addressed. In other words, there are no rational reasons to drop those views. Until there are, I tentatively hold them to be true.
See also https://www.daviddeutsch.org.uk/2014/08/simple-refutation-of-the-bayesian-philosophy-of-science/
Let’s say you faced a situation where you could either (a) improve the welfare of 1 human, or (b) potentially improve, conditional on their sentience and to the same extent as for the human, the welfare of X animals which you currently believe are not sentient.
Does your epistemology imply that no matter how large X was, you would never choose (b) until you found a “rational reason to drop your views”? And do you admit there is a possibility that you will find such a reason in the future, including the possibility that credences are a superior way of representing beliefs?
Yes to both questions (ignoring footnotes such as whether it’s one’s responsibility to improve anyone’s ‘welfare’ or what that even means, and whether epistemology is about beliefs or about “representing” them, whatever that might mean). Your questions are based on a rather math-y way of looking at things that I disagree with, but I’m entertaining it just to play devil’s advocate against my own views.
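To spell out that math-y framing as I understand it (this is the arithmetic your question assumes, not my view): if one assigns some probability p to the animals being sentient, and an improvement of the relevant size is worth w per individual, then (a) is worth w in expectation while (b) is worth p × X × w. For any p greater than zero, choosing X larger than 1/p makes (b) come out ahead, which I take to be the point of asking about an arbitrarily large X.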
There’s also Pascal’s Wager.
The problem with Pascal’s Wager is that it ignores reversed scenarios that would offset it: e.g. there could just as well be a god who punishes you for believing in him without having good evidence.
I don’t think this would be applicable to our scenario. Whether we choose to help the human or the animals, there will always be uncertainty about the (long-term) effects of our intervention, but the intervention would ideally be researched well enough for us to have confidence that its expected value is robustly positive.
There are many problems with Pascal’s Wager. The one I was thinking of is that, by imagining the punishment for not believing in god to be arbitrarily severe, one can make even the smallest ‘chance’ of his existence dominate the calculation.
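To make that concrete: call the probability of god’s existence p and the badness of the imagined punishment U. The expected cost of not believing is then p × U, and however tiny p is, one can simply imagine U large enough (larger than c/p, for any finite cost c of believing) that the wager ‘recommends’ belief. The smallness of the chance does no work; the made-up severity does all of it.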
We could arbitrarily apply that ‘logic’ to anything. For example, I don’t think rocks can suffer. But maybe I’m wrong. Maybe there’s a ‘small chance’ they do suffer anytime I step on them. And I step on many rocks every day – so many that even the smallest chance would warrant more care.
Maybe video-game characters can suffer. I’m pretty sure they can’t, but I can’t be 100% sure. Many people play GTA every day. So much potential suffering! Maybe we should all stop playing GTA. Maybe the government should outlaw any game that has any amount of violence…
And so on.
Sure, there is a small chance, but the question is: what can we do about it, and would the opportunity cost be justifiable? And for the same reason that Pascal’s Wager fails, we can’t just arbitrarily say “doing this may reduce suffering” and think it justifies the action, since the reversal “doing this may increase suffering” plausibly offsets it.
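Put in the same terms: if “doing this may reduce suffering” gets some made-up probability p and “doing this may increase suffering” gets some made-up probability q, the expected effect is p × (the reduction) minus q × (the increase), and with both numbers plucked from thin air the sign of that difference is just as made up.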