I feel pretty disappointed by some of the comments (e.g. this one) on Vasco Grilo’s recent post arguing that some of GiveWell’s grants are net harmful because of the meat eating problem. Reflecting on that disappointment, I want to articulate a moral principle I hold, which I’ll call non-dogmatism. Non-dogmatism is essentially a weak form of scope sensitivity.[1]
Let’s say that a moral decision process is dogmatic if it’s completely insensitive to the numbers on either side of the trade-off. Non-dogmatism rejects dogmatic moral decision processes.
A central example of a dogmatic belief is: “Making a single human happy is more morally valuable than making any number of chickens happy.” The corresponding moral decision process would be, given a choice between spending money on making a human happy or on making chickens happy, to spend it on the human no matter how many chickens would be made happy. Non-dogmatism rejects this decision-making process on the basis that it is dogmatic.
(Caveat: this seems fine for entities that are totally outside one’s moral circle of concern. For instance, I’m intuitively fine with a decision-making process that spends money on making a human happy instead of spending money on making sure that a pile of rocks doesn’t get trampled on, no matter the size of the pile of rocks. So maybe non-dogmatism says that so long as two entities are in your moral circle of concern—so long as you assign nonzero weight to them—there ought to exist numbers, at least in theory, for which either side of a moral trade-off could be better.)
And so when I see comments saying things like “I would axiomatically reject any moral weight on animals that implied saving kids from dying was net negative”, I’m like… really? There are no empirical facts that could possibly cause the trade-off to go the other way?
Rejecting dogmatic beliefs requires more work. Rather than deciding that one side of a trade-off is better than the other no matter the underlying facts, you actually have to examine the facts and do the math. But, like, the real world is messy and complicated, and sometimes you just have to do the math if you want to figure out the right answer.
Per the Wikipedia article on scope neglect, scope sensitivity would mean actually doing multiplication: making 100 people happy is 100 times better than making 1 person happy. I’m not fully sold on scope sensitivity; I feel much more strongly about non-dogmatism, which means that the numbers have to at least enter the picture, even if not multiplicatively.
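To make “doing the math” concrete, here’s a toy calculation (the numbers are placeholders I’m making up purely for illustration, not estimates anyone endorses); the only point is that once the weight on chickens is nonzero, some number of chickens tips the scale:

```python
# Toy, non-dogmatic comparison: which option does more good?
# All numbers are made-up placeholders, purely for illustration.

HUMAN_VALUE = 1.0        # value of making one human happy (the unit of account)
CHICKEN_WEIGHT = 0.002   # hypothetical moral weight of one chicken relative to one human

def better_option(num_chickens: int) -> str:
    """Compare making one human happy vs. making num_chickens chickens happy."""
    return "chickens" if CHICKEN_WEIGHT * num_chickens > HUMAN_VALUE else "human"

print(better_option(100))     # -> "human"    (0.2 < 1)
print(better_option(10_000))  # -> "chickens" (20 > 1)
```

A dogmatic process, by contrast, returns “human” for every input, no matter how large, which is exactly the insensitivity to numbers that non-dogmatism rejects.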
To me, the issue is not scope neglect per se; rather, I would take issue with >99% confidence in a statement like “Making a single human happy is more morally valuable than making any number of chickens happy”, or otherwise treating it as if it were true.
Whatever someone thinks makes humans infinitely more important than chickens could, with non-tiny or even modest probability, actually be present in chickens in some similarly important form (examples here), or it might not actually be what makes humans important at all (more general related discussion, though that piece defends a disputed position).
Or, if they don’t think there’s anything at all except, say, the mere fact of species membership, then this is just brute speciesism and seems arbitrary.
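Spelling out the arithmetic behind that point (with numbers chosen purely for illustration, not as real estimates): even a modest probability that the relevant feature is present yields a nonzero expected weight, which is all non-dogmatism needs.

```python
# Toy expected-weight calculation under uncertainty about the morally relevant feature.
# Both numbers are made-up placeholders.

p_feature_present = 0.05   # probability chickens have the feature in some important form
weight_if_present = 0.1    # hypothetical weight relative to a human, given the feature

expected_weight = p_feature_present * weight_if_present  # 0.005: small, but not zero
chickens_to_match_one_human = 1 / expected_weight        # 200
print(expected_weight, chickens_to_match_one_human)
```

Small, but finite — so the trade-off can still, in principle, go the other way.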
This page could be a useful pointer?
I also disagree with those comments, but can you provide more argument for your principle? If I understand correctly, you are suggesting the principle that X can be lexicographically[1] preferable to Y if and only if Y has zero value. But, conditional on saying X is lexicographically preferable to Y, isn’t it better for the interests of Y to say that Y nevertheless has positive value? I mean, I don’t like it when people say things like “no amount of animal suffering, however enormous, outweighs any amount of human suffering, however tiny”. But I think it is even worse to say that animal suffering doesn’t matter at all, and there is no reason to alleviate it even if it could be alleviated at no cost to human welfare.
Maybe your reasoning is more like this: in practice, everything trades off against everything else. So, in practice, there is just no difference between saying “X is lexicographically preferable to Y but Y has positive value”, and “Y has no value”?
From SEP: “A lexicographic preference relation gives absolute priority to one good over another. In the case of two-goods bundles, A ≻ B if a₁ > b₁, or a₁ = b₁ and a₂ > b₂. Good 1 then cannot be traded off by any amount of good 2.”
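To illustrate that definition (my own sketch, not from SEP): a lexicographic comparison of two-good bundles fits in a few lines, and it makes visible both that no amount of good 2 outweighs good 1 and that good 2 still matters as a tie-breaker.

```python
# Lexicographic comparison of two-good bundles (a1, a2) vs. (b1, b2):
# good 1 has absolute priority; good 2 only breaks ties in good 1.

def lex_prefer(a, b):
    """True if bundle a is strictly preferred to bundle b under lexicographic priority."""
    a1, a2 = a
    b1, b2 = b
    return a1 > b1 or (a1 == b1 and a2 > b2)

print(lex_prefer((1.0, 0.0), (0.999, 10**9)))  # True: any edge in good 1 dominates
print(lex_prefer((1.0, 2.0), (1.0, 1.0)))      # True: good 2 breaks the tie
```

The tie-breaking case is the sense in which Y can have positive value while X is still lexicographically preferred, which is the distinction drawn in the comment above.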