Hi Kevin,
Thanks so much, this quasi-sociological perspective is quite helpful.
One thing that puzzles me is the role of intuition in this context. A few people have responded to the repugnant conclusion by saying that animals in CAFOs, even in cage-free poultry systems, have negative welfare. But that’s not borne out by the empirical research on the topic. In my view, it’s largely an unverified assumption, or intuition. That seems to run against the general project of “using reason and evidence to do the most good”.
Similar tensions seemed apparent to me in what you write about the stances of some effective altruists. You say that many EAs want to rely on reason rather than intuition, and don’t consider their own moral intuitions trustworthy. But then you also say that they “consider consequentialism the strongest perspective to take — perhaps because they find it least counterintuitive.” So the acceptance of consequentialism itself is based on intuition.
The use of intuitions appears to be quite selective and arbitrary, when it serves prior commitments or helps to insulate parts of the worldview against objections.
Vera
On the first point—that seems right. I think in a discussion like this, there can be a lot of confusion and conflation about what is meant by net negative welfare, lives worth living or barely worth living, to what degree one can and should trust empirical assessments of animal welfare, etc. My best guess is that people are typically “somewhat” risk-averse and “somewhat” negative-leaning consequentialists, so the bar for empirical evidence to show that chickens live net positive lives is intuitively set higher, both in how solid the evidence must be and in how positive the lives must be. That being said, I do think one can distrust intuitions as information for moral judgments while leaning on them for empirical questions—that doesn’t seem inherently at odds to me. (I do think the latter still clashes with “using evidence and reason,” of course, but it can be “accounted for” by risk aversion and negative-leaning positions, which would change what “… to do the most good” means.) But at this point, I am just speculating about what people are thinking when they make these arguments.
On the second point: my impression is that EAs rarely abandon moral intuition completely. They don’t consider it particularly trustworthy, but they don’t think it’s useless either. It serves some function (e.g., finding the internally consistent theory that is least counterintuitive, or that satisfies the most, or the deepest-lying, moral intuitions), but then they’d basically have the theory take it from there (in theoretical discussion; once again, this tends to be different when the theory is actually acted upon). I agree that where to draw the line at which to abandon moral intuitions is plausibly arbitrary (others might disagree with calling that arbitrary), but I don’t think it usually serves prior commitments (in my experience, EAs are the social-impact group most open to simply changing their commitments). That being said, I do think some form of this—drawing a seemingly arbitrary line beyond which moral intuition is not trusted—is true for effectively everyone who doesn’t completely lean into moral intuitionism. My best-guess explanation here is basically what I expressed in the last three points of my initial comment: most (perhaps all) EAs I am thinking of in this context are “doing-good” first, and underlying that is a strong moral compass/intuition that can be in real practical tension with trying to abandon moral intuition as information. So they try to find the right balance, with the balance tilting, on average, more toward a well-reasoned theory than toward intuition—but not completely.