Thank you for sharing your paper, Vera! I have been trying to discuss and understand a lot of adjacent themes around foundational philosophical assumptions, reductio ad absurdum arguments, and moral intuitions vs. strict theory. Since you're asking how this will land with the audience here, I'd like to offer my personal account based on engaging with many EA(A)s. This is very much my subjective impression, not endorsed by any particular person I've discussed this with. Also, I say "they" a lot here; to be transparent, I endorse most, but not all, of these stances myself.
My impression is that EAs are largely aware of the counterintuitive implications their theory of choice faces. This has broadly been my experience with consequentialist-leaning people: they rarely claim, or even want to claim, that their ethical theory is perfect or aligned with all intuitions. They just believe the counterintuitive implications of their theory are less counterintuitive than those of other theories, and/or find other theories either inconsistent or arbitrary.
Underlying this is a general desire to reason through ethics, and underlying that is a general skepticism toward moral intuition as a definitive argument. My impression is that EAs are very analytical in their approach to philosophy, and as a result, they often don't consider their own moral intuitions particularly trustworthy, though to varying degrees: some might want to abandon them altogether, others simply don't weigh them heavily.
I think these two points are why many EAs would read the paper and think something like: "Yes, I know. This isn't surprising. And also, show me a theory that doesn't run into such issues."
However, most EAs don't fundamentally start from a purely philosophical stance on what is true in ethics and try to apply it to all their actions. They "first" want to do good and almost instrumentally try to figure out what that means. I think most EAs are "quasi-consequentialist": when pressed, or when wanting to defend their views in a theoretical discussion, they consider consequentialism the strongest perspective to take, perhaps because they find it least counterintuitive, or closest to explaining how they conceptualize ethics.
When put into practice, this stance becomes largely action-guiding. It acts as a first proxy for identifying what to do or which choice is better. But unlike in theoretical discussions, a reductio ad absurdum isn't just a "bullet to bite" where one can rest calmly on knowing that others have "bigger bullets to bite"; it's a practical blocker that is rarely broken through. I think that's why everyone is "leaning utilitarian" or "leaning consequentialist": the theory acts as a guide and pushes the limits of one's otherwise-unquestioned moral intuitions to some degree, but not far beyond what one would consider generally reasonable. This is also exemplified by how often I hear things like "I know that X is probably right, but I just don't feel comfortable doing that." Strict theory comes a close second, but almost no one completely abandons their moral intuitions in favour of it.
I think a neat resolution to all of this is the concept of moral uncertainty. I wouldn't confidently claim that the people I describe above are likely to explicitly endorse this framing, but I think it explains much of the friction between theory and intuition. Under moral uncertainty, one doesn't simply act on one's best-guess ethical theory; one hedges across plausible theories, weighted by credence. That naturally prevents the kind of single-theory extremes that generate repugnant/counterintuitive conclusions in practice, even while allowing consequentialism to carry significant weight. This sort of uncertainty, I think, is very much in line with typical EAs' general way of thinking.
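To make the hedging concrete, here is a toy sketch of one common formalization from the moral uncertainty literature, "maximize expected choiceworthiness". Everything in it (the option names, the credences, the scores, and especially the assumption that theories are comparable on one common scale) is an invented simplification for illustration, not a claim about anyone's actual views:

```python
# Toy "maximize expected choiceworthiness" calculation under moral uncertainty.
# All numbers and names below are invented for illustration.

credences = {"consequentialism": 0.6, "deontology": 0.3, "virtue_ethics": 0.1}

# Hypothetical choiceworthiness of two options under each theory,
# placed on a single common scale (itself a contested assumption).
choiceworthiness = {
    "eat_cheap_chicken": {"consequentialism": 2, "deontology": -10, "virtue_ethics": -3},
    "eat_plant_based":   {"consequentialism": 1, "deontology": 5,   "virtue_ethics": 2},
}

def expected_choiceworthiness(option: str) -> float:
    """Credence-weighted average of an option's value across theories."""
    return sum(credences[theory] * value
               for theory, value in choiceworthiness[option].items())

best = max(choiceworthiness, key=expected_choiceworthiness)
print(best)  # -> "eat_plant_based"
```

Note that the hedged choice here differs from what the best-guess theory alone would pick (consequentialism slightly favours the other option), which is exactly the sense in which credence-weighted hedging blunts single-theory extremes.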
I think that's why most EAs won't feel particularly "addressed" by this or similar arguments, and why they'll likely end up with something like: "Yeah, I know this could be an issue, but I wouldn't do this anyway." I also think this explains why many will first try to argue based on empirical assumptions (e.g., can CAFOs even be net-positive?).
Hope this makes sense; curious to hear whether this mirrors your experience discussing this piece :)
Hi Kevin,

Thanks so much; this quasi-sociological perspective is quite helpful.
One thing that puzzles me is the role of intuition in this context. A few people have responded to the repugnant conclusion by saying that animals in CAFOs, even in cage-free poultry systems, have negative welfare. But that's not borne out by the empirical research on the topic. In my view, it's largely an unverified assumption, or intuition. That seems to run against the general project of "using reason and evidence to do the most good".
Similar tensions seemed apparent to me in what you write about the stances of some effective altruists. You say that many EAs want to rely on reason rather than intuition, and don't consider their own moral intuitions trustworthy. But then you also say that they "consider consequentialism the strongest perspective to take, perhaps because they find it least counterintuitive." So the acceptance of consequentialism itself is based on intuition.
The use of intuitions appears to be quite selective and arbitrary when it serves prior commitments or helps insulate parts of the worldview against objections.

Vera
On the first point: that seems right. In a discussion like this, there can be a lot of confusion and conflation about what is meant by net negative welfare, lives worth living vs. barely worth living, to what degree one can and should trust empirical assessments of animal welfare, etc. My best guess is that people are typically "somewhat" risk-averse and "somewhat" negative-leaning consequentialists, so the bar for empirical evidence to show that chickens live net-positive lives is intuitively set higher, both in how solid it needs to be and in how positive. That being said, I do think one can distrust intuitions as information for moral judgments while leaning on them for empirical questions; that doesn't seem inherently at odds to me. (I do think the latter still clashes with "using evidence and reason", of course, but it can be "accounted for" by risk aversion and negative-leaning positions, which would change what "… to do the most good" means.) But at this point, I am just speculating about what people are thinking when they make these arguments.
On the second point: my impression is that EAs rarely abandon moral intuition completely. They don't consider it particularly trustworthy, but they don't think it's useless either. It serves some function (e.g., finding the internally consistent theory that is least counterintuitive, or that satisfies the most, or the deepest-lying, moral intuitions), but then they'd basically have the theory take it from there (in theoretical discussion; once again, this tends to be different once the theory is actually acted upon). I agree that this is plausibly arbitrary (where to draw the line at which to abandon moral intuitions; others might disagree with calling that arbitrary), but I don't think it usually serves prior commitments; in my experience, EAs are the social-impact group most open to simply changing their commitments.

That being said, I do think some form of this (drawing a seemingly arbitrary line beyond which moral intuition is not trusted) is true for effectively everyone who doesn't lean completely into moral intuitionism. My best-guess explanation is basically what I expressed in the last three points of my initial comment: most (perhaps all) EAs I have in mind here are "doing-good" first, and underlying that is a strong moral compass/intuition that can be in real practical tension with trying to abandon moral intuition as information. So they try to find the right balance, and on average that balance favors a well-reasoned-through theory over intuition, but not completely.