To the degree that these are common beliefs, it may suggest that there’s something problematic with how some people communicate about effective altruism. After all, as I’m arguing in this sequence, I think moral realism (worthy of the name) is almost certainly wrong. If that’s true, we wouldn’t want people to believe that effective altruists are predominantly committed to moral realism.
My impression is that this is already happening – lots of people in the EA movement self-identify as moral anti-realists.
Even in the very beginnings of the movement, realism wasn’t necessarily the default. Admittedly, the Oxford-based origins of EA were influenced by moral realism (e.g., I think Toby Ord was a moral realist, or at least was convinced that acting as though moral realism is true is the prudent thing to do, and may still think so, for all I know). However, Peter Singer, at the time he wrote Practical Ethics, was a moral anti-realist. (He wrote a great essay on the triviality of the is-ought distinction, and his chapter “Why act morally?” in Practical Ethics doesn’t rely on moral realism.) Similarly, Holden Karnofsky, who co-founded GiveWell, isn’t a moral realist: in a post published on LessWrong today, he calls himself a “moral quasi-realist,” which sounds pretty similar to what I think of as moral anti-realism. (“Quasi-realism” also has a technical meaning in metaethics, but that’s not what Holden meant, as I understand it.) Eliezer Yudkowsky and Luke Muehlhauser wrote entire sequences on metaethics with anti-realist takes. All of these people were important in establishing the early effective altruism movement.
For what it’s worth, I agree with you about the importance of a strong core. But I don’t see why anti-realists can’t be incredibly dedicated. I already mentioned examples of highly dedicated anti-realists above, and there are many more. Brian Tomasik is an anti-realist, and I’ve yet to see anyone suggest that his contributions to EA risk watering down the movement. Richard Ngo is an anti-realist (here and here), and Joe Carlsmith seems to be one too (or at least has strong sympathies), judging by his posts. Paul Christiano, in the context of his AI alignment research, wrote two accounts of normativity/human values/human judgment that illustrate how “AI makes philosophy honest” – both strike me as decidedly anti-realist in their approach.
To summarize, I think you’re going off a mistaken impression of EA demographics.
Perhaps you were primarily commenting on all-out utilitarianism (in the sense of particularly high levels of altruistic dedication) vs. something closer to “EA on the side.” I think that’s a spectrum, and we have to find a good balance. Julia Wise has a couple of great posts (e.g., here or here) arguing against excessive fanaticism and self-sacrificing life goals. I’ve written a similar post, so I think these sorts of posts were steering things in a good and needed direction on the margin.