[Responding to the quoted sentence, not specifically your comment]
I definitely agree that empirical beliefs like those listed do a substantial amount of work in leading to EA’s unusual set of priorities. I don’t have a view on whether that does more of the work than moral claims do.
That said, I think there are two things worth noting in relation to the quoted sentence.
First, I think this sentence could be (mis?)interpreted as implying that the relevant empirical beliefs are ones where EAs tend to disagree with beliefs that are relatively confidently, clearly, and widely held by large numbers of thoughtful non-EAs. If so, we should ask “Why do EAs disagree with these people? Are we making a mistake, or are they? Do they know of evidence or arguments we’ve overlooked?” And those questions would seem especially important given that EAs haven’t yet spent huge amounts of time forming, checking, critiquing, etc. those beliefs. (I’m basically talking about epistemic humility.)
I imagine this is true of some of the relevant “unusual” empirical beliefs. But I don’t think it’s true of most of them, including the hinge of history hypothesis and the efficacy of hits-based giving. My impression is that those topics are ones where there just aren’t clear, established, standard views among non-EAs; instead, it’s more like:
- relatively few people outside of EA have even considered the questions
- those who have often frame the questions a bit differently, perhaps evaluate them in ways influenced by differences in moral views (e.g., a focus on consequences vs deontological principles), and often disagree amongst themselves
(I haven’t checked those impressions carefully, and I acknowledge that these statements are somewhat vague.)
In other words, our beliefs on these sorts of topics may be unusual primarily because we have any clear views on these precise topics at all, not because we’re disagreeing with a mainstream consensus. I think that reduces the extent to which we should ask the epistemic-humility-style questions mentioned above (such as “Do they know of evidence or arguments we’ve overlooked?”). (Though I’m still in favour of often asking those sorts of questions.)
Second, I think “our unusual beliefs” is perhaps a somewhat misleading phrase, as I think there’s substantial disagreement and debate among EAs on many of the beliefs in question. For example, there has been vigorous debate on the Forum regarding the hinge of history hypothesis, and two key thought leaders in EA (MacAskill and Ord) seem to mostly be on opposing sides of the debate. And Open Phil seems supportive of hits-based giving, but one of the most prominent EA orgs (GiveWell) has historically mostly highlighted “safer bets” and has drawn many EAs in that way.
There are of course many empirical questions on which the median/majority EA position is not also a median/majority position among other groups of people (sometimes simply because most members of other groups have no clear position on the question). But off the top of my head, I’m not sure there’s an empirical belief that the vast majority of EAs agree on, that’s unusual outside EA, and that plays a major role in driving differences between EA priorities and mainstream priorities.
(This comment is not necessarily disagreeing with Richard, as I imagine he probably didn’t mean to convey the interpretations I’m pushing back against.)