I don’t have much to contribute to the normative social epistemology questions raised here, since this is a huge debate within philosophy. People interested in a general summary might read the Philosophy Compass review or the SEP article.
But I did want to question the claim made here about the descriptive social epistemology of the EA movement, namely that:
What occurs instead is agreement approaching fawning obeisance to a small set of people the community anoints as ‘thought leaders’, and so centralizing on one particular eccentric and overconfident view.
I’m not sure this is accurate as a general characterisation of the EA community, though certainly at times people are too confident, too deferential, and so on. Which beliefs might be the beneficiaries of this fawning obeisance? There doesn’t seem to me to be sufficient uncontroversial agreement about much (even utilitarianism has a number of prominent ‘thought leaders’ pushing against it and saying that we ought to open ourselves up to alternatives).
The general characterisation also seems in tension with the common idea that EA is highly combative and confrontational (it would be strange, though not impossible, if we combined constant disagreement and attempted argumentative one-upmanship with excessive deference to certain thought leaders). What I see instead is occasional excessive deference to people respected within certain cliques, by members of those circles, but not ‘centralization’ on any one particular view. Perhaps all Greg has in mind are these kinds of cases, where people defer too much to people they shouldn’t (perhaps due to a lack of actual experts in EA rather than to their own vice). But then it’s not clear to me what the typical EA-rationalist, who has not made and probably shouldn’t make a deep study of many-worlds, free will, or meta-ethics, should do to avoid this problem.
Apropos of which, SEP published an article on disagreement last week, which provides an (even more) up-to-date survey of philosophical discussion in this area.
Also, EA selects for utilitarians in the first place, so you can’t conclude that we’re being irrational just because we’re disproportionately utilitarian.