“Moral authority” and “intellectual legitimacy” are such fuzzy terms that I’m not really sure what this post is arguing.
Insofar as they just denote public perceptions, sure: this is obviously bad PR for the movement. It shows we’re not immune from big mistakes, and it raises fair questions about the judgment of individual EAs, or about certain problematic norms and mindsets among the living, breathing community of humans associated with the label. We’ll probably get mocked a bit more and be greeted with more skepticism in elite circles. There are meta-EA problems that need fixing, which I’ve been introspecting on for the past two weeks.
But “careful observers” also knew that before the FTX scandal, and it’s unclear to me which specific ideas in EA philosophy are less intellectually legitimate or authoritative than they were before. When a prominent Democratic politician has a scandal, Democrats get mocked—but nobody intelligent thinks that reduces the moral authority of being pro-choice or supporting stricter gun control, etc. The ideas are right or wrong independent of how their highest-profile advocates behave.
Perhaps SBF’s fraud indicts the EA community’s lack of scrutiny or safeguards around how to raise money. But to me, it does not at all indict EA’s ability to allocate resources once they’ve been raised. It’s not as if the charities SBF was funding were proven ineffective. “The idea that the EA movement is better than others at allocating time and resources toward saving and improving human lives” could be right or wrong, but this incident isn’t good evidence either way.
See Samo’s essay series here for the definition of “intellectual legitimacy” as it’s being used in the OP:
An idea has intellectual legitimacy insofar as it is recognized by society as respectable and reasonable. An intellectually legitimate idea does not need to be recognized as credible by all people, or even by very many people at all. There only needs to be a general perception that society at large holds the idea to be legitimate. Powerful institutions and individuals are seen as tolerating or endorsing it. Such a perception isn’t necessarily coupled to whether an idea is true.
...
individuals routinely use legitimacy as a shortcut for evaluating the quality of the ideas around them. What may intuitively feel like evaluating an idea on its merits has oftentimes already factored in how an idea is communicated, and who is communicating it. We do this because it is harder for us to assess claims in fields that are outside our areas of expertise and so, instead, we learn from experience which sources to rely on. Evaluating an idea’s intellectual legitimacy is often safer and easier than evaluating the idea itself, and in a healthy society, the shortcut works. This makes the shortcut an efficient and effective heuristic for individuals. Even then, though, an intellectually valid idea that is correctly perceived as legitimate might ultimately still turn out to be false.