To be clear, my criticism of the EA Forum’s initial response to the Expo article was never “it’s wrong to feel strong emotions in a context like this, and EAs should never publicly express strong emotions”, and it also wasn’t “it should have been obvious in advance to all EAs that this wasn’t a huge deal”.
If you thought I was saying either of those things, then I probably fucked up in how I expressed myself; sorry about that!
My criticism of the EA Forum’s response was:
1. I think that EAs made factual claims about the world that weren’t warranted by the evidence at the time. (Including claims about what FLI and Tegmark did, claims about their motives, and claims about how likely it is that there are good reasons for an org to want more than a few hours or days to draft a proper public response to an incident like this.) We were overconfident and following poor epistemic practices (and I’d claim this was noticeable at the time, speaking as someone who downvoted lots of comments back then).

   Part of this is, I suspect, just some level of naiveté about the press, about the base rate of good orgs bungling something or other, etc. Hopefully this example will help people calibrate their priors slightly better.
2. I think that at least some EAs deliberately leaned into bad epistemic practices here, out of a sense that prematurely and overconfidently condemning FLI would help protect EA’s reputation.
3. The EA Forum sort of “trapped” FLI by simultaneously demanding that FLI respond extremely quickly, and that the response be pretty exhaustive (“a full explanation of what exactly happened here”, in Shakeel’s words) and across-the-board excellent (zero factual errors, excellent displays of empathy, good PR both for reaching EAs and for satisfying the larger non-EA public, etc.). This sort of trap is not a good way to treat anyone, including non-EAs.
4. I think that many EAs’ words and upvote patterns at the time created a social space in which expressing uncertainty, moderation, or counter-narrative beliefs and evidence was strongly discouraged. Basically, we did the classic cancel-culture echo chamber thing, where groups update more and more extremely toward a negative view of X because they keep egging each other on with new negative opinions and data points, while the people with alternative views stay quiet for fear of the social repercussions.
   The more general version of this phenomenon is discussed in the Death Spirals sequence, and in videos like ContraPoints’ Canceling: there’s a general tendency for many different kinds of social networks to push themselves toward more and more negative (or more and more positive) views of a thing, when groups don’t exert lots of deliberate and unusual effort to encourage dissent, voice moderate views, explicitly acknowledge alternative perspectives or counter-narrative points, etc.

   I think this is a special risk for EA discussions of heavily politicized topics, so if we want to reliably navigate to true beliefs on such topics — many of which will be a lot messier than the Tegmark case — we’ll need to try to be unusually welcoming of dissent, disagreement, “but what if X?”, etc., on the topics that are more emotionally charged. (Hard as that sounds!)