I think there is a lot of detail and complexity here and I don’t think that this comment is going to do it justice, but I want to signal that I’m open to dialog about these things.
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
On the face of it, this seems like a bad idea to me. I don’t want “introductory” EA spaces to have different norms than advanced EA spaces, because I only want people to join the EA movement to the extent that they have very high epistemic standards. If people wouldn’t like the discourse norms in the central EA spaces, I don’t want them to feel comfortable in the more peripheral EA spaces. I would prefer that they bounce off.
To say it another way, I think it is a mistake to have “advanced” and “introductory” EA spaces, at all.
I am intending to make a pretty strong claim here.
[One operationalization I generated, but want to think more about before I fully endorse it: “I would turn away billions of dollars of funding to EA causes, if that funding was purchased at the cost of ‘EA’s discourse norms are only as good as those in academia.’”]
Some cruxes:
I think what is valuable about the EA movement is the quality of its epistemic discourse, and almost nothing else matters (and to the extent that other factors matter, the indifference curve heavily favors better epistemology). If I changed my mind about that, it would change my view about a lot of things, including the answer to this question.
I think a model by which people gradually “warm up” to “more advanced” discourse norms is false. I predict that people will mostly stay in their comfort zone, and people who like discussion at the “less advanced” level will prefer to stay at that level. If I were wrong about that, I would substantially reconsider my view.
Large numbers of people at the fringes of a movement tend to influence the direction of the movement, and significantly shape the flow of talent to the core of the movement. If I thought that you could have 90% of the people identifying as EAs have somewhat worse discourse norms than we have on this forum without meaningfully impacting the discourse or actions of the people at the core of the movement, I think I might change my mind about this.
I want to try to paraphrase what I hear you saying in this comment thread, Holly. Please feel free to correct any mistakes or misframings in my paraphrase.
I hear you saying...
Lightcone culture has a relatively specific morality around integrity and transparency. Those norms are consistent, and maybe good, but they’re not necessarily shared by the EA community or the broader world.
Under those norms, actions like threatening your ex-employees’ career prospects to prevent them from sharing negative info about you are very bad, while in the broader culture a “you don’t badmouth me, I don’t badmouth you” ceasefire is pretty normal.
In this post, Ben is accusing Nonlinear of bad behavior. In particular, he’s accusing them of acting particularly badly (compared to some baseline of EA orgs) according to the integrity norms of Lightcone culture.
My understanding is that the dynamic here that Ben considers particularly egregious is that Nonlinear allegedly took actions to silence their ex-employees, and prevent negative info from propagating. If all of the same events had occurred between Nonlinear, Alice, and Chloe, except for Nonlinear suppressing info about what happened after the fact, Ben would not have prioritized this.
However, many bystanders are likely to miss that subtlety. They see Nonlinear being accused, but don’t share Lightcone’s specific norms and culture.
So many readers, tracking the social momentum, walk away with the low-dimensional bottom-line conclusion “Boo Nonlinear!”, without particularly tracking Ben’s cruxes.
e.g. they have the takeaway “it’s irresponsible to date or live with your coworkers, and only irresponsible people do that” instead of “some people in the ecosystem hold that suppressing negative info about your org is a major violation.”
And importantly, it means that, in practice, Nonlinear is getting unfairly punished for some behaviors that are actually quite common in the EA subculture.
This creates a dynamic analogous to “There are so many laws on the books that technically everyone is a criminal, so the police/government can harass or imprison anyone they choose, by selectively punishing crimes.” If enough social momentum gets mounted against an org, it can be lambasted for things that many orgs are “guilty” of[1], while the other orgs get off scot-free.
And furthermore, this creates unpredictability. People can’t tell whether their version of some behavior is objectionable or not.
So overall, Ben might be accusing Nonlinear for principled reasons, but to many bystanders this is indistinguishable from singling them out, by fiat, for pretty common EA behaviors. Which is a pretty scary precedent!
Am I understanding correctly?
[1] “guilty” is in quotes to suggest ambiguity about whether the behaviors in question are actually bad or blameworthy.