Great. Thank you!
Elityre
@Kat Woods
I’m trying to piece together a timeline of events.
You say in the evidence doc that “3 days after starting at Nonlinear, Alice left to spend a whole month with her family. We even paid her for 3 of the 4 weeks despite her not doing much work. (To be fair, she was sick.)”
Can you tell me what month this was? Does this mean just after she quit her previous job or just after she started traveling with you?
FWIW, that was not obvious to me on first reading, until the comments pointed it out to me.
Mostly I find it ironic, given that Ben says his original post was motivated by a sense that there was a pervasive silencing effect, where people felt unwilling to share their negative experiences with Nonlinear for fear of reprisal.
Why might humans evolve a rejection of things that taste too sweet? What fitness-reducing thing does “eating oversweet things” correlate with? Or is it a spandrel of something else?
If this is true, it’s fascinating, because it suggests that our preferences for cold and carbonation are a kind of specification gaming!
Ok. Given all that, is there a particular thing that you wish Ben (or someone) had done differently here? Or are you mostly wanting to point out the dynamic?
I want to try to paraphrase what I hear you saying in this comment thread, Holly. Please feel free to correct any mistakes or misframings in my paraphrase.
I hear you saying...
Lightcone culture has a relatively specific morality around integrity and transparency. Those norms are consistent, and maybe good, but they’re not necessarily shared by the EA community or the broader world.
Under those norms, actions like threatening your ex-employees’ career prospects to prevent them from sharing negative info about you are very bad, while in broader culture a “you don’t badmouth me, I don’t badmouth you” ceasefire is pretty normal.
In this post, Ben is accusing Nonlinear of bad behavior. In particular, he’s accusing them of acting particularly badly (compared to some baseline of EA orgs) according to the integrity norms of Lightcone culture.
My understanding is that the dynamic here that Ben considers particularly egregious is that Nonlinear allegedly took actions to silence their ex-employees, and prevent negative info from propagating. If all of the same events had occurred between Nonlinear, Alice, and Chloe, except for Nonlinear suppressing info about what happened after the fact, Ben would not have prioritized this.
However, many bystanders are likely to miss that subtlety. They see Nonlinear being accused, but don’t share Lightcone’s specific norms and culture.
So many readers, tracking the social momentum, walk away with the low-dimensional bottom line conclusion “Boo Nonlinear!”, but without particularly tracking Ben’s cruxes.
E.g. they have the takeaway “it’s irresponsible to date or live with your coworkers, and only irresponsible people do that” instead of “some people in the ecosystem hold that suppressing negative info about your org is a major violation.”
And importantly, it means in practice, Nonlinear is getting unfairly punished for some behaviors that are actually quite common in the EA subculture.
This creates a dynamic analogous to “There are so many laws on the books that technically everyone is a criminal. So the police/government can harass or imprison anyone they choose, by selectively punishing crimes.” If enough social momentum gets mounted against an org, they can be lambasted for things that many orgs are “guilty” of[1], while the other orgs get off scot-free.
And furthermore, this creates unpredictability. People can’t tell whether their version of some behavior is objectionable or not.
So overall, Ben might be accusing Nonlinear for principled reasons, but to many bystanders, this is indistinguishable from accusing them, by fiat, of pretty common EA behaviors. Which is a pretty scary precedent!
Am I understanding correctly?
[1] “guilty” is in quotes to indicate the ambiguity about whether the behaviors in question are actually bad or blameworthy.
Crossposted from LessWrong (link)
Maybe I’m missing something, but it seems like it should take less than an hour to read the post, make a note of every claim that’s not true, and then post that list of false claims, even if it would take many days to collect all the evidence that shows those points are false.
I imagine that would be helpful for you, because readers are much more likely to reserve judgement if you listed which specific things are false. Personally, I could look over that list and say “oh yeah, number 8 [or whatever] is cruxy for me. If that turns out not to be true, I think that substantially changes my sense of the situation,” and I would feel actively interested in what evidence you provide regarding that point later. And it would let you know which points to prioritize refuting, because you would know which things are cruxy for people reading.
In contrast, a generalized bid to reserve judgement because “many of the important claims were false or extremely misleading”...well, it just seems less credible, and so leaves me less willing to actually reserve judgement. Indeed, deferring on producing such a list of claims-you-think-are-false suggests the possibility that you’re trying to “get your story straight,” i.e. that you’re taking the time now to hurriedly go through and check which facts you and others will be able to prove or disprove, so that you know which things you can safely lie or exaggerate about, or what narrative paints you in the best light while still being consistent with the legible facts.
(2) I think something odd about the comments claiming that this post is full of misinformation is that they don’t correct any of the misinformation. Like, I get that assembling receipts, evidence, etc. can take a while, and writing a full rebuttal of this would take a while. But if there are false claims in the post, pick one and say why it’s false.
Seconding this.
I would be pretty interested to read a comment from Nonlinear folks listing out everything that they believe to be false in the narrative as stated, even if they can’t substantiate their counter-claims yet.
I recommend that you use a spoiler tag for that last part. Not everyone who wants to has finished the story!
I imagine that most of the disagreement is with the (implied, but not stated) conditional “that Owen did this means that decent men don’t exist”.
I want to know if you can find more people or companies that have experienced a similar thing with the FDA.
Is there a reddit or discussion forum where people discuss and commiserate about FDA threats like this one? Can you find people there, and then verify that they / their experiences are real?
As a naive outsider, it seems to me like all of the specific actions you suggest would be stronger and more compelling if you can muster a legitimate claim that this is a pattern of behavior and not just a one-off. An article with one source making an accusation is more than 3x less credible than an article with 3 sources making the same accusation, for instance.
And if this is just a one-off, then it seems a lot less concerning, and taking action seems much less pressing. (Though it seems much easier to verify that this is a pattern, by finding other people in a similar situation to yours, than to verify that it isn’t, since there are incentives to be quiet about this sort of thing).
I know that I was wrong because people of the global majority continuously speak in safe spaces about how they feel unsafe in EA spaces. They speak about how they feel harmed by the kinds of things discussed in EA spaces. And they speak about how there are some people — not everyone, but some people — who don’t seem to be just participating in open debate in order to get at the truth, but rather seem to be using the ideals of open discussion to be a cloak that can hide their intent to do harm.
I’m not sure what to say to this.
Again, just because someone claims to feel harmed by some thread of discourse, that can’t be sufficient grounds to establish a social rule against it. But I am most baffled by this...
And they speak about how there are some people — not everyone, but some people — who don’t seem to be just participating in open debate in order to get at the truth, but rather seem to be using the ideals of open discussion to be a cloak that can hide their intent to do harm.
Um. Yes? Of course? It’s pretty rare that people are in good faith and sincerely truth-seeking. And of course there are some bad actors, in every group. And of course those people will be pretending to have good intentions. Is the claim that in order to feel safe, people need to know that there are no bad actors? (I think that is not a good paraphrase of you.)
We need a diverse set of people to at least feel safe in our community.
Yeah. So the details here matter a lot, and if we operationalize, I might change my stance here. But on the face of this, I disagree. I think that we want people to be safe in our community and that we should obviously take steps to ensure that. But it seems to be asking too much to ensure that people feel safe. People can have all kinds of standards regarding what they need to feel safe, and I don’t think that we are obligated to cater to them just because they are on the list of things that some segment of people need in order to feel safe.
Especially if one of the things on that list is “don’t openly discuss some topics that are relevant to improving the world.” That is what we do. That’s what we’re here to do. We should sacrifice pretty much none of the core point of the group to be more inclusive.
“How much systemic racism is there, what forms does it take, and how does it impact people?” are actually important questions for understanding and improving the world. We want to know if there is anything we can do about it, and how it stacks up against other interventions. Curtailing that discussion is not a small or trivial ask.
(In contrast, if using people’s preferred pronouns, or serving vegan meals at events, or not swearing, or not making loud noises, etc. helped people feel safe and/or comfortable, and they are otherwise up for our discourse standards, I feel much more willing to accommodate them. Because none of those compromise the core point of the EA community.)
...Oh. I guess one thing that seems likely to be a crux: “...if we are to succeed in truly achieving effective altruism at scale...”
I am not excited about scaling EA. If I thought that trying to do EA at scale was a good idea, then I would be much more interested in having different kinds of discussions in push and pull media.
Some speech is harmful. Even speech that seems relatively harmless to you might be horribly upsetting for others. I know this firsthand because I’ve seen it myself.
I want to distinguish between “harmful” and “upsetting”. It seems to me that there is a big difference between shouting ‘FIRE’ in a crowded theater or “commanding others to do direct harm” on the one hand, and “being unable to focus for hours” after reading a Facebook thread or being exhausted from fielding questions on the other.
My intuitive grasp of these things has it that the “harm” of the first category is larger than that of the second. But even if that isn’t true, and the harm of reading racist stuff is as bad as literal physical torture, there are a number of important differences.
For one thing, the speech acts in the first category have physical, externally legible bad consequences. This matters, because it means we can have rules around those kinds of consequences that can be socially enforced without those rules being extremely exploitable. If we adopt a set of discourse rules that say “we will ban any speech act that produces significant emotional harm”, then anyone not acting in good faith can shut down any discourse that they don’t like by claiming to be emotionally harmed by it. Indeed, they don’t even need to be consciously malicious (though of course there will be some explicitly manipulative bad actors); this creates a subconscious incentive to be and act more upset than you might otherwise be by some speech-acts, because if you are sufficiently upset, the people saying things you don’t like will stop.
Second, I note that both of the examples in the second category are much easier to avoid than those in the first category. If there are Facebook threads that drain someone’s ability to focus for hours, it seems pretty reasonable for that person to avoid such Facebook threads. Most of us have some kind of political topics that we find triggering, and a lot of us find that browsing Facebook at all saps our motivation. So we have workarounds to avoid that stuff. These workarounds aren’t perfect, and occasionally you’ll encounter material that triggers you. But it seems way better to have that responsibility be on the individual. Hence the idea of safe spaces in the first place.
Furthermore, there are lots of things that are upsetting (for instance, that there are people dying of preventable malaria in the third world right now, and that this, in principle, could be stopped if enough people in the first world knew and cared about it, or that the extinction of humanity is plausibly imminent), which are nevertheless pretty important to talk about.
I think this comment says what I was getting at in my own reply, though more strongly.
First of all, I took this comment to be sincere and in the spirit of dialog. Thank you and salutations.
[Everything that I say in this comment is tentative, and I may change my mind.]
Surely there exists a line at which we agree in principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel-manned Holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the Holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.
If that were actually happening, I would want to think more about the specific case (and talk directly to the people involved), but I’m inclined to bite the bullet of allowing that sort of conversation.
The main reason is that (I would guess, though you can say more about your state of mind) there is an implicit premise underlying the stance that we shouldn’t allow that kind of talk: namely, that “the Holocaust happened, and Holocaust denial is false”.
Now, my understanding is that there is an overwhelming historical consensus that the Holocaust happened. But the more I learn about the world, the more I discover that claims that I would have thought were absurd are basically correct, especially in politicized areas.
I am not so confident that the Holocaust happened, and especially that the Holocaust happened the way it is said to have happened, that I am willing to shut down any discussion to the contrary.
If they are making strong arguments for a false conclusion, then they should be countered with arguments, not social censure.
This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates...
In the situation where EAs are making such arguments not out of honest truth-seeking, but to play edge-lord / get attention / etc., I feel a lot less sympathetic. I would be more inclined to just tell them to cut it out in that case. (Basically, I would make the argument that they are doing damage for no gain.)
But mostly, I would say if any people in an EA group were threatening violence, racially motivated or otherwise, we should have a zero-tolerance policy. That is where I draw the line. (I agree that there is a bit of a grey area in the cases where someone is politely advocating for violent action down the line, e.g. the Marxist who has never personally threatened anyone, but is advocating for a violent revolution.)
...
Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn’t really apply.
I think so. I expect that any rigid rule is going to have edge cases, that are bad enough that you should treat them differently. But I don’t think we’re on the same page about what the relevant scalar is.
If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA?
It depends entirely on what is meant by “certain forms”, but on the face of it, I would not be okay with that. I expect that a lot of ideas and behaviors would get marked as “racist”, because that is a convenient and unarguable way to attack those ideas.
I would again draw the line at the threat of violence: if a student group got together to discuss how to harass some racial minority, even just as a hypothetical (they weren’t actually going to do anything), Eli-University would come down on them hard.
If a student group came together to discuss the idea of a white ethno-state, and the benefits of racial and cultural homogeneity, Eli-University would consider this acceptable behavior, especially if the epistemic norms of such a group are set high. (However, if I had past experience that such reading groups tended to lead to violence, I might watch them extra carefully.)
The ethno-state reading group is racist, and is certainly going to make some people feel uncomfortable, and maybe make them feel unsafe. But I don’t know enough about the world to rule out discussion of that line of thinking entirely.
...
I will report here that a large number of people I see talking in private Facebook groups, on private slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces.
I would love to hear more about the details there. In what ways do people not feel safe?
(Is it things like this comment?)
I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.
Yeah. I want to know more about this. What kind of harm?
My default stance is something like, “look, we’re here to make intellectual progress, and we gotta be able to discuss all kinds of things to do that. If people are ‘harmed’ by speech-acts, I’m sorry for you, but tough nuggets. I guess you shouldn’t participate in this discourse. ”
That said, if I had a better sense of what kind of harms are resulting, I might have a different view, or it might be more obvious where there are cheap tradeoffs to be made.
Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you’ll agree that these are questions of degree, not of kind.
Yep. I think I do, though I think that the indifference curve is extremely lopsided, for EA in particular.
...
I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed that I think we should lower it unthinkingly. But I do think that a counterbalancing force does exist: being so open to discussion of any kind that we completely alienate a section of people who otherwise would be participating in this space.
I’m tentatively suggesting that we should pay close to no attention to the possibility of alienating people, and just try to do our best to actually make progress on the intellectual project.
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.
I don’t follow how what you’re saying is a response to what I was saying.
I think a model by which people gradually “warm up” to “more advanced” discourse norms is false.
I wasn’t saying “the point of different discourse norms in different EA spaces is that it will gradually train people into more advanced discourse norms.” I was saying that if I was mistaken about that “warming up” effect, it would cause me to reconsider my view here.
In the comment above, I am only saying that I think it is a mistake to have different discourse norms at the core vs. the periphery of the movement.
I think there is a lot of detail and complexity here and I don’t think that this comment is going to do it justice, but I want to signal that I’m open to dialog about these things.
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
On the face of it, this seems like a bad idea to me. I don’t want “introductory” EA spaces to have different norms than advanced EA spaces, because I only want people to join the EA movement to the extent that they have very high epistemic standards. If people wouldn’t like the discourse norms in the central EA spaces, I don’t want them to feel comfortable in the more peripheral EA spaces. I would prefer that they bounce off.
To say it another way, I think it is a mistake to have “advanced” and “introductory” EA spaces, at all.
I am intending to make a pretty strong claim here.
[One operationalization I generated, but want to think more about before I fully endorse it: “I would turn away billions of dollars of funding to EA causes, if that was purchased at the cost of ‘EA’s discourse norms are as good as those in academia.’”]
Some cruxes:
I think what is valuable about the EA movement is the quality of its epistemic discourse, and almost nothing else matters (and to the extent that other factors matter, the indifference curve heavily favors better epistemology). If I changed my mind about that, it would change my view about a lot of things, including the answer to this question.
I think a model by which people gradually “warm up” to “more advanced” discourse norms is false. I predict that people will mostly stay in their comfort zone, and people who like discussion at the “less advanced” level will prefer to stay at that level. If I were wrong about that, I would substantially reconsider my view.
Large numbers of people at the fringes of a movement tend to influence the direction of the movement, and significantly shape the flow of talent to the core of the movement. If I thought that you could have 90% of the people identifying as EAs have somewhat worse discourse norms than we have on this forum without meaningfully impacting the discourse or action of the people at the core of the movement, I think I might change my mind about this.
He recently made this comment on LessWrong, which expresses some of his views on the harm that OP causes.