Yeah, I agree, I don’t think it’s worth the amount of contextualizing it would take to make this kind of info properly received and useful. I doubt that we can productively gossip in this forum in the way that, for example, you may have thought was needed for SBF. I think you and Ben have a rather complex worldview that explains why these incidents are very significant to you whereas superficially similar things that are common in EA are not (or are positive). I’m less concerned that weirdness will be discouraged and more concerned that people will be put on blast in a way that seems arbitrary to them and is hard for them to predict without, e.g., seeking your permission first if they don’t want to be called out on main later. Being called out is very damaging, and I don’t like the whiff of “you have nothing to fear if you have nothing to hide” that I’m getting in this comment section. It seems like the only defense against this kind of thing is never being successful enough to control money or hire employees.
I’m looking at maybe starting a new org in the next year and doing something that’s a little outside the Sequences morality (advocacy, involving politics and PR, and being in coalitions with not-fully-aligned people). I really think it’s not only right but the best thing for me to be doing, but posts like this make me nervous that I could be subject to public shame for good-faith disagreements and exploration. It feels extra shitty too because I tolerate other people doing pretty dumb things (from an organizational perspective) that I don’t do, like having sex parties with their (potential) coworkers, which for some reason are considered okay and not red flags for future misconduct in different domains.
Tbc, I think I have enough context to update usefully from this post. But I would guess less than 10% of readers do, and that a majority of readers will update somewhat negatively about Nonlinear for the bad/incomplete reasons I stated above. Your goal might not be fairness to Nonlinear, per se, and it doesn’t have to be; there are much bigger things at stake. But I think it’s harsh on them and chilling to others, and that cost should be weighed more heavily than I think you guys are weighing it, because you think you are just sharing openly (or something).
I want to try to paraphrase what I hear you saying in this comment thread, Holly. Please feel free to correct any mistakes or misframings in my paraphrase.
I hear you saying...
Lightcone culture has a relatively specific morality around integrity and transparency. Those norms are consistent, and maybe good, but they’re not necessarily shared by the EA community or the broader world.
Under those norms, actions like threatening your ex-employees’ career prospects to prevent them from sharing negative info about you are very bad, while in the broader culture a “you don’t badmouth me, I don’t badmouth you” ceasefire is pretty normal.
In this post, Ben is accusing Nonlinear of bad behavior. In particular, he’s accusing them of acting particularly badly (compared to some baseline of EA orgs) according to the integrity norms of Lightcone culture.
My understanding is that the dynamic here that Ben considers particularly egregious is that Nonlinear allegedly took actions to silence their ex-employees and prevent negative info from propagating. If all of the same events had occurred between Nonlinear, Alice, and Chloe, but without Nonlinear suppressing info about what happened after the fact, Ben would not have prioritized this.
However, many bystanders are likely to miss that subtlety. They see Nonlinear being accused, but don’t share Lightcone’s specific norms and culture.
So, tracking the social momentum, many readers walk away with the low-dimensional bottom-line conclusion “Boo Nonlinear!” without particularly tracking Ben’s cruxes.
E.g., they have the takeaway “it’s irresponsible to date or live with your coworkers, and only irresponsible people do that” instead of “some people in the ecosystem hold that suppressing negative info about your org is a major violation.”
And importantly, it means that in practice, Nonlinear is getting unfairly punished for some behaviors that are actually quite common in the EA subculture.
This creates a dynamic analogous to “there are so many laws on the books that technically everyone is a criminal, so the police/government can harass or imprison anyone they choose by selectively punishing crimes.” If enough social momentum gets mounted against an org, they can be lambasted for things that many orgs are “guilty” of[1], while the other orgs get off scot-free.
And furthermore, this creates unpredictability. People can’t tell whether their version of some behavior is objectionable or not.
So overall, Ben might be accusing Nonlinear for principled reasons, but to many bystanders this is indistinguishable from accusing them, by fiat, of pretty common EA behaviors. Which is a pretty scary precedent!
Am I understanding correctly?
[1] “guilty” in quotes to suggest the ambiguity about whether the behaviors in question are actually bad or guilt-worthy.
Yes, very good summary!
Ok. Given all that, is there a particular thing that you wish Ben (or someone) had done differently here? Or are you mostly wanting to point out the dynamic?
Yes, I think a lot of commenters are almost certainly making bad updates about how to judge or how to run an EA org off of this, or are using it to support their own pre-existing ideas around this topic.
This kinda stinks, but I do think it is what happens by default. I hope the next big org founder picks up more nuance than that, from somewhere else?
That said, I don’t think “callout / inventory of grievances / complaints” and “nuanced post about how to run an org better/fix the errors of your ways” always have to be the same post. That would be a lot to take on, and LessWrong is positioned at the periphery here, at best; doing information-gathering and sense-making from the periphery is really hard.
I feel like, for the next… week to month… it is primarily Nonlinear’s ball (...and/or whoever wants to fund them, or feels responsibility to provide oversight/rehab for them, if any do judge that worthwhile...) to shift the conversation towards “how to run things better.”
Given their currently demonstrated attitude, I am not starting out hugely optimistic here. But: I hope Nonlinear will rise to the occasion, and take the first stab at writing some soul-searching/error-analysis synthesis post that explains: “We initially tried THIS system/attitude to handle employees, in the era the complaints are from. We made the following (wrong in retrospect) assumptions. That worked out poorly. Now we try this other thing, and after trialing several things, X seems to go fine (see # other mentee/employee impressions). On further thought, we intend to make Y additional adjustment going forward. Also, we commit to avoiding situations where Z in the future. We admit that A looks sketchy to some, but we wish to signal that we intend to continue doing it, and defend that using logic B...”
I think giving Nonlinear the chance to show that they have thought through how to fix these issues/avoid generating them in the future would be good. They are in what should be the best position to know what has happened or to set up an investigation, and are probably the most invested in making sense of it. (Emotions and motivated cognition come with that, so it’s a mixed bag, sure. I hope public scrutiny keeps them honest.) They are also probably the only ones who have the ability to enforce or monitor a within-org change in policy, and/or to undergo some personal growth.
If Nonlinear is the one who writes it, this could be an opportunity to read a bit into how they are thinking about it, for others to reevaluate how much they expect past behavior and mistakes to continue to accurately predict their future behavior, and to judge how likely these people are to fix the genre of problems brought up here.
(If they do a bad job at this, or even just if they seem to have “missed a spot”: I do hope people will chime in at that point in the comments, with a bunch of more detailed and thoughtful models/commentary on how to run a weird, experimental, small EA org without this kind of problem emerging. I think burnout is common, but experiences this bad are rare, especially as a pattern.)
((If Nonlinear fails to do this at all: Maybe it does fall to other people to… “digest some take-aways for them, on behalf of the audience, as a hypothetical exercise?” IDK. Personally, I’d like to see what they come up with first.))
...I do currently think the primary take-away, that “this does not look like a good or healthy org for new EAs to do off-the-books work for; pls do not put yourself in that position,” looks quite solid. In the absence of a high-level “Dialogue in the Comments: Meta Summary Post” comment, though, I do kinda wish Ben would elevate, from the comments to a footnote, the fact that nobody seems to have brought up any serious complaints about Drew.