Using “preoccupied” feels a bit strawmanny here. People using this situation as a way to enforce general conservatism in a naive way was one of the top concerns that kept coming up when I talked to Ben about the post and investigation.
The post has a lot of details that should allow people to build a more detailed model than “weird is bad”, but I don’t think it would be better for it to take a stronger stance on the causes of the problems it’s providing evidence for, since getting the facts out is IMO more important.
It would seem low-integrity by my standards to decline to pursue this case because I would be worried that people would misunderstand the facts in a way that would cause inconvenient political movements for me. It seems like a lot of people have a justified interest in knowing what happened here, and I don’t want to optimize hard against that, just because they will predictably learn a different lesson than I have. The right thing to do is to argue in favor of my position after the facts are out, not to withhold information like this.
Also, the key components of this story are IMO mostly the threats of retaliation and the associated information control, which I think mostly come across to readers (at least based on the comments I’ve seen so far), and which really don’t seem to have much to do with general weirdness. If anything, this kind of information control is more common in the broader world, where things like libel suits are more frequent.
The thing I think is potentially unfair is that ~Lightcone has its own strict morality about integrity violations. For instance, I don’t think that trying to control your reputation is, on its face, bad in the way that it seems to be a smoking gun to you. A lot of people reading this probably don’t either, but when it’s presented this way they see it as consistent with all the other infractions that were listed, and really damning.
I think integrity violations are dangerous and corrosive, and I think it’s a good impulse to share information like that when you have it, but despite all the words here, I don’t think that info is properly contextualized for most people to use it correctly. It easily comes across as a laundry list of reasons to cancel them rather than the calibrated honest reporting it’s trying to be.
This continues to feel quite a bit too micromanagy to me. Mostly these are the complaints that seemed significant to Ben (which also roughly aligned with my assessment).
The post was already like 100+ hours of effort to write. I don’t think “more contextualizing” is a good use of at least our time (though if other people want to do this kind of job and would do more of that, then that seems maybe fine to me).
Like, again, I think if some people want to update that all weirdness is bad, then that’s up to them. It is not my job, and indeed would be a violation of what I consider cooperative behavior, to filter evidence so that the situation here only supports my (or Ben’s) position about how organizations should operate.
Yeah, I agree, I don’t think it’s worth the amount of contextualizing it would take to make this kind of info properly received and useful. I doubt that we can productively gossip in this forum in the way that, for example, you may have thought was needed for SBF. I think you and Ben have a rather complex worldview that explains why these incidents are very significant to you whereas superficially similar things that are common in EA are not (or are positive). I’m less concerned that weirdness will be discouraged and more concerned that people will be put on blast in a way that seems arbitrary to them and is hard for them to predict without, e.g., seeking your permission first if they don’t want to be called out on main later. Being called out is very damaging, and I don’t like the whiff of “you have nothing to fear if you have nothing to hide” that I’m getting in this comment section. It seems like the only defense against this kind of thing is never being successful enough to control money or hire employees.
I’m looking at maybe starting a new org in the next year and doing something that’s a little outside the Sequences morality (advocacy, involving politics and PR and being in coalitions with not-fully-aligned people). I really think it’s not only right but the best thing for me to be doing, but posts like this make me nervous that I could be subject to public shame for good faith disagreements and exploration. Feels extra shitty too because I tolerate other people doing pretty dumb things (from an organizational perspective) that I don’t do, like having sex parties with my (potential) coworkers, which for some reason are considered okay and not red flags for future misconduct in different domains.
Tbc, I think I have enough context to update usefully from this post. But I would guess less than 10% of readers do, and that a majority of readers will update somewhat negatively about Nonlinear for the bad/incomplete reasons I stated above. Your goal might not be fairness to Nonlinear, per se, and it doesn’t have to be. There are much bigger things at stake. But I think it’s harsh on them and chilling to others, and that cost should be weighed more heavily than I think you guys are weighing it because you think you are just sharing openly (or something).
I want to try to paraphrase what I hear you saying in this comment thread, Holly. Please feel free to correct any mistakes or misframings in my paraphrase.
I hear you saying...
Lightcone culture has a relatively specific morality around integrity and transparency. Those norms are consistent, and maybe good, but they’re not necessarily shared by the EA community or the broader world.
Under those norms, actions like threatening your ex-employees’ career prospects to prevent them from sharing negative info about you are very bad, while in the broader culture a “you don’t badmouth me, I don’t badmouth you” ceasefire is pretty normal.
In this post, Ben is accusing Nonlinear of bad behavior. In particular, he’s accusing them of acting particularly badly (compared to some baseline of EA orgs) according to the integrity norms of Lightcone culture.
My understanding is that the dynamic here that Ben considers particularly egregious is that Nonlinear allegedly took actions to silence their ex-employees and prevent negative info from propagating. If all of the same events had occurred between Nonlinear, Alice, and Chloe, minus Nonlinear suppressing info about what happened after the fact, Ben would not have prioritized this.
However, many bystanders are likely to miss that subtlety. They see Nonlinear being accused, but don’t share Lightcone’s specific norms and culture.
So, many readers, tracking the social momentum, walk away with the low-dimensional bottom-line conclusion “Boo Nonlinear!”, but without particularly tracking Ben’s cruxes.
E.g., they have the takeaway “it’s irresponsible to date or live with your coworkers, and only irresponsible people do that” instead of “some people in the ecosystem hold that suppressing negative info about your org is a major violation.”
And importantly, it means that, in practice, Nonlinear is getting unfairly punished for some behaviors that are actually quite common in the EA subculture.
This creates a dynamic analogous to “There are so many laws on the books that technically everyone is a criminal, so the police/government can harass or imprison anyone they choose by selectively punishing crimes.” If enough social momentum gets mounted against an org, they can be lambasted for things that many orgs are “guilty” of[1], while the other orgs get off scot-free.
And furthermore, this creates unpredictability. People can’t tell whether their version of some behavior is objectionable or not.
So overall, Ben might be accusing Nonlinear for principled reasons, but to many bystanders this is indistinguishable from accusing them, by fiat, of pretty common EA behaviors. Which is a pretty scary precedent!
Am I understanding correctly?
[1] “guilty” in quotes to suggest the ambiguity about whether the behaviors in question are actually bad or guiltworthy.
Yes, very good summary!
Ok. Given all that, is there a particular thing that you wish Ben (or someone) had done differently here? Or are you mostly wanting to point out the dynamic?
Yes, I think a lot of commenters are almost certainly making bad updates about how to judge or how to run an EA org off of this, or are using it to support their own pre-existing ideas around this topic.
This kinda stinks, but I do think it is what happens by default. I hope the next big org founder picks up more nuance than that, from somewhere else?
That said, I don’t think “callout / inventory of grievances / complaints” and “nuanced post about how to run an org better/fix the errors of your ways” always have to be the same post. That would be a lot to take on, and LessWrong is positioned at the periphery here, at best; doing information-gathering and sense-making from the periphery is really hard.
For the next… week to month… I view it as primarily Nonlinear’s ball (...and/or whoever it is who wants to fund them, or feels responsibility to provide oversight/rehab for them, if any do judge that worthwhile...) to shift the conversation towards “how to run things better.”
Given their currently demonstrated attitude, I am not starting out hugely optimistic here. But: I hope Nonlinear will rise to the occasion, and take the first stab at writing some soul-searching/error-analysis synthesis post that explains: “We initially tried THIS system/attitude to handle employees, in the era the complaints are from. We made the following (wrong in retrospect) assumptions. That worked out poorly. Now we try this other thing, and after trialing several things, X seems to go fine (see # other mentee/employee impressions). On further thought, we intend to make Y additional adjustment going forward. Also, we commit to avoiding situations where Z in the future. We admit that A looks sketchy to some, but we wish to signal that we intend to continue doing it, and defend that using logic B...”
I think giving Nonlinear the chance to show that they have thought through how to fix these issues/avoid generating them in the future would be good. They are in what should be the best position to know what happened or to set up an investigation, and are probably the most invested in making sense of it (emotions and motivated cognition come with that, so it’s a mixed bag, sure; I hope public scrutiny keeps them honest). They are also probably the only ones who have the ability to enforce or monitor a within-org change in policy, and/or to undergo some personal growth.
If Nonlinear is the one who creates it, this could be an opportunity to read a bit into how they are thinking about it, and for others to reevaluate how much they expect past behavior and mistakes to continue to accurately predict their future behavior, and judge how likely these people are to fix the genre of problems brought up here.
(If they do a bad job at this, or even just if they seem to have “missed a spot”: I do hope people will chime in at that point, with a bunch of more detailed and thoughtful models/commentary on how to run a weird experimental small EA org without this kind of problem emerging, in the comments. I think burnout is common, but experiences this bad are rare, especially as a pattern.)
((If Nonlinear fails to do this at all: Maybe it does fall to other people to… “digest some take-aways for them, on behalf of the audience, as a hypothetical exercise?” IDK. Personally, I’d like to see what they come up with first.))
...I do currently think the primary take-away, that “this does not look like a good or healthy org for new EAs to do work for off-the-books, pls do not put yourself in that position,” looks quite solid. In the absence of a high-level “Dialogue in the Comments: Meta Summary Post” comment, though, I do kinda wish Ben would elevate, from the comments to a footnote, the point that nobody seems to have brought up any serious complaints about Drew.
I do not want to actually do this, because I love Lightcone and I trust you guys, but would it help you understand if a redteamer wrote a post like this about your org? Would you be fine with all the donors that were turned off and the people who didn’t want to work with you because you had the stink of drama on you?
I think it depends a lot on what you mean by “a post like this”. Like, I do think I would just really like more investigation and more airing of suspicions around, and yeah, that includes people’s concerns with Lightcone.
I could see something like that working but probably in a different format. Maybe something closer to a social credit score voting/aggregation mechanism?
Still, the most upvoted comment on this post does seem to push in the direction of “weird is bad”:
This situation reminded me of this post, EA’s weirdness makes it unusually susceptible to bad behavior. Regardless of whether you believe Chloe and Alice’s allegations (which I do), it’s hard to imagine that most of these disputes would have arisen under more normal professional conditions (e.g., ones in which employees and employers don’t live together, travel the world together, and become romantically entangled).
Yep, not clear what to do about that. Seems kind of sad, and I’ve strong-downvoted the relevant comment. I don’t think it’s mine or Ben’s job to micromanage people’s models of how organizations should operate.
I share Holly’s appreciation for you all, and also the concern that Lightcone’s culture and your specific views of these problems don’t necessarily scale or translate well outside of rat spheres of influence. I agree that’s sad, but I think it’s good for people to update their own views with that in mind.
My takeaways from all this are fairly confidently the following:*
EA orgs could do with following more “common sense” in their operations.
For example,
1. Hire “normie” staff or contractors early on who are expected to know and enforce laws, financial regulations, and labor practices conscientiously, despite the costs of “red tape.” Build org knowledge and infrastructure for conscientious accounting, payroll, and contracting practices, like a “normal” non-profit or startup. After putting that in place, allow leaders to push back on red tape, but expect them to justify the costs of not following any “unnecessary” rules, rather than expecting junior employees to justify the costs of following rules.
2. Don’t frequently mention a world-saving mission when trying to convince junior staff to do things they are hesitant to do. Focus on object-level tasks and clear, org-level results instead. It’s fine to believe in the world-saving mission, obviously. But when you regularly invoke the potential for astronomical impact as a way to persuade junior staff to do things, you run a very high risk of creating manipulative pressure, suppressing disagreement, and short-circuiting their own judgment.
3. Do not live with your employees. Peers might be ok, but there’s too high a risk of entanglement when junior and senior staff live together.
4. Similarly, do not expect staff to be your “family” or tribe, nor treat them with familial intimacy. Expecting productivity is enough. Expect them to leave for other jobs regularly, for a lot of reasons. Wish them well; don’t take it personally.
I think these 4 guidelines would have prevented 90%+ of the problems Alice and Chloe experienced.
I expect we only agree on the 4th point?
[*I have not worked directly with anyone involved. I have, however, worked in a similar rat-based project environment that lacked ‘normal’ professional boundaries. It left me seriously hurt, bewildered, isolated, and with a deeply distressing blow to my finances and sense of self, despite everyone’s good intentions. I resonated with Alice and Chloe a lot, even without dealing with any adversarial views like those attributed to Emerson.
I think the guidelines above would have prevented about 70% of my distress.
I believe Richenda and Minh that they’ve had good experiences with Kat. I had many positive experiences too on my project. I think it’s possible to have neutral to positive experiences with someone with Kat’s views, but only with much better boundaries in place].