I want to point out that a libel law that is expensive to engage with does practically nothing to deter anonymized callout posts. You can’t sue someone you can’t identify.
Love it or hate it: the more harshly libel law is enforced, the more I expect similar things to be handled through fully anonymous or low-transparency channels instead of high-transparency ones. And in aggregate, I expect an environment high on libel suits to disincentivize transparent behavior and highly specific allegations (which risk de-anonymization) on the part of accusers more strongly than it incentivizes epistemic carefulness.
This is one reason to be against encouraging highly litigious attitudes that I haven’t yet seen mentioned, so I thought I’d briefly put it out there.
Yes, I think a lot of commenters are almost certainly making bad updates from this about how to judge or how to run an EA org, or are using it to support their own pre-existing ideas on the topic.
This kinda stinks, but I do think it is what happens by default. I hope the next big org founder picks up more nuance than that, from somewhere else?
That said, I don’t think a “callout / inventory of grievances / complaints” post and a nuanced “how to run an org better / fix the errors of your ways” post always have to be the same post. That would be a lot to take on, and LessWrong is positioned at the periphery here, at best; doing information-gathering and sense-making from the periphery is really hard.
For the next week to month or so, I view it as primarily Nonlinear’s ball (...and/or whoever wants to fund them, or feels responsible for providing oversight/rehab for them, if any do judge that worthwhile...) to shift the conversation towards “how to run things better.”
Given their currently demonstrated attitude, I am not starting out hugely optimistic here. But: I hope Nonlinear will rise to the occasion, and take the first stab at writing some soul-searching/error-analysis synthesis post that explains: “We initially tried THIS system/attitude to handle employees, in the era the complaints are from. We made the following (wrong in retrospect) assumptions. That worked out poorly. Now we try this other thing, and after trialing several things, X seems to go fine (see # other mentee/employee impressions). On further thought, we intend to make Y additional adjustment going forward. Also, we commit to avoiding situations where Z in the future. We admit that A looks sketchy to some, but we wish to signal that we intend to continue doing it, and defend that using logic B...”
I think giving Nonlinear the chance to show that they have thought through how to fix these issues, and how to avoid generating them in the future, would be good. They are in what should be the best position to know what happened or to set up an investigation, and they are probably the most invested in making sense of it. (Emotions and motivated cognition come with that, so it’s a mixed bag, sure; I hope public scrutiny keeps them honest.) They are also probably the only ones with the ability to enforce or monitor a within-org change in policy, and/or to undergo some personal growth.
If Nonlinear writes such a post, it could be an opportunity to read a bit into how they are thinking about this, for others to reevaluate how well they expect past behavior and mistakes to predict future behavior, and to judge how likely these people are to fix the genre of problems brought up here.
(If they do a bad job at this, or even just if they seem to have “missed a spot”: I do hope people will chime in at that point in the comments, with more detailed and thoughtful models/commentary on how to run a weird experimental small EA org without this kind of problem emerging. I think burnout is common, but experiences this bad are rare, especially as a pattern.)
((If Nonlinear fails to do this at all: Maybe it does fall to other people to… “digest some take-aways for them, on behalf of the audience, as a hypothetical exercise?” IDK. Personally, I’d like to see what they come up with first.))
...I do currently think the primary take-away, that “this does not look like a good or healthy org for new EAs to do work for off-the-books; pls do not put yourself in that position,” looks quite solid. In the absence of a high-level “Dialogue in the Comments: Meta Summary Post” comment, I do kinda wish Ben would elevate to a footnote the fact that nobody seems to have brought up any serious complaints about Drew, though.