LessWrong/Lightcone Infrastructure
Ruby
A Slow Guide to Confronting Doom
Maybe useful: “Latently controversial” – there’s no public controversy because people didn’t know about it, but if people had more information, there would be. I think this would perhaps be more the case with Manifest if the article hadn’t come out, but it’s still reasonable to consider Manifest to have some inherent potential “controversialness” given its choice of speakers.
I think if the only thing claiming controversy was the article, it might make sense to call that a fabricated/false claim by an outsider journalist. But given this post, the fact that many people either disapprove of or want to avoid Manifest, and also that Austin writes about consciously deciding to invite people they thought were edgy, I think it’s actually just a reasonable description.
And there’s a disanalogy there. Racism is about someone’s beliefs and behaviors, and I can’t change someone else’s beliefs and behaviors with a label. But controversy means people disagree, disapprove, etc., and someone can make someone else’s belief controversial just by disagreeing with it (or, if one disagreement isn’t enough to constitute controversy, a person contributes to it with their disagreement).
RobertM and I are having a “dialogue”[1] on LessWrong with a lot of focus on whether it was appropriate for this to be posted when it was, with the info collected so far (e.g. not waiting for Nonlinear’s response).
What is the optimal frontier for due diligence?
I think it matters a lot to be precise with claims here. If someone believes that any case of people with power over others asking them to commit crimes is damning, then all we need to establish is that this happened. If it’s understood that whether this was bad depends on the details, then we need to get into the details. Jack’s comment was not precise, so it felt important to disambiguate (and make the claim I think is correct).
Dialogue: What is the optimal frontier for due diligence?
There are a lot of dumb laws. Without saying it was right in this case, I don’t think that’s categorically a big red line.
Or if it’s mostly false, pick out the things you think are actually true, implying that you contest everything else!
I would think you could go through the post and list out 50 bullet points of what you plan to contest in a couple of hours.
My guess is it was enough time to say which claims you objected to and sketch out the kind of evidence you planned to bring. And Ben judged that your response didn’t indicate you were going to bring anything that would change his mind about whether the info he had was worth sharing. E.g. you seemed to focus on showing that Alice couldn’t be trusted, but Ben felt that this would not refute enough of the other info he had collected, and that the kinds of refutation (e.g. only a $50 fine for driving without a license, she brought back illegal substances anyway) were not compelling enough to change the conclusion that the info was worth sharing.
I do think one can make judgments from the meta info, and 3 hours is enough to get a lot of that.
I consider something of a missing mood on your part to be quite damning. From what I hear and see (Ben’s report of your call with him, how you’re responding publicly, the threat to Lightcone/Ben), you are overwhelmingly concerned with defending yourself and don’t seem contrite at all that people you employed feel so extremely hurt by their time with you. I haven’t heard you dispute their claims of hurt (do you think those are lies for some reason?); instead you’ve focused on the veracity of their reasons for being hurt. But do you think you’re causally entangled with them feeling hurt? If so, where is the apology, the contrition, the horror at yourself that they think being with you resulted in the worst months of their lives?
I’d understand a lack of that if your position was “they’re definitely lying about how they felt, probably for motivation X; give us time and we can prove that”, but this hasn’t been the nature of your response.
I actually would expect more “competent” uncompassionate people concerned only with their own reputation to have acted contrite, because it’d make the audience more sympathetic. That suggests you all aren’t very good at modeling people, which makes it more likely you weren’t modeling your employees’ experience very well either, perhaps resulting in a lot of harm from negligence more than malice (which still warrants sharing this info about you).
I think asking your friends to vouch for you is quite possibly okay, but that people should disclose there was a request.
It’s different evidence between “people who know you who saw this felt motivated to share their perspective” vs “people showed up because it was requested”.
I appreciate the frame of this post and the question it poses; it’s worth considering. The questions I’d want to address before fully buying it, though, are:
1) Are the standards of investigative journalism actually good for their purpose? Or did they get distorted along the way for the same reasons lots of regulated/standardized things do (e.g. building codes)?
2) Supposing they’re good for their purpose, do they really apply not in mainstream media, but rather in a smaller community?

In answering (2), I think we really do have a tricky false positive/false negative tradeoff. If you raise the bar for sharing critical information, you increase the likelihood of important info not getting shared. If you lower the bar, you increase the likelihood of false things getting out.
Currently, I think we should likely lower the bar, and that anyone (not saying you actually are) advocating higher levels of rigor before sharing is mistaken. EA has limited infrastructure for investigating and dealing with complaints like this (I doubt Ben/Lightcone collectively would have consciously decided upfront that it was worth 150 hours of Ben’s time; it kind of just happened/snowballed). We don’t have good means of soliciting and propagating complaints or getting things adjudicated. Given that, I think someone writing a blog post is pretty good, and pretty valuable.

If I’d been the one investigating and writing, I think I’d have published something much less thoroughly researched after 10-15 hours to say “I have some bad critical info I’m pretty sure of that’s worth people knowing, and I have no better way to get the right communal updates than just sharing”.
Some reasons to not say “Doomer”
Is that link correct?
I can follow that reasoning.
I think what you get with fewer, more dedicated people is people with the opportunity to build up a deep moderation philosophy and experience handling tricky cases. (Even after moderating for a really long time, I still find myself building those and benefitting from stronger investment.)
Quick thought after skimming, so forgive me if this was already addressed. Why is the moderator position for ~3 hours? Why not get full-time people (or at least half-time), or make 3 hours the minimum? Mostly I expect fewer people spending more time doing the task will be better than more people doing it for less.
I think this post falls short of arguing compellingly for the conclusion.
It brings one positive example of a successful movement that didn’t schism early on, and two examples of large movements that did schism and then had trouble.
I don’t think it’s illegitimate to bring suggestive examples rather than a systematic review of movement trajectories, but it should be admitted that cherry-picking three examples isn’t hard.
There’s no effort expended to establish equivalence between EA and its goals and Christianity, Islam, or Atheism at the gears level of what they’re trying to do. I could argue that they’re pretty different.
I seriously do not expect that an EA schism would result in bloodshed for centuries. Instead, it might save thousands of hours spent debating online.
The argument that “EA is too important” proves too much. I could just as easily say that because the stakes are so high, we can’t afford to have a movement containing people with harmful beliefs, and therefore it’s crucial that we schism and start fresh with people who have the True Spirit of EA or whatever.
This is not something I fault this post for not arguing about, but I’m personally inclined to think that “longtermist” EA should not have tried to become a mass movement (which is what the examples described are), and instead should have stayed relatively small and grown extremely slowly. I suspect many people are starting to wonder whether that’s true, and if so, people who want a smaller, more focused, weirder, “extreme” group of people collaborating should withdraw from the people who aspire to a welcoming, broadly palatable mass movement, and each group will get out of the other’s way.
There are historical reasons for why things developed the way they did, but I think it is clear there are some distinct cultural/worldview clusters in EA that have different models and values, and that aren’t united by enough to overcome that. I think that splitting might allow both groups to continue, rather than what would likely happen otherwise: one group just dissolving, or both groups dissolving except for a core of people who want to argue indefinitely.
What would convince me against splitting is if, no really, everyone here is united very strongly by some underlying core values and beliefs about the world, and we can make enough progress on the differences en masse. I’m skeptical, but it’s good to say what might convince you.
When I think about being part of the movement or not, I’m not asking whether I feel welcomed, valued, or respected. I want to feel confident that it’s a group of people who have the values, culture, models, beliefs, epistemics, etc. that mean being part of the group will help me accomplish more of my values than if I didn’t join it.
Or in other words, I’d rather push uphill to join an unwelcoming (perhaps very insular) group whose ability to do good I have confidence in, than join a group that is all open arms and validation but that I don’t think will get anything done (or will get negative things done).
And to be more bold, I think if a group is trying to be very welcoming, it will end up with a lot of members whom I doubt share my particular nuanced approach to doing good, and with whom I’m skeptical I can build trust and collaborate, because our worldviews and assumptions are just too different.
I think I agree with your clarification and was in fact conflating the mere act of speaking with strong emotion with speaking in a way that felt more like a display. Yeah, I do think it’s a departure from naive truth-seeking.
In practice, I think it is hard, in part for the second-order reasons you give and others. Perhaps an ideal is that people share strong emotion when they feel it, but in some kind of format/container/manner that doesn’t shut down discussion or get things heated. “NVC” style, perhaps, as you suggest.
Crossposting from LessWrong, since I think this is more common on the EA Forum.
At first blush, I find this common caveat amusing.
1. If there are errors, we can infer that those providing feedback were unable to identify them.
2. If the author was fallible enough to have made errors, perhaps they are fallible enough to miss errors in input sourced from others.
What purpose does it serve? Given it’s often paired with “credit goes to.. <list of names>”, it seems like an attempt to ensure that people providing feedback/input on a post are only exposed to upside from doing so, while the author takes all the downside reputational risk if the post is received poorly or exposed as flawed.
Maybe this works? It seems that as a capable reviewer/feedback-haver, I might agree to offer feedback on a poor post written by a poor author, perhaps pointing out flaws; my having given feedback on it might reflect poorly on my time allocation, but the bad output shouldn’t be assigned to me. Whereas if my name is attached to something quite good, it’s plausible that I contributed to that. I think that’s because it’s easier to help a good post be great than to save a bad post.
But these inferences seem like they’re there to be made and aren’t changed by what an author might caveat at the start. I suppose the author might want to remind the reader of them rather than make them true through an utterance.
Upon reflection, I think (1) doesn’t hold: the reviewers/input-givers might be aware of the errors but be unable to save the author from them. As for (2), that the reviewers made mistakes that have flowed into the piece seems all the more likely the worse the piece is overall, since we can update that the author wasn’t likely to catch them.
On the whole, I think I buy the premise that we can’t update too negatively on reviewers and feedback-givers from their having deigned to give feedback on something bad, though their time allocation is suspect. Maybe they’re bad at saying no, maybe they’re bad at dismissing people’s ideas that aren’t that good, maybe they have hope for this person. Unclear. Upside I’m more willing to attribute.
Perhaps I would replace the “errors are my own[, credit goes to]” with a reminder or pointer that these are the correct inferences to make. The words themselves don’t change them? Not sure, haven’t thought about this much.
Edited to add: I do think “errors are my own” is a very weird kind of social move that’s being performed in an epistemic context, and I don’t like it.