I think this article paints a fairly misleading picture, in a way that’s difficult for me to not construe as deliberate.
It doesn’t provide dates for most of the incidents it describes, even though many of them happened years ago, and thereby seems to imply that all the bad stuff brought up is ongoing. To my knowledge, no MIRI researcher has had a psychotic break in ~a decade. Brent Dill is banned from entering the group house I live in. I was told by a friend that Michael Vassar (the person who followed Sonia Joseph home and slept on her floor even though it made her uncomfortable, and also an alleged perpetrator of sexual assault) is barred from Slate Star Codex meetups.
The article strongly reads to me as if it’s saying that these things aren’t the case, that the various transgressors didn’t face any repercussions and remained esteemed members of the community.
Obviously it’s bad that people were assaulted, harassed, and abused at all, regardless of how long ago it happened. It’s probably good for people to know that these things happened. But the article seems to assume that all these things are still happening, and it seems to be drawing conclusions on the basis of those assumptions, e.g. that misogyny is currently ubiquitous in the community, that AI alignment is a toxic field to work in, that people are regularly having psychotic breaks, etc.
For what it’s worth, I think (as someone who has firsthand experience with EA and rationality in the Bay Area) that these things are untrue. But even if they were true, the author has not demonstrated that they possess the evidence needed to draw these conclusions, and yet they are trying to convince the reader of them.
It’s unsurprising that the people who were willing to allow Bloomberg to print their names or identifying information about the wrongdoers were associated with situations where the community has rallied against the wrongdoer. It’s also unsurprising that those who were met with criticism, retaliation, and other efforts to protect the wrongdoer were not willing to allow publication of potentially identifying information. Therefore, I don’t think it’s warranted to draw inferences about community response in the cases without identifying information based on the response in cases with that information.
It would be helpful if the article mentioned both the status of the wrongdoer at the time of the incident and their current status in the relevant community.
This comment is gold. I believe there is an iceberg effect here—EA cannot measure the number of times an accuser attempted to say something but got shut down or retaliated against by the community.
Personally, I would like to see discussion shift toward how to create a safe environment for all genders, and how to respond to accusers appropriately.
One book that I recommend is Citadels of Pride, which goes into institutional abuse in Hollywood and the arts scene. The patterns are similar: lack of clear boundaries between work/life, men in positions of power commanding capital, high costs to say anything, lack of feedback mechanisms in reporting. I am thankful that CEA is upping its efforts here; however, I also see that the amorphous nature (especially in the Bay Area) of these subcultures makes things difficult. It seems that most of the toxicity comes from the rationalist community, where there are virtually no mechanisms of feedback.
I am in touch with some of the women in the article, and they tell me that they feel safe speaking up now that they’re no longer associated with these circles and have built separate networks. However, I agree that EA is very heterogeneous and diffuse, so it’s important to understand which subnetworks they’re talking about. Unfortunately, that trades off against identifiability, and these women may not want to be identified, having tried to speak up before and been shut down in a traumatizing manner.
There is an extremely high cost in some of these subnetworks to naming an accuser. These subnetworks are usually male-dominated, and view any form of speaking up as malicious “cancellation.”
This is also why I find the LW response to be inappropriate. While many of these claims are “old news” to those communities, many others are fresh, and the LW community responded with a high level of dismissal. There was some reasoning about “baseline rates” (i.e., that the people in this article are power-law anomalies), but this reasoning is flawed because: a) sexual assault remains the most underreported crime, so there is likely an iceberg effect instead; b) women who were harassed or assaulted have left the movement, which changes your distribution; and c) women who would otherwise enter your movement now stay away due to whisper networks and bad vibes.
Finally, I want to note that there is a distinction between EA/LW as official “movements” and the ideology. Most of these comments focus on the movement (e.g. governance structures, mechanisms of feedback). That’s great to scrutinize, but we also need to examine the ideology, given that it commingles with Bay Area ideology, rationalists, etc.
Personally, when I was a teenager in EA (I am a woman), I found that there was little discussion about boundaries, emotional intelligence, healthy relationships, non-abusive dynamics, listening to your gut, etc. I think a lack of focus on these areas can make subnetworks in these communities systemically unequipped to deal with these issues.
Okay, so you have noted two possible types of victims:
People who reported and were met with community support (who you expect would feel comfortable using names)
People who reported and were met with criticism (who you expect would not use names)
I just want to (respectfully) flag that there is a possible third group and a possible fourth group of victims (and likely more possibilities too, tbh):
People who reported and were met with support, but who now want to continue to use a handled incident as proof of problems. These people might withhold names to avoid claims that they are dishonestly controlling the narrative.
People who never reported their incidents to the community at all, and therefore could not be met with either support or criticism. If I were in this fourth group of people who didn’t report, I would also not use my name when talking to a journalist, to avoid upset about taking a complaint public before allowing the community to actually handle something they might absolutely have wanted to handle.
I’m not saying I know what’s going on; it definitely looks like at least one person from group 2 is present. But I just want to note for readers that you can’t simplify the space of possible name-redacted victims into only one group.
With the exception of Brent, who is fully ostracized afaik, I think you seriously understate how much support these abusers still have. My model is, sadly, that a decent number of important rationalists and EAs just don’t care that much about the sort of behavior in the article. CFAR investigated Brent and stood by him until there was public outcry! I will repost what Anna Salamon wrote a year ago, long after his misdeeds were well known. Lots of people have been updating TOWARD Vassar:
I hereby apologize for the role I played in X’s ostracism from the community, which AFAICT was both unjust and harmful to both the community and X. There’s more to say here, and I don’t yet know how to say it well. But the shortest version is that in the years leading up to my original comment X was criticizing me and many in the rationality and EA communities intensely, and, despite our alleged desire to aspire to rationality, I and I think many others did not like having our political foundations criticized/eroded, nor did I and I think various others like having the story I told myself to keep stably “doing my work” criticized/eroded. This, despite the fact that attempting to share reasoning and disagreements is in fact a furthering of our alleged goals and our alleged culture. The specific voiced accusations about X were not “but he keeps criticizing us and hurting our feelings and/or our political support” — and nevertheless I’m sure this was part of what led to me making the comment I made above (though it was not my conscious reason), and I’m sure it led to some of the rest of the ostracism he experienced as well. This isn’t the whole of the story, but it ought to have been disclosed clearly in the same way that conflicts of interest ought to be disclosed clearly. And, separately but relatedly, it is my current view that it would be all things considered much better to have X around talking to people in these communities, though this will bring friction.
There’s broader context I don’t know how to discuss well, which I’ll at least discuss poorly:
Should the aspiring rationality community, or any community, attempt to protect its adult members from misleading reasoning, allegedly manipulative conversational tactics, etc., via cautioning them not to talk to some people? My view at the time of my original (Feb 2019) comment was “yes”. My current view is more or less “heck no!”; protecting people from allegedly manipulative tactics, or allegedly misleading arguments, is good — but it should be done via sharing additional info, not via discouraging people from encountering info/conversations. The reason is that more info tends to be broadly helpful (and this is a relatively fool-resistant heuristic even if implemented by people who are deluded in various ways), and trusting who can figure out who ought to restrict their info-intake how seems like a doomed endeavor (and does not degrade gracefully with deludedness/corruption in the leadership). (Watching the CDC on covid helped drive this home for me. Belatedly noticing how much something-like-doublethink I had in my original beliefs about X and related matters also helped drive this home for me.)
Should some organizations/people within the rationality and EA communities create simplified narratives that allow many people to pull in the same direction, to feel good about each others’ donations to the same organizations, etc.? My view at the time of my original (Feb 2019) comment was “yes”; my current view is “no — and especially not via implicit or explicit pressures to restrict information-flow.” Reasons for updates same as above.
It is nevertheless the case that X has had a tendency to e.g. yell rather more than I would like. For an aspiring rationality community’s general “who is worth ever talking to?” list, this ought to matter much less than the above. Insofar as a given person is trying to create contexts where people reliably don’t yell or something, they’ll want to do whatever they want to do; but insofar as we’re creating a community-wide include/exclude list (as in e.g. this comment on whether to let X speak at SSC meetups), it is my opinion that X ought to be on the “include” list.
Thoughts/comments welcome, and probably helpful for getting to shared accurate pictures about any of what’s above.
CFAR investigated Brent and stood by him until there was public outcry!
This says very bad things about the leadership of CFAR, and probably other CFAR staff (to the extent that they either agreed with leadership or failed to push back hard enough, though the latter can be hard to do).
It seems to say good things about the public that raised the outcry, which at the time felt to me like “almost everyone outside of CFAR”. Everyone* yelled at a venerable and respected org until they stopped doing bad stuff. Is this a negative update against EA/rationality, or a positive one?
*It’s entirely possible that there were private whisper networks supporting Brent/attacking his accusers, or even public posts defending him that I missed. But it felt to me like the overwhelming community sentiment was “get Brent the hell out of here”.
I think it’s a negative update, since lots of the people with bad judgment remained in positions of power. This remains true even if some people were forced out. AFAIK Mike Valentine was forced out of CFAR for his connections to Brent, in particular greenlighting Brent meeting with a very young person alone, though I don’t have proof of this specific incident. Unsurprisingly, even post-Brent, Anna Salamon defended including Michael Vassar.
To my knowledge, no MIRI researcher has had a psychotic break in ~a decade
It’s worth noting that the article was explicit that ex-MIRI researcher Jessica Taylor’s psychotic break was in 2017:
In 2017 she was hospitalized for three weeks with delusions that she was “intrinsically evil” and “had destroyed significant parts of the world with my demonic powers,” she wrote in her post.
She also alleged in December 2021 that at least two other MIRI employees had experienced psychosis in the past few years:
At least two former MIRI employees who were not significantly talking with Vassar or Ziz experienced psychosis in the past few years.
Re: the MIRI employees, it seems relevant that they’re “former” rather than current employees, given that you’d expect there to be more former than current employees, and former employees presumably don’t have MIRI as a major figure in their lives.
was told by a friend that Michael Vassar is barred from Slate Star Codex meetups.
He was banned, but still managed to slip through the cracks enough to be invited to an SSC online meetup in 2020. (To be very clear, this was not organised or endorsed by Scott Alexander, who did ban Vassar from his events.)
You can read the mea culpa from the organiser here. It really looks to me like Vassar has been treated with a missing-stair approach until very recently, where those in the know quietly disinvite him from things but others, even within the community, are unaware. Even in the comments here where some very harsh allegations are made against him, people are still being urged not to “ostracise” him, even though ostracism seems to me like an entirely appropriate response.
Neither Scott’s banning of Vassar nor the REACH banning was quiet. It’s just that there’s no process by which those people who organize Slate Star Codex meetups are made aware.
It turns out that plenty of people who organize Slate Star Codex meetups are not in touch with Bay Area community drama. The person who organized that SSC online meetup was from Israel.
Even in the comments here where some very harsh allegations are made against him
That’s because some of the harsh allegations don’t seem to hold up. Scott Alexander spent a significant amount of time investigating and came up with:
While I disagree with Jessica’s interpretations of a lot of things, I generally agree with her facts (about the Vassar stuff which I have been researching; I know nothing about the climate at MIRI). I think this post gives most of the relevant information mine would give. I agree with (my model of) Jessica that proximity to Michael’s ideas (and psychedelics) was not the single unique cause of her problems but may have contributed.
It’s just that there’s no process by which those people who organize Slate Star Codex meetups are made aware.
This definitely indicates a mishandling of the situation that leaves room for improvement. In a better world, somebody would have spotted the talk before it went ahead. As it stands, it (falsely) made it look like he was endorsed by SSC, which I hope we can agree is not something we want. We already know he’s been using his connection with Yud (via HPMOR) to try and seduce people.
With regards to the latter, if someone was triggering psychotic breaks in my community, I would feel no shame in kicking them out, even if it was unintentional. There is no democratic right to participate in one particular subculture. Ostracism is an appropriate response for far less than this.
I’m particularly concerned with the Anna Salamon statement that sapphire posted above, where she apologises to him for the ostracisation, and says she recommends inviting him to future SSC meetups. This is going in the exact wrong direction, and seems like an indicator that the rationalists are poorly handling abuse.
This definitely indicates a mishandling of the situation that leaves room for improvement.
I agree with that and do think that having a better system to share information would be good.
With regards to the latter, if someone was triggering psychotic breaks in my community, I would feel no shame in kicking them out, even if it was unintentional.
If Vassar tells someone about how the organization for which they are working is corrupt and the person Vassar is talking with considers his arguments persuasive, that’s going to be bad for their mental health.
Anna Salamon wrote that post because she believes that some arguments made about how CFAR was corrupt were reasonable arguments.
To the extent that the rationalist ideal makes sense, it includes not ostracising people for voicing uncomfortable truths, even if those truths are bad for the mental health of some people.
We already know he’s been using his connection with Yud (via HPMOR) to try and seduce people.
The seduction here is essentially “Look, I’m so bad that I served as the template for the evil villain.”
While “X is a bad boy” can be attractive to some women, it should be a very clear sign that he’s poor relationship material. It also shouldn’t be surprising to anyone when he actually turns out to be a bad boy in that relationship.
A woman who wants a relationship with a bad boy can find one, and it feels a bit paternalistic to say that a woman who wants that shouldn’t get the opportunity.
I do think there are good reasons not to have him at meetups but it’s a complex decision.
I think these were relatively quiet. The only public thing I can find about REACH is this post where Ben objects to it, and Scott’s listing was just as “Michael A” and then later “Michael V”.
Someone on the LessWrong crosspost linked this relevant thing: https://slatestarcodex.com/2015/09/16/cardiologists-and-chinese-robbers/
The “Chinese robber fallacy” is being overstretched, in my opinion. All it says is that having many examples of X behaviour within a group doesn’t necessarily prove that X is worse than average within that group. But that doesn’t mean it isn’t worse than average. I could easily imagine the Catholic Church throwing this type of link out in response to the first bombshell articles about abuse.
Most importantly, we shouldn’t be aiming for average; we should be aiming for excellence. And I think the poor response to a lot of the incidents described is pretty strong evidence that excellence is not being achieved on this matter.
I agree that we should be aiming for excellence.
If having many examples of behavior X within a group doesn’t show that the group is worse at that or better at that than average—if you expect to see the same thing in either case—then being presented with such a list has given you zero evidence on which to update.
They would have written the same article whether behavior X was half as common, twice as common, or vanishingly rare. They would have written the same article whether things were handled well or poorly, as shown by their misleading framing and their lies of omission. They had an ax to grind and they’ve ground it. We should be aiming for excellence, but when we get there (or if we’ve gotten there), it will do absolutely nothing to prevent people from writing these articles.
When someone goes looking for examples of X behavior, knowing that they’ll find a list, with the goal of damaging your reputation among third parties, being presented with the list does not seem to me a good impetus for paroxysms of soul-searching and finger pointing.
In the absence of evidence that rationalism is uniquely good at dealing with sexual harassment (it isn’t), the prior assumption about the level of misconduct should be “average”, not “excellent”. Which means that there is room for improvement.
Even if these stories do not update your beliefs about the level of misconduct in the communities, they do give you information about how misconduct is happening, and point to areas that can be improved. I must admit I am baffled as to why the immediate response seems to be mostly about attacking the media, instead of trying to use this new information to figure out how to protect your community.
I wonder if you both might just believe the same thing here? titotal, do you not think it possible that lumpyproletariat was offering that as one option out of many, as a sort of insurance that witch hunts against EA not begin? Witch hunts are very easy to start, especially once you know more details and feel you need them, but they are very hard to stop. So I think they agree with you and just wanted to drop in that reminder of the possibility, to help things go well, rather than claiming that base rates are necessarily the explanation. After all, you both want the community to do better and to be “aiming for excellence”.
[I agree it would have been better framed as one reason among several, though. I’ve been liking the allegory of the blind men and the elephant more for this myself.]
baffled as to why the immediate response seems to be mostly about attacking the media, instead of trying to use this new information to figure out how to protect your community.
I don’t really want to get involved in this thread beyond saying “I think you two agree,” so it’s okay if you consider this a tangent, but I’ll just flag that I think this bit isn’t accurate in character. What if the lessons were learned back then and the “immediate response” has actually passed? What about this response from the Community Health Team about an ongoing project to help clarify problems and reveal avenues for making the community safer, which probably implies that the rest of us don’t have much to do quite yet, except maybe help people be patient while that happens? Another option, if we want to go our own way, is actually to try to figure out the veracity of the media and other sources of info, because questioning the importance of various pieces of information would also likely be the first step in determining which potential interventions might do nothing, do amazingly, or do net harm. I wouldn’t recommend getting stuck on such selective data when better data should be coming soon, but you are certainly welcome to try to use this information to protect the community, and let us know if you think of something for us to do!
I think it’s good to both address sexual misconduct and to correct misleading context in media pieces. But if you only mention the latter, it gives the impression that the former doesn’t matter. I would highly encourage people who care about both to at least mention that you care about reducing the level of misconduct. It may sound like stating the obvious, but it really does matter.
While I certainly hope everyone cares about both, I can’t honestly say I believe that. Going through the lesswrong thread, it honestly looks to me like a lot of people genuinely don’t want to think about the issue at all, and I find this concerning. For example, downvoting the thread to 0 seems completely unwarranted.
None of this was news to the people who use LessWrong.
The time to have a conversation about what went wrong and what a community can do better is immediately after you learn that the thing happened. If you search for the names of the people involved, you’ll see that LessWrong did that at length.
The worst possible time to bring the topic up again is when someone writes a misleading article for the express purpose of hurting you, an article that was not written to be helpful and purposely lacks the context it would need in order to be helpful. Why would you give someone a button they can press to make your forum talk for weeks about nothing?
It was a low-quality article and was downvoted so fewer people saw it. I wish the same had happened here.
To anyone who had been paying attention, the Bloomberg piece was not an update on how misconduct has happened in EA.