Note: I am socially peripheral to EA-the-community and philosophically distant from EA-the-intellectual-movement; salt according to taste.
While I understand the motivation behind it, and applaud this sort of approach in general, I think this post and much of the public discussion I’ve seen around Gleb are charitable and systematic in excess of reasonable caution.
My first introduction to Gleb was Jeff’s August post, read before there were any comments up, and it seemed very clear that he was acting in bad faith and trying to use community norms (particular communication styles, owning up to mistakes, openness to feedback, etc.) to disarm those engaging honestly and enable the con to go on longer. I don’t think I’m an especially untrusting person (quite the opposite, really), but even if that’s the case, nearly every subsequently revealed detail and interaction confirmed this impression. Gleb responds to criticism he can’t successfully evade by addressing it in only the most literal and superficial manner, and continues on as before. It has reached the point that if I were Gleb, and had somehow honestly stumbled this many times and fallen into this pattern over and over, I would feel I had to withdraw, on the grounds that no one external to my own thought processes could reasonably take me seriously and that I clearly had a lot of self-improvement to do before engaging with a community like this in the future.
The responses to this behavior that I’ve seen are overwhelmingly of the form of taking Gleb seriously, giving him the benefit of the doubt where none should exist, providing feedback in good faith, and responding positively to the superficial signs Gleb gives of understanding. This is true even for people who I know have engaged with him before. I’m not completely confident of this, but the pattern looks like people are applying the standards of charity and forgiveness that would be appropriate for any one of these incidents in isolation, without taking into account that the overall pattern of behavior makes such charitable interpretations increasingly implausible. On top of that, some seem to have formed clear final opinions that Gleb is not acting in good faith, yet still use very cautious language and are hesitant to take a single step beyond what they can incontrovertibly demonstrate to third parties.
A few examples from this post, not trying to be comprehensive:
Using the word “concerns” in the title and introductory matter.
Noting that Gleb doesn’t “appear” to have altered his practices around name-dropping.
Saying “Tsipursky either genuinely believed posts like the above do not ask for upvotes, or he believed statements that are misleading on common-sense interpretation are acceptable providing they are arguably ‘true’ on some tendentious reading” without bringing up the possibility of him knowing exactly what he’s doing and just lying.
Calling Gleb’s self-proclaimed bestselling author status only “potentially” misleading.
Moreover, the fully comprehensive nature of the post and the painstaking lengths to which it goes to separate out definitely valid issues from potentially invalid ones seem to be part of the same pattern. No one, not even Gleb, is claiming that these instances didn’t happen or that he is being set up, yet this post seems to be taking on a standard appropriate for an adversarial court of law.
And this is a problem, because in addition to wasting people’s time it causes people less aware of these issues to take Gleb more seriously, encourages him to continue behaving as he has been, and, I suspect, in some cases inclines even the more knowledgeable people involved to trust Gleb too much in the future, despite whatever private opinions they may have of his reliability. At some point there needs to be a way for people to say “no, this is enough, we are done with you” in the face of bad behavior; in this case, if that is happening at all, it is being communicated behind the scenes or by people silently failing to engage. That makes it much harder for the community as a whole to respond appropriately.
I take your point as “aren’t we being too nice to this guy?”, but I actually really like the approach taken here, which seems extremely fair-minded and diligent. My suspicion is that this sort of thing is really valuable in the long term, because it establishes good norms for something that will likely recur in future. I’d be much more inclined to act with honesty if I believed people would do an extremely thorough public investigation into everything I’d said, rather than just calling me names and walking away.
I’d be much more inclined to act with honesty if I believed people would do an extremely thorough public investigation into everything I’d said, rather than just calling me names and walking away.
I don’t understand what you’re claiming here. Are you saying you’d be honest in a community if you thought it would investigate you a lot to determine your honesty, but dishonest otherwise? Why not just be honest in all communities, and leave the ones you don’t like?
I think he means that it is human behaviour to do that, not that he does it himself.
I literally still don’t understand. I can understand the motivation to be an asshole in communities you think won’t treat you fairly, but why be a lying asshole? I think the OP wrote “honesty” and meant something else.
I think the common point of intervention for people telling untruths is not holding themselves back when they don’t really have enough evidence. A person might be about to dash off a quick reply and, in most communities, know that they’re not going to be held accountable for any mischaracterisations of others’ opinions or for referring inaccurately to studies and data. In those communities, the comments are awful. In communities where you know that, if you do this over a sustained period, Carl Shulman, Jeff Kaufman, Oliver Habryka, Gregory Lewis and more are gonna write tens of thousands of words documenting your errors, you’ll be more likely to notice when you haven’t quite substantiated the comment you’re about to hit ‘send’ on.
There’s an important difference between repeatedly making errors, jumping to conclusions, or being attached to a preconceived notion (all of which I’ve personally done in front of Carl plenty of times), and the sort of behavior described in the OP, which seems more like intentional misrepresentation for the sake of climbing a social status gradient.
I’d like to agree partially with MichaelPlant and Paul_Crowley, insofar as I’m glad that I’m part of a community that responds to problems in such a charitable and diligent manner. However, I feel they missed the most important point of shlevy’s comment. Without arguing for a less fair-minded and thoughtful response, we can still ask the following: Gleb started InIn back in 2014; why did it take us two years to get to the point where we were able to call him out on his bad behaviour? This could’ve been called out much earlier.
I think the answer looks like this:
Firstly, Gleb has learned the in-group signals of communicating in good faith (for example, at every criticism he says he has “updated”, and he says ‘thank you’ for criticism). This alone is not a problem—it would merely take a few people to realise this and call it out, and then he could be asked to leave the community.
There’s a second part however, which is that once a person has learned (from experience) that Gleb is acting in bad faith, the next time that person comes to the discussion, everybody else sees the standard signals of good-faith communication, and as such the person may be hesitant to treat Gleb as they would treat someone else who was clearly acting in bad faith. This is because they would be seen as unnecessarily harsh by people without the background experiences—as was seen multiple times in the original Facebook thread, when people (who did not have the past experience with Gleb) were confused by the harshness of the criticism, and criticised the tone of the conversation. My guess for the fundamental reason that we are having this conversation now is that Jeff Kaufman bravely made his beliefs about Gleb common knowledge—he made a blog post about InIn, after which everyone else realised “Oh, everyone else believes this too. I’m not worried any more that everyone will think negatively of me for acting as though Gleb is acting in bad faith. I will now let out the piled-up problems I have with Gleb’s behaviour.”
To reiterate, it’s delightful to be part of a community that responds to this sort of situation by spending ~100s of hours (collectively) and ~100k words (I’m counting the original Facebook thread as well as the post here) analysing the situation and producing a considered, charitable yet damning report. However, it’s important to realise that there are communities out there in which Gleb would’ve been outed in months rather than years, and without wasting the time of many of the community’s top researchers.
I’m not sure what the correct norms to have are. I’d suggest that we should be more trusting that when someone in the community criticises someone else not in the community, they’re doing it for good reasons. However, writing that out is almost self-refuting—that’s what all insular communities are doing. Perhaps appointing a small group of moderators for the community whom we trust. That’s how good online communities often work; perhaps the model can be extended to the EA community (which is significantly more than just an online community). I certainly want to sustain the excellent norms of charity, diligence and respect that we currently have, something necessary to any successful intellectual project.
I just want to highlight that I feel like part of this post is based on a false premise; you mention InIn was started in 2014. While that may be true, all of the incidents in EA (and Less Wrong) circles cited above date to November 2015 or later. Gleb’s very first submission in the EA forum is in October 2015. By saying ‘it took two years’ and then talking about ‘months rather than years’ you give the impression that Gleb could have been excluded sometime back in 2015, and would already have been excluded elsewhere, which I think is pretty misleading (though presumably unintentionally so).
The truth is that it took a little over 9 months from Gleb’s first post to Jeff’s major public criticism. 9 months, and a decent amount of time, is not trivial. But let’s not overstate the problem.
“There’s a second part however, which is that once a person has learned (from experience) that Gleb is acting in bad faith, the next time that person comes to the discussion, everybody else sees the standard signals of good-faith communication, and as such the person may be hesitant to treat Gleb as they would treat someone else who was clearly acting in bad faith. This is because they would be seen as unnecessarily harsh by people without the background experiences—as was seen multiple times in the original Facebook thread, when people (who did not have the past experience with Gleb) were confused by the harshness of the criticism, and criticised the tone of the conversation.”
I do strongly agree with this. I had some very frustrating conversations around that thread.
Pretty much agree with you and shlevy here, except that spending hundreds of collective hours carefully checking that Gleb is acting in bad faith seems more like a waste to me.
If the EA community were primarily a community that functioned in person, it would be easier and more natural to deal with bad actors like Gleb; people could privately (in small conversations, then bigger ones, none of which involve Gleb) discuss and come to a consensus about his badness; that consensus could spread through other private conversations, smallish and then bigger, none of which involve Gleb; and people could either ignore Gleb until he goes away, just not invite him to stuff, or explicitly kick him out in some way.
But in a community that primarily functions online, where by default conversations are public and involve everyone, including Gleb, the above dynamic is a lot harder to sustain, and instead the default approach to ostracism is public ostracism, which people interested in charitable conversational norms understandably want to avoid. But just not having ostracism at all isn’t a workable alternative; sometimes bad actors creep into your community and you need an immune system capable of rejecting them. In many online communities this takes the form of a process for banning people; I don’t know how workable this would be for the EA community, since my impression is that it’s spread out across several platforms.
Seems worth establishing the fact that bad actors exist, will try to join our community, and will engage in this pattern of almost-plausibly-deniable, shamelessly bad behavior. I think EAs often have a mental block around admitting that in most of the world, lying is a cheap and effective strategy for personal gain; I think we make wrong judgments because we’re missing this key fact about how the world works. I think we should generalize from this incident, and having a clear record is helpful for doing so.
Yes! But… you said your opening line as though it disagreed somehow? I said:
it’s important to realise that there are communities out there in which Gleb would’ve been outed in months rather than years, and without wasting the time of many of the community’s top researchers.
I may be misinterpreting you here; you wrote
To reiterate, it’s delightful to be part of a community that responds to this sort of situation by spending ~100s of hours (collectively) and ~100k words (I’m counting the original Facebook thread as well as the post here) analysing the situation and producing a considered, charitable yet damning report.
and while I think this behavior is in some sense admirable, I think it is not delightful on net, and the huge waste of time it represents is bad except to the extent that it leads to better community norms around policing bad actors.
Yup, we are in agreement.
(I was just noting how sweet it was that we do this much more kindly than most other communities. It’s totally not optimal though.)
I’d suggest that we should be more trusting that when someone in the community criticises someone else not in the community, they’re doing it for good reasons. However, writing that out is almost self-refuting—that’s what all insular communities are doing.
Yes, insofar as communities do that, it’s typically in emotive and highly biased ways. EA at least has more constructive norms for how these things are discussed. It’s not perfect, and it’s not fast, but here I see people taking pains to be as fair-minded as they can be. (We achieve that to different degrees, but the effort is expected.)
Perhaps appointing a small group of moderators for the community whom we trust.
My System 1 doesn’t like this. Giving this power to a group of people and suggesting that we accept their guidance… that feels cultish, and not very compatible with a community of critical thinkers.
Scientific departments have ethics boards. Good online communities (e.g. Hacker News) have moderators. Society as a whole has a justice system as part of governance, and other groups that check on the decisions made by the courts. Suggesting that it feels cult-y to outsource some of our community norm-enforcement (so as to save the community as a whole significant time, and make the process more efficient and effective) is… I’m just confused every time someone calls something totally normal ‘cult-y’.
I deliberately said “My System 1 doesn’t like this.” and “that feels cultish” – on an intuitive level, I feel uncomfortable, and I’m trying to work out why. I do see value in having effective gatekeepers.
I’m not even sure what it means to be “banned” from a movement consisting of multiple organisations and many individuals. It may be that if the process is clearly defined, and we know who is making the decision, on whose behalf, I’d be more comfortable with it.
Thanks for clarifying!
Just in case you’re interested: I think the word ‘cultish’ is massively overloaded (with negative connotations) and misused. I’d also point out that saying that a statement is one’s gut feeling isn’t equivalent to saying one doesn’t endorse the feeling, and so I felt pretty defensive when you suggested my idea was cultish and not compatible with our community.
I wrote this because I thought you might prefer to know the impacts of your comments rather than not hearing negative feedback. My apologies in advance if that was a false assumption.
Thanks – helpful feedback (and from Owen also). In hindsight I would probably have kept the word “cultish” while being much more explicit about not completely endorsing the feeling.
Something went wrong with the communication channel if you ended up feeling defensive.
However, despite generally agreeing with you about problems with the word “cultish”, I actually think this is a reasonable use-case. It has a lot of connotations, and it was being reported that the description was triggering some of those connotations in the reader. That’s useful information; it may be worth some effort to avoid it being perceived that way if the idea is pursued (your stack of examples makes it pretty clear that this is avoidable).
I think being too nice is a failure mode worth worrying about, and your points are well taken. On the other hand, it seems plausible to me that the post does a more effective job of convincing the reader that Gleb is bad news precisely by demonstrating that this is the picture you get when all reasonable charity is extended.
Shlevy, I think I might actually agree with everything you said here with the exception of the characterization of Intentional Insights as a “con”.
I can see the behavior on the outside very clearly. On the outside, Gleb has said a long list of incorrect things.
On the inside, the picture is not so clear. What’s going on inside his head?
If this is a con, what in the world does he want? He can’t seem to make money off of this. Con artists have a tendency to do very, very quick things, with a very, very low amount of effort, hoping to gain some disproportionate reward. Gleb is doing the opposite. He has invested an enormous amount of time (not to mention a permanent Intentional Insights tattoo!) and (as far as I know) has been concerned about finances the whole time. He’s not making a disproportionate amount of money off of this… and spreading rationality doesn’t even look like one of those things which a con artist could quickly do for a disproportionate reward… so I am confused.
If I thought Intentional Insights was a con, I’d be right with you trying to make that more obvious to everyone… but I launched my con detector and that test was negative.
Maybe you use a different con detector. Maybe, to you, it is irrelevant whether Gleb is intentionally malicious or merely incompetent. Perhaps you would use the word “con” either way just as people use the word “troll” either way.
For the same reasons that we should face the fact that there’s a major problem with the inaccuracies Intentional Insights outputs, I think we ought to label the problem we’re seeing with Intentional Insights as accurately as possible.
Whether Gleb is incompetent or malicious is really important to me. If Gleb is doing this because of a learning disorder, I would really like to see more mercy. According to Wikipedia’s page on psychological trauma, there are a lot of things about this post which Gleb may be experiencing as traumatic events. For instance: humiliation, rejection, and major loss. (https://en.wikipedia.org/wiki/Psychological_trauma)
As some kind of weird hybrid between a bleeding heart and a shrewd person, I can’t justify anything but minimizing the brutality of a traumatic event for someone with a learning disorder, no matter how destructive it is. At the same time, I agree that ousting destructive people is a necessity if they won’t or can’t change, but I think in the case of an incompetent person, there are a lot of ways in which the community has been too brutal. In the event of a malicious con, we’ve been too charitable, and I’m guilty of this as well. If Gleb really is a con artist, we should be removing him as fast as possible. I just don’t see strong evidence that the problem he has is intentional, nor does it even seem to be clearly differentiated from terrible social skills and general ignorance about marketing.
Our response is too brutal for someone with a learning disorder or other form of incompetence, and it’s too charitable for a con artist. In order to move forward, I think perhaps we ought to stop and resolve this disagreement.
Here’s what’s at stake: currently, I intend to advocate for an intervention*. If you convince me that he is a con artist, I will abandon this intent and instead do what you are doing. I’ll help people see the con.
* By intervention, I mean: encouraging everyone to tell Gleb we require him to shape up or ship out, and to negotiate things like what we mean by “shape up” and how we would like him to minimize risk while he is improving. If he has a learning disorder, a bit of extra support could go a long way if the specific problems are identified so the support can target them accurately. I suspect that Gleb needs to see a professional for a learning disorder assessment, especially for Asperger’s.
I’m open to being convinced that Intentional Insights actually does qualify as some type of con or intends net negative destructive behavior. I don’t see it, but I’d like to synchronize perspectives, whether I “win” or “lose” the disagreement.
I don’t think incompetent and malicious are the only two options (I wouldn’t bet on either as the primary driver of Gleb’s behavior), and I don’t think they’re mutually exclusive or binary.
Also, the main job of the EA community is not to assess Gleb maximally accurately at all costs. Regardless of his motives, he seems less impactful and more destructive than the average EA, and he improves less per unit of feedback than the average EA. Improving Gleb is low on tractability, low on neglectedness, and low on importance. Spending more of our resources on him unfairly privileges him, betrays the world, and forsakes the good we can do in it.
Views my own, not my employer’s.
That was a truly excellent argument. Thank you.
Thanks Kathy!
Witch hunting and attacks do nothing for anyone.
yet this post seems to be taking on a standard appropriate for an adversarial court of law.
Which is fine.
And this is a problem, because in addition to wasting people’s time it causes people less aware of these issues to take Gleb more seriously, encourages him to continue behaving as he has been, and, I suspect, in some cases inclines even the more knowledgeable people involved to trust Gleb too much in the future, despite whatever private opinions they may have of his reliability. At some point there needs to be a way for people to say “no, this is enough, we are done with you” in the face of bad behavior; in this case, if that is happening at all, it is being communicated behind the scenes or by people silently failing to engage. That makes it much harder for the community as a whole to respond appropriately.
People can look at clear and concise summaries like the one above and come to their own conclusion. They don’t need to be told what to believe and they don’t need to be led into a groupthink.
Attacking people who are bad protects other people in the community from having their time wasted or being hurt in other ways by bad people. Try putting yourself in the shoes of the sort of people who engage in witch hunts because they’re genuinely afraid of witches, who, if they existed, would be capable of and willing to do great harm.
To be clear, it’s admirable to want to avoid witch hunts against people who aren’t witches and won’t actually harm anyone. But sometimes there really are witches, and hunting them is less bad than not.
People can look at clear and concise summaries like the one above and come to their own conclusion. They don’t need to be told what to believe and they don’t need to be led into a groupthink.
This approach doesn’t scale. Suppose the EA community eventually identifies 100 people at least as bad as Gleb in it, and so generates 100 separate posts like this (costing, what, 10k hours collectively?) that others have to read and come to their own conclusions about before they know who the bad actors in the EA community are. That’s a lot to ask of every person who wants to join the EA community, not to mention everyone who’s already in it, and the alternative is that newcomers don’t know who not to trust.
The simplest approach that scales (both with the size of the community and with the size of the pool of bad actors in it) is to kick out the worst actors, so nobody has to spend any additional time or effort wondering about or figuring out how bad they are.
Attacking people who are bad protects other people in the community from having their time wasted or being hurt in other ways by bad people.
Yes, but Gleb isn’t actively hurting anyone. You need an ironclad rationale before deciding to just build a wall in front of people who you think are unhelpful.
This approach doesn’t scale.
Even if you could really have 100 people starting their own organizations related to EA… it’s not relevant. Just because it won’t scale doesn’t mean it’s not the right approach with 1 person. We might think that the time and investment now is worthwhile, whereas if there were enough questionable characters that we didn’t have the time to do this with all of them, then (and only then) we’d be compelled to scale back.
The problem is that Gleb is manufacturing false affiliations in the eyes of outsiders, and outsiders who only briefly glance at lengthy, polite documents like this one are unlikely to realize that’s what’s happening.
Gleb did lots of things and the post describes them, so it’s about more than just manufacturing false affiliations. The issue is not that the post is too long or contains too many details; that’s a silly thing to complain about. The issue is whether the post should be adversarial and whether it should manufacture a dominant point of view. The answer to that is no.