I don’t care if it is intentionally a con or not. Given that cons exist, the EA community needs an immune system that will reject them. The immune system has to respond to behavior, not intentions, because behavior is all we can see, and because good intentions are not protection from the effects of behavior.
I no longer believe things Gleb says. In the Facebook thread he made numerous statements that turned out to be fundamentally misleading. Maybe he wasn’t intentionally lying; I don’t know, I’m not psychic. But the immune system needs to reject people when the things they say turn out to be consistently misleading and a certain number of attempts at correction have failed.
I don’t think everyone needs to draw the line in the same place; I approve of people helping others after some have given up on them as a category, even if I think it’s not going to work in this case. But before you invest, I encourage you to write out what would make you give up. It can’t be “he admits he’s a scam artist”, because scam artists won’t do that, and because that may not be the problem. What amount of work, lack of improvement from him, and negative effects from his work and interactions would convince you that helping was no longer worth your time?
These are some really strong arguments, Elizabeth. They have a good chance of changing my mind. I don’t know whether I agree or disagree with you yet because I prefer to sleep on it when I might update about something important (certain processing tasks happen during sleep). I do know that you have made me think. It looks like the crux of a disagreement, if we have one, would be between one or both of the first two arguments vs. the third argument:
1.) EA needs a set of rules which cannot be gamed by con artists.
2.) EA needs a set of rules which prevent us from being seen as affiliated with con artists.
vs.
3.) Let’s not ban people and organizations who have good intentions.
A possible compromise between people on different sides would be:
Previously, there had been no rule about this. (Correct me if I’m wrong about this!) Therefore, we cannot say InIn broke any rule. Let’s make a rule limiting dishonest and misleading mistakes to a certain number per time period / number of promotional pieces / volunteers / whatever.*
If InIn breaks the new rule after it is made, then we’ll both agree they should be banned.
If you think they should be banned right now, whether there was an existing rule or not, please tell me why.
* Specifying a time period or whatever would prevent discrimination against the oldest, most prolific, or largest organizations simply because they made a greater total number of mistakes due to having a greater volume of output.
The ratio between mistakes and output seems really important to me. Thirty mistakes in ninety articles is really egregious because that’s a third. Three mistakes in three hundred articles is only 1%, which is about as close to perfection as one can expect humans to get.
Comparing 1/3 vs. 1/100 is comparing apples to oranges.
I’m not sure what the best limit is, but I hope you can see why I think this is an important factor; a toy sketch follows below. Maybe this was obvious to everyone who may read this comment. If so, I apologize for over-explaining!
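To make the footnote concrete, here is a minimal sketch of what a rate-based check could look like. It is purely illustrative: the 5% threshold, the helper function, and the extra sample numbers (beyond the two examples above) are hypothetical, not a real proposal.

```python
# Toy sketch of a rate-based rule. The 5% threshold is a made-up
# placeholder; a real rule would need a community-agreed number.

def mistake_rate(mistakes: int, output: int) -> float:
    """Fraction of published pieces that contained a mistake."""
    return mistakes / output if output else 0.0

THRESHOLD = 0.05  # illustrative limit: at most 1 mistake per 20 pieces

# The two examples above, plus a prolific organization and a tiny one.
for mistakes, output in [(30, 90), (3, 300), (15, 300), (9, 18)]:
    rate = mistake_rate(mistakes, output)
    verdict = "over the limit" if rate > THRESHOLD else "within the limit"
    print(f"{mistakes}/{output} = {rate:.0%} -> {verdict}")
```

Note that an absolute cap of, say, ten mistakes would ban the prolific organization (15 mistakes in 300 articles, a 5% rate) while passing the tiny one (9 mistakes in 18 articles, a 50% rate); the rate-based check reverses both verdicts, which is the whole point of the footnote.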
I have a bunch of different unorganized thoughts on this.
One, the absolute number is obviously the incorrect thing to use. Ratio is an improvement, but I feel it loses a lot of information. “Better wrong than vague” is a valuable community norm, and how people respond to criticism and new information is more important than whether they were initially correct. It also matters how public and formal the statement was: an article published in a mainstream publication is different from spitballing on Tumblr.
I’m unsure what you mean by “ban”. There is no governing body or defined EA group. There are people clustering around particular things. I think banning him from the FB group should be based on the expected quality of his contribution to the FB group, incorporating information from his writing elsewhere. Whether people give him money should depend on their judgement about how well the money will be used. Whether he attends or speaks at EAG should be based on his expected contribution. None of these are independent, but they can have different answers.
I don’t think any hard-and-fast rule would work, even if there were a body to choose and enforce it, because anything can be gamed.
What I want is for people to feel free to make mistakes, for other people to feel free to express concerns, and for proportionate responses to occur if the concerns aren’t addressed. I think the immune system is exactly the right metaphor. If a foreign particle enters your body, a lot of different immune molecules inspect it. Most will pass it by. Maybe one or two notice a concern; they attach to it and alert other immune molecules that they should maybe be concerned. This may go nowhere, or it may cause a cascading reaction targeting that specific foreign particle. If a lot of foreign particles show up, you may get an organ-wide reaction (a runny nose) or a whole-body one (a fever). The system coordinates without a coordinator.
Every time an individual talked to Gleb privately (which I’m told happened a lot), that was the immune system’s first bout. Then people complained publicly about specific things in specific posts here, on LessWrong, or on Facebook; that was the next step. I view the massive Facebook thread and the public letter as system-wide responses, necessary only because he did not adjust his behavior after the smaller steps. (Yes, he said he would, and yes, small things changed in the moment, but he kept making the same mistakes.) Even now, I don’t think you should be “banned” from helping him, if you’re making an informed choice. You’re an individual and you get to decide where your energy goes.
I do want to see changes in our immune system going forward. There is something of a halo effect around the big organizations, and I would like to see them criticized more often, and see them be more responsive to that criticism. Ben Hoffman’s series on GiveWell is exactly the kind of thing we need more of. I’d also like to see us be less rigorous in evaluating very new organizations, because heavy early scrutiny discourages people from trying new things. I’ve been guilty of this: I was pretty hard on Charity Science originally, and I still don’t think their fundraising was particularly effective, but they grew into Charity Entrepreneurship, which looks incredible.
I don’t think the consequences of Gleb’s actions should wait until there is a formal rule and he has had sufficient time to shoot himself in the foot, for a lot of reasons. One, I don’t think a formal rule and enforcement is possible. Two, I think the information he has been receiving for over a year should have been sufficient to produce acceptable behavior, so the chances he actually improves are quite small. Three, I think he is doing harm now, and I want to reduce that as quickly as possible.
I realize the lack of a hard-and-fast rule is harder for some people than for others, e.g. people on the autism spectrum. That’s sad and unfair and I wish it weren’t true. But as a community we’re objectively very welcoming to people on the spectrum, far more so than most, and in this particular case I think the costs of being more accommodating would outweigh the benefits.
There isn’t currently one, but Will is proposing setting up a panel: “Setting Community Norms and Values: A response to the InIn Open Letter.”
The panel wouldn’t have any direct power, but it would “assess potential egregious violations of those principles, and make recommendations to the community on the basis of that assessment.”
I’m glad we agree that the absolute number of mistakes is obviously an incorrect thing to use. :) I like your addition of “better wrong than vague” (though I am not sure exactly how you would implement it as part of an assessment beyond “if they’re always vague, be suspicious,” which doesn’t seem actionable).
Considering how people respond to criticism is important for at least two reasons. If you can communicate with the person, and they can change, this is far less frustrating and far less risky. A person you cannot figure out how to communicate with, or who does not know how to change the particular flaw, will not be able to reduce frustration or risk fast enough. People are going to lose their patience, or tally up the costs and benefits and decide that it’s too likely to be a net negative. This is totally understandable and totally reasonable.
I think the reason we don’t seem to have the exact same thoughts on that is my main goal in life: understanding how people work. This has included tasks like challenging myself to figure out how to communicate with people when that is very hard, and challenging myself to figure out how to change things about myself even when that is very hard. By practicing on challenging communication tasks, and by learning more about how human minds may work through my self-experiments, I have improved both my ability to communicate and my ability to understand the nature of conflicts between people and other people-related problems.
I think a lot of people reading these comments do feel bad for Gleb, or do acknowledge that some potential will be lost if EA rejects InIn, despite the high risk that InIn’s reputation problems will result in a net negative impact.
Perhaps the real crux of our apparent disagreement is something more like differing levels of determination / ability to communicate about problems and persuade people like Gleb to make all the specific necessary changes.
The way some appear to be seeing this is: “The community is fed up with InIn. Therefore, let’s take the opportunity to oust them.”
The way I appear to be seeing this is: “The community is fed up with InIn. Therefore, let’s take the opportunity to persuade InIn that they need to do enough two-way communication to understand how others think about reputation and promotion.”
Part of this is because I think Gleb’s ignorance about reputation and marketing is so deep that he didn’t see a need to spend a significant amount of time learning about them. Perhaps he is/was unaware of how much there is for him to learn. If someone could just convince him that there is a lot he needs to learn, he would be likely to make decisions such as: taking a break from promotion while he learns, granting someone knowledgeable veto power over promotion efforts that aren’t good enough, or hiring an expert and following all their advice.
(You presented a lot more worthwhile thoughts in your comment and I wish I could reply intelligibly to them all, but unfortunately, I don’t have the time to do all of these thoughts justice right now.)