I think you’re doing the thing shlevy described about being way too charitable to Gleb here. Outside view, the simplest hypothesis that explains essentially everything observed in the original post is that Gleb is an aggressive self-promoter who takes advantage of EA conversational norms to milk the EA community for money and attention.
It might be useful to reflect a little on what being manipulated feels like from the inside. An analogous dynamic in a relationship might be Alice trying very hard to understand why Bob sometimes behaves in ways that make her uncomfortable, hypothesizing that maybe it’s because Bob had a difficult childhood and finds it hard to get close to people… all the while ignoring that, on the outside view, the simplest hypothesis that explains all of Bob’s behavior is that he is manipulating her into giving him sex and affection. It’s in some sense admirable for Alice to try to be charitable about Bob’s behavior, but at some point 1) Alice is incentivizing terrible behavior on Bob’s part, and 2) the personal cost to Alice of putting up with Bob’s shit is terrible and she shouldn’t have to pay it.
I think Kathy’s perspective is probably overly optimistic, and yours is probably overly pessimistic, Qiaochu. There are a lot of grey-area options in between being a scrupulously honest and responsive-to-criticism altruist who just has a poor model of status dynamics, and being an “aggressive self-promoter” who just wants “money and attention”. If I were forced to guess, I’d guess what’s probably going on is some thought process like:
“I’m convinced that EA outreach has massive potential upside if done well enough, and minimal downside even if done poorly.”
“I think I have a lot of good outreach skills and know-how, and while I’m not perfect, I’m sufficiently good at ‘updating’ and accepting criticism that I’m likely to improve a lot over time.”
“Therefore InIn’s long-run value is huge no matter how many small hiccups there are at the moment.”
“The upside is so large and the need so great that some amount of dishonesty is justified for the greater good. Or, if not dishonesty: emphasizing the good over the bad; not always being fully forthcoming; etc. Not being too stringent about which exact means you use, as long as you aren’t literally injuring anyone and as long as the ends are sufficiently good.”
All of these claims are questionable in this case: the upside of EA outreach may depend a lot on who we’re reaching out to and how; the downside may be substantial (e.g., at least some people have reported thinking EA was terrible because they thought InIn represented it); outreach and updating skills are both lacking; and playing fast and loose with the facts “for the greater good” is a terrible long-run heuristic to follow even if it really is sometimes a good idea from a myopic utility-maximizing perspective. The problem is compounded if not being fully forthcoming with others makes it progressively harder to see the whole truth oneself.
I agree with nearly all of this, and I’m glad you described these things so clearly! The behavior I keep observing in people with social status instinct differences actually matches the four thought patterns you described pretty well (written out below). My more specific explanation is that Gleb models minds differently when status is involved, so he does not guess the same consequences that we do, and because he fails to see the consequences, he cannot total up the potential damage. He therefore underestimates the risk and makes different decisions from people who estimate the risk as much higher. Below, I explain why Occam’s razor led me to choose this explanation over the alternatives (some of which appear in my responses to your numbered thoughts), describe what I think would solve the problem in the form of a testable prediction, and link to the comment where my pessimism is located. I hope my solution idea, the support for my beliefs, and the pessimism link explain my view better, because I think there is hope for the many people in our social network who have issues similar to what we’re seeing with Gleb. This could be valuable, so I really would like to test it. :)
Occam’s razor:
It’s possible that each of your four points has a completely different cause from the others (I offered a few; Qiaochu offered a few). However, my explanation that Gleb underestimates reputation issues due to social status instinct differences makes fewer assumptions than that, because it explains all four at once. (Explained in “My take on each of your 4 points” below.)
It’s possible that Qiaochu_Yuan is correct that Gleb is an aggressive self-promoter who intends to take advantage of EA conversational norms, aims to milk the EA community for money and attention, and means to be manipulative. But other information I have about Gleb does not match this. He sacrifices a lot of money and financial security for InIn, which would be surprising if he were motivated by greed. He is doing charity work, so he seems less likely to have the motivations of a selfish jerk like the one Qiaochu describes. And Gleb hates doing fundraising work, which supports my belief that he has a skill-related problem more than it supports Qiaochu’s belief that he wants to milk people for money.
Testable Prediction:
I find that Occam’s razor helps me select explanations upon which I can build hypotheses that end up testing positive, so I’ll present a hypothesis and turn it into a testable prediction.
If my hypothesis is correct, then Gleb would have a chance to succeed if he heard enough descriptions specifying how others model other people’s minds when status is involved, what consequences they guess will follow if specific reputations attach to InIn, and how much negative or positive impact each specific reputation would produce. To turn it into a testable prediction: if Gleb received this information on every promotion-related idea he seriously considered for the next three months, I think he’d learn enough to delegate successfully. The changes we’d see are that people would no longer complain about InIn and that InIn would attract good people who were not interested in volunteering there before.
To prevent disaster during the three-month period, perhaps InIn could take a break from most or all promotion-type work, including publishing most or all articles.
My Pessimism Is Located Here:
I can see how I came across as overly optimistic in the comment Qiaochu_Yuan was replying to. My first comment on this post did a much better job of summarizing my overall take on the situation; that later comment was only intended to explain a much narrower set of thoughts, not my overall perspective. I gave Qiaochu a quick sample of my pessimism here:
http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8qt
My take on each of your 4 points:
1.) “I’m convinced that EA outreach has massive potential upside if done well enough, and minimal downside even if done poorly.”
My take: People with different social status instincts can have a tendency to drastically underestimate the reputation damage that can be done if outreach is low quality. I think anyone who underestimates the downsides enough would be likely to end up thinking the way you describe in 1.
2.) “I think I have a lot of good outreach skills and know-how, and while I’m not perfect, I’m sufficiently good at ‘updating’ and accepting criticism that I’m likely to improve a lot over time.”
My take: If Gleb believes he is good enough at outreach for now, this could be the Dunning–Kruger effect, anosognosia, or underestimating the negative impact his imperfections are having. Any of these three would be likely to cause a person to think their skill level is sufficient for now and/or easy enough to improve, when it is not.
3.) “Therefore InIn’s long-run value is huge no matter how many small hiccups there are at the moment.”
My take: I believe InIn’s long-run value will be small or negative if the impacts of reputation risks continue to be underestimated. Unfortunately, I think it is far too likely that InIn will end up producing only serious problems: causing people to feel averse to rationality, confusing people about effective altruism, or drawing the wrong people into the EA movement. The risk of counterproductive results has been far too high for me to offer InIn anything other than things that could help reduce that risk (like feedback). However, the reason I think InIn’s long-run value is likely to be low or negative is that I am not underestimating the impact of InIn’s reputation problems the way Gleb is. You and I may be experiencing something like hindsight bias or the illusion of transparency here. I think anyone with a pattern of underestimating reputation problems would be pretty likely to end up believing 3.
4.) “The upside is so large and the need so great that some amount of dishonesty is justified for the greater good. Or, if not dishonesty: emphasizing the good over the bad; not always being fully forthcoming; etc. Not being too stringent about which exact means you use, as long as you aren’t literally injuring anyone and as long as the ends are sufficiently good.”
My take: I suspect that you do not expect Gleb to be deontological about this, or to use virtue ethics or anything like that; instead, you would probably require him to meet a much higher standard in his trade-off decisions. To you and me, the negative reputation impact of the behavior you describe in 4 seems large. My reaction is to automatically model other people’s minds, guess some consequences of this dishonest behavior, and feel disgust. One guess is that people may feel suspicion toward Intentional Insights and regard its rationality teachings with skepticism. That alone could toast all of the value of the organization. It is therefore a major reputation disaster which would need to be rectified in a satisfactory manner before we can believe InIn will have a positive impact. We probably need to overcome the mind projection fallacy to see why Gleb would think this way. My model of Gleb says the problem is that he models other people differently from the way I do when status is involved and does not guess the same consequences of reputation problems, which is how he ends up underestimating the impact of reputation disasters. Underestimating the negative impact of dishonesty would, of course, result in Gleb choosing different risk-vs-reward trade-offs than we would.
I am actually in favor of a shape up or ship out policy with stuff like this. I replied to Gregory_Lewis with: “I strongly relate to your concerns about the damage that could be done if InIn does not improve. I have severely limited my own involvement with InIn because of the same things you describe. My largest time contribution by far has been in giving InIn feedback about reputation problems and general quality. A while back, I felt demoralized with the problems, myself, and decided to focus more on other things instead. That Gleb is getting so much attention for these problems right now has potential to be constructive.” … “Perhaps I didn’t get the memo, but I don’t think we’ve tried organizing in order to demand specific constructive actions first before talking about shutting down Intentional Insights and/or driving Gleb out of the EA movement.”
(Perhaps you didn’t read all of my comments because this thread has too many to read, but that one is located here: http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8o8)
One of the main reasons I have hope is that I’ve given this specific class of problem, social status instinct differences, a lot of thought. I have seen people improve. I think I can explain enough to Gleb to help get him on the right track. I have decided to give it a shot. We’ll see if it works.