Gleb’s problems seem to stem from important differences in social status instincts. For example, Eliezer once wrote that he doesn’t experience the “status smackdown emotions” that other people experience, but he didn’t realize this until many readers complained that his Harry Potter character came across to them as insufferably arrogant. Readers wanted to smack down the character, but that possibility never occurred to Eliezer at the time. So Eliezer could not have deliberately written a Harry Potter character that people did not want to smack down.
I suspect that, for similar reasons, Gleb did not expect to see a large number of complaints of this nature. He might be having difficulty modeling other people’s minds regarding status, so he might find it difficult to relate to the people who have complained.
Some people with social status instinct differences might be described as “status blind”. They might not notice status messages at all, might not make clear distinctions between different statuses, or might make distinctions so detailed that it becomes impossible to organize the statuses into a hierarchy. That last, very detailed approach produces something totally unlike social status as most people seem to experience it.
Additionally, someone who is status blind might have a very blurry emotional experience of statuses, or might feel nothing at all. That is to say, status may not feel important to someone who is status blind. Richard Feynman wrote that he “never knew who he was talking to,” which resulted in him starting arguments with geniuses and famous people. Fortunately for Feynman, he was bright enough to hold his own, and perhaps it didn’t seem too out of place to others for him to behave that way. I don’t know whether this example from Feynman is a form of status blindness, but I hope it makes it easier to imagine what status blindness might feel like. For some, I think status blindness feels like always being of equal status no matter who you’re talking to.
On many occasions, I have noticed that Gleb didn’t seem to mind public feedback. This is very unusual. It can certainly be a strength, but it is part of a double-edged reputation sword: most people who want feedback set up an anonymous form so they can receive it in private, which prevents others from reading anything that makes them look bad. Things like this cause me to suspect that, for Gleb, status messages do not have an emotional impact.
For the same reasons, when Gleb makes a status claim, he may not realize it will feel very important to others.
If I am correct that Gleb has a very different experience of social status, this would make promotion very hard for Gleb. It could lead to an outward appearance a little bit similar to Eliezer’s “Arrogance Problem” as described by Luke Muehlhauser. When chatting, Gleb doesn’t come across as an arrogant person, but some of his promotional materials do have an element of that. It’s mainly when he is trying to promote InIn that I see things really standing out that seem due to differences in status instincts.
I’m sure that nobody here intends to shame Gleb for inherent differences that he may have and I’m sure nobody intends to behave like an ableist. It seems like what’s going on with these group discussions is mainly due to inferential distance. People didn’t understand Gleb and Gleb didn’t really understand others because it’s complicated and nobody had insight into what the difference is.
I hypothesize that what Gleb needs most is a few good, detailed explanations of how other people perceive statuses. He also needs to know what specifically he can do to “speak the language of status” and communicate effectively, given the way others are going to interpret him. This would help him communicate promotional messages in a way that a broad audience will find both accurate and persuasive, despite the differences in social status experiences. I believe it is very important to Gleb to be able to present Intentional Insights accurately and effectively. To succeed at that, I think Gleb needs to become much more aware of everything having to do with social statuses and how they are perceived by others.
Fortunately Gleb does take feedback. I think he will improve if he gets explanations that help him really understand the problem and what the solution looks like. I can’t be sure what’s going on inside of Gleb, of course. I’m not in his head, but I would like to suggest that we all try to be careful and make good distinctions between ignorance and malice.
I see a lot of examples of people investing a lot of energy giving Gleb feedback to no result. What do you think should be done differently that would lead to a different result?
I don’t want to shame anyone for things they can’t control, but if Gleb does not have the abilities that are necessary for outreach and fundraising, it is correct for him to not do outreach and fundraising. This is in some sense discrimination based on ability, but calling it “behaving like an ableist” seems like a really bad framing to me. First, it frames the issue as one of identity rather than of individual actions. It would be more helpful to say “expecting Gleb to X unfairly discriminates on ability” than “expecting X is behaving like an ableist.”
Second, “ableist” is a vague word that covers “judging moral worth based on ability”, “discriminating based on a lack of abilities that have nothing to do with the question at hand”, and “different abilities lead to different outcomes”. If Gleb doesn’t have the abilities to succeed in his chosen field, that is very sad. I mourn for the things I would like to do but lack the ability for. But that does not change the outcome of his actions.
You have a great point that I agree with: if a person is incompetent at a particular task, they should not be doing that task (or should learn first rather than making a mess). IMO, Gleb should not write his own promotional materials and should not be the decision-maker regarding methods of promotion (or he should invest the time to learn to do it well first). However, in my view, what Gleb does at Intentional Insights is not merely promotion. That is just the most visible thing he does. What Gleb actually does at InIn draws on a number of uncommon and valuable abilities:
Gleb has a really intense level of dedication to the cause of spreading rationality. He is brave enough to stick his neck out and take a risk while most people are terrified just to speak in front of an audience (though I believe someone else ought to write his speeches; delegating speech writing is common anyway). He is also taking large financial risks in order to make InIn happen, and not everyone can do that. Gleb cares a lot about helping the world and being kind to others, and is very dedicated to that. He is educated and knowledgeable as a professor and as a rationalist, though I realize this doesn’t show very well in the articles written by some of his writers. In his own articles, the quality is much higher. So I believe his main quality problem is not that he doesn’t understand quality, but that his awkward promotion behaviors are repelling the good writers and/or attracting poor ones, so that he is left trying to make the best of it. I’ve actually seen this repelling effect happen first hand. I believe that if he proved that Intentional Insights can do promotion well, good writers would want the benefit of being promoted by InIn.
Most importantly, Gleb actually wants the truth, while some “rationalists” are motivated by other things (ego, status, loving to argue, wanting to hang out with smart people, etc.), so they cannot actually practice rationality, nor do such people have any hope of ever spreading it. Spreading rationality is ridiculously hard, and it’s not something that most dedicated and reality-minded rationalists would do well right away. Someone like Gleb at least has a chance because his motives are in the right place. That is both mission critical for the cause of spreading rationality and not common enough.
I think Gleb could pretty easily upgrade his leadership style to play to his strengths, and then learn enough about things like promotion to delegate what he is weak at effectively. All the successful leaders I’ve gotten to know are ignorant about a variety of things their organizations do, but delegate those things well. This works surprisingly well. I’ve seen delegation compensate for some truly hideous areas of incompetence, so I regard delegation as a very powerful strategy. I believe Gleb can learn to use delegation as a sort of reasonable accommodation for the issues that result from social status instinct differences.
Why hasn’t Gleb seemed to update on this yet? He is an updater—I’ve seen it. Maybe you didn’t know this, but Gleb has already begun delegating some of the promotional decisions.
I think what he needs to make delegation successful is a better understanding of promotion. Part of the problem may be that “the apple doesn’t fall far from the tree”, so some of the people that Gleb has attracted and chosen to delegate the promotional decisions to aren’t much better at promotion than Gleb is.
The size of the inferential distance in this area is very large and it wasn’t obvious to anyone how to explain across the distance before. I believe that what I wrote in the comment we’re responding to is an insightful enough foundation of an explanation that Gleb, myself, and others can build upon it to help Gleb become informed enough to succeed at delegating promotional tasks to skilled people.
It’s not our responsibility to educate him, of course, but I think there are enough people willing to do so, even though it takes time, and I think Gleb is willing to spend the time learning. I think this approach of crossing the inferential distance is worth testing to see whether it succeeds.
Additionally, I’m happy to document my own attempts at explaining to Gleb, and explaining Gleb to others, by placing these explanations here on the forum. Because I am documenting all of this, others in the EA movement with social status instinct differences will have an opportunity to find information which will assist them with self-improvement. Therefore, my efforts, so long as I document them here, are much more valuable than just helping Gleb.
Even if I test my belief that we can cross the distance with Gleb, and my attempt fails, that test result is still valuable information!
I think you’re doing the thing shlevy described about being way too charitable to Gleb here. Outside view, the simplest hypothesis that explains essentially everything observed in the original post is that Gleb is an aggressive self-promoter who takes advantage of EA conversational norms to milk the EA community for money and attention.
It might be useful to reflect a little on what being manipulated feels like from the inside. An analogous dynamic in a relationship might be Alice trying very hard to understand why Bob sometimes behaves in ways that makes her uncomfortable, hypothesizing that maybe it’s because Bob had a difficult childhood and finds it hard to get close to people… all the while ignoring that outside view, the simplest hypothesis that explains all of Bob’s behavior is that he is manipulating her into giving him sex and affection. It’s in some sense admirable for Alice to try to be charitable about Bob’s behavior, but at some point 1) Alice is incentivizing terrible behavior on Bob’s part and 2) the personal cost to Alice of putting up with Bob’s shit is terrible and she shouldn’t have to pay it.
I think Kathy’s perspective is probably overly optimistic, and yours is probably overly pessimistic, Qiaochu. There are a lot of grey-area options between being a scrupulously honest and responsive-to-criticism altruist who just has a poor model of status dynamics, and being an “aggressive self-promoter” who just wants “money and attention”. If I were forced to guess, I’d guess what’s probably going on is some thought process like:
“I’m convinced that EA outreach has massive potential upside if done well enough, and minimal downside even if done poorly.”
“I think I have a lot of good outreach skills and know-how, and while I’m not perfect, I’m sufficiently good at ‘updating’ and accepting criticism that I’m likely to improve a lot over time.”
“Therefore InIn’s long-run value is huge no matter how many small hiccups there are at the moment.”
“The upside is so large and the need so great that some amount of dishonesty is justified for the greater good. Or, if not dishonesty: emphasizing the good over the bad; not always being fully forthcoming; etc. Not being too stringent about which exact means you use, as long as you aren’t literally injuring anyone and as long as the ends are sufficiently good.”
All of these claims are questionable in this case: the upside of EA outreach may depend a lot on who we’re reaching out to and how; the downside may be substantial (e.g., at least some people have reported thinking EA was terrible because they thought InIn represented it); outreach and updating skills are both lacking; and playing fast and loose with the facts “for the greater good” is a terrible long-run heuristic to follow even if it really is sometimes a good idea from a myopic utility-maximizing perspective. The problem is compounded if not being fully forthcoming with others makes it progressively harder to see the whole truth oneself.
I agree with nearly all of this and I’m glad to see that you described these things so clearly! The behavior I keep observing in people with social status instinct differences actually matches the four thought patterns you described pretty well (written out below). My more specific explanation is that Gleb models minds differently when status is involved, so he does not guess the same consequences that we do, and because he fails to see the consequences, he cannot total up the potential damage. He ends up underestimating the risk and makes different decisions than people who estimate the risk as being much higher. I explained below why Occam’s razor led me to choose this explanation over the alternatives (some of which appear in my responses to your numbered thoughts), described what I think would solve this problem in a testable prediction, and linked to the comment where my pessimism is located. I hope my solution idea, my supports for my beliefs, and my pessimism link explain my view better, because I think there is hope for the many people in our social network who have issues similar to what we’re seeing with Gleb. This could be valuable, so I really would like to test it. :)
Occam’s razor:
It’s possible that each of your four points has a completely different cause from the others (I offered a few, Qiaochu offered a few). However, my explanation that Gleb underestimates reputation issues due to social status instinct differences makes fewer assumptions than that, because it explains all four at once. (Explained in “My take on each of your 4 points” below.)
It’s possible that Qiaochu_Yuan is correct that Gleb is an aggressive self-promoter who intends to take advantage of EA conversational norms, milk the EA community for money and attention, and be manipulative. The other information I have about Gleb does not match this. He sacrifices a lot of money and financial security for InIn, so if he were motivated by greed, that would be surprising. He is doing charity work, so he seems less likely to have the motivations of a selfish jerk like the one Qiaochu describes. Gleb hates doing fundraising work, which supports my belief that he has a skill-related problem more than it supports Qiaochu’s belief that he wants to milk people for money.
Testable Prediction:
I find that Occam’s razor helps me select explanations upon which I can build hypotheses that end up testing positive, so I’ll present a hypothesis and turn it into a testable prediction.
If my hypothesis is correct, then Gleb would have the chance to succeed if he heard enough descriptions specifying how others go about modeling other people’s minds when status is involved, what consequences they guess will happen if specific reputations are applied to InIn, and what quantity of negative/positive impact each specific reputation would result in. To turn it into a testable prediction: if Gleb received this information on every promotion-related idea he was seriously considering for the next three months, I think he’d learn enough to delegate successfully. The changes we’d see are that people would no longer complain about InIn and also that InIn would attract good people who were not interested in volunteering there before.
To prevent disaster during the 3 month period of time, perhaps InIn could take a break from most or all promotion type work, including publishing most/all articles.
My Pessimism Is Located Here:
I can see how I came across as overly optimistic in the comment Qiaochu_Yuan was replying to. My first comment on this post did a much better job of summarizing my overall take on the situation than that one. That one was only intended to explain a much more specific area of thoughts than my overall perspective. I gave Qiaochu a quick sample of my pessimism here:
1.) “I’m convinced that EA outreach has massive potential upside if done well enough, and minimal downside even if done poorly.”
My take: People with different social status instincts can have a tendency to drastically underestimate the reputation damage that can be done if outreach is low quality. I think anyone who underestimates the downsides enough would be likely to end up thinking the way you describe in 1.
2.) “I think I have a lot of good outreach skills and know-how, and while I’m not perfect, I’m sufficiently good at ‘updating’ and accepting criticism that I’m likely to improve a lot over time.”
My take: If Gleb believes he is good enough at outreach for now, then this could be the Dunning-Kruger effect, anosognosia, or underestimating the negative impact his imperfections are having. Any of these three would be likely to cause a person to think their skill level is sufficient for now and/or easy enough to improve, when it is not.
3.) “Therefore InIn’s long-run value is huge no matter how many small hiccups there are at the moment.”
My take: I believe InIn’s long-run value will be small or negative if the impacts of reputation risks continue to be underestimated. I think it is unfortunately far too likely that InIn will end up producing serious problems. These may include causing people to feel averse to rationality, confusing people about effective altruism, or drawing the wrong people into the EA movement. The risk of counter-productive results has been far too high for me to offer InIn anything other than things which could help reduce the risk of such problems (like feedback). However, the reason I think InIn’s long-run value is likely to be low or negative is that I am not underestimating the impact of InIn’s reputation problems the way Gleb is. You and I may be experiencing something like hindsight bias or the illusion of transparency here. I think anyone who has a pattern of underestimating reputation problems would be pretty likely to end up believing 3.
4.) “The upside is so large and the need so great that some amount of dishonesty is justified for the greater good. Or, if not dishonesty: emphasizing the good over the bad; not always being fully forthcoming; etc. Not being too stringent about which exact means you use, as long as you aren’t literally injuring anyone and as long as the ends are sufficiently good.”
My take: I suspect that you probably do not expect Gleb to be deontological about this or to use virtue ethics or anything like that. Instead, I suspect you would require him to meet a much higher standard with his trade-off decisions. To you and me, the negative reputation impact of the behavior you describe in 4 seems large. My reaction is to automatically model other people’s minds, guess some consequences of this dishonest behavior, and feel disgust. One guess is that people may feel suspicion toward Intentional Insights and regard its rationality teachings with skepticism. That alone could toast all of the value of the organization. Therefore, it is a major reputation disaster which would need to be rectified in a satisfactory manner before we can believe InIn will have a positive impact. Probably, we need to overcome the mind projection fallacy to see why Gleb would think this way. My model of Gleb says the problem is that he models other people differently from the way I do when status is involved, does not guess the same consequences of reputation problems, and this is how he ends up underestimating the impact of reputation disasters. Underestimating the negative impact of dishonesty would, of course, result in Gleb choosing different risk vs. reward trade-offs than we would.
I am actually in favor of a shape up or ship out policy with stuff like this. I replied to Gregory_Lewis with: “I strongly relate to your concerns about the damage that could be done if InIn does not improve. I have severely limited my own involvement with InIn because of the same things you describe. My largest time contribution by far has been in giving InIn feedback about reputation problems and general quality. A while back, I felt demoralized with the problems, myself, and decided to focus more on other things instead. That Gleb is getting so much attention for these problems right now has potential to be constructive.” … “Perhaps I didn’t get the memo, but I don’t think we’ve tried organizing in order to demand specific constructive actions first before talking about shutting down Intentional Insights and/or driving Gleb out of the EA movement.”
One of the main reasons I have hope is because I’ve given this specific class of problem, social status instinct differences, a lot of thought. I have seen people improve. I think I am able to explain enough to Gleb to help get him on the right track. I have decided to give it a shot. We’ll see if it works.
True, I don’t have a very good perception of social status instincts. I focus more on the quality of someone’s contributions and expertise rather than their status. I despise status games.
Also, there’s a basic inferential gap between me and the people who perceive InIn and me as excessively self-promotional. I am trying to break the typical and very unhelpful humility characteristic of do-gooders. See more about this in my piece here.
FWIW, I read quite a bit of the self-promotional stuff as being status-gamey. I expect I’m not all that unusual in this.
That it gets read this way is a challenge here, and indeed a challenge to the general problem of trying to dial back humility re. good deeds. I think some humility about good deeds is instrumentally pretty important for sending the right signals and encouraging others to be attracted to the idea (not of course to the point of keeping them all private).
I observe that people seem to evaluate a very large number of things in terms of status. It’s actually ridiculously hard to write something that contains absolutely no status message about anybody whatsoever. If you don’t believe me, try writing something that’s both interesting or useful, but does not contain a single line or other element that can be interpreted in terms of status.
Ironically, I think it’s the people who are worst at conveying status messages who are most often accused of playing status games. Not to say that you’re accusing anyone! I can see that you are not! :)
The people who are very good at making status messages simply receive status. Part of what popular people do is to be smooth enough that most people don’t think about the fact that they’re even presenting status messages. To be unskilled with status messages is awkward, which attracts attention to the fact that status messages are present.
So, from what I have observed, it seems like the people who are best at actually playing status games are rarely called out for it (even though their skill level suggests that they may, in fact, practice it on purpose!), while the people who are terrible at it can’t seem to avoid making status messages altogether, nor manage to consistently craft smooth status messages that don’t stick out like a sore thumb.
It makes things a bit confusing for someone who doesn’t do status things the stereotypical way. Do you “stop” playing status games so people do not complain? How do you get around the major limitations on expression you’d impose onto yourself by being unable to say anything that anyone might possibly interpret as a status message? Do you just swallow the irony, dive in, and intentionally practice playing status games smoothly so that nobody complains to you about status games anymore?
Perhaps you agree about Gleb’s intentions, or have no opinion on this, but I just wanted to say that if Gleb appears to be playing status games, he probably isn’t very good at actually playing status games. This supports Gleb’s claim that he hates status games more than any claim that he is playing them. Though I do acknowledge that all you’re saying here is that he comes across as playing status games. That is not an accusation. It’s feedback. I agree with you.
What I’m curious about is what do people think Gleb should do? Should he learn to play status games smoothly and in a way that will lead people to believe an accurate view of reality? Should Gleb try to limit himself to expressions that no one will interpret as status messages? Something else?
I agree that Gleb appears to be bad at status games. I don’t have a view about whether he is deliberately engaging in them (I’d kind of expect him to be better if he conceived of himself as engaging in them, but I observe that he has generated status among some group of supporters of InIn).
I think he should take a break from EA promotion and try to learn how to do better in this domain, in a way that doesn’t take up large slices of time and attention from the EA community. It seems possible that he could come to be a productive member of the community, although I’m a bit pessimistic on the basis of the amount of feedback he has received without apparently fixing the important issues. ‘Learning to do better’ means not necessarily getting very good at status games, but getting good enough to recognise what might be construed as engaging in them, and avoiding that. I also think it’s crucial that he moves from a position of trying to avoid saying strictly-false things to trying to avoid saying things that could lead people to take away false impressions.
One of the things I’m trying to do, as I noted above, is a meta-move to change the culture of humility about good deeds. I generally have an attitude of trying to be the change that I want to see in the world and leading by example. It’s a long-term strategy that has short-term costs, clearly :-)
I understand the long-term goal. I’m claiming that this strategy is actually instrumentally bad for that long-term goal, as it is too widely read as negative (hence reinforcing cultural norms towards humility). More effective would be to embody something which is superior to current cultural norms but will still be seen as positive.
I think liberating altruists to talk about their accomplishments has potential to be really high value, but I don’t think the world is ready for it yet. I think promoting discussions about accomplishments among effective altruists is a great idea. I think if we do that enough, then effective altruists will eventually manage to present that to friends and family members effectively. This is a slow process but I really think word of mouth is the best promotional method for spreading this cultural change outside of EA, at least for now.
I totally agree with you that the world should not shut altruists down for talking about accomplishments, however we have to make a distinction between what we think people should do and what they are actually going to do.
Also, we cannot simply tell people “You shouldn’t shut down altruists for talking about accomplishments,” because it takes around 11 repetitions for them to even remember that. One cannot just post a single article and expect everyone to update. Even the most popular authors in our network don’t get that level of attention. At best, only a significant minority reads all of what is written by a given author. Only some, not all, of those readers remember all the points. Fewer choose to apply them. Only some of the people applying a thing succeed in making a habit.
Additionally, we currently have no idea how to present this idea to the outside world in a way that is persuasive. That part requires a bunch of testing. So, we could repeat the idea 11 times and succeed at absolutely no change whatsoever. Or we could repeat it 11 times and be ridiculed, succeeding only at causing people to remember that we did something which, to them, made us look ridiculous.
Then, there’s the fact that the friends of the people who receive our message won’t necessarily receive the message, too. Friends of our audience members will not understand this cultural element. That makes it very hard for the people in our audience to practice. If audience members can’t consistently practice a social habit like sharing altruistic accomplishments with others, they either won’t develop the habit in the first place, or the habit will be lost to disuse.
Another thing is that there could be some unexpected obstacle or Chesterton’s fence we don’t know about yet. Sometimes when you try to change things, you run face first into something really difficult and confusing. It can take a while to figure out what the heck happened. If we ask others to do something different, we can’t be sure we aren’t causing those others to run face first into some weird obstacle… at which point they may just wonder if we have any sense at all, lol. So, this is something that takes a lot of time, and care. It takes a lot of paying close attention to look for weird, awkward details that could be a sign of some sort of obstacle. This is another great reason to keep our efforts limited to a small group for now. The small group is a lot more likely to report weird obstacles to us, giving us a chance to do something sensible about it.
Changing a culture is really, really hard. To implement such a cultural change just within a chunk of the EA movement would take a significant amount of time. To get it to spread to all of EA would take a lot of time, and to get it spreading further would take many years.
Unless we one day see good evidence that a lot of people have adopted this cultural change, it’s really best to speak for the audience that is actually present, whatever their culture happens to be. Even if we have to bend over backwards to express our point of view, we just have to start by showing people respect no matter what they believe, and do whatever it takes to reach across inferential distances and get through to them properly. It takes work.
I think liberating altruists to talk about their accomplishments has potential to be really high value, but I don’t think the world is ready for it yet… Another thing is that there could be some unexpected obstacle or Chesterton’s fence we don’t know about yet.
Both of these statements sound right! Most of my theater friends from university (who tended to have very good social instincts) recommend that, to understand why social conventions like this exist, people like us read the “Status” chapter of Keith Johnstone’s Impro, which contains this quote:
We soon discovered the ‘see-saw’ principle: ‘I go up and you go down’. Walk into a dressing-room and say ‘I got the part’ and everyone will congratulate you, but will feel lowered [in status]. Say ‘They said I was old’ and people commiserate, but cheer up perceptibly… The exception to this see-saw principle comes when you identify with the person being raised or lowered, when you sit on his end of the see-saw, so to speak. If you claim status because you know some famous person, then you’ll feel raised when they are: similarly, an ardent royalist won’t want to see the Queen fall off her horse. When we tell people nice things about ourselves this is usually a little like kicking them. People really want to be told things to our discredit in such a way that they don’t have to feel sympathy. Low-status players save up little tit-bits involving their own discomfiture with which to amuse and placate other people.
Emphasis mine. Of course, a large fraction of EA folks and rationalists I’ve met claim to not be bothered by others bragging about their accomplishments, so I think you’re right that promoting these sorts of discussions about accomplishments among other EAs can be a good idea.
This makes sense for spreading the message among EAs, which is why we have the Effective Altruist Accomplishments Facebook group. I’ll have to think further about the most effective ways of spreading this message more broadly, as I’m not in a good mental space to think about it right now.
On many occasions, I have noticed that Gleb didn’t seem to mind public feedback. This is very unusual. It can certainly be a strength, but it is also a double-edged sword for his reputation. Most people who want feedback set up an anonymous form so they can receive it in private, which prevents others from reading things that make them look bad. Things like this cause me to suspect that, for Gleb, status messages do not have an emotional impact.
For the same reasons, when Gleb makes a status claim, he may not realize it will feel very important to others.
If I am correct that Gleb has a very different experience of social status, this would make promotion very hard for him. It could lead to an outward appearance somewhat similar to Eliezer’s “Arrogance Problem” as described by Luke Muehlhauser. When chatting, Gleb doesn’t come across as an arrogant person, but some of his promotional materials do have an element of that. It’s mainly when he is trying to promote InIn that I see things stand out that seem due to differences in status instincts.
I’m sure that nobody here intends to shame Gleb for inherent differences that he may have, and I’m sure nobody intends to behave like an ableist. What’s going on in these group discussions seems mainly due to inferential distance: people didn’t understand Gleb, and Gleb didn’t really understand others, because it’s complicated and nobody had insight into what the difference is.
I hypothesize that what Gleb needs most is a few good, detailed explanations of how other people perceive status. He also needs to know what specifically he can do to “speak the language of status” to communicate effectively, given the way others are going to interpret him. This would help him communicate promotional messages in a way that a broad audience will find both accurate and persuasive, despite the differences in social status experiences. I believe it is very important to Gleb to be able to present Intentional Insights accurately and effectively. To succeed at that, I think Gleb needs to become much more aware of everything having to do with social status and how it is perceived by others.
Fortunately Gleb does take feedback. I think he will improve if he gets explanations that help him really understand the problem and what the solution looks like. I can’t be sure what’s going on inside of Gleb, of course. I’m not in his head, but I would like to suggest that we all try to be careful and make good distinctions between ignorance and malice.
I see a lot of examples of people investing a lot of energy giving Gleb feedback to no result. What do you think should be done differently that would lead to a different result?
I don’t want to shame anyone for things they can’t control, but if Gleb does not have the abilities that are necessary for outreach and fundraising, it is correct for him to not do outreach and fundraising. This is in some sense discrimination based on ability, but calling it “behaving like an ableist” seems like a really bad framing to me. First, it frames the issue as one of identity rather than individual actions. It would be more helpful to say “expecting Gleb to X unfairly discriminates on ability” than “expecting X is behaving like an ableist.”
Second, “ableist” is a vague word that covers “judging moral worth based on ability”, “discrimination based on lack of abilities that have nothing to do with the question at hand”, and “different abilities lead to different outcomes”. If Gleb doesn’t have the abilities to succeed in his chosen field, that is very sad. I mourn for the things I would like to do but lack the ability for. But that does not change the outcome of his actions.
You have a great point that I agree with: if a person is incompetent at a particular task, they should not be doing that task (or should learn first rather than making a mess). IMO, Gleb should not write his own promotional materials and should not be the decision maker regarding methods of promotion (or he should invest the time to learn to do it well first). However, in my view, what Gleb does at Intentional Insights is not merely promotion. That is just the most visible thing he does. What Gleb actually does at InIn draws on a number of uncommon and valuable abilities:
Gleb has a really intense level of dedication to the cause of spreading rationality. He is brave enough to stick his neck out and take risks while most people are terrified just to speak in front of an audience (though I believe someone else ought to write his speeches; delegating speech writing is common anyway). He is also taking large financial risks to make InIn happen, and not everyone can do that. Gleb cares a lot about helping the world and being kind to others and is very dedicated to that. He is educated and knowledgeable as a professor and as a rationalist, though I realize this doesn’t show very well in the articles written by some of his writers. In his own articles, the quality is much higher. So, I believe his main quality problem is not that he doesn’t understand quality, but that his awkward promotion behaviors are repelling the good writers and/or attracting poor ones, so that he is left trying to make the best of it. I’ve actually seen this repelling effect happening firsthand. I believe that if he proved that Intentional Insights can do promotion well, good writers would want the benefit of being promoted by InIn.
Most importantly, Gleb actually wants the truth, while some “rationalists” are motivated by other things (ego, status, loving to argue, wanting to hang out with smart people, etc.), and so cannot actually practice rationality, nor do such people have any hope of ever spreading it. Spreading rationality is ridiculously hard, and it’s not something that most dedicated and reality-minded rationalists would do well right away. Someone like Gleb at least has a chance because his motives are in the right place. That is mission critical for the cause of spreading rationality, and it’s not common enough.
I think Gleb could pretty easily upgrade his leadership style to play to his strengths, and then learn enough about things like promotion to delegate what he is weak at effectively. All the successful leaders I’ve gotten to know are ignorant about a variety of things their organizations do, but delegate those things well. This works surprisingly well. I’ve seen delegation compensate for some truly hideous areas of incompetence, so I regard delegation as a very powerful strategy. I believe Gleb can learn to use delegation as a sort of reasonable accommodation for the issues that result from social status instinct differences.
Why hasn’t Gleb seemed to update on this yet? He is an updater—I’ve seen it. Maybe you didn’t know this, but Gleb has already begun delegating some of the promotional decisions.
I think what he needs to make delegation successful is a better understanding of promotion. Part of the problem may be that “the apple doesn’t fall far from the tree”, so some of the people that Gleb has attracted and chosen to delegate the promotional decisions to aren’t much better at promotion than Gleb is.
The inferential distance in this area is very large, and it wasn’t obvious to anyone how to explain across it before. I believe that what I wrote in the comment we’re responding to is an insightful enough foundation that Gleb, I, and others can build on it to help Gleb become informed enough to succeed at delegating promotional tasks to skilled people.
It’s not our responsibility to educate him, of course, but I think there are enough people who are willing enough to do that, even though it takes time. I think Gleb is willing enough to spend the time learning. I think that this approach of crossing the inferential distance is worth testing to see whether it succeeds.
Additionally, I’m happy to document my own attempts at explaining to Gleb, and explaining Gleb to others, by placing these explanations here on the forum. Because I am documenting all of this, others in the EA movement with social status instinct differences will have an opportunity to find information which will assist them with self-improvement. Therefore, my efforts, so long as I document them here, are much more valuable than just helping Gleb.
Even if I test my belief that we can cross the distance with Gleb, and my attempt fails, that test result is still valuable information!
I think you’re doing the thing shlevy described about being way too charitable to Gleb here. Outside view, the simplest hypothesis that explains essentially everything observed in the original post is that Gleb is an aggressive self-promoter who takes advantage of EA conversational norms to milk the EA community for money and attention.
It might be useful to reflect a little on what being manipulated feels like from the inside. An analogous dynamic in a relationship might be Alice trying very hard to understand why Bob sometimes behaves in ways that makes her uncomfortable, hypothesizing that maybe it’s because Bob had a difficult childhood and finds it hard to get close to people… all the while ignoring that outside view, the simplest hypothesis that explains all of Bob’s behavior is that he is manipulating her into giving him sex and affection. It’s in some sense admirable for Alice to try to be charitable about Bob’s behavior, but at some point 1) Alice is incentivizing terrible behavior on Bob’s part and 2) the personal cost to Alice of putting up with Bob’s shit is terrible and she shouldn’t have to pay it.
I think Kathy’s perspective is probably overly optimistic, and yours is probably overly pessimistic, Qiaochu. There are a lot of grey-area options in between being a scrupulously honest and responsive-to-criticism altruist who just has a poor model of status dynamics, and being an “aggressive self-promoter” who just wants “money and attention”. If I were forced to guess, I’d guess what’s probably going on is some thought process like:
“I’m convinced that EA outreach has massive potential upside if done well enough, and minimal downside even if done poorly.”
“I think I have a lot of good outreach skills and know-how, and while I’m not perfect, I’m sufficiently good at ‘updating’ and accepting criticism that I’m likely to improve a lot over time.”
“Therefore InIn’s long-run value is huge no matter how many small hiccups there are at the moment.”
“The upside is so large and the need so great that some amount of dishonesty is justified for the greater good. Or, if not dishonesty: emphasizing the good over the bad; not always being fully forthcoming; etc. Not being too stringent about which exact means you use, as long as you aren’t literally injuring anyone and as long as the ends are sufficiently good.”
All of these claims are questionable in this case: the upside of EA outreach may depend a lot on who we’re reaching out to and how; the downside may be substantial (e.g., at least some people have reported thinking EA was terrible because they thought InIn represented it); outreach and updating skills are both lacking; and playing fast and loose with the facts “for the greater good” is a terrible long-run heuristic to follow even if it really is sometimes a good idea from a myopic utility-maximizing perspective. The problem is compounded if not being fully forthcoming with others makes it progressively harder to see the whole truth oneself.
I agree with nearly all of this, and I’m glad to see that you described these things so clearly! The behavior I keep observing in people with social status instinct differences actually matches the four thought patterns you described pretty well (written out below). My more specific explanation is that Gleb models minds differently when status is involved, and so does not guess the same consequences that we do; because he fails to see those consequences, he cannot total up the potential damage. So he ends up underestimating the risk and makes different decisions from people who estimate the risk as being much higher. I explained why I chose this explanation over the others with Occam’s razor (some of the others are in my written-out response to your numbered thoughts), described what I think would solve this problem in a testable prediction, and linked to the comment where my pessimism is located. I hope my solution idea, my supports for my beliefs, and my pessimism link explain my view better, because I think there is hope for the many people in our social network who have issues similar to what we’re seeing with Gleb. This could be valuable, so I really would like to test it. :)
Occam’s razor:
It’s possible that each of your four points has a completely different cause from the others (I offered a few, Qiaochu offered a few). However, my explanation that Gleb underestimates reputation issues due to social status instinct differences makes fewer assumptions, because it explains all four at once. (Explained in “My take on each of your 4 points” below.)
It’s possible that Qiaochu_Yuan is correct that Gleb is an aggressive self-promoter who intends to take advantage of EA conversational norms to milk the EA community for money and attention, and that Gleb intends to be manipulative. The other information I have about Gleb does not match this. He sacrifices a lot of money and financial security for InIn, so if he were motivated by greed, that would be surprising. He is doing charity work, so he seems less likely to have the motivations of the selfish jerk Qiaochu describes. Gleb hates doing fundraising work, which supports my belief that he has a skill-related problem more than it supports Qiaochu’s belief that he wants to milk people for money.
Testable Prediction:
I find that Occam’s razor helps me select explanations upon which I can build hypotheses that end up testing positive, so I’ll present a hypothesis and turn it into a testable prediction.
If my hypothesis is correct, then Gleb would have the chance to succeed if he heard enough descriptions specifying how others go about modeling other people’s minds when status is involved, what consequences they guess will happen if specific reputations are applied to InIn, and what quantity of negative/positive impact each specific reputation would result in. To turn it into a testable prediction: if Gleb received this information on every promotion-related idea he was seriously considering for the next three months, I think he’d learn enough to delegate successfully. The changes we’d see are that people would no longer complain about InIn and also that InIn would attract good people who were not interested in volunteering there before.
To prevent disaster during the three-month period, perhaps InIn could take a break from most or all promotion-type work, including publishing most or all articles.
My Pessimism Is Located Here:
I can see how I came across as overly optimistic in the comment Qiaochu_Yuan was replying to. My first comment on this post did a much better job of summarizing my overall take on the situation; that comment was only intended to explain a much more specific area of thought, not my overall perspective. I gave Qiaochu a quick sample of my pessimism here:
http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8qt
My take on each of your 4 points:
1.) “I’m convinced that EA outreach has massive potential upside if done well enough, and minimal downside even if done poorly.”
My take: People with different social status instincts can have a tendency to drastically underestimate the reputation damage that can be done if outreach is low quality. I think anyone who underestimates the downsides enough would be likely to end up thinking the way you describe in 1.
2.) “I think I have a lot of good outreach skills and know-how, and while I’m not perfect, I’m sufficiently good at ‘updating’ and accepting criticism that I’m likely to improve a lot over time.”
My take: If Gleb believes he is good enough at outreach for now, then this could be the Dunning–Kruger effect, anosognosia, or underestimating the negative impact his imperfections are having. Any of these three would be likely to cause a person to think their skill level is sufficient for now and/or easy enough to improve, when it is not.
3.) “Therefore InIn’s long-run value is huge no matter how many small hiccups there are at the moment.”
My take: I believe InIn’s long-run value will be small or negative if the impacts of reputation risks continue to be underestimated. I think it is unfortunately far too likely that InIn will only end up producing serious problems. These may include causing people to feel averse to rationality, confusing people about effective altruism, or drawing the wrong people into the EA movement. The risk of counter-productive results has been far too high for me to offer InIn anything other than things which could help reduce that risk (like feedback). However, the reason I think InIn’s long-run value is likely to be low or negative is that I am not underestimating the impact of InIn’s reputation problems the way Gleb is. You and I may be experiencing something like hindsight bias or the illusion of transparency here. I think anyone who has a pattern of underestimating reputation problems would be pretty likely to end up believing 3.
4.) “The upside is so large and the need so great that some amount of dishonesty is justified for the greater good. Or, if not dishonesty: emphasizing the good over the bad; not always being fully forthcoming; etc. Not being too stringent about which exact means you use, as long as you aren’t literally injuring anyone and as long as the ends are sufficiently good.”
My take: I suspect that you probably do not expect Gleb to be deontological about this or use virtue ethics or anything. Instead, I suspect that you would require him to meet a much higher standard with his trade-off decisions. To you and me, the negative reputation impact of the behavior you describe in 4 seems large. My reaction is to automatically model other people’s minds, guess some consequences of this dishonest behavior, and feel disgust. One guess is that people may feel suspicion toward Intentional Insights and regard its rationality teachings with skepticism. That alone could toast all of the value of the organization. Therefore, it is a major reputation disaster which would need to be rectified in a satisfactory manner before we can believe InIn will have a positive impact. Probably, we need to overcome the mind projection fallacy to see why Gleb would think this way. My model of Gleb says the problem is that he models other people differently from the way I do when status is involved, does not guess the same consequences of reputation problems, and this is how he ends up underestimating the impact of reputation disasters. Underestimating the negative impact of dishonesty would, of course, result in Gleb choosing different risk vs. reward trade-offs than we would.
I am actually in favor of a shape up or ship out policy with stuff like this. I replied to Gregory_Lewis with: “I strongly relate to your concerns about the damage that could be done if InIn does not improve. I have severely limited my own involvement with InIn because of the same things you describe. My largest time contribution by far has been in giving InIn feedback about reputation problems and general quality. A while back, I felt demoralized with the problems, myself, and decided to focus more on other things instead. That Gleb is getting so much attention for these problems right now has potential to be constructive.” … “Perhaps I didn’t get the memo, but I don’t think we’ve tried organizing in order to demand specific constructive actions first before talking about shutting down Intentional Insights and/or driving Gleb out of the EA movement.”
(Perhaps you didn’t read all of my comments because this thread has too many to read but that one is located here: http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8o8)
One of the main reasons I have hope is because I’ve given this specific class of problem, social status instinct differences, a lot of thought. I have seen people improve. I think I am able to explain enough to Gleb to help get him on the right track. I have decided to give it a shot. We’ll see if it works.
True, I don’t have a very good perception of social status instincts. I focus more on the quality of someone’s contributions and expertise rather than their status. I despise status games.
Also, there’s a basic inference gap between me and the people who perceive InIn and me as being excessively self-promotional. I am trying to break the typical and very unhelpful humility characteristic of do-gooders. See more about this in my piece here.
FWIW, I read quite a bit of the self-promotional stuff as being status-gamey. I expect I’m not all that unusual in this.
That it gets read this way is a challenge here, and indeed a challenge for the general project of trying to dial back humility about good deeds. I think some humility about good deeds is instrumentally pretty important for sending the right signals and encouraging others to be attracted to the idea (not, of course, to the point of keeping them all private).
I observe that people seem to evaluate a very large number of things in terms of status. It’s actually ridiculously hard to write something that contains absolutely no status message about anybody whatsoever. If you don’t believe me, try writing something that’s both interesting or useful, but does not contain a single line or other element that can be interpreted in terms of status.
Ironically, I think it’s the people who are worst at conveying status messages who are most often accused of playing status games. Not to say that you’re accusing anyone! I can see that you are not! :)
The people who are very good at making status messages simply receive status. Part of what popular people do is to be smooth enough that most people don’t think about the fact that they’re even presenting status messages. To be unskilled with status messages is awkward, which attracts attention to the fact that status messages are present.
So, from what I have observed, it seems like the people who are best at actually playing status games are rarely called out for it (even though their skill level suggests that they may, in fact, practice it on purpose!), while the people who are terrible at it can’t seem to avoid making status messages altogether, nor manage to consistently craft smooth status messages that don’t stick out like a sore thumb.
It makes things a bit confusing for someone who doesn’t do status things the stereotypical way. Do you “stop” playing status games so people do not complain? How do you get around the major limitations on expression you’d impose onto yourself by being unable to say anything that anyone might possibly interpret as a status message? Do you just swallow the irony, dive in, and intentionally practice playing status games smoothly so that nobody complains to you about status games anymore?
Perhaps you agree about Gleb’s intentions, or have no opinion on this, but I just wanted to say that if Gleb appears to be playing status games, he probably isn’t very good at actually playing status games. This supports Gleb’s claim that he hates status games more than any claim that he is playing them. Though I do acknowledge that all you’re saying here is that he comes across as playing status games. That is not an accusation. It’s feedback. I agree with you.
What I’m curious about is what people think Gleb should do. Should he learn to play status games smoothly, and in a way that will lead people to an accurate view of reality? Should he try to limit himself to expressions that no one will interpret as status messages? Something else?
I agree that Gleb appears to be bad at status games. I don’t have a view about whether he is deliberately engaging in them (I’d kind of expect him to be better if he conceived of himself as engaging in them, but I observe that he has generated status among some group of supporters of InIn).
I think he should take a break from EA promotion and try to learn how to do better in this domain, in a way that doesn’t take up large slices of time and attention from the EA community. It seems possible that he could come to be a productive member of the community, although I’m a bit pessimistic on the basis of the amount of feedback he has received without apparently fixing the important issues. ‘Learning to do better’ means not necessarily getting very good at status games, but getting good enough to recognise what might be construed as engaging in them, and avoiding that. I also think it’s crucial that he moves from a position of trying to avoid saying strictly-false things to trying to avoid saying things that could lead people to take away false impressions.
(Views my own, not my employer’s.)
One of the things I’m trying to do, as I noted above, is a meta-move to change the culture of humility about good deeds. I generally have an attitude of trying to be the change that I want to see in the world and leading by example. It’s a long-term strategy that has short-term costs, clearly :-)
I understand the long-term goal. I’m claiming that this strategy is actually instrumentally bad for that long-term goal, as it is too widely read as negative (hence reinforcing cultural norms towards humility). More effective would be to embody something which is superior to current cultural norms but will still be seen as positive.
I will think about this further, as I am not in a good space mentally to give this the consideration it deserves.
I think liberating altruists to talk about their accomplishments has potential to be really high value, but I don’t think the world is ready for it yet. I think promoting discussions about accomplishments among effective altruists is a great idea. I think if we do that enough, then effective altruists will eventually manage to present that to friends and family members effectively. This is a slow process but I really think word of mouth is the best promotional method for spreading this cultural change outside of EA, at least for now.
I totally agree with you that the world should not shut altruists down for talking about accomplishments. However, we have to make a distinction between what we think people should do and what they are actually going to do.
Also, we cannot simply tell people “You shouldn’t shut down altruists for talking about accomplishments,” because it takes around 11 repetitions for them to even remember that. One cannot just post a single article and expect everyone to update. Even the most popular authors in our network don’t get that level of attention. At best, only a significant minority reads all of what a given author writes. Only some of those readers remember all the points. Fewer choose to apply them. And only some of the people applying a thing succeed in making it a habit.
Additionally, we don’t yet know how to present this idea to the outside world in a way that is persuasive. That part requires a bunch of testing. So, we could repeat the idea 11 times and succeed at absolutely no change whatsoever. Or we could repeat it 11 times and be ridiculed, succeeding only at causing people to remember that we did something which, to them, made us look ridiculous.
Then there’s the fact that the friends of the people who receive our message won’t necessarily receive it too, and so won’t understand this cultural element. That makes it very hard for the people in our audience to practice. If audience members can’t consistently practice a social habit like sharing altruistic accomplishments with others, they either won’t develop the habit in the first place, or the habit will be lost to disuse.
Another thing is that there could be some unexpected obstacle or Chesterton’s fence we don’t know about yet. Sometimes when you try to change things, you run face first into something really difficult and confusing. It can take a while to figure out what the heck happened. If we ask others to do something different, we can’t be sure we aren’t causing those others to run face first into some weird obstacle… at which point they may just wonder if we have any sense at all, lol. So, this is something that takes a lot of time, and care. It takes a lot of paying close attention to look for weird, awkward details that could be a sign of some sort of obstacle. This is another great reason to keep our efforts limited to a small group for now. The small group is a lot more likely to report weird obstacles to us, giving us a chance to do something sensible about it.
Changing a culture is really, really hard. To implement such a cultural change just within a chunk of the EA movement would take a significant amount of time. To get it to spread to all of EA would take a lot of time, and to get it spreading further would take many years.
Unless we one day see good evidence that a lot of people have adopted this cultural change, it’s really best to speak to the audience that is actually present, whatever their culture happens to be. Even if we have to bend over backwards to express our point of view, we just have to start by showing people respect no matter what they believe, and do whatever it takes to reach across inferential distances and get through to them properly. It takes work.
Both of these statements sound right! Most of my theater friends from university (who tended to have very good social instincts) recommend that, to understand why social conventions like this exist, people like us read the “Status” chapter of Keith Johnstone’s Impro, which contains this quote:
We soon discovered the ‘see-saw’ principle: ‘I go up and you go down’. Walk into a dressing-room and say ‘I got the part’ and everyone will congratulate you, but will feel lowered [in status]. Say ‘They said I was old’ and people commiserate, but cheer up perceptibly… The exception to this see-saw principle comes when you identify with the person being raised or lowered, when you sit on his end of the see-saw, so to speak. If you claim status because you know some famous person, then you’ll feel raised when they are: similarly, an ardent royalist won’t want to see the Queen fall off her horse. When we tell people nice things about ourselves this is usually a little like kicking them. People really want to be told things to our discredit in such a way that they don’t have to feel sympathy. Low-status players save up little tit-bits involving their own discomfiture with which to amuse and placate other people.
Emphasis mine. Of course, a large fraction of EA folks and rationalists I’ve met claim to not be bothered by others bragging about their accomplishments, so I think you’re right that promoting these sorts of discussions about accomplishments among other EAs can be a good idea.
This makes sense for spreading the message among EAs, which is why we have the Effective Altruist Accomplishments Facebook group. I’ll have to think further about the most effective ways of spreading this message more broadly, as I’m not in a good mental space to think about it right now.
I don’t believe you.