The most embarrassing aspect of the exclusionary, witch hunt, no-due-diligence point of view which some people are advocating in the comments here is that it probably would have merited the early and permanent exclusion of the Singularity Institute/MIRI from the EA community. Holden wrote a blog post on LessWrong saying that he didn’t like their organization and didn’t think they were worth funding. Assorted complaints have been floating around the web for a long time about their association with neoreactionaries, about LessWrong being a cult, and about the way they communicate and write. There have been a few odd ‘incidents’ (if you can call them that) over the years between MIRI, LessWrong, and the rationalist sphere. It would be easy to jumble all of that together into some kind of meta-post documenting concerns, and there is certainly no shortage of people who are willing and able to write long impassioned posts expressing their feelings, saying that they want nothing to do with SIAI/MIRI, and recommending that others do the same. We could have done that; lots of people would have come out of the woodwork to add their own complaints, the conversation would have reached critical mass, and boom—all of a sudden, half the steam behind AI safety goes down the tubes.
It’s easy to find online communities today where people are mind-numbingly dismissive of anything AI-related due to a poorly-argued, critical-mass groupthink against everything LessWrong. Good thing that we’re not one of them.
I agree that it’s important that EA stay open to weird things and not exclude people solely for being low status. I see several key distinctions between early SI/early MIRI and Intentional Insights:
SI was cause-focused, InIn is a fundraising org. Causes can be argued on their merits. For fundraising, “people dislike you for no reason” is in and of itself evidence that you are bad at fundraising and should stop.
I think this is an important general lesson. Right now “fundraising org” seems to be the default thing for people to start, but it’s actually one of the hardest things to do right and has the worst consequences if it goes poorly. With the exception of local groups, I’d like to see the community norms shift to discourage inexperienced people from starting fundraising groups.
AFAIK, SI wasn’t trying to use the credibility of the EA movement to bolster itself. Gleb is, both explicitly (by repeatedly and persistently listing endorsements he did not receive) and implicitly. As long as he is doing that, the proportionate response is criticizing him/distancing him from EA enough to cancel out the benefits.
The effective altruism name wasn’t worth as much when MIRI was getting started. There was no point in faking an endorsement because no one had heard of us. Now that EA has some cachet with people outside the movement there exists the possibility of trying to exploit that cachet, and it makes sense for us to raise the bar on who gets to claim endorsement.
Chronological nitpick: SingInst (which later split into MIRI and CFAR) is significantly older than the EA name and the EA movement, and the movement’s birth and growth are attributable in significant part to SingInst and CFAR projects.
My experience (as someone connected to both the rationalist and Oxford/Giving What We Can clusters as EA came into being) is that its birth came out of Giving What We Can, and the communities you mentioned contributed to growth (by aligning with EA) but not so much to birth.
I see several key distinctions between early SI/early MIRI and Intentional Insights:
You can equally draw a list of distinctions which point in the other direction: distinctions that would have made it more worthwhile to exclude MIRI than to exclude InIn. I’ve listed some already.
I don’t think this comparison holds water. Briefly, I think SI/MIRI would have mostly attracted criticism for being weird in various ways. As far as I can tell, Gleb is not acting weird; he is acting normal in the sense that he’s making normal moves in a game (called Promote-Your-Organization-At-All-Costs) that other people in the community don’t want him playing, especially not in a way that implicates other EA orgs by association.
Whatever you think of that object-level point, an independent meta-level point: it’s also possible that the EA movement excluding SI/MIRI at some point would have been a reasonable move in expectation. Any policy for deciding who to kick out necessarily runs the risk of both false positives and false negatives, and pointing out that a particular policy would have caused some false positive or false negative in the past is not a strong argument against it in isolation.
Briefly, I think SI/MIRI would have mostly attracted criticism for being weird in various ways.
They’ve attracted criticism for more substantial reasons; many academics didn’t and still don’t take them seriously because they have an unusual point of view. And other people believe that they are horrible people, somewhere between neoreactionary racists and a Silicon Valley conspiracy to take people’s money. It’s easy to pick up on something being a little off-putting and then get carried down the spiral of looking for and finding other problems. The original and underlying reason people have been pissed about InIn this entire time is that they are aesthetically displeased by its content: “It comes across as spammy and promotional.” An obvious typical mind fallacy. If you can fall for that then you can fall for “Eliezer’s writing style is winding and confusing.”
it’s also possible that the EA movement excluding SI/MIRI at some point would have been a reasonable move in expectation.
Highly implausible.
AI safety is a large issue. MIRI has done great work and has itself benefited tremendously from its involvement in the EA community. Besides that, there have been many benefits to EA from aligning with rationalists more generally.
Any policy for deciding who to kick out necessarily runs the risk of both false positives and false negatives, and pointing out that a particular policy would have caused some false positive or false negative in the past is not a strong argument against it in isolation.
Yes, but people are taking this case to be a true positive that proves the rule, which is no better.
Some of the criticisms I’ve read of MIRI are so nasty that I hesitate to rehash them all here for fear of changing the subject and sidetracking the conversation. I’ll just say this:
MIRI has been accused of much worse stuff than this post is accusing Gleb of right now. Compared to that weird MIRI stuff, Gleb looks like a normal guy who is fumbling his way through marketing a startup. The weird stuff MIRI / Eliezer did is really bizarre. For just one example, there are places in The Sequences where Eliezer presented his particular beliefs as The Correct Beliefs. In the context of a marketing piece, that would be bad (albeit in a mundane way that we see often), but in the context of a document on how to think rationally, that’s more like… egregious blasphemy. It’s a good thing the guy counter-balanced whatever that behavior was with articles like “Screening Off Authority” and “Guardians of the Truth”.
Do some searches for web marketing advice sometime, and you’ll see that Gleb might have actually been following some kind of instructions in some of the cases listed above. Not the best instructions, mind you… but somebody’s serious attempt to persuade you that some pretty weird stuff is the right thing to do. This is not exactly a science… it’s not even psychology. We’re talking about marketing. For instance, paying Facebook to promote things can result in problems… yet this is recommended by a really big company, Facebook. :/
There are a few complaints against him that stand out as a WTF… (Then again, if you’re really scouring for problems, you’re probably going to find the sorts of super embarrassing mistakes people only make when they’re really exhausted or whatever. I don’t know what to make of every single one of these examples yet.)
Anyway, MIRI / Eliezer can’t claim stuff like “I was following some marketing instructions I read on the Internet somewhere,” which, IMO, would explain a lot of the stuff Gleb did—which is not to say I think copying him is an effective or ethical way of promoting things! The Eliezer stuff was, like, self-contradictory enough that it was weird to the point of being original. It took me forever to figure that guy out. There were several years where I simply had no cogent opinion on him.
The stuff Gleb is doing is just so commonly bad. It’s not an excuse. I still want to see InIn shape up or ship out. I think EA can and should have higher standards than this. I have read and experienced a lot in the area of promoting things, and I know there are ways of persuading by making people think, ways that don’t bias or mislead them but instead get them more in touch with reality. I think it takes a really well thought out person to accomplish that, because seeing reality is only the first step… then you need to know how to deal with it, and you need to encourage the person to do something constructive with the knowledge as well. Sometimes bare information can leave people feeling pretty cynical, and it’s not like we were all taught how to be creative and resourceful and lead ourselves in situations that are unexpectedly different from what we believed.
I really believe there are better ways to be memorable than making claims about how much attention you’re getting. Providing questionable info of this type is certainly bad. The way I’m seeing it, wasting time on such uninspired attempts involves such a large quantity of lost potential that the questionable info is almost silly by comparison. I feel like we’re worried about a guy who says he has the best lemonade stand ever, when what we should be worried about is why he hasn’t managed to move up to selling at the grocery store yet.
I can very clearly envision the difference between what Gleb has been doing, and specific awesome ways in which it is possible to promote rationality. I can’t condemn Gleb as some sort of bad guy when what he’s doing wrong betrays such deep ignorance about marketing. I feel like: surely, a true villain would have taken over the beverage aisle at the grocery store by now.
If the EA community were primarily a community that functioned in person, it would be easier and more natural to deal with bad actors like Gleb; people could privately (in small conversations, then bigger ones, none of which involve Gleb) discuss and come to a consensus about his badness, that consensus could spread in other private smallish then bigger conversations none of which involve Gleb, and people could either ignore Gleb until he goes away, or just not invite him to stuff, or explicitly kick him out in some way.
But in a community that primarily functions online, where by default conversations are public and involve everyone, including Gleb, the above dynamic is a lot harder to sustain, and instead the default approach to ostracism is public ostracism, which people interested in charitable conversational norms understandably want to avoid. But just not having ostracism at all isn’t a workable alternative; sometimes bad actors creep into your community and you need an immune system capable of rejecting them. In many online communities this takes the form of a process for banning people; I don’t know how workable this would be for the EA community, since my impression is that it’s spread out across several platforms.
Right now we don’t have a procedure set up for formally deciding whether a particular person is a bad actor. If someone feels that another person is a bad actor, the only way to deal with the situation is informally. Since the community largely functions online, the discussion has a “witch hunt” character to it.
I think most people agree that bad actors exist, and we should have the capability to kick them out in principle (even if we don’t want to use it in Gleb’s particular case). But I agree that online discussions are not the best way to make these decisions. I’ve spent some time thinking about better alternatives, and I’ll make a top-level post outlining my proposal if this comment gets at least +4.
Edit: Alternatively, for people who feel it should be possible to oust a person like Gleb with less effort, a formal procedure could streamline this kind of thing in the future.
[ETA: a number of these comments are addressed to possible versions of this that John is not advocating, see his comment replying to mine.]
My attitude on this is rather negative, for several reasons:
The movement is diverse and there is no one to speak for all of it with authority, which is normal for intellectual and social movements
Individual fora have their moderation policies, individual organizations can choose who to affiliate with or how to authorize use of their trademarks, individuals can decide who to work with or donate to
There was no agreed-on course of action among the contributors to this document, let alone the wider EA community
Public discussion (including criticism) allows individual actors to make their own decisions
There are EAs collaborating with InIn on projects like secular Giving Games who report reaping significant benefits from that interaction, such as Jon Behar in the OP document; I don’t think others are in a position to ask that they cut off such interactions if they find them valuable
I think the time costs of careful discussion and communication are important ones to pay for procedural justice and trust: I would be very uncomfortable with (and not willing to give blind trust to) a non-transparent condemnation from such a process, and I think it would reflect badly on those involved and the movement as a whole
If one wants to avoid heated online discussions, flame wars, and whatnot, those would still be elicited by the outputs of the formal process (more so, I think, if it were less transparent and careful)
The movement is diverse and there is no one to speak for all of it with authority, which is normal for intellectual and social movements
But controversial decisions will still need to be made—about who to ban from the forum, say. As EA gets bigger, I see advantages to setting up some sort of due process (if only so the process can be improved over time) vs doing things in an ad hoc way.
There was no agreed-on course of action among the contributors to this document, let alone the wider EA community
Well, perhaps an official body would choose some kind of compromise action, such as what you did (making knowledge about Gleb’s behavior public without doing anything else). I don’t see why this is a compelling argument for an ad hoc approach.
Public discussion (including criticism) allows individual actors to make their own decisions
Without official means for dealing with bad actors, the only way to deal with them is by being a vigilante. The person who chooses to act as a vigilante will be the one who is the angriest about the actions of the original bad actor, and their response may not be proportionate. Anyone who sees someone else being a vigilante may respond with vigilante action of their own if they feel the first vigilante action was an overreach. The scenario I’m most concerned about is a spiral of vigilante action based on differing interpretations of events. A respected official body could prevent the commons from being burned in this way.
There are EAs collaborating with InIn on projects like secular Giving Games who report reaping significant benefits from that interaction, such as Jon Behar in the OP document; I don’t think others are in a position to ask that they cut off such interactions if they find them valuable
I don’t (currently) think it would be a good idea for an official body to make this kind of request. Actually, I think an official committee would be a good idea even if it technically had no authority at all. Just formalizing a role for respected EAs whose job it is to look in to these things seems to me like it could go a long way.
I think the time costs of careful discussion and communication are important ones to pay for procedural justice and trust: I would be very uncomfortable with (and not willing to give blind trust to) a non-transparent condemnation from such a process, and I think it would reflect badly on those involved and the movement as a whole
OK, let’s make it transparent then :) The question here is formal vs ad hoc, not transparent vs opaque.
If one wants to avoid heated online discussions, flame wars, and whatnot, those would still be elicited by the outputs of the formal process (more so, I think, if it were less transparent and careful)
If I see a long post on the EA forum that explains why someone I know is bad for the movement, I need to read the entire post to determine whether it was constructed in a careful & transparent way. If the person is a good friend, I might be tempted to skip reading the post and just make a negative judgement about its authors. If the post is written by people whose job is to do things carefully and transparently (people who will be fired if they do this badly), it’s easier to accept the post’s conclusions at face value.
The person who chooses to act as a vigilante will be the one who is the angriest about the actions of the original bad actor, and their response may not be proportionate. Anyone who sees someone else being a vigilante may respond with vigilante action of their own if they feel the first vigilante action was an overreach. The scenario I’m most concerned about is a spiral of vigilante action based on differing interpretations of events. A respected official body could prevent the commons from being burned in this way.
This is a very good point. One reason I got involved in the OP was to offset some of this selection effect. On the other hand, I was also reluctant to involve EA institutions to avoid dragging them into it (I was not expecting Will MacAskill’s post or the announcement by the EA Facebook group moderators, and mainly aiming at a summary of the findings for individuals). A respected institution may have an easier time in an individual case, but it may also lose some of its luster by getting involved in disputes.
Regarding your other points, I agree many of the things I worry about above (transparency, nonbinding recommendations, avoiding boycotts and overreach) can potentially be separated from official vs private/ad hoc. However a more official body could have more power to do the things I mention, so I don’t think the issues are orthogonal.
Regarding your other points, I agree many of the things I worry about above (transparency, nonbinding recommendations, avoiding boycotts and overreach) can potentially be separated from official vs private/ad hoc. However a more official body could have more power to do the things I mention, so I don’t think the issues are orthogonal.
True, but I suspect the worst case scenario for an official body is still less bad than the worst case scenario for vigilantism. Let’s say we set up an Effective Altruism Association to be the governing body for effective altruism. Let’s say it becomes apparent over time that the board of the Effective Altruism Association is abusing its powers. And let’s say members of the board ignore pressure to step down, and there’s nothing in the Association’s charter that would allow us to fix this problem. Well, at that point, someone can set up a rival League of Effective Altruists, and people can vote with their feet & start attending League-sponsored events instead of Association-sponsored events. This sounds to me like an outcome that would be bad, but not catastrophic in the way spiraling vigilantism has been for communities demographically similar to ours devoted to programming, atheism, video games, science fiction, etc. If anything, I am more worried about the case where the Association’s board is unable to do anything about vigilantism, or itself becomes the target of a hostile takeover by vigilantes.
I suspect a big cause of disagreement here is that in America at least, we’ve lost cultural memories about how best to organize ourselves.
When Tocqueville visited the United States in the 1830s, it was the Americans’ propensity for civic association that most impressed him as the key to their unprecedented ability to make democracy work. “Americans of all ages, all stations in life, and all types of disposition,” he observed, “are forever forming associations. There are not only commercial and industrial associations in which all take part, but others of a thousand different types—religious, moral, serious, futile, very general and very limited, immensely large and very minute… Nothing, in my view, deserves more attention than the intellectual and moral associations in America.”
...
Within all educational categories, total associational membership declined significantly between 1967 and 1993. Among the college-educated, the average number of group memberships per person fell from 2.8 to 2.0 (a 26-percent decline); among high-school graduates, the number fell from 1.8 to 1.2 (32 percent); and among those with fewer than 12 years of education, the number fell from 1.4 to 1.1 (25 percent). In other words, at all educational (and hence social) levels of American society, and counting all sorts of group memberships, the average number of associational memberships has fallen by about a fourth over the last quarter-century.
I don’t think formal procedures are likely to be followed, and I don’t think it’s generally sensible to go to all the trouble of building an explicit policy to kick people out of EA. It’s a terrible idea that contributes to the construction of a flawed social movement which obsessively cares about weird drama that, to those on the outside, looks silly. Outside view sanity check: which other social movements have a formal process for excluding people? None of them. Except maybe Scientology.
I’m not against online discussions on a structural level. I think they’re fine. I’m against the policy of banding together, starting faction warfare, and demanding that other people refrain from associating with somebody.
I don’t think formal procedures are likely to be followed
The impression I get from Jeff’s post is that the people involved took great pains to be as reasonable as possible. They don’t even issue recommendations for what to do in the body of the post—they just present observations. This after ~2000 edits over the course of more than two months. This makes me think they’d have been willing to go to the trouble of following a formal procedure. Especially if the procedure was streamlined enough that it took less time than what they actually did.
I don’t think it’s generally sensible to go to all the trouble of building an explicit policy to kick people out of EA
My recommendations are about how to formally resolve divisive disputes in general. If divisive disputes constitute existential threats to the movement, it might make sense to have a formal policy for resolving them, in the same way buildings have fire extinguishers despite the low rate of fires. Also, I took into account that my policy might be used rarely or never, and kept its maintenance cost as low as possible.
It’s a terrible idea that contributes to the construction of a flawed social movement which obsessively cares about weird drama that, to those on the outside, looks silly.
Drama seems pretty universal—I don’t think it can be wished away.
Outside view sanity check: which other social movements have a formal process for excluding people? None of them. Except maybe Scientology.
There are a lot of other analogies a person could make: Organizations fire people. States imprison people. Online communities ban people. Everyone needs to deal with bad actors. If nothing else, it’d be nice to know when it’s acceptable to ban a user from the EA forum, Facebook group, etc.
I’m not especially impressed with the reference class of social movements when it comes to doing good, and I’m not sure we should do a particular thing just because it’s what other social movements do.
I keep seeing other communities implode due to divisive internet drama, and I’d rather this not happen to mine. I would at least like my community to find a new way to implode. I’d rather be an interesting case study for future generations than an uninteresting one.
I’m against the policy of banding together, starting faction warfare, and demanding that other people refrain from associating with somebody.
So what’s the right way to take action, if you and your friends think someone is a bad actor who’s harming your movement?
The impression I get from Jeff’s post is that the people involved took great pains to be as reasonable as possible. They don’t even issue recommendations for what to do in the body of the post—they just present observations. This after ~2000 edits over the course of more than two months. This makes me think they’d have been willing to go to the trouble of following a formal procedure.
I mean for the community as a whole to say, “oh, look, our thought leaders decided to reject someone—ok, let’s all shut them out.”
Drama seems pretty universal—I don’t think it can be wished away.
There’s the normal kind of drama which is discussed and moved past, and the weird kind of drama like Roko’s Basilisk which only becomes notable through obsessive overattention and collective self-consciousness. You can choose which one you want to have.
There are a lot of other analogies a person could make: Organizations fire people. States imprison people. Online communities ban people. Everyone needs to deal with bad actors. If nothing else, it’d be nice to know when it’s acceptable to ban a user from the EA forum, Facebook group, etc
Those groups can make their own decisions. EA has no central authority. I moderate a group like that and there is no chance I’d ban someone just because of the sort of thing which is going on here, and certainly not merely because the high chancellor of the effective altruists told me to.
I’m not especially impressed with the reference class of social movements when it comes to doing good, and I’m not sure we should do a particular thing just because it’s what other social movements do.
We’re not following their lead on how to change the world. We’re following their lead on how to treat other members of the community. That’s something which is universal to social movements.
I keep seeing other communities implode due to divisive internet drama, and I’d rather this not happen to mine. I would at least like my community to find a new way to implode. I’d rather be an interesting case study for future generations than an uninteresting one.
Is this serious? EA is way more important than being yet another obscure entry in the annals of Internet history.
So what’s the right way to take action, if you and your friends think someone is a bad actor who’s harming your movement?
Tell it to them. Talk about it to other people. Run my organizations the way I see fit.
There’s the normal kind of drama which is discussed and moved past, and the weird kind of drama like Roko’s Basilisk which only becomes notable through obsessive overattention and collective self-consciousness. You can choose which one you want to have.
I think the second kind of drama is more likely in the absence of a governing body. See the vigilante action paragraph in this comment of mine.
Is this serious? EA is way more important than being yet another obscure entry in the annals of Internet history.
If the limiting factor for a movement like Effective Altruism is being able to coordinate people via the Internet, then coordinating people via the Internet ought to be a problem of EA interest.
I see your objections to my proposal as being fundamentally aesthetic. You don’t like the idea of central authority, but not because of some particular reason why it would lead to bad consequences—it just doesn’t appeal to you intuitively. Does that sound accurate?
I think the second kind of drama is more likely in the absence of a governing body.
The second kind of drama was literally caused by the actions of a governing body. Specifically, one that was so self-absorbed in its own constellation of ideas that it forgot about everything that outsiders considered normal.
See the vigilante action paragraph in this comment of mine.
So you’re trying to say that the worst case scenario of setting up an official EA panel is not as bad as the worst case scenario of vigilantism. That’s a very limited argument. Merely comparing worst case scenarios is a narrow approach: first, these are by definition events at the extreme tails of our expectations, which implies we are particularly bad at understanding and predicting them; second, we also need to take probabilities into account; and third, we need to consider average, median, best case, etc. expectations as well. Furthermore, it’s not clear to me that the level of witch hunting and vigilantism currently present in programming, atheist, etc. communities is actually worse than having a veritable political rift between EA organizations. Moreover, you’re jumping from Roko’s Basilisk-type weird drama and controversy to vigilantism, when the two are fairly different things. And finally, you’re shifting the subject of discussion from a panel that excommunicates people to some kind of big organization that runs all the events.
Besides that, the fact that there has been essentially no vigilantism in EA except for a small number of people in this thread suggests that you’re jumping far too quickly to enormous solutions for vague problems.
If the limiting factor for a movement like Effective Altruism is being able to coordinate people via the Internet, then coordinating people via the Internet ought to be a problem of EA interest.
That’s way too simplistic. Communities don’t hit a ceiling and then fail when they run into a universal limiting factor. Their actions and evolution are complicated and chaotic and always affected by many things. And hardly any social movements are led by people who look at other social movements and then pattern their own behavior based on others’.
I see your objections to my proposal as being fundamentally aesthetic.
I prefer the term ‘common sense’.
You don’t like the idea of central authority, but not because of some particular reason why it would lead to bad consequences—it just doesn’t appeal to you intuitively.
The second kind of drama was literally caused by the actions of a governing body. Specifically, one that was so self-absorbed in its own constellation of ideas that it forgot about everything that outsiders considered normal.
If selection of leadership is an explicit process, we can be careful to select people we trust to represent the EA movement to the world at large. If the process isn’t explicit, forum moderators may be selected in an incidental way, e.g. on the basis of being popular bloggers.
So you’re trying to say that the worst case scenario of setting up an official EA panel is not as bad as the worst case scenario of vigilantism. That’s a very limited argument. Merely comparing worst case scenarios is a narrow approach: first, these are by definition events at the extreme tails of our expectations, which implies we are particularly bad at understanding and predicting them; second, we also need to take probabilities into account; and third, we need to consider average, median, best case, etc. expectations as well.
Governance in general seems like it’s mainly about mitigation of worst case scenarios. Anyway, the evidence I presented doesn’t just apply to the tail ends of the distribution.
Furthermore, it’s not clear to me that the level of witch hunting and vigilantism currently present in programming, atheist, etc. communities is actually worse than having a veritable political rift between EA organizations.
This is an empirical question. I don’t get the impression that competition between organizations is usually very destructive. It might be interesting for someone to research e.g. the history of the NBA and the ABA (competing professional basketball leagues in the 1970s) or the history of AYSO and USYSA (competing youth soccer leagues in the US that still both exist—contrast with youth baseball, where I don’t believe Little League has any serious rivals). I haven’t heard much about destructive competition between rival organizations of this type. Even rival businesses are often remarkably civil towards one another.
I suspect the reason competition between organizations is rarely destructive is because organizations are fighting over mindshare, and acting like a jerk is a good way to lose mindshare. When Google released its Dropbox competitor Google Drive, the CEO of Dropbox could have started saying nasty things about Google’s CEO in order to try & discredit Drive. Instead, he cracked a joke. The second response makes me much more favorably inclined toward Dropbox’s product.
Vigilantes don’t typically think like this. They’re not people who were chosen by others to represent an organization. They’re people who self-select on the basis of anger. They want revenge. And they often do things that end up discrediting their cause.
The biggest example I can think of re: organizations competing in a nasty way is rival political parties, and I think there are incentives that account for that. Based on what I’ve read about the details of how Australia’s system operates, it seems like Australian politicians face a slightly better set of incentives than American ones. I’d be interested to hear from Australians about whether they think their politicians are less nasty to each other.
Was there a particular case of destructive competition between organizations that you had in mind?
the fact that there has been essentially no vigilantism in EA except for a small number of people in this thread suggests that you’re jumping far too quickly to enormous solutions for vague problems.
Part of the reason this hasn’t been much of a problem is because the EA movement is sufficiently “elitist” to filter out troublemakers during the recruitment stage. (Gleb got through, which is arguably my fault—I’m the person who introduced him to other EAs and told them his organization seemed interesting. Sorry about that.) Better mechanisms for mitigating bad actors who get through means we can be less paranoid about growth.
Also, it makes sense to set something like this up well before it’s needed. If it’s formed in response to an existing crisis, it won’t have much accumulated moral authority, and it might look like a play on the part of one party or another to create a “neutral” arbiter that favors them.
And hardly any social movements are led by people who look at other social movements and then pattern their own behavior based on others’.
People in EA have done this a fair amount. I’ve heard of at least two EAs besides Jeff who have spent significant time looking at the history of social movements, and here is OpenPhil’s research into the history of philanthropy. I assume a smart EA-type movement of the future would also do this stuff.
I also think that contributing to society’s stock of knowledge about how to organize people is valuable, because groups are rarely set up for the purpose of doing harm and often end up incidentally doing good (e.g. charitable activities of fraternal organizations).
Governance in general seems like it’s mainly about mitigation of worst case scenarios.
Doesn’t seem like that to me. And just because “governance in general” does something doesn’t mean we should.
This is an empirical question.
Yeah, and it’s unclear. I don’t see why it is relevant anyway. I never claimed that creating an EA panel would lead to a political divide between organizations.
Part of the reason this hasn’t been much of a problem is because the EA movement is sufficiently “elitist” to filter out troublemakers during the recruitment stage.
Better mechanisms for mitigating bad actors who get through means we can be less paranoid about growth.
We’re not paranoid about growth and we’re not being deliberately elitist. People won’t change their recruiting efforts just because a few people got officially kicked out. When the rubber hits the road on spreading EA, people just busy themselves with their activities, rather than optimizing some complicated function.
People in EA have done this a fair amount. I’ve heard of at least two EAs besides Jeff who have spent significant time looking at the history of social movements, and here is OpenPhil’s research into the history of philanthropy. I assume a smart EA-type movement of the future would also do this stuff.
Yeah, EA, which is not a typical social movement. I’ve hardly heard of any others doing this.
Saying that you want to experiment with EA, risking the stability of an unusually important social movement, just because it might benefit random people with unknown intentions who may or may not study our history, is taking it a little far.
I also think that contributing to society’s stock of knowledge about how to organize people is valuable, because groups are rarely set up for the purpose of doing harm and often end up incidentally doing good (e.g. charitable activities of fraternal organizations).
Well, most of them are relatively ineffective, and most of them don’t study histories of social movements. As for the ones that do, they don’t look up obscure things such as this. When people spend significant time looking at the history of social movements, they look at large, notable, well-documented cases. They will not look at a few people’s online actions. There is no shortage of stories of people doing things online at this low level of notability and size.
Saying that you want to experiment with EA, risking the stability of an unusually important social movement, just because it might benefit random people with unknown intentions who may or may not study our history, is taking it a little far.
Not much of a problem except the time you wasted going after it. Few people in the outside world knew about InIn; fewer still could have associated it with effective altruism. Even the people on Reddit who dug into his past and harassed him on his fake accounts thought he was just a self-promoting fraud and appeared to pick up nothing about altruism or charity.
I’m done arguing about this, but if you still want an ex post facto solution just to ward off imagined future Glebs, take a moment to go to people in the actual outside world, i.e. people who have experience with social movements outside of this circlejerk, and ask them “hey, I’m a member of a social movement based on charity and altruism. We had someone who associated with our community and did some shady things. So we’d like to create an official review board where Trusted Community Moderators can investigate the actions of people who take part in our community, and then decide whether or not to officially excommunicate them. Could you be so kind as to tell us if this is the awful idea that it sounds like? Thanks.”
we’d like to create an official review board where Trusted Community Moderators can investigate the actions of people who take part in our community, and then decide whether or not to officially excommunicate them.
So here’s your proposal for dealing with bad actors in a different comment:
Tell it to them. Talk about it to other people. Run my organizations the way I see fit.
You’ve found ways to characterize other proposals negatively without explaining how they would concretely lead to bad consequences. I’ll note that I can do the same for this proposal—talking to them directly is “rude” and “confrontational”, while talking about it to other people is “gossip” if not “backstabbing”.
Dealing with bad actors is necessarily going to involve some kind of hostile action, and it’s easy to characterize almost any hostile action negatively.
I think the way to approach this topic is to figure out the best way of doing things, then find the framing that will allow us to spend as few weirdness points as possible. I doubt this will be hard, as I don’t think this is very weird. I lived in a large student co-op with just a 3-digit number of people, and we had formal meetings with motions and elections and yes, formal expulsions. The Society for Creative Anachronism is about dressing up and pretending you’re living in medieval times. Here’s their organizational handbook with bylaws. Check out section X, subsection C, subsection 3 where “Expulsion from the SCA” is discussed:
a. Expulsion precludes the individual from attendance or participation in any way, shape or form in any SCA activity, event, practice, or official gathering for any reason, at any time. Expulsions are temporary until the Board imposes a Revocation of Membership and Denial of Participation (R&D). This includes a ban on participation on officially recognized SCA social media (Facebook) sites, officially recognized SCA electronic email lists, and officially recognized SCA webpages.
You’ve found ways to characterize other proposals negatively without explaining how they would concretely lead to bad consequences.
Sure I did. I said it would create unnecessary bureaucracy taking up people’s time and it would make judgements and arguments that would start big new controversies where its opinions wouldn’t be universally followed. Also, it would look ridiculous to anyone on the outside.
I think the way to approach this topic is to figure out the best way of doing things, then find the framing that will allow us to spend as few weirdness points as possible. I doubt this will be hard, as I don’t think this is very weird.
Is it not apparent that other things besides ‘weirdness points’ should be factored into decisionmaking?
The Society for Creative Anachronism is about dressing up and pretending you’re living in medieval times. Here’s their organizational handbook with bylaws. Check out section X, subsection C, subsection 3 where “Expulsion from the SCA” is discussed:
You found an organization that excludes people from itself. So what? The question here is about a broad social movement trying to kick people out. If all the roleplayers of the world decided to make a Roleplaying Committee whose job was to ban people from participating in roleplaying, you’d have a point.
Sure I did. I said it would create unnecessary bureaucracy taking up people’s time and it would make judgements and arguments that would start big new controversies where its opinions wouldn’t be universally followed. Also, it would look ridiculous to anyone on the outside.
That’s fair. Here are my responses:
Specialization of labor has a track record of saving people time that goes back millennia. The fact that we have police, whose job it is to deal with crime, means I have to spend a lot less time worrying about crime personally. If we got rid of the police, I predict the amount of crime-related drama would rise. See Steven Pinker on why he’s no longer an anarchist.
A respected neutral panel whose job is resolving controversies has a better chance of its opinions being universally followed than people whose participation in a discussion is selected on the basis of anger—especially if the panel is able to get better at mediation over time, through education and experience.
With regard to ridiculousness, I don’t think what I’m suggesting is very different than the way lots of groups govern themselves. Right now you’re thinking of effective altruism as part of the “movement” reference class, but I suspect in many cases a movement or hobby will have one or more “associations” which form de facto governing bodies. Scouting is a movement. The World Organization of the Scout Movement is an umbrella organization of national Scouting organizations, governed by the World Scout Committee. Chess is a hobby. FIDE is an international organization that governs competitive chess and consists of 185 member federations. One can imagine the creation of an umbrella organization for all the existing EA organizations that served a role similar to these.
Is it not apparent that other things besides ‘weirdness points’ should be factored into decisionmaking?
I’m feeling frustrated, because it seems like you keep interpreting my statements in a very uncharitable way. In this case, what I meant to communicate was that we should factor in everything besides weirdness points, then factor in weirdness points. Please be assured that I want to do whatever the best thing is, I consider what the best thing is to be an empirical question, and I appreciate quality critical feedback—but not feedback that just drains my energy.
You found an organization that excludes people from itself. So what? The question here is about a broad social movement trying to kick people out. If all the roleplayers of the world decided to make a Roleplaying Committee whose job was to ban people from participating in roleplaying, you’d have a point.
Implementation of my proposal might involve the creation of an “Effective Altruism Association”, analogous to the SCA, as I describe here.
Specialization of labor has a track record of saving people time that goes back millennia.
Sounds great, but it’s only valuable when people can actually specialize. You can’t specialize in determining whether somebody’s a true EA or not. Being on a committee that does this won’t make you wiser or fairer about it. It’s a job that’s equally doable by people already in the community with their existing skills and their existing job titles.
A respected neutral panel whose job is resolving controversies has a better chance of its opinions being universally followed than people whose participation in a discussion is selected on the basis of anger
It’s trivially true that the majority opinion is most likely to be followed.
With regard to ridiculousness, I don’t think what I’m suggesting is very different than the way lots of groups govern themselves.
Sure it is. You’re suggesting that the FIDE start deciding who’s not allowed to play chess.
In this case, what I meant to communicate was that we should factor in everything besides weirdness points, then factor in weirdness points.
I don’t think the order in which you factor things will make a difference in how the options are eventually ranked, assuming you’re being rational. In any case, there are large differences between EA and the SCA. For one thing, the SCA does not care about how it is perceived by outsiders. The SCA is often rewarded for being weird. The SCA is also not necessarily rational.
Implementation of my proposal might involve the creation of an “Effective Altruism Association”, analogous to the SCA, as I describe here.
Then you’re suggesting something far larger and far more comprehensive than anything that I’ve heard about, which I have no interest in discussing.
Being on a committee that does this won’t make you wiser or fairer about it.
I actually think being on a committee helps some on its own, because you know you’ll be held accountable for how you do your job. But I expect most of the advantages of a committee to be in (a) identifying people who are wise and fair to serve on it (and yes, I do think some people are wiser and fairer than others) (b) having those people spend a lot of time thinking about the relevant considerations (c) overcoming bystander effects and ensuring that there exists some neutral third party to help adjudicate conflicts.
If there’s no skill to this sort of thing, why not make decisions by flipping coins?
It’s a job that’s equally doable by people already in the community with their existing skills and their existing job titles.
Well naturally, the committee would be staffed by people who are already in the community, and it would probably not be their full-time job.
Sure it is. You’re suggesting that the FIDE start deciding who’s not allowed to play chess.
Do you really think chess federations will let you continue to play at their events if you cheat or if you’re rude/aggressive?
Even the people on Reddit who dug into his past and harassed him on his fake accounts thought he was just a self-promoting fraud and appeared to pick up nothing about altruism or charity.
Looking at the links you shared, it looks like these accounts weren’t so much ‘fake’ as just new accounts from Gleb that were used for broadcasting/spamming Gleb’s book on Reddit. That attracted criticism for the aggressive self-promotion (both for posting to so many subreddits and for the self-promotional spin in the message).
The commenters call out angela_theresa for creating a Reddit account just to promote the book. She references an Amazon review, and there is an Amazon review from the same time period by an Angela Hodge (not an InIn contractor). My judgment is that this is a case of genuine appreciation of the book, perhaps encouraged by Gleb’s requests for various actions to advance the book. In one of the reviews she mentions that she knows Gleb personally, but says she got a lot out of the book.
At least one other account was created to promote the book, but I haven’t been able to determine whether it was an InIn affiliate. Gleb says he
didn’t ask, I mean specifically that I did not in any way hint that they should do so or that doing so is a good idea 🙂 Again, I want to be clear that they might or might not have done so out of their own initiative
OK, my goal was not to launch accusations. I just wanted to point out that even when people were saying this (they thought they were fake accounts) and looking into his personal info, they didn’t say anything about altruism or charity, so the themes behind the content weren’t apparent, meaning that there was little or no damage to EA. Because most of the content on the site and in the book isn’t about charity or altruism, it’s not clear how well it prompts people to actually donate and such, but it can’t be very harmful.
Kbog, I think your general mistake on this thread as a whole is assuming a binary between “either we act charitably to people or we ostracise people whenever members of the community feel like outgrouping them”. Thus your straw-man characterisation of an
exclusionary, witch hunt, no-due-diligence point of view which some people are advocating in the comments here
Which was exactly what I disavowed at the bottom of my long comment here.
Examples of why your dichotomy is false: we could have very explicit and contained rules, such as “If you do X, Y or Z then you’re out” and this would be different from the generic approach of “if anyone tries to outgroup them then support that effort”. Or if we feel that it is too hard to put into a clear list, perhaps we could outsource our decision-making to a small group of trusted ‘community moderators’ who were asked to make decisions about this sort of thing. In any case, these are two I just came up with; the landscape is more nuanced than you’re accounting for.
To be more clear, I’m against both (a) witch hunts and (b) formal procedures of evicting people. The fact that one of these things can happen without the other does not eliminate the fact that both of them are still stupid on their own.
we could have very explicit and contained rules, such as “If you do X, Y or Z then you’re out” and this would be different from the generic approach of “if anyone tries to outgroup them then support that effort”.
As a counterexample to the dichotomy, sure. As something to be implemented… haha no. The more rules you make up the more argument there will be over what does or doesn’t fall under those rules, what to do with bad actions outside the rules, etc.
Or if we feel that it is too hard to put into a clear list, perhaps we could outsource our decision-making to a small group of trusted ‘community moderators’
Maybe you shouldn’t outsource my decision about who is kosher to “trusted community moderators”. Why are people not smart enough to figure it out on their own?
And is this supposed to save time, the hundreds of hours that people are bemoaning here? A formal group with formal procedures processing random complaints and documenting them every week takes up at least as much time.
The system of everyone keeping track of everything works ok in small communities, but we’re so far above Dunbar’s number that I don’t think it’s viable anymore for us. As you point out, a more formal process wouldn’t have time for “processing random complaints and documenting them every week”, so they’d need a process for screening out everything but the most serious problems.
The system of everyone keeping track of everything works ok in small communities, but we’re so far above Dunbar’s number that I don’t think it’s viable anymore for us.
Everyone doesn’t have to keep track of everything. Everyone just needs to do what they can with their contacts and resources. Political parties are vastly larger than Dunbar’s Number and they (usually) don’t have formal committees designed to purge them of unwanted people. Same goes for just about every social movement that I can think of. Except for churches excommunicating people, of course.
This is the only time that there’s been a problem like this where people started calling for a formal process. You have no idea if it actually represents a frequent phenomenon.
so they’d need a process for screening out everything but the most serious problems.
Make bureaucracy more efficient by adding more bureaucracy...
In the US, and elsewhere, they use incentives to keep people in line, such as withholding endorsements or party funds, which can lead to people losing their seat, thus effectively kicking them out of the party. See party whips for what this looks like in practice.
Also, in parliamentary systems, you can often kick people out of the party directly, or at the very least take away their power and position.
Yes, if you’re in charge of an organization or resources, you can allocate them and withhold them how you wish. Nothing I said is against that.
In parties and parliaments you can remove people from power. You can’t remove people from associating with your movement.
The question here is whether a social movement and philosophy can have a bunch of representatives whose job it is to tell other people’s organizations and other people’s communities to exclude certain people.
In parties and parliaments you can remove people from power. You can’t remove people from associating with your movement.
Your party leadership can publicly denounce a person and disinvite them from your party’s convention. That amounts to about the same thing.
The question here is whether a social movement and philosophy can have a bunch of representatives whose job it is to tell other people’s organizations and other people’s communities to exclude certain people.
I don’t (currently) think it would be a good idea for an official body to make this kind of request. Actually, I think an official committee would be a good idea even if it technically had no authority at all. Just formalizing a role for respected EAs whose job it is to look in to these things seems to me like it could go a long way.
Good question—not really sure, I just meant to directly answer that one question. That being said, social movements have, with varying degrees of success, managed to distance themselves from fringe subsets and problematic actors. How, exactly, one goes about doing this is unknown to me, but I’m sure that it’s something that we could (and should) learn from leaders of other movements.
Off the top of my head, the example that is most similar to our situation is the expulsion of Ralph Nader from the various movements and groups he was a part of after the Bush election.
Maybe you shouldn’t outsource my decision about who is kosher to “trusted community moderators”. Why are people not smart enough to figure it out on their own?
The issue in this case is not that he’s in the EA community, but that he’s trying to act as the EA community’s representative to people outside the community who are not well placed to make that judgment themselves.
That’s an important distinction, and acting against that (trying to act as the EA community’s representative) doesn’t automatically mean banning from the movement.
The most embarrassing aspect of the exclusionary, witch hunt, no-due-diligence point of view which some people are advocating in the comments here is that it probably would have merited the early and permanent exclusion of the Singularity Institute/MIRI from the EA community. Holden wrote a blog on LessWrong saying that he didn’t like their organization and didn’t think they were worth funding. Some assorted complaints have been floating around the web for a long time complaining about them associating with neoreactionaries and about LessWrong being cultists as well as complaints about the way they communicate and write. There’s been a few odd ‘incidents’ (if you can call them that) over the years between MIRI, LessWrong, and the rationalist sphere. It would be easy to jumble all of that together into some kind of meta-post documenting concerns, and there is certainly no shortage of people who are willing and able to write long impassioned posts expressing their feelings and saying that they want nothing to do with SIAI/MIRI and recommending others to adhere to that. We could have done that, lots of people would come out of the woodwork to add their own complaints, the conversation would reach critical mass, and boom—all of a sudden, half the steam behind AI safety goes down the tubes.
It’s easy to find online communities today where people are mind-numbingly dismissive of anything AI-related due to a poorly-argued, critical-mass groupthink against everything LessWrong. Good thing that we’re not one of them.
I agree that it’s important that EA stay open to weird things and not exclude people solely for being low status. I see several key distinctions between early SI/early MIRI and Intentional Insights:
SI was cause focused, II a fundraising org. Causes can be argued on their merits. For fundraising, “people dislike you for no reason” is in and of itself evidence you are bad at fundraising and should stop.
I think this is an important general lesson. Right now “fundraising org” seems to be the default thing for people to start, but it’s actually one of the hardest things to do right and has the worst consequences if it goes poorly. With the exception of local groups, I’d like to see the community norms shift to discourage inexperienced people from starting fundraising groups.
AFAIK, SI wasn’t trying to use the credibility of the EA movement to bolster itself . Gleb is, both explicitly (by repeatedly and persistently listing endorsements he did not receive) and implicitly. As long as he is doing that the proportionate response is criticizing him/distancing him from EA enough to cancel out the benefits.
The effective altruism name wasn’t worth as much when MIRI was getting started. There was no point in faking an endorsement because no one had heard of us. Now that EA has some cachet with people outside the movement there exists the possibility of trying to exploit that cachet, and it makes sense for us to raise the bar on who gets to claim endorsement.
Chronological nitpick: SingInst (which later split into MIRI and CFAR) is significantly older than the EA name and the EA movement, and its birth and growth are attributable in significant part to SingInst and CFAR projects.
My experience (as someone connected to both the rationalist and Oxford/Giving What We Can clusters as EA came into being) is that its birth came out of Giving What We Can, and the communities you mentioned contributed to growth (by aligning with EA) but not so much to birth.
You can equally draw a list of distinctions which point in the other direction: distinctions that would have made it more worthwhile to exclude MIRI than to exclude InIn. I’ve listed some already.
I don’t think this comparison holds water. Briefly, I think SI/MIRI would have mostly attracted criticism for being weird in various ways. As far as I can tell, Gleb is not acting weird; he is acting normal in the sense that he’s making normal moves in a game (called Promote-Your-Organization-At-All-Costs) that other people in the community don’t want him playing, especially not in a way that implicates other EA orgs by association.
Whatever you think of that object-level point, an independent meta-level point: it’s also possible that the EA movement excluding SI/MIRI at some point would have been a reasonable move in expectation. Any policy for deciding who to kick out necessarily runs the risk of both false positives and false negatives, and pointing out that a particular policy would have caused some false positive or false negative in the past is not a strong argument against it in isolation.
They’ve attracted criticism for more substantial reasons; many academics didn’t and still don’t take them seriously because they have an unusual point of view. And other people believe that they are horrible people, somewhere in between neoreactionary racists and a Silicon Valley conspiracy to take people’s money. It’s easy to pick up on something being a little off-putting and then get carried down the spiral of looking for and finding other problems. The original and underlying reason people have been pissed at InIn this entire time is that they are aesthetically displeased by its content: “It comes across as spammy and promotional.” That’s an obvious typical mind fallacy. If you can fall for that, then you can fall for “Eliezer’s writing style is winding and confusing.”
Highly implausible.
AI safety is a large issue. MIRI has done great work and has itself benefited tremendously from its involvement. Besides that, there have been many benefits to EA for aligning with rationalists more generally.
Yes, but people are taking this case to be a true positive that proves the rule, which is no better.
Some of the criticisms I’ve read of MIRI are so nasty that I hesitate to rehash them all here for fear of changing the subject and sidetracking the conversation. I’ll just say this:
MIRI has been accused of much worse stuff than this post is accusing Gleb of right now. Compared to that weird MIRI stuff, Gleb looks like a normal guy who is fumbling his way through marketing a startup. The weird stuff MIRI / Eliezer did is really bizarre. For just one example, there are places in The Sequences where Eliezer presented his particular beliefs as The Correct Beliefs. In the context of a marketing piece, that would be bad (albeit in a mundane way that we see often), but in the context of a document on how to think rationally, that’s more like… egregious blasphemy. It’s a good thing the guy counterbalanced that behavior with articles like “Screening Off Authority” and “Guardians of the Truth”.
Do some searches for web marketing advice sometime, and you’ll see that Gleb might have actually been following some kind of instructions in some of the cases listed above. Not the best instructions, mind you… but somebody’s serious attempt to persuade you that some pretty weird stuff is the right thing to do. This is not exactly a science… it’s not even psychology. We’re talking about marketing. For instance, paying Facebook to promote things can result in problems… yet this is recommended by a really big company, Facebook. :/
There are a few complaints against him that stand out as a WTF… (Then again, if you’re really scouring for problems, you’re probably going to find the sorts of super embarrassing mistakes people only make when they’re really exhausted or whatever. I don’t know what to make of every single one of these examples yet.)
Anyway, MIRI / Eliezer can’t claim anything like “I was following some marketing instructions I read on the Internet somewhere,” which, IMO, would explain a lot of what Gleb did—which is not to say I think copying those instructions is an effective or ethical way of promoting things! The Eliezer stuff was self-contradictory enough that it was weird to the point of being original. It took me forever to figure that guy out. There were several years where I simply had no cogent opinion on him.
The stuff Gleb is doing is just so commonly bad. It’s not an excuse. I still want to see InIn shape up or ship out. I think EA can and should have higher standards than this. I have read and experienced a lot in the area of promoting things, and I know there are ways of persuading through making people think that don’t bias them or mislead them, but by getting them more in touch with reality. I think it takes a really well thought out person to accomplish that because seeing reality is only the first step… then, you need to know how to deal with it, and you need to encourage the person to do something constructive with the knowledge as well. Sometimes bare information can leave people feeling pretty cynical, and it’s not like we were all taught how to be creative and resourceful and lead ourselves in situations that are unexpectedly different from what we believed.
I really believe there are better ways to be memorable than making claims about how much attention you’re getting. Providing questionable info of this type is certainly bad. The way I’m seeing it, wasting time on such uninspired attempts involves such a large quantity of lost potential that questionable info is almost silly by comparison. I feel like we’re worried about a guy who says he has the best lemonade stand ever, but what we should be worried about is why he hasn’t managed to move up to selling at the grocery store yet.
I can very clearly envision the difference between what Gleb has been doing, and specific awesome ways in which it is possible to promote rationality. I can’t condemn Gleb as some sort of bad guy when what he’s doing wrong betrays such deep ignorance about marketing. I feel like: surely, a true villain would have taken over the beverage aisle at the grocery store by now.
I see insight in what Qiaochu wrote here:
Right now we don’t have a procedure set up for formally deciding whether a particular person is a bad actor. If someone feels that another person is a bad actor, the only way to deal with the situation is informally. Since the community largely functions online, the discussion has a “witch hunt” character to it.
I think most people agree that bad actors exist, and we should have the capability to kick them out in principle (even if we don’t want to use it in Gleb’s particular case). But I agree that online discussions are not the best way to make these decisions. I’ve spent some time thinking about better alternatives, and I’ll make a top-level post outlining my proposal if this comment gets at least +4.
Edit: Alternatively, for people who feel it should be possible to oust a person like Gleb with less effort, a formal procedure could streamline this kind of thing in the future.
[ETA: a number of these comments are addressed to possible versions of this that John is not advocating, see his comment replying to mine.]
My attitude on this is rather negative, for several reasons:
The movement is diverse and there is no one to speak for all of it with authority, which is normal for intellectual and social movements
Individual fora have their moderation policies, individual organizations can choose who to affiliate with or how to authorize use of their trademarks, individuals can decide who to work with or donate to
There was no agreed-on course of action among the contributors to this document, let alone the wider EA community
Public discussion (including criticism) allows individual actors to make their own decisions
There are EAs collaborating with InIn on projects like secular Giving Games who report reaping significant benefits from that interaction, such as Jon Behar in the OP document; I don’t think others are in a position to ask that they cut off such interactions if they find them valuable
I think the time costs of careful discussion and communication are important ones to pay for procedural justice and trust: I would be very uncomfortable with (and not willing to give blind trust to) a non-transparent condemnation from such a process, and I think it would reflect badly on those involved and the movement as a whole
If one wants to avoid heated online discussions, flame wars, and whatnot, these would still be elicited by the outputs of the formal process (more so if it were less transparent and careful, I think)
But controversial decisions will still need to be made—about who to ban from the forum, say. As EA gets bigger, I see advantages to setting up some sort of due process (if only so the process can be improved over time) vs doing things in an ad hoc way.
Well, perhaps an official body would choose some kind of compromise action, such as what you did (making knowledge about Gleb’s behavior public without doing anything else). I don’t see why this is a compelling argument for an ad hoc approach.
Without official means for dealing with bad actors, the only way to deal with them is by being a vigilante. The person who chooses to act as a vigilante will be the one who is the angriest about the actions of the original bad actor, and their response may not be proportionate. Anyone who sees someone else being a vigilante may respond with vigilante action of their own if they feel the first vigilante action was an overreach. The scenario I’m most concerned about is a spiral of vigilante action based on differing interpretations of events. A respected official body could prevent the commons from being burned in this way.
I don’t (currently) think it would be a good idea for an official body to make this kind of request. Actually, I think an official committee would be a good idea even if it technically had no authority at all. Just formalizing a role for respected EAs whose job it is to look into these things seems to me like it could go a long way.
OK, let’s make it transparent then :) The question here is formal vs ad hoc, not transparent vs opaque.
If I see a long post on the EA forum that explains why someone I know is bad for the movement, I need to read the entire post to determine whether it was constructed in a careful & transparent way. If the person is a good friend, I might be tempted to skip reading the post and just make a negative judgement about its authors. If the post is written by people whose job is to do things carefully and transparently (people who will be fired if they do this badly), it’s easier to accept the post’s conclusions at face value.
This is a very good point. One reason I got involved in the OP was to offset some of this selection effect. On the other hand, I was also reluctant to involve EA institutions to avoid dragging them into it (I was not expecting Will MacAskill’s post or the announcement by the EA Facebook group moderators, and mainly aiming at a summary of the findings for individuals). A respected institution may have an easier time in an individual case, but it may also lose some of its luster by getting involved in disputes.
Regarding your other points, I agree many of the things I worry about above (transparency, nonbinding recommendations, avoiding boycotts and overreach) can potentially be separated from official vs private/ad hoc. However a more official body could have more power to do the things I mention, so I don’t think the issues are orthogonal.
True, but I suspect the worst case scenario for an official body is still less bad than the worst case scenario for vigilantism. Let’s say we set up an Effective Altruism Association to be the governing body for effective altruism. Let’s say it becomes apparent over time that the board of the Effective Altruism Association is abusing its powers. And let’s say members of the board ignore pressure to step down, and there’s nothing in the Association’s charter that would allow us to fix this problem. Well at that point, someone can set up a rival League of Effective Altruists, and people can vote with their feet & start attending League-sponsored events instead of Association-sponsored events. This sounds to me like an outcome that would be bad, but not catastrophic in the way spiraling vigilantism has been for communities demographically similar to ours devoted to programming, atheism, video games, science fiction, etc. If anything, I am more worried about the case where the Association’s board is unable to do anything about vigilantism, or itself becomes the target of a hostile takeover by vigilantes.
I suspect a big cause of disagreement here is that in America at least, we’ve lost cultural memories about how best to organize ourselves.
See the essay Bowling Alone: America’s Declining Social Capital (15K citations on Google Scholar) for info on big drops in participation for churches, unions, PTAs, and civic/fraternal organizations.
I don’t think formal procedures are likely to be followed, and I don’t think it’s generally sensible to go to all the trouble of building an explicit policy to kick people out of EA. It’s a terrible idea that contributes to the construction of a flawed social movement which obsessively cares about weird drama that, to those on the outside, looks silly. Outside view sanity check: which other social movements have a formal process for excluding people? None of them. Except maybe Scientology.
I’m not against online discussions on a structural level. I think they’re fine. I’m against the policy of banding together, starting faction warfare, and demanding that other people refrain from associating with somebody.
The impression I get from Jeff’s post is that the people involved took great pains to be as reasonable as possible. They don’t even issue recommendations for what to do in the body of the post—they just present observations. This after ~2000 edits over the course of more than two months. This makes me think they’d have been willing to go to the trouble of following a formal procedure. Especially if the procedure was streamlined enough that it took less time than what they actually did.
My recommendations are about how to formally resolve divisive disputes in general. If divisive disputes constitute existential threats to the movement, it might make sense to have a formal policy for resolving them, in the same way buildings have fire extinguishers despite the low rate of fires. Also, I took into account that my policy might be used rarely or never, and kept its maintenance cost as low as possible.
Drama seems pretty universal—I don’t think it can be wished away.
There are a lot of other analogies a person could make: Organizations fire people. States imprison people. Online communities ban people. Everyone needs to deal with bad actors. If nothing else, it’d be nice to know when it’s acceptable to ban a user from the EA forum, Facebook group, etc.
I’m not especially impressed with the reference class of social movements when it comes to doing good, and I’m not sure we should do a particular thing just because it’s what other social movements do.
I keep seeing other communities implode due to divisive internet drama, and I’d rather this not happen to mine. I would at least like my community to find a new way to implode. I’d rather be an interesting case study for future generations than an uninteresting one.
So what’s the right way to take action, if you and your friends think someone is a bad actor who’s harming your movement?
I mean for the community as a whole, to say, “oh, look, our thought leaders decided to reject someone—ok, let’s all shut them out.”
There’s the normal kind of drama which is discussed and moved past, and the weird kind of drama like Roko’s Basilisk which only becomes notable through obsessive overattention and collective self-consciousness. You can choose which one you want to have.
Those groups can make their own decisions. EA has no central authority. I moderate a group like that and there is no chance I’d ban someone just because of the sort of thing which is going on here, and certainly not merely because the high chancellor of the effective altruists told me to.
We’re not following their lead on how to change the world. We’re following their lead on how to treat other members of the community. That’s something which is universal to social movements.
Is this serious? EA is way more important than becoming yet another obscure entry in the annals of Internet history.
Tell it to them. Talk about it to other people. Run my organizations the way I see fit.
I think the second kind of drama is more likely in the absence of a governing body. See the vigilante action paragraph in this comment of mine.
If the limiting factor for a movement like Effective Altruism is being able to coordinate people via the Internet, then coordinating people via the Internet ought to be a problem of EA interest.
I see your objections to my proposal as being fundamentally aesthetic. You don’t like the idea of central authority, but not because of some particular reason why it would lead to bad consequences—it just doesn’t appeal to you intuitively. Does that sound accurate?
The second kind of drama was literally caused by the actions of a governing body. Specifically, one that was so self-absorbed in its own constellation of ideas that it forgot about everything that outsiders considered normal.
So you’re trying to say that the worst case scenario of setting up an official EA panel is not as bad as the worst case scenario of vigilantism. That’s a very limited argument. First, merely comparing worst case scenarios is a narrow approach: by definition these are events at the extreme tail ends of our expectations, which implies that we are particularly bad at understanding and predicting them; we also need to take probabilities into account; and we need to consider average, median, best case, etc. expectations as well. Furthermore, it’s not clear to me that the level of witch hunting and vigilantism currently present in programming, atheist, etc. communities is actually worse than having a veritable political rift between EA organizations. Moreover, you’re jumping from Roko’s-Basilisk-type weird drama and controversy to vigilantism, when the two are fairly different things. And finally, you’re shifting the subject of discussion from a panel that excommunicates people to some kind of big organization that runs all the events.
Besides that, the fact that there has been essentially no vigilantism in EA except for a small number of people in this thread suggests that you’re jumping far too quickly to enormous solutions for vague problems.
That’s way too simplistic. Communities don’t hit a ceiling and then fail when they run into a universal limiting factor. Their actions and evolution are complicated and chaotic and always affected by many things. And hardly any social movements are led by people who look at other social movements and then pattern their own behavior based on others’.
I prefer the term ‘common sense’.
It rings lots and lots of alarm bells.
If selection of leadership is an explicit process, we can be careful to select people we trust to represent the EA movement to the world at large. If the process isn’t explicit, forum moderators may be selected in an incidental way, e.g. on the basis of being popular bloggers.
Governance in general seems like it’s mainly about mitigation of worst case scenarios. Anyway, the evidence I presented doesn’t just apply to the tail ends of the distribution.
This is an empirical question. I don’t get the impression that competition between organizations is usually very destructive. It might be interesting for someone to research e.g. the history of the NBA and the ABA (competing professional basketball leagues in the 1970s) or the history of AYSO and USYSA (competing youth soccer leagues in the US that still both exist—contrast with youth baseball, where I don’t believe Little League has any serious rivals). I haven’t heard much about destructive competition between rival organizations of this type. Even rival businesses are often remarkably civil towards one another.
I suspect the reason competition between organizations is rarely destructive is because organizations are fighting over mindshare, and acting like a jerk is a good way to lose mindshare. When Google released its Dropbox competitor Google Drive, the CEO of Dropbox could have started saying nasty things about Google’s CEO in order to try & discredit Drive. Instead, he cracked a joke. The second response makes me much more favorably inclined toward Dropbox’s product.
Vigilantes don’t typically think like this. They’re not people who were chosen by others to represent an organization. They’re people who self-select on the basis of anger. They want revenge. And they often do things that end up discrediting their cause.
The biggest example I can think of re: organizations competing in a nasty way is rival political parties, and I think there are incentives that account for that. Based on what I’ve read about the details of how Australia’s system operates, it seems like Australian politicians face a slightly better set of incentives than American ones. I’d be interested to hear from Australians about whether they think their politicians are less nasty to each other.
Was there a particular case of destructive competition between organizations that you had in mind?
Part of the reason this hasn’t been much of a problem is because the EA movement is sufficiently “elitist” to filter out troublemakers during the recruitment stage. (Gleb got through, which is arguably my fault—I’m the person who introduced him to other EAs and told them his organization seemed interesting. Sorry about that.) Better mechanisms for mitigating bad actors who get through means we can be less paranoid about growth.
Also, it makes sense to set something like this up well before it’s needed. If it’s formed in response to an existing crisis, it won’t have much accumulated moral authority, and it might look like a play on the part of one party or another to create a “neutral” arbiter that favors them.
People in EA have done this a fair amount. I’ve heard of at least two EAs besides Jeff who have spent significant time looking at the history of social movements, and here is OpenPhil’s research into the history of philanthropy. I assume a smart EA-type movement of the future would also do this stuff.
I also think that contributing to society’s stock of knowledge about how to organize people is valuable, because groups are rarely set up for the purpose of doing harm and often end up incidentally doing good (e.g. charitable activities of fraternal organizations).
Doesn’t seem like that to me. And just because “governance in general” does something doesn’t mean we should.
Yeah, and it’s unclear. I don’t see why it is relevant anyway. I never claimed that creating an EA panel would lead to a political divide between organizations.
We’re not paranoid about growth and we’re not being deliberately elitist. People won’t change their recruiting efforts just because a few people got officially kicked out. When the rubber hits the road on spreading EA, people just busy themselves with their activities, rather than optimizing some complicated function.
Yeah, EA, which is not a typical social movement. I’ve not heard of others doing this. Hardly any.
Saying that you want to experiment with EA, risking the stability of a(n unusually important) social movement, just because it might benefit random people with unknown intentions who may or may not study our history, is taking it a little far.
Well most of them are relatively ineffective and most of them don’t study histories of social movements. As for the ones that do, they don’t look up obscure things such as this. When people spend significant time looking at the history of social movements, they look at large, notable, well documented cases. They will not look at a few people’s online actions. There is no shortage of stories of people doing online things at this low level of notability and size.
That’s fair.
That’s what we did for a year+. The problem didn’t go away.
Not much of a problem except the time you wasted going after it. Few people in the outside world knew about InIn; fewer still could have associated it with effective altruism. Even the people on Reddit who dug into his past and harassed him on his fake accounts thought he was just a self-promoting fraud and appeared to pick up nothing about altruism or charity.
I’m done arguing about this, but if you still want an ex post facto solution just to ward off imagined future Glebs, take a moment to go to people in the actual outside world, i.e. people who have experience with social movements outside of this circlejerk, and ask them “hey, I’m a member of a social movement based on charity and altruism. We had someone who associated with our community and did some shady things. So we’d like to create an official review board where Trusted Community Moderators can investigate the actions of people who take part in our community, and then decide whether or not to officially excommunicate them. Could you be so kind as to tell us if this is the awful idea that it sounds like? Thanks.”
So here’s your proposal for dealing with bad actors in a different comment:
You’ve found ways to characterize other proposals negatively without explaining how they would concretely lead to bad consequences. I’ll note that I can do the same for this proposal—talking to them directly is “rude” and “confrontational”, while talking about it to other people is “gossip” if not “backstabbing”.
Dealing with bad actors is necessarily going to involve some kind of hostile action, and it’s easy to characterize almost any hostile action negatively.
I think the way to approach this topic is to figure out the best way of doing things, then find the framing that will allow us to spend as few weirdness points as possible. I doubt this will be hard, as I don’t think this is very weird. I lived in a large student co-op with just a 3-digit number of people, and we had formal meetings with motions and elections and yes, formal expulsions. The Society for Creative Anachronism is about dressing up and pretending you’re living in medieval times. Here’s their organizational handbook with bylaws. Check out section X, subsection C, subsection 3 where “Expulsion from the SCA” is discussed:
Sure I did. I said it would create unnecessary bureaucracy taking up people’s time and it would make judgements and arguments that would start big new controversies where its opinions wouldn’t be universally followed. Also, it would look ridiculous to anyone on the outside.
Is it not apparent that other things besides ‘weirdness points’ should be factored into decision-making?
You found an organization that excludes people from itself. So what? The question here is about a broad social movement trying to kick people out. If all the roleplayers of the world decided to make a Roleplaying Committee whose job was to ban people from participating in roleplaying, you’d have a point.
That’s fair. Here are my responses:
Specialization of labor has a track record of saving people time that goes back millennia. The fact that we have police, whose job it is to deal with crime, means I have to spend a lot less time worrying about crime personally. If we got rid of the police, I predict the amount of crime-related drama would rise. See Steven Pinker on why he’s no longer an anarchist.
A respected neutral panel whose job is resolving controversies has a better chance of its opinions being universally followed than people whose participation in a discussion is selected on the basis of anger—especially if the panel is able to get better at mediation over time, through education and experience.
With regard to ridiculousness, I don’t think what I’m suggesting is very different than the way lots of groups govern themselves. Right now you’re thinking of effective altruism as part of the “movement” reference class, but I suspect in many cases a movement or hobby will have one or more “associations” which form de facto governing bodies. Scouting is a movement. The World Organization of the Scout Movement is an umbrella organization of national Scouting organizations, governed by the World Scout Committee. Chess is a hobby. FIDE is an international organization that governs competitive chess and consists of 185 member federations. One can imagine the creation of an umbrella organization for all the existing EA organizations that served a role similar to these.
I’m feeling frustrated, because it seems like you keep interpreting my statements in a very uncharitable way. In this case, what I meant to communicate was that we should factor in everything besides weirdness points, then factor in weirdness points. Please be assured that I want to do whatever the best thing is, I consider what the best thing is to be an empirical question, and I appreciate quality critical feedback—but not feedback that just drains my energy.
Implementation of my proposal might involve the creation of an “Effective Altruism Association”, analogous to the SCA, as I describe here.
Sounds great, but it’s only valuable when people can actually specialize. You can’t specialize in determining whether somebody’s a true EA or not. Being on a committee that does this won’t make you wiser or fairer about it. It’s a job that’s equally doable by people already in the community with their existing skills and their existing job titles.
It’s trivially true that the majority opinion is most likely to be followed.
Sure it is. You’re suggesting that the FIDE start deciding who’s not allowed to play chess.
I don’t think the order in which you factor things will make a difference in how the options are eventually ranked, assuming you’re being rational. In any case, there are large differences. For one thing, the SCA does not care about how it is perceived by outsiders. The SCA is often rewarded for being weird. The SCA is also not necessarily rational.
Then you’re suggesting something far larger and far more comprehensive than anything that I’ve heard about, which I have no interest in discussing.
I actually think being on a committee helps some on its own, because you know you’ll be held accountable for how you do your job. But I expect most of the advantages of a committee to be in (a) identifying people who are wise and fair to serve on it (and yes, I do think some people are wiser and fairer than others) (b) having those people spend a lot of time thinking about the relevant considerations (c) overcoming bystander effects and ensuring that there exists some neutral third party to help adjudicate conflicts.
If there’s no skill to this sort of thing, why not make decisions by flipping coins?
Well naturally, the committee would be staffed by people who are already in the community, and it would probably not be their full-time job.
Do you really think chess federations will let you continue to play at their events if you cheat or if you’re rude/aggressive?
Looking at the links you shared, it looks like these accounts weren’t so much ‘fake’ as just new accounts of Gleb’s that were used for broadcasting/spamming Gleb’s book on Reddit. That attracted criticism for the aggressive self-promotion (both for sending to so many subreddits and for the self-promotional spin in the message).
The commenters call out angela_theresa for creating a Reddit account just to promote the book. She references an Amazon review, and there is an Amazon review from the same time period by an Angela Hodge (not an InIn contractor). My judgment is that this is a case of genuine appreciation of the book, perhaps encouraged by Gleb’s requests for various actions to advance the book. In one of the reviews she mentions that she knows Gleb personally, but says she got a lot out of the book.
At least one other account was created to promote the book, but I haven’t been able to determine whether it was an InIn affiliate. Gleb says he
OK, my goal was not to launch accusations. I just wanted to point out that even when people were saying this (they thought they were fake accounts) and looking into his personal info, they didn’t say anything about altruism or charity, so the themes behind the content weren’t apparent, meaning there was little or no damage to EA. Because most of the content on the site and in the book isn’t about charity or altruism, it’s not clear how well it prompts people to actually donate, but it can’t be very harmful.
Right, I just wanted to diminish uncertainty about the topic and reduce speculation, since it had not been previously mentioned.
Kbog, I think your general mistake on this thread as a whole is assuming a binary between “either we act charitably to people or we ostracise people whenever members of the community feel like outgrouping them”. Thus your straw-man characterisation of an
Which was exactly what I disavowed at the bottom of my long comment here.
Examples of why your dichotomy is false: we could have very explicit and contained rules, such as “If you do X, Y or Z then you’re out”, and this would be different from the generic approach of “if anyone tries to outgroup them, then support that effort”. Or, if we feel that it is too hard to put into a clear list, perhaps we could outsource our decision-making to a small group of trusted ‘community moderators’ who were asked to make decisions about this sort of thing. In any case, these are two options I just came up with; the landscape is more nuanced than you’re accounting for.
To be more clear, I’m against both (a) witch hunts and (b) formal procedures of evicting people. The fact that one of these things can happen without the other does not eliminate the fact that both of them are still stupid on their own.
As a counterexample to the dichotomy, sure. As something to be implemented… haha no. The more rules you make up the more argument there will be over what does or doesn’t fall under those rules, what to do with bad actions outside the rules, etc.
Maybe you shouldn’t outsource my decision about who is kosher to “trusted community moderators”. Why are people not smart enough to figure it out on their own?
And is this supposed to save time, the hundreds of hours that people are bemoaning here? A formal group with formal procedures processing random complaints and documenting them every week takes up at least as much time.
The system of everyone keeping track of everything works ok in small communities, but we’re so far above Dunbar’s number that I don’t think it’s viable anymore for us. As you point out, a more formal process wouldn’t have time for “processing random complaints and documenting them every week”, so they’d need a process for screening out everything but the most serious problems.
Everyone doesn’t have to keep track of everything. Everyone just needs to do what they can with their contacts and resources. Political parties are vastly larger than Dunbar’s Number and they (usually) don’t have formal committees designed to purge them of unwanted people. Same goes for just about every social movement that I can think of. Except for churches excommunicating people, of course.
This is the only time that there’s been a problem like this where people started calling for a formal process. You have no idea if it actually represents a frequent phenomenon.
Make bureaucracy more efficient by adding more bureaucracy...
The Democrats have the Democratic National Committee, and the Republicans have the Republican National Committee.
Do they kick people out of the party?
More specifically, do they kick people out of ‘conservatism’ and ‘liberalism’?
In the US, and elsewhere, they use incentives to keep people in line, such as withholding endorsements or party funds, which can lead to people losing their seat, thus effectively kicking them out of the party. See party whips for what this looks like in practice. Also, in parliamentary systems, you can often kick people out of the party directly, or at the very least take away their power and position.
Yes, if you’re in charge of an organization or resources, you can allocate them and withhold them how you wish. Nothing I said is against that.
In parties and parliaments you can remove people from power. You can’t remove people from associating with your movement.
The question here is whether a social movement and philosophy can have a bunch of representatives whose job it is to tell other people’s organizations and other people’s communities to exclude certain people.
Your party leadership can publicly denounce a person and disinvite them from your party’s convention. That amounts to about the same thing.
Quoting myself:
Good question—not really sure, I just meant to directly answer that one question. That being said, social movements have, to varying degrees of success, managed to distance themselves from fringe subsets and problematic actors. How, exactly, one goes about doing this is unknown to me, but I’m sure it’s something we could (and should) learn from leaders of other movements. Off the top of my head, the example most similar to our situation is the expulsion of Ralph Nader from the various movements and groups he was a part of after the Bush election.
The issue in this case is not that he’s in the EA community, but that he’s trying to act as the EA community’s representative to people outside the community who are not well placed to make that judgment themselves.
That’s an important distinction, and acting against that (trying to act as the EA community’s representative) doesn’t automatically mean banning from the movement.