Setting Community Norms and Values: A response to the InIn Open Letter

I’m writing this in response to the recent post about Intentional Insights documenting the many ways in which Gleb and the organisation he runs have acted in ways that do not represent EA values. Please take this post as representative of the views of the Centre for Effective Altruism (CEA) on the matter.

As documented in the Open Letter, Intentional Insights have been systematically misleading in their public communications on many occasions, have astroturfed, and have engaged in morally dubious hiring practices. But what’s been most remarkable about this affair is how little Gleb has been willing to change his actions in light of this documentation. If I had been in his position, I’d have radically revised my activities, or quit my position long ago. Making mistakes is something we all do. But ploughing ahead with your plans despite extensive, deep and well-substantiated criticism of them by many thoughtful members of the EA community — who are telling you not just that your plans are misguided but that they are actively harmful — is not ok. It’s the opposite of what effective altruism stands for.

Because of this, we want to have no association with Intentional Insights. We do not consider them representative of EA; we do not want any of CEA’s images or logos (including Giving What We Can) used in any of Intentional Insights’ promotional materials; we will not give them a platform at EAG or EAGx events; and we will encourage local group leaders not to have them speak.

Moreover, I think we as a community should think about how we can prevent similar things from happening in the future, because Intentional Insights’ behaviour seems to me to be an early sign of a larger problem. Intentional Insights isn’t the only example of behaviour, conducted under the EA name, that’s clearly out of line with EA values. Other examples over the past year include:
  • Someone using the effective altruism brand to solicit “donations” to a company that was not and could not become a non-profit, using text taken from other EA websites

  • People engaging in or publicly endorsing ‘ends justify the means’ reasoning (for example involving plagiarism or dishonesty)

  • People co-opting the term ‘effective altruism’ to justify activities that they were already doing that clearly wouldn’t be supported by EA reasoning

  • Someone making threats of physical violence to another member of the EA community for not supporting their organisation

Problems like these, it seems to me, will only get worse over time. As the community grows, the likelihood of behaviour like this increases, and the costs of such behaviour increase too, because bad actors taint the whole movement.

At the moment, there’s simply no system set up within the community to handle this. What currently happens is: someone starts engaging in bad activities → the bad activities are tolerated for an extended period of time, aggravating many → repeated public complaints start surfacing, but still no action is taken → eventually a coalition of community members gathers to publicly denounce the activities. This, it seems to me, is a bad process. It’s bad for actually preventing inappropriate behaviour, because the response to that behaviour is so slow, and because there’s no real sanction that others in the community can impose. It’s bad for the community members who have to spend hundreds of hours of their time documenting the inappropriate behaviour. It’s bad for those who receive the criticism, because they will naturally feel they’ve been ganged up on, and have not had a ‘fair trial’. And it’s bad for onlookers who, not knowing all the details of the situation, will see a fractious movement engaging in witch hunts.

I think that in the mid to long term the consequences of this could be very great. The default outcome for any social movement is to fizzle or fragment, and we should be alert to the ways this could happen with EA. If the number of examples of bad behaviour continues to grow, which we should expect to see if we let the status quo continue, then this seems like an obvious way in which the EA movement could fail: effective altruism could become known as a community where people engage in morally dubious activities for the greater good, the community could get a reputation for being unpleasant, or the term ‘effective altruism’ could lose the meaning it currently has, with people using it to refer to any attempt to make a difference that makes at least a passing nod to using data.

People often look to CEA to resolve examples of bad behaviour, but so far we have been reluctant to do so. Primarily, we’re worried about overreach: effective altruism is a movement that is much larger than any one organisation, and we have not wanted to create further ‘mob rule’ dynamics by interfering in affairs that people in the community might judge to be none of CEA’s business.

For example, internally we discussed whether we should ban Gleb from the EA Forum, which we help to run, for a three-month period. I think that this response would easily be warranted in light of Intentional Insights’ activities. But, for me, that proposal rang alarm bells of overreach: the EA Forum seems to me to be a community good, and it seems to me that CEA doesn’t have the legitimacy to take that action. But, unfortunately, neither does anyone else.

So I’d like there to exist a more formal process by which we can ensure that people taking action under the banner of effective altruism are acting in accordance with EA values, and strengthening rather than damaging the movement. I think that this is vital if the EA community is going to grow substantially and reach its full potential. If we did this successfully, this process would avoid feelings that EA is run by mob rule; it would ensure that bad behaviour is nipped in the bud, rather than growing to the point where the community spends hundreds of hours dealing with it; and it would give allegedly bad actors a transparent and fair assessment.

To this end, what I’d propose is:

  • Creating a set of EA guiding principles

  • Creating a community panel that assesses potential egregious violations of those principles, and makes recommendations to the community on the basis of that assessment.

The existence of these would bring us into alignment with other societies, which usually have some document describing the principles the society stands for, and some mechanism for ensuring that those who choose to represent themselves as part of that society abide by those principles.

I’d imagine that, in the first instance, if there were an egregious violation of the guiding principles of EA, the community panel would make recommendations to the actor in question. For example, after GiveWell’s astroturfing incident, the organisation self-sanctioned: one of the cofounders was demoted and both cofounders were fined $5000. If the matter couldn’t be resolved in this way, then the panel could make recommendations to the rest of the community.

There are a lot of details to be worked out here, but I think the case for creating something like this is strong. We’re going to sketch out a proposal, getting as much feedback from the community as possible along the way. I’d be interested in people’s thoughts and reactions in the comments below.

Disclosures: I personally know all of the authors of the Open Letter. Jeff Kaufman is a donor to CEA and is married to Julia Wise, an employee of CEA; Greg Lewis is a donor to CEA and has previously volunteered for CEA; Oliver Habryka is an employee of CEA, but worked on the Open Letter in his personal time. I wasn’t involved in any capacity with the creation of the Open Letter.