Setting Community Norms and Values: A response to the InIn Open Letter
I’m writing this in response to the recent post about Intentional Insights documenting the many ways in which Gleb and the organisation he runs have acted in ways that do not represent EA values. Please take this post as representative of the views of the Centre for Effective Altruism (CEA) on the matter.
As documented in the Open Letter, Intentional Insights have been systematically misleading in their public communications on many occasions, have astroturfed, and have engaged in morally dubious hiring practices. But what’s been most remarkable about this affair is how little Gleb has been willing to change his actions in light of this documentation. If I had been in his position, I’d have radically revised my activities, or quit my position long ago. Making mistakes is something we all do. But ploughing ahead with your plans despite extensive, deep and well-substantiated criticism of them by many thoughtful members of the EA community — who are telling you not just that your plans are misguided but that they are actively harmful — is not ok. It’s the opposite of what effective altruism stands for.
Because of this, we want to have no association with Intentional Insights. We do not consider them a representative of EA, we do not want to have any of CEA’s images or logos (including Giving What We Can) used in any of Intentional Insights’ promotional materials; we will not give them a platform at EAG or EAGx events; and we will encourage local group leaders not to have them speak.
Unfortunately, Intentional Insights is not an isolated case. Other examples of problematic behaviour within the community have included:
- Someone using the effective altruism brand to solicit “donations” to a company that was not and could not become a non-profit, using text taken from other EA websites
- People engaging in or publicly endorsing ‘ends justify the means’ reasoning (for example involving plagiarism or dishonesty)
- People co-opting the term ‘effective altruism’ to justify activities that they were already doing that clearly wouldn’t be supported by EA reasoning
- Someone making threats of physical violence to another member of the EA community for not supporting their organisation
Problems like these, it seems to me, will only get worse over time. As the community grows, the likelihood of behaviour like this increases, and the costs of such behaviour increase too, because bad actors taint the whole movement.
At the moment, there’s simply no system set up within the community to handle this. What currently happens is: someone starts engaging in bad activities → bad activities are tolerated for an extended period of time, aggravating many → repeated public complaints start surfacing, but still no action → eventually a coalition of community members gather together to publicly denounce the activities. This, it seems to me, is a bad process. It’s bad for actually preventing inappropriate behaviour, because the response to that behaviour is so slow, and because there’s no real sanction that others in the community can impose. It’s bad for the community members who have to spend hundreds of hours of their time documenting the inappropriate behaviour. It’s bad for those who receive the criticism, because they will naturally feel they’ve been ganged up on, and have not had a ‘fair trial’. And it’s bad for onlookers who, not knowing all the details of the situation, will see a fractious movement engaging in witch hunts.
I think that in the mid to long term the consequences of this could be severe. The default outcome for any social movement is to fizzle or fragment, and we should be looking out for the ways this could happen to EA. If the number of examples of bad behaviour continues to grow—which we should expect to see if we let the status quo continue—then this seems like an obvious way in which the EA movement could fail: effective altruism could become known as a community where people engage in morally dubious activities for the greater good, the community could get a reputation for being unpleasant, or the term ‘effective altruism’ could lose its current meaning, with people using it to refer to any attempt to make a difference that makes at least a passing nod to using data.
People often look to CEA to resolve examples of bad behaviour, but so far we have been coy about doing so. Primarily, we’re worried about overreach: effective altruism is a movement that is much larger than any one organisation, and we have not wanted to create further ‘mob rule’ dynamics by interfering in affairs that people in the community might judge to be none of CEA’s business.
For example, internally we discussed whether we should ban Gleb from the EA Forum, which we help to run, for a three-month period. I think that this response would easily be warranted in light of Intentional Insights’ activities. But, for me, that proposal rang alarm bells of overreach: the EA Forum seems to me to be a community good, and it seems to me that CEA doesn’t have the legitimacy to take that action. But, unfortunately, neither does anyone else.
So I’d like there to exist a more formal process by which we can ensure that people taking action under the banner of effective altruism are acting in accordance with EA values, and strengthening rather than damaging the movement. I think that this is vital if the EA community is going to grow substantially and reach its full potential. If we did this successfully, this process would avoid feelings that EA is run by mob rule, it would ensure that bad behaviour is nipped in the bud, rather than growing to the point where the community spends hundreds of hours dealing with it, and it would give allegedly bad actors a transparent and fair assessment.
To this end, what I’d propose is:
- Creating a set of EA guiding principles
- Creating a community panel that assesses potential egregious violations of those principles, and makes recommendations to the community on the basis of that assessment.
The existence of this would bring us into alignment with other societies, which usually have some document that describes the principles the society stands for, and some mechanism for ensuring that those who choose to represent themselves as part of that society abide by those principles.
I’d imagine that, in the first instance, if there was an example of egregious violation of the guiding principles of EA, the community panel would make recommendations to the actor in question. For example, after GiveWell’s astroturfing incident, the organisation self-sanctioned: one of the cofounders was demoted and both cofounders were fined $5000. If the matter couldn’t be resolved in this way, then the panel could make recommendations to the rest of the community.
There are a lot of details to be worked out here, but I think that the case for creating something like this is strong. We’re going to try sketching out a proposal, trying to get as much feedback from the community as possible along the way. I’d be interested in people’s thoughts and reactions in the comments below.
Disclosures: I know personally all of the authors of the Open Letter. Jeff Kaufman is a donor to CEA and is married to Julia Wise, an employee of CEA; Greg Lewis is a donor to CEA and has previously volunteered for CEA; Oliver Habryka is an employee of CEA, but worked on the Open Letter on his personal time. I wasn’t involved in any capacity with the creation of the open letter.
“I think that a panel sounds like a good idea, but I’d like to request that someone plays Devil’s Advocate for the other side, so we are aware of what issues may arise.”
Hi*.
(1) As soon as you write down something formal, bad actors can abuse that process.
Let’s define bad actors as people engaged in activities harmful to the EA movement, and then divide bad actors into the categories of ‘malicious’ and ‘incompetent’.
This argument relates to the malicious actors. When dealing with malicious actors, there are generally strong advantages to keeping rules vague, non-public, and commonsense. This is because as soon as you have a rigid, public set of rules, there will be loopholes. This is essentially unavoidable; no organisation has ever succeeded in defining a set of rules that excludes every imaginable bad behaviour.
Of course, once they go through a loophole, we could modify the rules to close it. But that tends to look, to both internal and external observers, very arbitrary, and even as though the central ‘people with power’ are simply picking on particular individuals they don’t like. It defeats a lot of the point of having processes in the first place.
(2) Being publicly shamed in front of literally hundreds or thousands of people is something that most human beings find toxic to the point that they would never knowingly risk it. Accordingly, we should expect that most people caught by this will be unknowingly risking it.
This mostly relates to the incompetent actors, who I believe greatly outnumber the malicious actors. It’s particularly bad if people do actually start skipping steps in the process you described (see below) and jumping straight to community-wide sanctions. The simple fact of the matter is that for every person like Gleb, there are many more people who were incompetent actors, were quietly tapped on the shoulder and told to cease-and-desist, and actually did desist. Of course, we don’t hear about those people, which makes it hard to assess their number. Unfortunately, without knowing how many problems were quietly headed off in this way without any drama or fuss we don’t actually know that our current process is a bad one.
What happens to an incompetent actor who is shamed in this way? Presumably, they dissociate from the movement. If they found the EA movement in the first place then they probably know other people on the periphery of the movement and some people in the movement. They talk to those other people, who are generally sympathetic. Those other people dissociate. And so on; we get an organic expanding flow of people leaving the movement. I think it would be extremely easy for us to lose entire cities, or even countries in their early stages, in this way, with no meaningful hope of recovery. Of course, those people probably don’t lose interest in EA ideas entirely, so they might keep doing a lot of EA things without staying under the EA ‘brand’. Maybe they set up their own movement. And we have fragmented.
I honestly think the above circumstance is just a matter of time; if you have a death penalty then eventually you always end up executing someone innocent. Except that unlike in the analogy, the innocent martyr can actively recruit others in disavowing the community who treated them so poorly.
(3) Nope, other movements don’t do this.
If, as I argued above, this example doesn’t prove that our process is bad, where else can we look for evidence that it’s bad? One obvious place would be the reference class of other social movements. If other movements had panels like this and also had fewer problems with bad actors relative to their size, that would be moderately strong evidence in favour of it being a good idea.
Unfortunately, despite the claim in the OP that ‘the existence of this would bring us into alignment with other societies, which usually have some document that describes the principles that the society stands for, and has some mechanism for ensuring that those who choose to represent themselves as part of that society abides by those principles’, I don’t think this holds. It’s not clear to me exactly what Will meant by ‘societies’, but the most common reference class for EA is other global social movements like feminism, the LGBT rights movement, the civil rights movement, the animal rights movement, environmentalism, and so on.
Play this game with a friend: write down the set of people who you think would best fill the role of the panel Will describes for each of the five movements mentioned above. How many did you agree on? Do you think you would still agree if you’d picked a friend in a different country? What about a friend from a different socio-economic background?
Panels only work if (almost) everybody involved agrees that’s where authority lies. I think it’s transparently obvious that the five global movements listed above do not agree where global authority lies, even if some subsets (e.g. the German Green youth movement) might agree where local authority lies (the German Green party).
(4) It ossifies our current lack of diversity. Or if we keep fluidly changing it, it may become emblematic of the problems it’s trying to solve.
I think most people have a strong intuition that any such panel should be as diverse and as broadly representative of the views of the EA movement at large as is reasonably possible given size constraints. I agree with this intuition. However, I would like to flag that being as diverse as the EA movement itself, while the correct bar, is really not a very high bar on many metrics. If the EA movement continues to become more diverse and continues to grow rapidly, which I hope it does, then the panel will soon be skewed away from the actual makeup of the EA community in undesirable ways. For instance, suppose a fourth major cause area gains standing in the movement on a par with Global Poverty/Animal Rights/Far Future over the next five years. That area should be represented on the panel, and by default it wouldn’t be. So we need to keep changing the makeup of the panel to match the makeup of EA, and the latter isn’t something we can measure particularly scientifically, so there’s no obvious Schelling point for how to do this. It’s not even obvious which characteristics we should care about representing.
In many societies that do have such panels, the panels are elected by members of the society. We could do that, but this is extremely messy. It will get political. Some group x will end up feeling unfairly excluded from the inner group of power brokers. At that point you have a powder keg of resentment waiting to explode.
And then group x has an innocent executed. Goodbye group x.
Perhaps most importantly, all this management of the panel is going to use up a ton of time to do it well at all, and it seems like the main complaint right now is that too much time got spent in the motivating case.
*I don’t actually know what I think about this, so I don’t know if this qualifies as ‘Devil’s Advocate’, more like just ‘Advocate’.
**”someone starts engaging in bad activities → bad activities are tolerated for an extended period of time, aggravating many → repeated public complaints start surfacing, but still no action → eventually a coalition of community members gather together to publicly denounce the activities”
Bad actors can also abuse informal processes. If a process is informal, the key to winning is generally effective manipulation of public opinion. When rival factions fight to manipulate public opinion, that creates the kind of conflict that leads to bloody schisms.
This seems like a definitional argument to me. EA is a movement, movements don’t have panels, therefore EA shouldn’t have a panel. Such an argument doesn’t touch on the actual consequences of having a panel. If EA is a question, maybe the answer to that question is an “association” rather than a movement, and it’s not unusual for associations to have panels.
In some cases an association will be so closely identified with the area it works in that the two will seem almost synonymous. I understand that in the US, almost everyone who wants a job as an actuary takes exams administered by a group called the Society of Actuaries. This seems like the ideal case for EA. But even if there were multiple competing associations, or not everyone chose to be a part of the main one, I suspect this wouldn’t be very bad. (Some thoughts on competition between organizations in this comment.)
I think it’s pretty normal for professional associations, at least, to have a disciplinary process. Here’s the CFA institute on how they deal with ethics violations for instance.
I’m not sure point (2) has much to do with this decision. People would still get gentle suggestions that they were harming the movement if there was a panel. The point is to have a process if gentle suggestions aren’t working.
Carl Shulman mentioned the value of transparency in another thread related to this, but it occurs to me that a person subject to disciplinary action might want to keep it private, and that could be reasonable in some cases.
I agree that authority is generally something other people grant you, not something you take for yourself. That’s part of why I’m trying to respond to the comments critical of this idea, since this sort of thing works best when almost everyone is on board.
Instead of compelling others to recognize its authority, the panel should work to earn its authority. They should say explicitly that if they’re not taking your suggestions, you’re free to vote with your feet and set up your own thing.
I think groups generally function better when they’re able to grant authority this way. I would guess that the scouting movement, which seems to have granted authority to a group called the World Organization of the Scout Movement, functions better than feminism/animal rights/environmentalism (though I will grant that the advocacy coming out of the latter three is more compelling).
Very speculatively, I wonder if an “association” is what you want when you’re trying to produce something, and a “movement” is what you want when you’re trying to change attitudes in a relatively untargeted way.
I think this is mainly desirable because people will be more willing to grant authority to a panel that’s seen as representative of the entire movement. In terms of the actual decisions being made, I can imagine a panel that’s entirely unrepresentative of the EA movement (e.g. trained mediators from Japan, where EA has little presence) doing nearly as good of a job.
If a new cause area gains standing in the movement, by definition it’s achieved buy-in from people, which probably includes panel members. And even if none of them are very excited about it, if they have much in common with the EAs I’ve met, they are fair-minded enough for it not to interfere with their work significantly.
The more power the panel has, the more process there needs to be in selecting who will serve. My current guess is that the panel’s powers should be pretty limited. They should see their mission as “facilitating discussions about effective altruism”, or something like that, not “doing the most good”.
General advice for rapidly growing (for-profit) organizations is to focus on your next order of magnitude growth.
It seems not just reasonable but almost certain that the optimal strategy for EA right now (~1K core members?) is different than the strategy for the environmental movement (~10M core members?).
Thanks so much for the detailed comment! Exactly what I was looking for!
I don’t know the particulars of the situation(s) that Will is referring to here, but as a general principle I think this is a very dangerous criterion to use for community censure and/or expulsion. What is “clearly supported by EA reasoning” is clearly in the eye of the beholder, if the endless debates on this forum and elsewhere are any indication.
I think the principle that Will is getting at is open-mindedness, or a lack thereof. Given that reason is so central to EA’s identity as a movement, we certainly don’t want to welcome or encourage ideologues who are unwilling to change their minds about things.
To me, however, there is a huge and very important difference between the following types of people:
Someone who brings strong opinions and perspectives based on prior knowledge and experience to the community, is willing to engage in good faith discussion with others about those opinions and why they might be wrong, and ultimately holds to their original views;
Someone who brings strong opinions and perspectives based on prior knowledge and experience to the community, is unwilling or unable to engage in good faith discussion with others about those opinions and why they might be wrong, and ultimately holds to their original views.
I feel that people who fit the former description can add tremendous value to the community in ways that people who fit the latter do not, especially when their views and reasoning are out of sync with the mainstream of EA thinking. But I would be very concerned about the former type of person being confused with the latter type when they decline to change their mind; after all, if one’s priors are sufficiently strong, it’s perfectly rational to require a high bar to change one’s mind! I worry that attempts to police use of the term “effective altruism” based on refusal to update visibly on non-mainstream ideas would ultimately harm intellectual diversity and be shortsighted in relation to EA’s goals.
(Edit: to be clear, I am not against the idea of a panel overall.)
FYI, we removed references to GWWC and CEA from our documents
Thanks, Gleb, it’s appreciated.
If done unilaterally, I think this would be overreach, so I’m really glad CEA has sought community input. However, the EA Forum, as I understand it, was always meant to be a community good run by the community. The EA Forum is currently maintained technically by non-CEA volunteers, non-CEA volunteers serve as community moderators, and the vast majority of content is written by people not affiliated with CEA. While I’m very grateful for everything CEA does to make this forum, and EA, a great place, I think claiming that CEA runs the EA Forum does a disservice to all the non-CEA work that I and others have put into also making this forum great.
I agree, however, that it is a problem that there is no centrally agreed policy for handling bad actors or banning people from the forum. It’s lucky we haven’t had a problem with this yet, but I’d be really interested in seeing such a proposal. I’m glad CEA is taking a very community-focused approach to this and I’m interested in seeing what the community will come up with.
Hi Peter—thanks for this comment. I didn’t mean to belittle all the non-CEA contribution to the forum, which is of course very great, and much greater than the CEA contribution. So I’m sorry if it came across that way. I only put in “which CEA runs” because I thought that many readers wouldn’t know that we are involved with the forum at all, and so wanted to give some explanation for why this might be an avenue of action for us. I’ve modified the text to “help to run” to make it more accurate.
Thanks Will, I appreciate it!
One example of legitimate moderator action I’d like to highlight is the recent announcement of a ban on InIn content from the EA Facebook group. Since the six moderators of the EA Facebook group are a diverse group of people (some CEA staff and some not), and since their actions so far have been quite limited and very transparent, my perception is that they are a trusted group capable of the legitimate (yet oligarchical) action that forums need to survive. My suggestion is that for the EA Forum we also create a different, yet similarly diverse and overall independent moderator team here that also takes rare, swift, trusted, and transparent moderator action, guided by community input.
(Though also see this alternate perspective.)
The EA Forum’s moderation has more in common with the moderation of the Facebook group than it has differences.
The EA Forum has had a diverse set of even-handed moderators who deserve much more thanks and praise than they receive. Currently, Ali Woodman and Rebecca Raible are doing an excellent job, as Marcus Davis did previously, working independently. I’ve been occasionally available to chip in throughout. It’s also the case that wherever possible, we either make visible comments or consult directly with the people whose comments are in dispute.
This is definitely true! I think maybe we could potentially add another moderator or two and then empower them to make decisions? And when CEA thinks it would be best to ban someone, they could then discuss it with the moderators rather than discuss it internally.
For reasons of efficiency, I’m pretty averse to letting the team blow out in size but Ali or Rebecca will probably not be available to moderate indefinitely so others are always encouraged to express their interest to us.
Hey, I haven’t had much time to respond here, and won’t for the next week, but just to say I’m really loving the statements of the concerns (AGB in particular, thank you for working through a position even though you’re unsure of your views—would love this to become a more regular norm on here). My views are that this issue is sufficiently important that we should try to get all considerations, and all possible permutations of solutions to the issue, on the table; but I’m not wedded to any particular proposal at this stage, so all the comments are particularly helpful. Plan to write more in the near future.
FYI, we decided to distance InIn publicly from the EA movement for the foreseeable future.
We will only reference effective giving and individual orgs that are interested in being promoted, as evidenced by being interested in providing InIn with stats for how many people we are sending to their websites, and similar forms of collaboration (yes, I’m comfortable using the term collaboration for this form of activity). Since GWWC/CEA seem not interested, we will not mention them in our future content.
Our work of course will continue to be motivated by EA concerns for doing the best things possible to improve the world in a cost-effective way, but we’ll shift our focus from explicitly EA-themed activities to our other area of work—spreading rational thinking and decision-making to reduce existential risk, address fundamental societal problems, and decrease individual suffering. Still, we’ll also continue to engage in collaborations and activities that have proved especially beneficial within the EA-related sphere, such as doing outreach to secular folks and spreading Giving Games, yet that will be a smaller aspect of our activities than in the past.
The only concrete change specified here is something you’ve previously claimed to already do. This is yet one more instance of you not actually changing your behavior when sanctioned.
You are mistaken, we have never claimed that we will distance InIn publicly from the EA movement.
We have previously talked about us not focusing on EA in our broad audience writings, and instead talking about effective giving—which is what we’ve been doing. At the same time, we were quite active on the EA Forum, and engaging in a lot of behind-the-scenes, and also public, collaborations to promote effective marketing within the EA sphere.
Now, we are distancing from the EA movement as a whole.
This concerns me because “EA” is such a vaguely defined group.
Here are some clearly defined groups:
- The EA FB group
- The EA forum
- Giving What We Can
All of these have a clear definition of membership and a clear purpose. I think it is entirely sensible for groups like this to have some kinds of rules, and processes for addressing and potentially ejecting people who don’t conform to those rules. Because the group has a clear membership process, I think most people will accept that being a member of the group means acceding to the rules of the group.
“EA”, on the other hand, is a post hoc label for a group of people who happened to be interested in the ideas of effective altruism. One does not “apply” to be an “EA”. Nor can we meaningfully revoke membership, except by collectively refusing to engage with someone.
I think that attempts to police the borders of a vague group like “EA” can degenerate badly.
Firstly, since anyone who is interested in effective altruism has a plausible claim to be a member of “EA” under the vague definition, there will continue to be many people using the label with no regards for any “official” definition.
Secondly (and I hope this won’t happen), such a free-floating label is very vulnerable to political (ab)use. We open ourselves up to arguments about whether or not someone is a “true” EA, or schisms between various “official” definitions. At risk of bringing up old disagreements, the arguments about vegetarian catering at last year’s EA Global were already veering in this direction.
This seems to me to have been a common fate for vague group nouns over the years, with feminism being the most obvious example. We don’t want to have wars between the second- and third-wave EAs!
My preferred solution is to avoid “EA” as a noun. Apart from the dangers I mentioned above, its origin as a label for an existing group of people gives it all sorts of connotations that are only really valid historically: rationalist thinking style, frank discussion norms, appreciation of contrarianism … not to mention being white, male, and highly educated. But practically, having such a label is just too useful.
The only other suggestion I can think of is to make a clearly defined group for which we have community norms. For lack of a better name, we could call it “CEA-style EA”. Then the CEA website could include a page that describes the core values of “CEA-style EAs” and some expectations of behaviour. At that point we again have a clearly defined group with a clear membership policy, and policing the border becomes a much easier job.
In practice, you probably wouldn’t want an explicit application process, with it rather being something that you can claim for yourself—unless the group arbiter (CEA) has actively decreed that you cannot. Indeed, even if someone has never claimed to be a “CEA-style EA”, declaring that they do not meet the standard can send a powerful signal.
I think maybe this could be a way to implement what Will is suggesting. (Similar to your “CEA-style EA” notion?)
The essay “The Tyranny of Structurelessness” is a discussion of problems in the feminist movement due to lack of structure.
I would strongly advise against using the Freeman article, which is very out of date and doesn’t represent the almost 50 years of progress in feminist thought that have come after it. In particular, intersectional feminism, which has now become one of the leading types of feminism, directly challenges the thoughts that Freeman put down, noting that the structures in feminism actually consistently were used to silence voices within the movement that did not fit within the mainstream. Crenshaw wrote one of the seminal articles on this, but there are many other modern authors who also share this view.
Silencing of voices within a movement is a really important issue: many women in the civil rights movement were silenced in the name of ‘black unity’ (Audre Lorde is a good source on this); even today bisexual people have trouble finding a place in the LGBTQ movement (example); genderqueer women, transsexual women, non-hetero women, and women of color are consistently sidelined in women’s rights movements (Lorde again, she’s amazing); and national identities can be used to silence dissenters of any type. At best, this meant silencing the concerns of multiple members of the movement; at worst, the ensuing dehumanization (“you are a danger to the cause!”) led to violence.
I’ve heard from people who don’t fit the ‘traditional EA mold’ that they feel this is happening in EA as well (quick example here). Even if there is no direct sanction, feeling that you are “not EA enough” can still create a problem.
Long story short, structure is bad, long live structurelessness.
I read a few pages of the Crenshaw article you linked. I’m not sure I see the two authors disagreeing. Every time Crenshaw talks about “structure”, she’s referring to the structure of society as a whole, which she sees as oppressive. Freeman’s point is that just because society at large has an oppressive structure doesn’t mean we should give up on the idea of structure altogether.
Re: silencing, the “Tyranny of Structurelessness” essay presented a mechanism by which this might occur under structurelessness that seemed pretty plausible to me. But even if it is actually the result of too much structure, I’m not sure that invalidates Freeman’s arguments either. If what Freeman says is true, and structurelessness also has serious downsides, that means you want to strike a balance.
Like Michael_PJ, I see the outcome feminism arrived at as one we should work to avoid. So if structurelessness is popular among feminists, I don’t see that as a very strong recommendation.
A nice balance is probably best overall; good point. Although I do think it may be worth looking into replicating the intellectual diversity that feminism developed over time (while avoiding the pitfalls, inshallah): it might be something that could benefit the movement going forward.
The current situation in feminism is that if people feel you are being a bad feminist, they write public critiques, they denounce you, they protest you, etc. This is incredibly divisive, and it is not what we want to emulate, which is why we are proposing a more formal mechanism.
EDIT: This was a lazy comment on my part and could have been more nuanced. I didn’t mean that all feminists act in this way, just that there is a major issue with this occurring within feminism.
I am sorry to hear that your encounters with feminism have primarily been divisive. My experience has been a bit different, and it may help for me to go into some quick details (OK, actually this post became quite long, which I apologize for—it’s probably approaching blog length) and draw parallels with EA.
It took me a year to actually start engaging with EA. I love cost effectiveness, marginal thinking, and rigorously thinking about how to do the most good. My friends and colleagues do as well, but they do not engage with EA. To me, EA appeared, from the outside, to be a group that lays claim to something that is not unique to them, and then looks down on others—a very insular community with members that actively trash and condescend to people who ‘are not EA enough’. Other critics have expressed this view as well, and my initial forays into EA did not help this perception—some of my views are not standard EA views, and I had multiple people without economics backgrounds jump on me to explain that I was wrong while condescendingly explaining basic economics to me. This would be fine if they were actually correct to do so—but most of the time the loudest critiques were the most rudimentary and off the mark (for reference, I have a master’s in economics and work directly on integrating economic thinking into aid programs, so I have a decent idea of what bad economic thinking looks like). Needless to say, these experiences and others left a sour taste in my mouth, and so I stopped engaging for a while.
This is similar to some people’s experiences with feminism—when initially trying to break in, it can seem like a very insular community driven entirely by yelling at people who are not ‘feminist enough’. I liked feminist ideals in undergrad, similar to how I enjoyed EA ideals, but avoided it because my perception was that I would not get anything from engaging in feminism because I would be expunged for ‘not being feminist enough’ (similar to why I avoided EA). I also didn’t see a clear reason for engaging, since many of my friends already had feminist ideals without being a direct part of the feminist movement (similar to my friends and colleagues who hold EA ideas without engaging with EA).
The moment that really changed everything was in the first year of my master’s, when I was hitting an economic problem that the tools I was using just could not solve—I went to my adviser, complaining that no one seemed to have thought about this problem before, to which he retorted, “you know that the feminist economists have been working on this for decades, right? Talk to Professor XYZ and they’ll help you.” And I did, and the next thing I knew I was getting a specialty in gender analysis of economics—because as I got more involved, I realized that behind that initial barrier was a rich world of diverse thinking on a variety of topics. I truly believe now that the most advanced and innovative thinking in economics today comes from feminist economists.
And it wasn’t just academic feminists—once I got past that initial barrier, I started looking more into the very groups I originally avoided, and I soon realized that a lot of feminist activists were actively fighting to break down the barrier that I encountered, by advocating for ‘calling in’ rather than ‘calling out’ (among other things). Once you’re inside, it is a very supportive and tolerant community, and it has helped me (and many others) grow as a person and as a thinker more than anything else in my life has.
Going back to EA, as I mentioned before there is a very similar barrier, in which, to an outside person, a lot of the people ‘representing EA’ online can be quite nasty to outsiders and divergent views. Once I got past this initial barrier, I realized that the majority of people identifying with EA are actually quite nice, and that many in the EA movement are actively trying to make people’s first experience of EA more amicable and to make the movement as a whole more tolerant and respectful of divergent views. It’s essentially the EA movement’s equivalent of the ‘calling-in’ problem, and the fact that these discussions are happening makes me very hopeful for the future.
None of this really helps answer the ‘what about a formal mechanism’ question directly; I just want to express my belief that better engagement with social movements like feminism (all of which have dealt with problems similar to the EA movement’s!) is important. Offhandedly saying that ‘feminism failed on this point, so we can’t learn from them’ without really engaging with members of the feminist movement is not a strong way forward.
In terms of examples off the top of my head of how feminist actors have tried to mitigate the ‘bad actor’ problem, my first thought is the issue of problematic ‘allies’. The response has been to write guidance (less formal version here) on how to be a good ally, and to generally set forth ‘community norms’ that show up in various places (blogs, posters, listservs, whatever). When someone does not adhere to these norms, in the best of cases you can help them understand why going against the norm is bad and help them become a better ally, and in the worst of cases the movement as a whole at least has some plausible deniability (“don’t tell us that person is representative of us—they’re clearly breaking all of the norms that we’ve detailed in all of these places!”).
I’m sorry to hear that your initial impression of EA, much like your initial impression of feminism, consisted of ‘multiple people without economics backgrounds jump on me to explain that I was wrong while condescendingly explaining basic economics to me’. That’s a problem, and we should try to fix it.
Feminism has the same problem, and I would argue on a much grander scale. If feminism is making progress towards solving this problem, I haven’t noticed; if anything, the direction of travel seems to be the other way. You observe that ‘once you’re inside, it’s a very supportive and tolerant community’, but that’s very much beside the point. The problem we’re trying to solve is not how the community feels on the inside; it’s how it looks from the outside. On this score I really, sincerely doubt feminism is a good example to look at, however nice (most of) the actual people involved are when you interact with them directly, which I’m sure they are.
I think to really counter this you need to argue that feminism actually has better external optics than I and casebash think it does.
Ah, I see the issue now—you are assuming that I’m saying that feminism has a model that we should directly emulate, whereas I am just saying that they are dealing with similar issues, and we have things to learn from them. In short, there are leaders in feminism who have been working on this issue, with some limited success and yes, a lot of failures. However, even if they were completely 100% failing, then there is still a very important thing that we can learn from them: what have they tried that didn’t work? It is just as important to figure out pitfalls and failed projects as it is to try and find successful case studies.
The key is getting that conversation started, and comparing notes. Your perception of feminism and the problems therein may change in the process, but most importantly we all may learn some important lessons that can be applied in EA (even if they do consist primarily of “hey this one solution really doesn’t work, if you do anything, do something else”).
If you are truly 100% not convinced that we can learn this from feminism, then that’s OK: you can talk to leaders of any other social movement instead, since many of them have dealt with and thought about similar problems. Your local union reps may be a good place to start!
“However, even if they were completely 100% failing, then there is still a very important thing that we can learn from them: what have they tried that didn’t work? It is just as important to figure out pitfalls and failed projects as it is to try and find successful case studies.”
This is completely fair. You’re right that I thought you were suggesting we should emulate, which on closer inspection isn’t an accurate reading of your post.
With that said, my experience of talking to the ‘nice’ people more internal to feminism (which includes my soon-to-be-wife, among others) about this is that they tend to deny or excuse the external optics problems, rather than making a bona fide attempt to deal with them. You can’t compare notes if they don’t have notes. If you know leaders who are aware of and actually trying to fix the problem, then I agree you should talk to them and I hope you do learn something of their positive or negative experiences which we might be able to apply.
Yeah, I can see how that could be an issue, and honestly I do lean towards the “the external optics problem is the patriarchy’s fault, not ours—telling us that we are ‘not nice enough’ is just a form of silencing, and you wouldn’t listen to us anyway if we were ‘nicer’” viewpoint, but I can see how that can make this discussion difficult. I’m mostly hoping that the discussions on ‘calling-in’ within feminism move forward—even a quick Google search shows that it’s popping up on a lot of the feminist sites targeted at younger audiences—it may be an oncoming change, and hopefully it’ll pick up steam.
Congratulations on your engagement by the way!
This is an excellent comment that clarifies a lot. I completely agree with everything you’ve said in this comment, but I also agree with AGB that, at least from an outsider’s perspective, it is hard to find people within feminism who have “notes” that we could learn from. Of course, an insider like yourself would likely have a much better ability to locate such ideas.
Thanks for providing such a detailed comment as a response to what I must admit was one of my lazier comments.
I should make my critique more nuanced. I don’t believe that all feminists, or even the majority of feminists, are involved in or necessarily support the kind of witch-hunts or social shaming that I see occurring on a regular basis. My claim is simply that a) these witch-hunts occur, b) they occur regularly, and c) feminism does not appear (from my admittedly limited external perspective) to have made much progress dealing with this issue. That said, I will definitely read the “calling in” vs. “calling out” article.
I have to agree with AGB that most feminists seem relatively unconcerned with these issues. They may point out that this is not all feminists or even most feminists, and that it is unfair to hold them responsible for the actions of other actors; both of which are true. Nonetheless, they generally fail to acknowledge that this is a systematic issue within feminism, or that these incidents tend to occur more often, and with more viciousness, in feminism than in many other movements. Furthermore, if these kinds of incidents occurred within EA at even a fraction of the rate at which they seem to occur within feminism, I would be incredibly concerned. This holds even if the amount of drama within feminism is “normal”—a “normal” amount of drama would still not be good for the movement.
After all, they reason, some incidents will always occur in any movement once it reaches a certain size. I, and many outside observers, think that, on the contrary, this is a problem that is especially bad for feminism and is a direct result of several ideas existing within the movement without any corresponding counter-balancing ideas. Nonetheless, I cannot provide any proof of this, because this is not the kind of statement that can easily be verified.
Let’s take, for example, the idea that external optics is the fault of the patriarchy. It is undoubtedly true that much of the criticism of feminism, especially from the right, is extremely unfair and motivated by the fact that feminism is challenging certain “patriarchal” ideas, such as traditional ideas of the family and gender. On the other hand, this can be used as a fully general response to all criticism, and it makes it very easy for people to dismiss criticism. Within EA, by contrast, there is a social norm that it is acceptable to Devil’s Advocate any criticism without anyone doubting that you are on their side.
Another idea is the concept of “mansplaining”. I’m sure that many men do come into conversations with an extremely limited view or understanding. But again, this serves as a fully general purpose counter-argument and it would be against EA social norms to use an ad hominem to dismiss someone’s argument just for being somewhat naive.
So even though many EAs may believe that the current criticism is largely poor quality and motivated by entrenched interests or “emotional” arguments and even though many EAs may fail to intellectually respect their opponents (as per your critiques), the current social norms act to limit the damage by ensuring a minimum standard of decency.
Regarding economics, you are probably right that many EAs think they know more economics than they actually do. I make this mistake sometimes. This is definitely a problem—but at least it is a better situation than in most other social movements. I continually hear critiques of capitalism from people with no economics knowledge whatsoever (some people with economics knowledge also critique capitalism, but these are drowned out by the mass of people without such knowledge). EA seems to have a high enough proportion of economics majors and otherwise quantitative people that a large fraction will have enough economics exposure to produce at least a shallow understanding of economics. This has its disadvantages, but I still consider it superior to having no knowledge at all.
I’m not convinced that the ‘ally’ guidance is an example of successful mitigation. I imagine that some people have certainly been bad allies in the past, which motivated this guidance, but I am also worried that it will harm the intellectual diversity of the feminist movement by limiting the ability of allies to defend views that don’t match those of the movement as a whole.
I think I like the ideas suggested here better than the various permutations suggested elsewhere. Or at least agree with the concerns raised.
I think that a panel sounds like a good idea, but I’d like to request that someone play Devil’s Advocate for the other side, so that we are aware of what issues may arise.
On the other hand, to my knowledge, there are no procedures in place for an existing organization to officially “align” with the movement as it were.
The following idea does nothing to thwart threats from individual bad actors* but it is a way to limit damage from less competent organizations: In another community that I’ve been part of, we had the problem that guests of honor, partner organisations, trademark owners, and attendees had to rely heavily on the people who were organizing conventions. There were scores of these conventions springing up everywhere, and no one knew who to trust anymore. Actual malevolence was the exception but incompetence was rampant.
In Europe, friends of mine formed a European certification organization for conventions. To obtain its endorsement, conventions have to comply with a number of requirements, have to have organized one convention (so the application process usually takes over a year), have to be ready to coordinate with the other established conventions, etc.
The result is that we – partners, attendees, etc. – have a place to check whether a convention is trustworthy and competent, and conventions have a place to turn if they want to be officially and widely recognized.
A panel may be the only resort when it comes to individuals, but for organizations we can be more proactive and create transparent processes from the start, processes that benefit everyone including the organization itself.
If InIn had had such a process to rely on, Gleb may’ve had experienced people at CEA to bounce ideas off of, and those people may’ve noticed early on that some plans would be seen as astroturfing and that that’s bad. (I had been running a little NPO for a year or two when I first heard of the concept, and it’s possible the astroturfing of InIn is not an unambiguous textbook case from Gleb’s perspective.) Gleb could’ve also saved a lot of time he spent marketing to EAs, and had InIn already been in such a process, everyone could’ve been much more direct about their suggestions. For example, I’ve always read the ubiquitous “broad public outreach is dangerous: don’t do it” as a cipher for “broad public outreach is dangerous: we don’t trust you to get it right.” Such advice sounds hypocritical coming from people associated with or aware of organizations that have collaborated on articles on EA in NYT and WSJ. (But I heard Singer is to blame for those.)
* I’m not weighing in on the debate on whether Gleb qualifies as one. I’ve liked him for the couple months that I’ve known him and will maintain my neutrality.
This is an exceptionally good idea! I suspect that such a panel would be taken the most seriously if you (or other notable EAs) were involved in its creation and/or maintenance, or at least endorsed it publicly.
I agree that the potential for people to harm EA by conducting harmful-to-EA behavior under the EA brand will increase as the movement continues to grow. In addition, I also think that the damage caused by such behavior is fairly easy to underestimate, for the reason that it is hard to keep track of all of the different ways in which such behavior causes harm.
The moderators, currently Ali and Rebecca, have complete legitimacy to ban a user who breaks our fairly loose standards of discussion, and they have not hesitated to apply these standards in the past.
In general, it’s encouraged for readers (CEA or otherwise) to report misbehavior so that the moderators can respond to it, or alternatively to consider improving the standards of discussion. If they have specific comments about Gleb’s behavior here (only a couple of EA Forum comments have been mentioned), then I’m sure they’ll make them known.
The jargon used in this post is confusing. An “Open Letter” addressed to Gleb was indeed drafted by Jeff and others, but that’s not the document that was published. As Jeff writes at http://www.jefftk.com/p/details-behind-the-inin-document:
Perhaps you drafted your post earlier, when Jeff was still planning to publish the open letter?
I drafted the document afterwards, but didn’t realise that the blog post was something different from the originally planned ‘open letter’.
The discussion on the other thread is very long, so I’d like to highlight one perspective among these comments here—http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8nl—that presents an alternate case worth taking seriously: that perhaps no official action is needed.
(Though also see this alternate perspective.)
I would recommend linking to Jeff’s post at the beginning of this one.
Done!
It would protect the movement to have a norm that organizations must supply good evidence of effectiveness to the group and only if the group accepts this evidence should they claim to be an effective altruism organization.
I think some similar norm should also extend to individual people who want to publish articles about what effective altruism is. Obviously, this cannot be required of critics, but we can easily demand it from our allies. I’m not sure what we should expect individual people to do before they go out and write articles about effective altruism on Huffington Post or whatever, but expecting something seems necessary.
To prevent startups from being utterly ostracized by this before they’ve got enough data / done enough experiments to show effectiveness, maybe they could be encouraged to use a different term that includes EA but modifies it in a clear way like “aspiring effective altruism organization”.
I think this rests on some equivocation of the meaning of ‘societies’. It’s true for associations but not for movements, and the EA movement is the latter, so I don’t see this helping—unless we pushed for all EAs to become members of an association like GWWC.
In general I think that most movements simply fail at this issue. A few with strong central moral leadership have pulled it off—e.g. the neoreactionaries seem to have successfully ejected an errant member from the movement—but larger, more egalitarian movements like environmentalism simply have to suffer from association with undesirables.
I have noticed similar challenges in other movements on and offline. Two approaches have proved helpful (contact me for refs for the first):
(a) an ombudsman service (ombudsperson?) which can initially be tried out in one part of a movement. This can be accessed by those in official positions as well as users or people affected by an EA’s behaviour. The people involved don’t have to be older, but do tend to have a “calm, considered” nature. Such a service typically doesn’t go as far as offering a full mediation or arbitration service, as that is a major undertaking, but can recommend that the parties access such a service if that seems a good way forward with a finely balanced or potentially resolvable issue.
(b) www.RestorativeCircles.org developed in Rio by Dominic Barter and others. This is low cost compared to other ADR approaches and requires little training. It can even solve the problem of one party not being willing to participate, as (if appropriate) they can be advocated for by a 3rd party. I’m not clear to what extent it can be applied with online text interaction only, but I imagine it has more potential than other processes, especially with voice communication.
An important aside: I think voice communication can often resolve things far more quickly than text alone, especially compared to asynchronous email/forums, as the voice contact carries so much more. Even if all that happens is that it becomes clear that this person can’t engage one-to-one with your concerns, that’s useful to know, and it opens up options of mediation, arbitration, an ombudservice, or a pause, rather than endless unproductive text.
Remember what EA is about. Doing Good Better, and that’s it. No strict principles, no Bill of Rights, just honest math.
I don’t feel like you’ve engaged with the core of Will’s post here. He proposes that the best way to Do Good Better is to set up a panel. It sounds like you want to define EA in a way that makes setting up a panel “not what EA is about”. I could try to redefine EA so it doesn’t include whatever my least favorite EA charity is, but that wouldn’t constitute a valid argument for that charity being ineffective. You appeal to “honest math”, but you don’t demonstrate how math shows that setting up a panel is a bad idea.
BTW, even if you’re a utilitarian expected utility maximizer at heart, it can make sense to abandon that in some cases for game-theoretic reasons. See moral trade, Newcomb’s paradox, or this essay.
I’m paying 5 karma to contribute to Kbog’s defense—he has spent a large amount of time sincerely engaging with these arguments. I suggest reading through http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8nl
Was I trying to engage with the core of the post? What if I don’t want to engage with the core of the post? What if I already said enough things to sufficiently engage with it before it was posted?
No… I’m saying that creating a set of EA guiding principles is not what EA is about. Try not to make up hidden meanings behind my comments, ok?
Really?
Moral trades are advantageous to the utilitarian who is making them (if they’re not advantageous to her, then she shouldn’t make them and it’s not a moral trade)
The Newcomb problem is about decision theory, not ethics: if one-boxing is the correct decision then utilitarians ought to one-box, and the same goes for two-boxing; those are just two competing theories for maximizing expected utility
That essay doesn’t make any clear arguments and doesn’t show that a utilitarian ought not act like a utilitarian; it’s just Scott saying that things seem nicer if lots of people follow certain rules or heuristics
I downvoted both of these comments. I very rarely downvote comments.
“Was I trying to engage with the core of the post? What if I don’t want to engage with the core of the post? What if I already said enough things to sufficiently engage with it before it was posted?”
If you don’t want to engage with the post, don’t post.
If you want to point out that you have already engaged with the ideas in this post (which in your case I think is fair), then maybe link to your previous engagement as Peter did.
Okay. Thanks for telling me? I downvote people all the time. It’s not a big deal.
There is no obligation to respond to every point in a lengthy blog post in order to reply. If someone makes twenty claims, and one of them is false, I can point out that one of their claims is false and say nothing about the remaining nineteen. If I was saying “MacAskill’s blog post is totally wrong because of this one thing he said at the end,” you would have a point. But I didn’t say that.
I figured that was unnecessary, as the person I was replying to was already fully aware of what I had said in the other thread.
“There is no obligation to respond to every point in a lengthy blog post in order to reply. If someone makes twenty claims, and one of them is false, I can point out that one of their claims is false and say nothing about the remaining nineteen.”
Agreed. But you didn’t do that. You made a point which (without reading your supporting argumentation) interacted with none of what Will had said.
“I figured that was unnecessary, as the person I was replying to was already fully aware of what I had said in the other thread.”
Your first comment was actually in reply to Will MacAskill, the OP. I see no reason to assume he, or any third party reading, was fully aware of what you had already said. So you certainly didn’t ‘figure it was unnecessary’ for that reason. I’m not sure what your true reason was.
Sure it did. The OP suggested a body that made decisions based on some set of explicit principles. I objected to the idea of explicit principles.
Okay, well then let’s just be clear on what comments we’re referring to, so that we don’t confuse each other like this.
Here’s what happened. I argued in Thread A that making an EA gatekeeper panel would be a terrible idea. Then Thread B was created, where the OP argues for an EA gatekeeper panel guided by explicit principles. In Thread B I stated that I don’t like the idea of explicit principles.
Apparently you think I can’t say that I don’t like the idea of explicit principles without also adding “oh, by the way, here’s a link to other posts I made about how I don’t like everything else in your blog post.” Yes, I could have done that. I chose not to. Why this matters, I don’t know. In this case, I assumed that Will MacAskill, who is making the official statement on behalf of the CEA after careful evaluation and discussion behind the scenes, knew about the comments in the prior thread before making his post.
I think that you might have a reasonable argument—but for it to be a valuable contribution to this discussion, you would have needed to break your argument down more. If you had done this, I think your comment would not have been downvoted. If you have already said things that would strengthen your argument, then a link to those previous arguments would have gone a long way and removed the need to repeat yourself.
I would think there are several smaller principles that go along with Doing Good Better that it would be helpful to have specified: for instance, for cases where someone claims to be Doing Good Better but demonstrably isn’t (e.g. murder, kidnapping, lying, scandal, increasing existential/suffering risks, movement damage, etc.)
It also seems like we’re aiming more for guidelines, not set-in-stone bylaws.
(EDIT: Not sure what kbog’s response was, but I just realized my comment may seem like I was anchoring on Bad Things to make Gleb look bad; that wasn’t my intent. In addition to being a bit silly, I was just listing things from most severe to less severe, and stopped, partly because I am not sure exactly what principles would make good guidelines)
nope just posted by accident and couldn’t figure out how to delete the comment
I agree with you, and I’m anxious about creating an “official” broader definition of EA. That said, it would probably help prevent situations like this from arising, so it may be worth it.
I think it would be great to set up a formal panel. That way, we can have an actual calm discussion about the topics at hand. Furthermore, we can make sure that all points are thoroughly discussed and there is a clear resolution.
For example, InIn has been accused of astroturfing, etc. However, no one responded to my comments pointing out that astroturfing does not apply to our activities. The same goes for other points of disagreement with the claims of the authors of the document expressing concerns—no one has responded to my points of disagreement. A formal panel would be a good way of making sure there is actually a discourse around these topics and they can be hashed out.
So far, the impression I and many others are getting is that these accusations are unfair and unjust, and paint some of the top-level EA activists in a negative light. These concerns would be addressed in a formal procedure. I’d be glad to take the InIn situation through a formal procedure where these things can be hashed out.
Interesting to see how many downvotes this got. Disappointing that people choose to downvote instead of engaging with the substance of my comments. I would have hoped for better from a rationally-oriented community.
Oh well, I guess it is what it is. I’m taking a break from all this based on my therapist’s recommendation. Good luck!
I didn’t downvote it, but I suspect others who did were—like me—frustrated by the accusation that people have not engaged with you on the substantive points summarised in Jeff’s post. That post followed a discussion with literally hundreds of comments, in which dozens of people in this community discussed those points with you.
I could explain why I think the term astroturfing does apply to your actions, even though they were not exactly the same as Holden’s activities, but the pattern of discussion I’ve experienced and witnessed with you gives me very low credence that the discussion will lead to any change in our relative positions.
I hope the break is good for your health and wish you well.