Why & How to Make Progress on Diversity & Inclusion in EA
This post is a collection of potential solutions I’ve come to over 3+ years of observing, experiencing, thinking about, reading about, and discussing issues of diversity and inclusion in the EA community, informed by my experience in other communities.
“Diversity” is about representing people from diverse walks of life. “Inclusion” is somewhat more nebulous, and there seems to be a misunderstanding about its meaning in the community: Inclusion is not about welcoming everyone in; it’s about welcoming in the right people and ensuring we’re not excluding them on the basis of irrelevant criteria. I think the people who the effective altruism community should work to engage — and I assume this isn’t very controversial — are people who want to do the most good, or at least people who are interested in doing good better.
There has been a lot of loose discussion of these issues, mostly from those directly affected, but few actions taken to seriously address them. This post will address gender-based exclusion more than other issues, as that’s the one I have the most knowledge of, but many of the practices I suggest should make the community more inclusive of a broader diversity of people. My goal is to keep the ball rolling by spurring further discussion on solutions and helping people implement the most promising ones, so especially if you are from an underrepresented group and/or have expertise in this area, please do comment with your own thoughts, information, and ideas. Feel free to message me and I can post your comment anonymously if you prefer.
Why is this something we should pay attention to?
Most people in the EA community who I speak with agree this is an important issue, but for those who don’t, I’d like to formally lay out the reasoning in this section.
Based on our demographics, my observations, and many conversations with women, people from other underrepresented backgrounds, and even people from overrepresented backgrounds who still felt or feel the community is too exclusionary, I think the EA community is not quite selecting for “people who want to do the most good,” or the lighter version of that, but for people who are both that and young, white, cis-male, upper middle class, from men-dominated fields, technology-focused, status-driven, with a propensity for chest-beating, overconfidence, narrow-picture thinking/micro-optimization, and discomfort with emotions. These features suggest limitations on our capabilities, both individual and collective, that could be relieved if we worked harder on diversity and inclusion.
I’ve met many people who are deeply driven to help others as much as possible and who beyond that are highly capable — e.g. high analytical ability, years of experience with nonprofit management, other specialized skills, graduate degrees in relevant fields — but who left, limit their involvement in, or never joined the EA community because of their experience with its culture and norms. Some of those who have stuck around do so begrudgingly, either because the community still offers them enough to be worth it or because they think the community has so much potential that bearing it to contribute what they can is worthwhile, but they’re not giving us all they have to offer, and many others are turning away entirely. One big effect here seems to be the exclusion of women, as suggested by my conversations with many women and by the community’s gender ratio of roughly 70% men, or about 2.7 men per woman. The exclusion of people of color is a noticeable problem as well, with e.g. Black and Hispanic people severely underrepresented compared to U.S. demographics. We’re losing the potentially huge amounts of resources that such people could bring to the EA movement: knowledge, experience, management ability, perspective, ideas, creativity, analytical ability, emotional understanding, social competence, big-picture thinking, enthusiasm, career capital, career opportunity, a variety of specialized skills, networks, money — you name it.
See also Alexander Gordon-Brown’s post on some other characteristics EA is missing out on in terms of diversities of talent, experience, opinion, and appearance.
Not only are we missing out on those individuals and their resources themselves, but on a whole that would be greater than the sum of its parts: [Edit: As a commenter noted, the content of the following sentence is debated in psychology.] A group’s collective intelligence is only moderately related to its individual members’ intelligences, and gender-diverse teams score higher on collective intelligence than all-male or all-female teams (What Works: Gender Equality by Design, 10). Research also shows that diverse teams are more creative, more innovative, better at problem-solving, and better at decision-making — see Georgia Ray’s post “Diversity and team performance: What the research says” for more detail. Companies in the top quartile for gender and ethnic diversity are 15% and 35% more likely, respectively, to outperform their industry’s median performance, while companies in the bottom quartile lag behind the median. Companies on Fortune’s list of the top 50 workplaces for diversity average 24% higher year-over-year revenue growth than companies that didn’t make the list, and companies with multiple women in the C-suite are more profitable. These are all correlations, but the effect sizes are very large and the causal explanation seems highly plausible.
There are also known issues in EA that seem likely to be mitigated or eliminated through an increase in diversity. For example, this year’s EA Global San Francisco conference focused on shifting the community towards “doing good together.” Women tend to be more collaborative than men, so if the community had better gender representation, we could already be thinking big and emphasizing doing good together.
Even if people in our community are less prejudiced than the rest of society, small biases can have big impacts: One simulation found that a bias accounting for only 1% of the variance in evaluation scores left the discriminated-against group holding only 35% of the top level of the simulated workforce, instead of the 50% they made up in the original pool (What Works: Gender Equality by Design, 14).
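To make the mechanism concrete, here is a minimal sketch of that kind of pyramid-promotion simulation, written in the spirit of the study described above rather than as a reproduction of it. The level sizes, attrition rate, cycle count, and exact bias size are all illustrative assumptions:

```python
import random

# Illustrative parameters (assumptions for this sketch, not the study's exact setup).
LEVEL_SIZES = [500, 350, 200, 150, 100, 75, 40, 10]  # positions, bottom to top
ATTRITION = 0.15  # fraction of each level leaving per evaluation cycle
BIAS = 0.2        # score edge for group "A", sized so that group membership
                  # explains only ~1% of the variance in evaluation scores

def score(group):
    # Evaluations are almost entirely noise; group membership adds a tiny edge.
    return random.gauss(0, 1) + (BIAS if group == "A" else 0.0)

def run(cycles=30):
    # Start every level with a roughly even mix of groups "A" and "B".
    levels = [[random.choice("AB") for _ in range(n)] for n in LEVEL_SIZES]
    for _ in range(cycles):
        # Work top-down: attrition opens vacancies, which are filled by the
        # highest scorers from the level below; the bottom level refills
        # from a 50/50 external applicant pool.
        for i in reversed(range(len(levels))):
            survivors = random.sample(levels[i],
                                      round(LEVEL_SIZES[i] * (1 - ATTRITION)))
            vacancies = LEVEL_SIZES[i] - len(survivors)
            if i > 0:
                ranked = sorted(levels[i - 1], key=score, reverse=True)
                promoted, levels[i - 1] = ranked[:vacancies], ranked[vacancies:]
            else:
                promoted = [random.choice("AB") for _ in range(vacancies)]
            levels[i] = survivors + promoted
    return levels

if __name__ == "__main__":
    top = run()[-1]
    print(f"Group A holds {top.count('A')} of {len(top)} top-level positions")
```

The exact numbers vary from run to run, but the top level reliably drifts well away from the 50/50 split of the entry pool: a per-evaluation bias far too small to notice in any single decision compounds across every promotion stage.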
Unfortunately I suspect some people in the community are content, implicitly or explicitly, to assume that women and people of color are inherently so much worse than white men at thinking about altruism effectively that the constitution of the community is merely an effect of this presumed difference, and that as such putting effort into diversity and inclusion would either be too difficult and costly to be worthwhile or would dilute the community. I find this argument lacking given the alignment of that thinking with demonstrated biases in society at large — i.e. people tend to think that women are more intuitively-driven and less analytical than men, which does not seem to be borne out, and in fact the opposite may be more likely — and given the suspiciously large gender and race disparity in EA, as well as the very small size of the community at present. The latter enables us to target selectively, not randomly from the general population, even if this loaded, simplistic, and to my knowledge unfounded claim is true. Moreover, there are many examples of women, white and of color, who felt or feel excluded by the EA community, despite being entirely onboard with the philosophy of EA, having participated in the community for years, and having made major changes in their thinking and lives because of EA — they just really dislike the community.
Relatedly, some may assume that our community is genuinely merit-based — that we simply reach out to and include the most qualified people, regardless of their race, gender, etc. Did you know that, at least in an experimental setting, when organizations espouse meritocracy, managers show greater gender-based discrimination than those at other companies? And that having a gender quota is more likely, presumably assuming that an organization is competent at hiring, to weed out mediocre men than to introduce mediocre women? Unfortunately most of the specific details of the conclusive and pervasive sexism I have experienced, seen, and heard of first-hand within the EA community are confidential — and I don’t just mean sexual harassment and assault; there are other more pernicious and more prevalent forms of sexism in society and in the community, such as the holding of women to higher standards of competence and the consistent underestimation of women’s trustworthiness (What Works: Gender Equality by Design, 27). Happily some of it has been acknowledged by its offenders, who have in some cases stated credible intentions to improve — though whether they are in fact improving takes time to assess, and without ongoing personal or even cultural support they may find improving difficult. Even without the details of specific experiences people have had, it should be sufficient to observe that there are many examples of qualified people being excluded, and no evidence has been offered to justify the assumption — which was recently voiced, to no opposition as I understand it, on a panel at EAGxBerlin — that we are merit-based. [Edit: I want to note as well that who EA seems to select for matches exceptionally well with privilege in society at large, which would be quite a coincidence.]
Some people in the community have made other thoroughly unreasonable claims to justify the status quo, such as that women would be a distraction in the workplace. If they are, the problem is entirely the men who can’t adhere to basic professional norms and who presume their contributions so important that the minor cost to them of being less sexist outweighs all of the potential contributions of all the women they’re keeping out. To my knowledge this claim was recanted — under pressure or reflection, I’m not sure — but it’s a red flag for other sexism. Someone else has said women aren’t as willing as men to take low salaries for altruistic purposes, apparently in ignorance of the rest of the nonprofit world, whose volunteers and workforce are overwhelmingly women. Such unrigorousness should be thoroughly discouraged.
I think the majority of the problem, however, is that while many people know we have a problem, they don’t know what they themselves can do about it.
What changes can we make to more effectively select for the right people?
The evidence base on effective strategies to reduce prejudice and increase inclusion in general is weak, though growing. I don’t claim that the following are all of the answers, nor necessarily the best answers, nor even that they’re all right or involve no tradeoffs. My aim is just to put my ideas out there in the interest of continued discussion and action on this issue. I also don’t claim to implement these perfectly myself, but I do aspire to embody the ones I currently fall short of.
● Recognize that there is a problem, in society at large, in the communities EA sources from, and within the EA community. Even if you are not convinced by the evidence I’ve presented about why this is a problem our community needs to address, you should still be compelled by the fact that so many people both in EA and elsewhere think we have a serious problem. Look to data — I include barely any of the literature on sexism and other systematic biases in this post because it is vast and Googleable — and to accounts of people who are in the community — or no longer are — and who are from the groups in question. Do not rely on your intuitions or those of anyone lacking the perspectives of people from underrepresented groups. If you disagree that this is an important problem or about any of the steps I suggest to make headway on it, let’s have a discussion so we can get to the truth of the matter.
● Recognize that it is extremely probable that you harbor biases that you are not accounting for. Recognize that recognizing bias in society and our community isn’t enough — people tend to think they are less biased than average, and tend to demonstrate the same levels of bias even when they are experienced with seeing a bias, made explicitly aware of the bias, and asked to introspect to ensure they are not making a biased judgement (What Works: Gender Equality By Design, 45-48). The latter can even backfire, which may be the effect of this whole statement, but I think transparency is sufficiently important to outweigh that risk. Even if you have evidence that you are successfully debiased in some ways — e.g. calibrating against overconfidence in online tests — the society you grew up in has many biases, and you are highly unlikely to be exempt from all of them. People in the EA community might even be particularly susceptible to some.
● Don’t penalize the “heart” as though there is only the “head.” EA is both, and one is nothing without the other in this movement. I prefer to play the long game with my own investments in community building, and would rather for instance invest in someone reasonably sharp who has a track record of altruism and expresses interest in helping others most effectively than in someone even sharper who reasoned their way into EA and consumed all the jargon but has never really given anything up for other people. I see exceptions to this being the best investment on the whole, but none who I think wouldn’t be here anyway if we were focusing much more on the former personalities. In practice, the lowest-hanging fruit to elevate the heart is to be empathetic with and kind to people. At the very least, ensure you are not being dismissive of people’s emotions, and in particular feminine-coded emotions like empathy, grief, sadness, or love — things that drive a lot of people’s altruism. Some of the most talented and resolute people in this community are here because they are deeply emotionally compelled to help others as much as possible, and we’re currently missing out on many such people by being so cold and calculating. There are ways to be warm and calculating! I can think of a few people in the community who manage this well.
● Recruit and promote women to manage teams. Women tend to be better managers than men.
● [Edit: Additional suggestion. People in high places in the movement, particularly white men: publicly state that EA being diverse and inclusive is important to you.]
● CEA and EAF could both, or jointly, hire a Diversity & Inclusion Officer. CEA and EAF, your intention is to be institutional leaders of the EA community, so lead the way on this critical aspect of movement-building — there is definitely a full-time job’s worth of advising and other work to do, probably even just with the suggestions I list here. Some universities and companies have such a position, and I — and I’m sure others — would be happy to advise on what the position’s responsibilities would look like. (Thank you Sana Al Badri for this suggestion.)
● All organizations should hire communications staff who are versed in inclusionary communications practices. Alternatively, the Diversity & Inclusion Officer could train them.
● Adopt and enforce a clear policy — as organizations and individuals — for dealing seriously and fully with illegal actions like sexual harassment and explicit discrimination, or discrimination revealed by HR or legal counsel. Commensurate consequences and reform procedures, escalating as necessary to expulsion, are critical. No perpetrator is so much more important than the greater number of people they are driving away, the risk of a lawsuit to the organization protecting them, or the risk they bring to the community’s reputation, that their actions should be protected. If this community’s members are as smart as we like to think, using a heavy hand once, if necessary at all, should be all it takes, so long as the threat of using it again is credibly maintained.
● If you go out with colleagues, ensure you’re not just including the ones most like you. A lot of opportunity to build skills, network, and advance one’s career happens out of the office, and favoring some colleagues over others can lead to systematic disempowerment. If you are a man and can’t go out with women colleagues without thinking of them sexually and making the interaction uncomfortable, or if you can’t have a conversation about work and EA with women colleagues at lunch, you should not be managing anyone.
● If you see something, say something. Don’t leave the reporting of problematic behavior to the people who directly experience it. They are feeling disempowered and alienated and are usually in a far less capable position to do something about it.
● If you experience something, try to at least say something to someone. Whether you decide it is in your interest to say something or not, ensure you at least consider the risks to other people and the broader community if you do not. I appreciate that in many if not most cases we just want to move on with our lives, and this burden should, as noted above, not be left to the people experiencing the problem, who generally face higher risk bringing it up than other people.
● We could establish a website providing resources for legal counsel and enabling people to anonymously share experiences regarding discrimination, harassment, and assault, both to inform less-aware community members of issues in the community and to provide a sense of accountability to the movement as a whole, as the testimonies would be publicly accessible.
● Update your valuations of men’s competencies downwards, and of women’s upwards, particularly when you are forming your first impressions. People already inaccurately perceive women as less competent than men, even when their work is superior, in addition to which men overestimate and oversell themselves while women underestimate and undersell themselves. Yes, this will penalize the rare men who represent themselves accurately or under-represent themselves, and favor the rare women who represent themselves accurately or over-represent themselves, so take care, but the risk of overcorrection is not sufficient reason to resort to the prejudiced status quo. Additionally, in more long-term and formal environments, utilize standardized and objective metrics of competency whenever possible, such as trial projects when hiring. Relatedly, consider promotions and do hiring in rounds, not on a rolling individual basis.
● Amplify the contributions of people from underrepresented groups, in personal interactions, meetings, articles, podcasts, Facebook posts, conferences — everywhere.
● If a colleague from an underrepresented group can speak on an issue you’ve been asked to speak about, whether at a conference or for a quote in an article, give them the opportunity. If they decline, ask why — they may be interested but want PR training.
● Giving announcer and moderator positions to people from underrepresented groups at conferences is an easy way to start including them more. That’s not a license to ignore diversity and inclusion everywhere else, but it is a step. Many people are very capable of being great emcees and moderators, so there’s little or no reason not to use this opportunity to include them. Note that EAG Boston and San Francisco 2017 both had a white man as the emcee.
● Don’t dismiss or trivialize the altruistic concerns ordinary people have. It’s great that people care about immigration reform, deworming children in impoverished regions, and dog rescue. It would be even better if they put their energy into efforts of greater impact, but moving them in a more effective direction, whether within their currently preferred project or cause or to another, is more easily done if they have a sense of community with you, which is more easily achieved if they know you care about the issues they care about. It’s all but impossible to achieve if you stick your nose in the air at their altruism because their thinking on the weighty and new topic of effectiveness is underdeveloped — like yours once was.
● Quit the hero worship. Major progress is made by groups, not individuals. People should be praised for their individual contributions, and some people will be leaders, but that doesn’t mean other people aren’t contributing as much or more. Hero worship in EA is almost always directed towards white men, and while it’s great to celebrate their achievements, overdoing that celebration exacerbates the issue of how we represent ourselves to newcomers and outsiders, and encourages a masculine, individualistic culture where newcomers can’t thrive.
● Do not consider anyone’s arguments or positions above questioning or criticism. Never presume that someone has no place questioning someone else whose intellect you laud. No one is infallible, and no one has every answer or has considered every possible angle and argument. This particular form of hero worship is a common complaint from people who feel excluded from the community.
● People from underrepresented groups: Own your worth. Don’t apologize for an A- job while others spin their C’s as A’s. Take credit for your work, even if you don’t personally want it, because other people like you need to see your success. Don’t do the dishes when that’s someone else’s responsibility this week. Apply for the jobs you want, not just the ones you are explicitly fully qualified for, because they’re written under the assumption that people are going to apply even when they only meet half the requirements — women don’t apply to jobs unless they meet all the posted requirements, whereas men apply when they meet 60%.
● When possible, which is the vast majority of the time, use ordinary phrasing instead of jargon, at least with people who have only recently become involved.
● Stop interrupting people. Men are much more likely to interrupt than women are, and more likely still to interrupt women than other men. Not only does this disproportionately disempower women, but it’s rude and off-putting to everyone.
● When people are interested in talking through something they’ve been thinking about in EA, have a conversation about it, even if you’ve already resolved your own thoughts on the topic and even if you don’t think there’s anything for you in the conversation. The other person will likely end up more informed and feel more welcomed, and it won’t take too much of your time. Remember too that being willing to engage with newcomers and people of lower status or perceivable “usefulness” is very common in other communities, and particularly advocacy communities, so when people act otherwise it seems surprising, rude and alienating.
● Don’t emphasize earning to give too much. This has been an ongoing discussion, and I think we’re slowly doing better.
● Be just as welcoming with people who do direct work on non-priority problems as you are with people who work in finance or tech. Not only can people contribute a lot more to the community and movement than their income, but keep in mind too that finance and tech specifically are places with particularly bad reputations for their exclusion of women and other historically marginalized people.
● Emphasize that doing the most good will necessarily mean different things for different people. Even if we ourselves know we’re speaking in generalities, it can very easily come off like we’re advocating a one-size-fits-all approach, or asserting that any one cause or career path is the best path to maximum impact for everyone.
● Represent the community’s values accurately. This can be a challenge in a single 140-character tweet, but not in a whole Twitter feed or in a conversation. Consistently presenting anti-malarial nets as the community’s primary concern is going to attract people with that particular interest. This means a relatively high proportion of people who are resistant to pushing new frontiers, as global poverty is a popular cosmopolitan cause that normal people can get lots of praise for contributing to more effectively, with no or nowhere near the personal risk of less mainstream causes and projects. Such an emphasis will also mean the community gets skipped over by people who are exploring other and unconventional ways of doing good.
● Relatedly, our public image can and should be weird in the right ways. It can say true, abstract, challenging things like “We should consider the interests of all sentient beings,” “We don’t have all the answers, our goal is to find and implement them,” “How our actions affect people in the far future could vastly outweigh the impact they have now,” and “New technologies may transform the quality of life on Earth and beyond to a much greater extent than they have even in the past century.” And it can do all that without using jargon, without throwing around the term “AI” with no qualification or explanation, without looking or sounding like a young socially awkward white guy in tech, and while emphasizing the altruism motivating these intellectual explorations and providing palatable examples of relatively high-impact actions people can take — including, but not inordinately emphasizing, those that best help individuals in poverty. It’s not a question of either being weird AI fanboys or mainstream philanthropists.
● Don’t get hostile in conversations. Keep the focus on the information and arguments at hand.
● Don’t reward people for aggressive communication styles. If you want to express agreement with their content, but their delivery is bad form, you can say for instance “I agree, but your [snarkiness, ad hominem comment, exaggeration, etc] was unnecessary and not conducive to rigorous discussion.”
● Do not disproportionately penalize women for aggressive communication styles. When a man and woman are equally aggressive, people tend to see the man as more persuasive but the woman as less credible, and women are given feedback that they’re “too aggressive” three times as often as men. Both positions seem highly unlikely to line up with reality and are more likely unconscious efforts to punish nonconformity to gender stereotypes.
● Relatedly, if you find yourself judging that a woman is too emotional, consider the men you know who are confrontational, who argue aggressively, who have expressed strong feelings about people they don’t know well, who can’t work well with attractive women, who jump to conclusions based on unexamined intuitions, who are obsessed with obtaining status, who are snarky, who level insults at others regularly, or who stoop to pissing contests. If you’re in the EA community, you know lots of men who demonstrate multiple such tendencies. In all likelihood men just hide their emotions better than women, which does not mean their judgements are less emotionally-motivated. It’s even possible that men’s judgements are more emotionally-motivated, as girls and women in society tend to have more social encouragement and opportunity to examine their emotions.
● Replace competitiveness with collaborativeness. In successful communities, people empower each other and become better off on the whole for it — another EA’s success strengthens and grows the community, and the community’s strength and size helps you and your purposes. So: Is someone’s counter to your argument making you feel defensive? This is an opportunity to get closer to the truth, together. Is someone considering starting a project that you were also thinking of? Combine your resources, and if it needs just one leader, sort out who’s best positioned for it — that’s great for the project. Are your donors shifting funds to a new organization? Sounds like you should drop inferior programs, and also like the community needs to grow the donor pool.
● Don’t try to take shortcuts to status, and particularly don’t try to gain status by disempowering other people. Status for most of us is not a zero-sum game. In fact, there is a lot of status to be gained by developing a reputation as someone who empowers other people. So it doesn’t matter what you’ve accomplished, you are not above giving a few minutes to an enthusiastic new EA who wants to learn how to get more involved, or at the least directing them warmly to someone who has more time to engage. And your public/semi-private conversation at an EA event is not so important that you can’t take a few seconds to say hello to someone trying to enter the conversation and fill them in, or to change the topic for the new person — you can pick up the other conversation again later.
● Relatedly, empower people, don’t use them — act in good faith, and show faith in your community members. Consider the other people in the community your collaborators, neither your competition nor a means to your ends. When collaborating with other EAs, be honest about your information, goals, and thought processes. Even if, for instance, you really just want someone to donate to or work for your project, and they’re deciding between yours and another, you should still give them your honest thoughts — or a better source — and critical or full information on the tradeoffs you see, not just what you think will convince them to support you instead of the other project. Introduce them to people at the other project if they aren’t introduced already. Help them make their own decision. Doing otherwise incentivizes further dishonesty and manipulativeness in the community.
● Relatedly, consider the bigger picture, in everything you do. The good you can do does not just encompass the direct impact of your actions, but also how they influence other people. Establishing stronger norms of honesty would both incentivize stronger norms of intellectual rigor and select more strongly for new members who are intellectually rigorous rather than manipulative or manipulable. It’s also helpful to probably everyone as individuals to have a variety of people out there who appreciate you and will be enthusiastic about lending you a hand when you choose to ask for one, so be careful handling fire around bridges.
Similarly, when considering whether to go vegetarian or take some other step to avoid participating in a major moral problem, consider how not doing so could validate and perpetuate the biases and selfishness that enable people to commit that act normally, and how that act could help others feel licensed to do other selfish and harmful things that you disagree with, like lying to sexual partners about having an STI or being dishonest and uncharitable in representations of your organization or preferred cause area.
Some people may have their own reasons for thinking that it’s good for them to act in and use people in short-sighted ways, and to be confident that they have nothing left to learn and no need to build social capital, but even if that actually is the right call for them individually, such short-sighted self-interest is bad for the broader EA community and limits what it can accomplish, so it should be discouraged. Controversy here may point to a deeper issue, of which I have seen concerning evidence, of some people using the broader EA community as a mere conduit to their preferred issue rather than a meeting place for everyone to learn from each other and help each other and grow the broader community and each other’s sub-communities on the whole. The community has a lot of room to grow, and actively trying to cannibalize each other is probably not in anyone’s long-run interest. So when, for instance, newcomers ask me about AI safety, I give them a clear and palatable introduction and I answer their questions or direct them to people who can answer better, and I do so even if we might not get a chance to talk about things I suspect would be a better use of their resources and which I have resolved are a better use of mine. For me, the EA community isn’t just another place to pitch animal advocacy, it’s a place where I can learn and grow as an effective altruist, and where I can help others learn and grow as effective altruists. It’s a place where, in its better moments, people do good together, not alone.
● Give to the people in your community. Acknowledge their contributions, introduce them to people they might be interested in knowing, offer them your expertise, help them when they need a favor… this community is no exception to all communities’ needs for basic positive social norms.
● When people make mistakes, kindly and clearly identify them. If the mistake was not just an intellectual error but harmed someone, identify it in the interest of achieving justice for the person who was wronged, but also and perhaps more importantly in the interest of helping the person who made the mistake to grow and improve. That is to their benefit, the benefit of the community, the benefit of other people they would have gone on to wrong, and the benefit of others still who they’d be failing to help by falling short of who they could be. Encourage and reward good behavior privately and publicly, and discourage bad behavior privately, and more publicly and severely as it becomes more necessary to raise the costs to people of refusing to adopt better attitudes and behaviors. If we are only concerned with the direct impact of our own actions or don’t care about our omissions, we won’t get far in improving this community — we need to empower others to do better as well, so give people a genuine chance to improve.
To be clear, I’m not suggesting endless second chances, and some actions taken even once will warrant zero tolerance and immediate expulsion.
Also, even people who are exceptionally humble and exceptionally interested in personal growth still need to feel accepted and their egos can still be wounded, so take care not to overload people — give criticisms seriously but compassionately, focus on priorities, be clear about what happened, why the action was a problem, and what you think the person should have done instead, and in normal circumstances it’s probably best to give criticisms sparingly. Criticisms also have more credibility and are less hurtful when the critic has gained the respect and camaraderie of the criticized.
● Accept that you will make mistakes, and take responsibility when you do. Encourage yourself to value humility and growth even if it hurts your pride. We all make mistakes! When we are informed or otherwise realize that we have, we should take responsibility, rather than ignore the mistake or defend it and lose the opportunity to improve — not to mention incentivizing others to prioritize their own pride over self-improvement. Especially if you can feel an accusation wounding your ego and alerting your defenses, or if you can’t explicitly argue against an accuser’s points, you are probably not thinking very clearly. It should be a norm in the community to comfortably and casually admit “oh, you’re right I got that wrong” and “good point, I’ve changed my mind” and “I was not thinking about that effect of my actions, I’m sorry and thank you for bringing this to my attention.”
● Take up that humility more generally. Don’t judge that you’re right and another party is wrong before ensuring you know their reasoning — ask someone why they hold the position they do, maybe they’ve thought of something you haven’t just as you may be assuming you’ve thought of things they haven’t.
● You can disagree with people while entirely respecting their positions, appreciating their contributions, and recognizing them as allies. The reason I spend my time strategizing to bring down animal farming and to expand humanity’s moral circle instead of working — directly at least — on AI safety generally seems to come down to intuitive differences between myself and people who prioritize direct work on AI safety. These differences are sometimes minor and in my experience generally irreconcilable with available information. I also disagree that near-term interventions to help individuals in poverty are the best use of most EAs’ resources, because I don’t think lives matter as much as well-being, and poverty interventions are not relatively robust in their address of well-being. I disagree with many of my allies and colleagues about the value of farmed animal welfare reforms and other near-term interventions, ultimately because I tend to be more risk-tolerant and compelled by expected value than they are, and because I consider the net impacts of near-term interventions sufficiently uncertain that I don’t think it’s useful to consider them categorically more measurable than interventions whose intended impacts are less direct or further in the future.
Nonetheless, I’m very excited that these people are working on these projects, which I still consider important even if I disagree that they’re the best use of my or these individuals’ resources. I still have a lot of respect for some of these allies’ and colleagues’ analyses, and I am deeply moved by their altruistic drives and grateful for their contributions to the EA community and to my own thinking on these issues. Disagreement is critical for finding the best answers to the kinds of questions EAs ask.
● There is a point at which championing “free speech” actually inhibits it, enabling what was once innovative, challenging, rigorous discussion to become regressive, harmful, thoughtless trolling and/or identity politics. When people say severely intolerant things that disenfranchise other people — especially if they for instance cannot justify it, respond to criticisms of it with aggressive repetition of their claims with no evidence and/or with personal attacks, and cannot explain why it’s important that they say it at all — don’t tolerate it.
For instance, it should be outright unacceptable for someone to say that women do not contribute to society and are leeches if they don’t offer men sex. This actually happened, recently, and is a problem for two reasons: One, the factual claim runs highly contrary to economic and other data as well as extensive anecdotal evidence, and such unrigorousness should be discouraged. Two, the value judgement, which is explicitly sexist to an atypically extreme degree, is well beyond the limit of what the community should accept as any kind of a “diversity of opinion,” unless we want to severely limit our diversity of participants, and with it that very diversity of opinion. People are both less able and less willing to contribute their resources to the community when they are treated with such hostility and when such hostility is accepted by the community. It is in women’s interests to assume that every man who is okay with this person’s behavior has an appallingly poor understanding of sexism in society, if not also of basic social norms generally, and that as such he probably harbors a dangerous level of sexism himself, if not also intellectual capabilities that are shockingly limited for this community, given the obvious lack of intellectual rigor in the offender’s comments. So toleration of such comments makes the whole community look highly unappealing.
Happily, this particular individual — who is probably a troll in general — was banned from the groups where he repeatedly and unrelentingly said such things, though it’s concerning there was any question about whether this was acceptable behavior. Maybe we should have a reference document of what kinds of actions in online forums warrant an explanation of the problem, ensuing non-engagement, warnings from the moderator, and bans.
To be clear, by tolerating rude and intellectually unrigorous behavior we are in fact choosing to have such people in the community in the place of the more rigorous and compassionate people they are likely to put off. Such toleration of intolerance is also likely to normalize that intolerance and as such to increase the biases in the rest of community. It concerns me that I even have to bring this up as a problem, as I think e.g. most Fortune 500 companies have by now figured out that it’s very important that employees not be outright assholes to other employees. [Edit: example that came to mind redacted because while problematic, I would not describe the person as an “outright asshole,” though this action was still a serious problem.] Yes, some people in broader society now respond to correctable offenses with a mob mentality and too much readiness for ostracization, but just because some people have swung too far past the mark doesn’t mean we should default to a status quo that falls so short of it.
● Hiring processes and employee management are a big topic, but for starters, take care with job postings: Use less masculine language; talk about the concrete skills and experience you’re interested in instead of appealing to people with “startup” experience; ensure that the qualities you say are “required” are actually required; and appreciate that women may conceive of their achievements differently than men tend to, for instance attributing their successes more to their team rather than to themselves.
● Men, accept that many women will be your equals, and others your superiors, in intelligence, knowledge, and other abilities you aspire to or pride yourself on. Even those who aren’t will sometimes have a better argument or more relevant information than you. And no, just because you can point to one or two women whose intellects and other competencies you appreciate does not mean you are evaluating other women fairly — especially if the women you are thinking of are in your community and share your positions. The same goes for people of color, and others.
● People who belong to currently disenfranchised groups, adopt the attitude that the success of other people who are disenfranchised, particularly for the same reasons as you, is your success. Women who encounter discrimination early in their careers may distance themselves from other women, refuse to help them, and align themselves with men at other women’s expense. The disempowerment of women in the EA community may make women feel as though there is only room for a few women to have some voice, but we don’t need to accept someone else’s narrative that we have to compete with each other — we can make more room for each other, like women in other masculine men-dominated communities have done before us and are doing alongside us, by empowering each other. As I’ve said already, this is not a zero-sum game: Every person of color’s success should, with sustained inclusionary efforts from the rest of the community, reduce some racism in the community, which in turn increases opportunity for other people of color in a virtuous circle.
● Mentor people from underrepresented groups. Or if you belong to an underrepresented group, seek out mentors.
● Take an interest in people. You will at times, often even, have to judge when someone isn’t going to be so involved in the movement that it’s worth your time to continue engaging, but give people a chance, and try to be mindful of your intuitions, some of which will be more valid and useful than others and some of which will be plain biased — try to be conscientious in that judgement and focus on concrete measures of a person’s likelihood to engage well enough that they’ll learn to do good better.
● Finally: Take responsibility for improving diversity and inclusion in EA. Whatever your role in the community and movement, and however inclusive your actions tend to be already, there is more you can do, and saying it’s someone else’s problem to solve will only result in a collective action problem.
Other notes
See also Kelsey Piper’s notes on failure modes in efforts to increase demographic diversity, Julia Wise’s post on specific actions people can take to be more welcoming at events, and Owen Cotton-Barratt’s post on being welcoming.
I should note that I put vastly more time and effort into working with people outside of EA to develop their thinking on effectiveness independently of the EA community than I do bringing new people into the community, which frankly I only do when they’ve explicitly expressed interest. This is because I usually expect introducing them to the community to waste their time, cause them stress, cost some of my relationship with them because of that, and most importantly, turn them off from thinking about effectiveness. In fact, I think we backfire often just because we present ourselves so suboptimally.
The time I have spent on EA community-building, which has been substantial, has focused on supporting individuals who are already in the community, for the most part in the wing that intersects with the animal advocacy community. I should note, brusque though this comment may be, that the animal-advocacy-focused sub-community of EA tends to be significantly more socially competent, welcoming, and proficient in the kinds of inclusionary practices I’ve suggested here than some other parts of the community. This may be largely explained by how women-dominated the animal advocacy community is — though heavily white and guilty of other failings — and how its members are generally much better versed in issues of discrimination and inclusion than EAs are. Animal advocates, particularly in the farmed animal wing, tend to be highly liberal and generally actively encourage concern for broad social justice — which stands in stark contrast to the many people in the EA community who use straw men and the worst of the social justice community to dismiss, insult, and otherwise actively discourage any association with the term, to the point of taking pride in that opposition.
I should also note that most other women, white and of color, who have been in the community for several years and who I have spoken with about diversity and inclusion issues, are exhausted from talking about and even thinking about this problem for so long and to so little avail. True, the vast majority of that conversation has been in private or otherwise sequestered discussions, and mostly among people who agree there’s a problem and aren’t contributing to it as much as others, whether by act or omission. That’s why I’m putting all of these thoughts online. Regardless, people from more represented backgrounds and who are otherwise in more influential positions need to take up this mantle.
Also FYI, I am currently reading and taking notes on What Works: Gender Equality by Design and intend to share its insights — even if they’re potentially somewhat cherry-picked and otherwise weaker evidence than we’d like, as pop science books often are — hopefully within the next month or so.
Thank you Jennifer Fearing for the handful of suggestions I took from your advice to animal advocates on how to promote gender inclusion in animal advocacy leadership.
Opinions mine, not my employer’s.
Very important article, Kelly, thanks for writing! I don’t agree with 100% of your diagnoses or prescriptions (honestly I rolled my eyes at some of them), but absolutely share your concern that a lack of gender and racial diversity is hurting EA. I’d also add age diversity to the mix, and in my experience (which I doubt is unique) this issue interacts with the gender and racial issues in a problematic way.
Back in my 20s, I would have brushed off and rationalized away your diversity concerns. At that time, I was the type of person over-represented in EA: young, male, studied econ at an elite school, working as a hedge fund quant in an explicitly hyper-rational and confrontational work environment, maximum “thinker” assessment on the Myers-Briggs thinker vs. feeler spectrum, etc. Many (probably “most”, or even “almost all”) of my friends and co-workers fit the same description. And I placed a very high value on my opinion, and the opinions of people like me.
Now I’m pushing 40, and I’m still a quanty, thinker-vs.-feeler guy with a blunt communication style. But I’ve acquired a valuable perspective on just how stupid really smart 20-somethings can be. When you work at a place that hires lots of people who fit the same profile year after year, certain patterns become obvious. You see the first-year analyst class making the same mistakes each year, and realize they’re the same mistakes you and your cohorts made when you were first-year analysts. You see that some people, with impeccable backgrounds/resumes, simply aren’t very good at their jobs for a variety of reasons. It turns out that even really really smart people mess up in very systematic ways. For instance, the type of people overrepresented in EA (myself included) generally aren’t that great at being humble (probably because of all the good grades and accomplishments). They also undervalue people skills: until I was lucky enough to meet an enormously talented salesperson and watch him build and nurture relationships that were critical to landing many multibillion-dollar accounts, I thought the marketers were just people who couldn’t hack the math to do real finance work. I’m sure I still carry this bias to some degree.
When I was younger, I would have fallen in the “sure EA is homogeneous, but can you prove that’s a problem?” camp. With another ~15 years of perspective, I think that gets the burden of proof backwards. We’ve already experienced some of the negatives: remember when an EA journalist went to EA Global and felt a big part of the story was EA naiveté? We know the EA community and its leadership disproportionately represent populations who systematically lack humility (the “best and brightest”), experience (the young), and access to alternative perspectives (the women, people of color, people who remember the 70s, etc. who are mission-aligned but think EA is too much work to interact with). That’s a lot of red flags (and FWIW most of my background is in risk management).
So now I’ve come around to the view that the EA community should seek out low cost ways to improve diversity (e.g. limiting jargon), and at least weigh the costs of changes that could significantly improve diversity (e.g. a community diversity officer). And if people want to argue that the lack of diversity in EA isn’t a problem, I think the burden of proof is clearly on them.
I’m amazed and inspired by all the young EAs who want to make the world a better place: I spent my time in college getting drunk at my frat, not reading 80,000 Hours. The last thing I want to do is discourage any of them. And I’m still kind of young and plenty dumb. So please just consider this a perspective to consider, and an endorsement of the principle of considering different perspectives.
“I think that gets the burden of proof backwards” — I agree that claiming there are some ways in which we could improve diversity is really an anti-prediction. On the other hand, for any specific claim that we should do X, the burden of proof is on the person who wants us to do it.
Thanks for this post. There’s a lot I agree with here. I’m in especially vigorous agreement with your points regarding hero worship and seeing newcomers as a source of fresh ideas/arguments instead of condescending to them.
There are also some points I disagree with. And in the spirit of not considering any arguments above criticism, and disagreement being critical for finding the best answers, I hope you won’t mind if I lay my disagreements out. To save time, I’ll focus on the differences between your view and mine. So if I don’t mention a point you made, you can default to assuming I agree with it.
First, I’m broadly skeptical of the social psychology research you cite. Whenever I read about a study that claims women are more analytical than men, or women are better leaders than men, I ask myself whether I would have heard about it if the experiment had found the opposite result.
I recommend this blog post on the lack of ideological diversity in social psychology. Social psychologists are overwhelmingly liberal, and many openly admit to discriminating against conservatives in hiring. Here is a good post by a Mexican social psychologist that discusses how this plays out. There’s also the issue of publication bias at the journal level. I know someone who served on the selection committee of a (minor & unimportant, so perhaps not representative) psychology journal. The committee had an explicit philosophy of only publishing papers they liked, and espousing “problematic” views was a strike against a paper. Anyway, I think to some degree the field functions as a liberal echo chamber on controversial issues.
There’s really an entire can of worms here—social psychology is currently experiencing a major reproducibility crisis—but I don’t want to get too deep in to it, because to defend my position fully, I’d want to share evidence for positions that make people uncomfortable. Suffice to say that there’s a third layer of publication bias at the level of your Facebook feed, and I could show you a different set of research-backed thinkpieces that point to different conclusions. (Suggestion: if you wouldn’t want someone on the EA Forum to make arguments for the position not X, maybe avoid making arguments for the position X. Otherwise you put commenters in an impossible bind.)
But for me this point is really the elephant in the room:
I would like to see a much deeper examination here. Insofar as I feel resistant to diversity efforts, this feels like most of what I’m trying to resist. If I was confident that pro-diversity people in EA won’t spiral towards this, I’d be much more supportive. Relevant fable.
All else equal, increased diversity sounds great, but my issue is I see a pattern of other pro-diversity movements sacrificing all other values in the name of trying to increase diversity. Take a statement like this one:
Being warm and calculating sounds great, but what if there’s actually a tradeoff here? Just taking myself as an example, I know that as I’ve become aware of how much suffering exists in the grand scheme of things, I’ve begun to worry less about random homeless people I see and stuff like that. Even if there’s some hack I can use to empathize with homeless people while retaining a global perspective, that hack would require effort on my part—effort I could put towards goals that seem more important.
Again, I think there’s a real tradeoff between “free speech” and sensitivity. I view the moderation of online communities as an unsolved problem. I think we benefit from navigating moderation tradeoffs thoughtfully rather than reactively.
Reminding people off the forum to upvote this post, in order to deal with possible hostility, is also a minor red flag from my perspective. This resembles something Gleb Tsipursky once did.
None of this seems very bad in the grand scheme of things, especially not compared to what I’ve seen from other champions of diversity—I just thought it’d be useful to give concrete examples.
Anyway, here are some ideas of mine, if anyone cares:
Phrase guidelines as neutrally as possible, e.g. “don’t be a jerk” instead of “don’t be a sexist”. The nice thing about “don’t be a jerk” is that it admits the possibility that someone could violate the guideline by e.g. loudly calling out a minor instance of sexism in a way that generates a lot of drama and does more harm than good. Rules should exist to serve everyone, and they should be made difficult to weaponize. If most agree your rules are legitimate, that also makes them easier to enforce.
Team-building activities, icebreakers, group singalongs, synchronous movement, sports/group exercise, and so on. The ideal activity is easy for anyone to do and creates a shared EA tribal identity just strong enough to supersede the race/gender/etc. identities we have by default. Kinda like how students at the same university will all cheer for the same sports team.
Following the example of the animal-focused EAs: Work towards achieving critical mass of underrepresented groups. Especially if you can saturate particular venues (e.g. a specific EA meetup group). I know that as a white male, I sometimes get uncomfortable in situations where I am the only white person or the only man in a group, even though I know perfectly well that no one is discriminating against me. I think it’s a natural response to have when you’re in the minority, so in a certain sense there’s just a chicken-and-egg problem. Furthermore, injecting high-caliber underrepresented people into EA will help dismantle stereotypes and increase the number of one-on-one conversations people have, which I think are critical for change.
Take a proactive, rather than reactive, approach to helping EA men with women. Again, I think having more women is playing a big role for animal-focused EAs. More women means the average man has more female friends, better understands how women think, and empathizes with the situations women encounter more readily. In this podcast, Christine Peterson discusses the value of finding a life partner for productivity and mental health. In the same way that CFAR makes EAs more productive through lifehacking, I could imagine someone working covertly to make EAs more productive through solving their dating problems.
Invite the best thinkers who have heterodox views on diversity to attend “diversity in EA” events, in order to get a diverse perspective on diversity and stay aware of tradeoffs. Understand their views in enough depth to market diversity initiatives to the movement at large without getting written off.
When hiring a Diversity & Inclusion Officer, find someone who’s good at managing tradeoffs rather than the person who’s most passionate about the role.
Again, I appreciate the effort you put into this post, and I support you working towards these goals in a thoughtful way. Also, I welcome PMs from you or anyone else reading this comment—I spent several hours on it, but I’m sure there is stuff I could have put better and I’d love to get feedback.
It’s not unheard of, but it seems more common than it is because only the movements and initiatives which go too far merit headlines and attention. The average government agency, F500 company, or similar organization piles on all kinds of diversity policies without turning into the Nightmare on Social Justice Street.
The pattern I see is that “organizations” (such as government agencies or Fortune 500 companies) usually turn out OK, whereas “movements” or “communities” (e.g. the atheism movement, or the open source community) often turn out poorly.
Hm, that’s a good point. I can’t come up with a solid counterexample off the top of my head.
An explanation of what you mean by “turn out OK” would be helpful. For instance, do movements that err more towards social justice fare worse than those that err away from it (or than those that sit at the status quo)?
Whether that’s the case for the atheism movement or the open source community is a heavy question that merits more explanation.
Actually, I would think that any overshooting you see in these communities is a reaction to how status-quo (or worse) both of those communities are. Note, for instance, that when women are not collaborators on a project (but not when they are), their open-source contributions are more likely to be accepted than men’s when their gender is not known, yet less likely to be accepted than men’s when their gender is known.
The Atheism Plus split was pretty bad. They were a group that wanted all atheists to also be involved in social justice. Naturally, many weren’t happy with this takeover of the movement and pushed back. The Atheism Plus side argued that the pushback was due to misogyny etc., ignoring the fact that some people just wanted to be atheists and do atheist stuff without getting involved in politics. The end result: Atheism Plus was widely rejected, many social-justice-leaning atheists left the movement, atheism was widely defamed, and the remaining atheists were left not particularly open to social justice.
I don’t know very much about open source, but I’ve heard that there have been some pretty vicious/brutal political fights over codes of conduct, etc.
Came to say this as well.
See, for example:
https://www.reddit.com/r/atheism/comments/2ygiwh/so_why_did_atheism_plus_fail/
The atheists even started to disinvite their intellectual founders, e.g. Richard Dawkins. Will EA eventually go down the same path—will they end up disinviting e.g. Bostrom for not being a sufficiently zealous social justice advocate?
All I’m saying is that there is a precedent here. If SJW-flavored EA ends up going down this path, please don’t say you were not warned.
People nominally within EA have already called for us to disavow or not affiliate with Peter Singer so this seems less hypothetical than one might think.
‘Yvain’ gives a good description of a process along these lines within his comment here (which also contains lots of points which pre-emptively undermine claims within this post).
I entirely appreciate the concern of going too far. Let’s just be careful not to assume that risks only come with action—the opposite path is an awful one too, and with inaction we risk moving further down it.
Kelly, I don’t think the study you cite is good or compelling evidence of the conclusion you’re stating. See Scott’s comments on it for the reasons why.
(edited because the original link didn’t work)
Thanks, clarified.
Even after clarification, your sentence is misleading. The true thing you could say is “Among outsiders to projects, women are more likely to have their contributions accepted than men. Both men and women are less likely to have their contributions accepted when their genders are revealed; the effect was measured to be a percentage point different between the genders and may or may not be statistically significant. There are also major differences between the contribution patterns of men and women.”
As a side note, I find the way you’re using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you’ve presented isn’t very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study.
This is similar to an issue going on in another thread, where people feel you’re cherry-picking results rather than sampling randomly in a way that will paint an accurate picture. Perhaps this dialogue can help to explain the concerns that others have expressed:
Person One: Here are 5 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption.
Person Two: Actually, if you do a comprehensive survey of the literature, you’ll find 3 studies showing that coffee causes cancer, 17 showing no effect, and 3 showing that coffee prevents cancer. On balance there’s no stronger evidence that coffee causes cancer than that it prevents it, and in fact it probably has no effect.
Person One: Thanks for the correction! [Edits post to say: “Here are 3 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption.”]
Person Two: I mean… that’s technically true, but I don’t feel the problem is solved.
To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high-quality argument. I can understand how frustrating it might be to have people tell you to up your paper-scrutinizing game while you are busy trying to respond to an entire thread full of people expressing disagreement.
I dearly hope we never become one of those parts of the internet.
And I think we should fight against every slip down that terrible incentive gradient, for example by pointing out that the bottom of that gradient is a really terribly unproductive place, and by pushing back against steps down that doomy path.
Me too. However, I’m not entirely clear what incentive gradient you are referring to.
But I do see an incentive gradient which goes like this: Most people responding to threads like this do so in their spare time and run on intrinsic motivation. For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion. There’s a small population motivated the opposite way, but since people find it less intrinsically motivating to hang out in groups where their viewpoint is a minority, those people gradually drift off. The end result is a forum where papers that point to liberal conclusions get torn apart, and papers that point the other way get a pass.
As far as I can tell, essentially all online discussions of politicized topics fall prey to a failure mode akin to this, so it’s very much something to be aware of.
Full disclosure: I’m not much of a paper scrutinizer. And the way I’ve been behaving in this thread is the same way Kelly has been. For example, I linked to Bryan Caplan’s blog post covering a paper on ideological imbalance in social psychology. The original paper is 53 pages long. Did I read over the entire thing, carefully checking for flaws in the methodology? No, I didn’t.
I’m not even sure it would be useful for me to do that—the best scrutinizer is someone who feels motivated to disprove a paper’s conclusion, and this ideological imbalance paper very much flatters my preconceptions. But the point is that Kelly got called out and I didn’t.
I don’t know what a good solution to this problem looks like. (Maybe LW 2.0 will find one.) But an obvious solution is to extend special charity to anyone who’s an ideological minority, to try & forestall evaporative cooling effects. [Also could be a good way to fight ingroup biases etc.]
As a side note, I suspect we should re-allocate resources away from social psychology as a resolution for SJ debates, on the margin. It provides great opportunities for IQ signaling, but the flip side is that the investment necessary to develop a well-justified opinion is high—I don’t think social psych will end up solving the problem for the masses. I would like to see people brainstorm in a larger space of possible solutions.
The incentive gradient I was referring to goes from trying to actually figure out the truth to using arguments as weapons to win against opponents. You can totally use proxies for the truth if you have to (like an article being written by someone you’ve audited in the past, or someone who’s made sound predictions in the past). You can totally decide not to engage with an issue because it’s not worth the time.
But if you just shrug your shoulders and cite average social science reporting on a forum you care about, you are not justified in expecting good outcomes. This is the intellectual equivalent of catching the flu and then purposefully vomiting into the town water supply. People that do this are acting in a harmful manner, and they should be asked to cease and desist.
The best scrutinizer is someone who feels motivated to actually find the truth. This should be obvious.
Yet EAs are mostly liberal. The 2017 Survey had 309 EAs identifying as Left, 373 as Centre-Left, 31 as Centre-Right, and only 4 as Right (roughly 19 left-of-centre respondents for every right-of-centre one). My contention is that this is not about the conclusions being liberal. It’s about specific studies and analyses of studies being terrible. E.g. (and I hate that I have to say this) I lean very socially liberal on most issues. Yet I claim that the article Kelly cited is not good support for anyone’s beliefs. Because it is terrible, and does not track the truth. And we don’t need writings like that, regardless of whose conclusions they happen to support.
How does “this should be obvious” compare to average social science reporting on the epistemic hygiene scale?
Like, this is an empirical claim we could test: give people social psych papers that have known flaws, and see whether curiosity or disagreement with the paper’s conclusion predicts flaw discovery better. I don’t think the result of such an experiment is obvious.
Flaws aren’t the only things I want to discover when I scrutinize a paper. I also want to discover truths, if they exist, among other things.
[random] I find the survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline.
(see e.g. this and this).
I actually tend to observe the other effect in most intellectual spaces. Any liberal supporting result will get a free pass and be repeated over and over again, while any conservative leaning claim will be torn to shreds. Of course, you’ll see the opposite if you hang around the 50% of people who voted Trump, but not many of them are in the EA community.
Do you know of any spaces that don’t have the problem one way or the other?
I would say that EA/Less Wrong are better in that any controversial claim you make is likely to be torn to shreds.
I am disinclined to be sympathetic when someone’s problem is that they posted so many bad arguments all at once that they’re finding it hard to respond to all the objections.
Regarding the terrible incentive gradients mentioned by Claire above, I think discussion is more irenic if people resist, insofar as possible, imputing bad epistemic practices to particular people, and even try to avoid identifying the individual with the view or practice they take to be mistaken, even though that individual does in fact advocate it.
As a concrete example (far from the only one, and selected not because it is ‘particularly bad’, but rather because it comes from a particularly virtuous discussant), the passage up-thread seems to include object-level claims on the epistemic merits of a certain practice, but also implies an adverse judgement about the epistemic virtue of the person it is replying to:
The ‘you-locutions’ do the work of imputing, and so invite subsequent discussion about the epistemic virtue of the person being replied to (e.g. “Give them a break, this mistake is understandable given some other factors”/ “No, this is a black mark against them as a thinker, and the other factors are not adequate excuse”).
Although working out the epistemic virtue of others can be a topic with important practical applications (but see discussion by Askell and others above about ‘buzz talk’), the midst of a generally acrimonious discussion on a contentious topic is not the best venue. I think a better approach is a rewording that avoids the additional implications:
The rewording can take longer to write (though perhaps that reflects my limits as a writer rather than anything intrinsic to the rewording), but even if so, I expect the other benefits to outweigh that cost.
I’m referring to mob mentality, trigger-happy ostracization, and schisms. I don’t think erring towards/away from social justice is quite the right question, because in these failure cases, the distribution of support for social justice becomes a lot more bimodal.
Sounds plausible. That’s a big reason why I support thoughtful work on diversity: as a way to remove the motivation for less thoughtful work.
I can’t address all of this but will say three quick things:
I appreciate its weakness, but it’s at least some evidence against people’s intuitions, and in addition to the literature on how those intuitions are demonstrably false and discriminatory, it should update people away from those discriminatory beliefs.
[Edit: I appreciate that I should generally behave as though my community will behave well, and as such I should not have requested that people upvote even if I just asked them to “upvote if [they] find the post useful.” I want to be sure to flag in this response though the incredibly poor way in which people who disagree with claims and arguments in favor of diversity and inclusion are using their votes, in comments and on the whole post. It’s worth explicitly observing that identity-driven voting here is not equal among opposers and supporters, but seems clearly dominated by opposers.]
I appreciate your suggestions a lot, but caution you to be careful of your own assumptions. For instance, I never suggested that a Diversity & Inclusion Officer should be the person most passionate about the role instead of most smart about it.
To emphasize though, so it doesn’t get lost behind those critical thoughts: I thoroughly appreciate the suggestions you’ve contributed here.
[Edit: Apologies for some excessive editing. I readily acknowledge that in an already hostile environment, my initial reaction to criticism regarding an important issue that is causing a lot of harm is too defensive.]
Another idea I had: add questions to the EA Survey to understand how people feel about the issues you are describing. This accomplishes a few things:
It allows us to track progress more effectively than observing our demographic breakdown. Measuring how people feel about EA movement culture gives us a shorter feedback loop, since changes in demographics lag behind culture changes. Furthermore, by attempting to measure the climate issue directly, we can zero in on factors under our control.
It helps fight selection effects that occur in online discussion of these issues. People on both sides can be reluctant to share their thoughts & ideas in a thread like this one. Online discussions in general can be wildly unrepresentative. I was surprised to learn about polls which found that most Native Americans aren’t offended by the use of “Redskins” as a team name (criticism of this poll), and that a majority of black people are against affirmative action. And among the “anti-SJW” crowd, there’s a perception that some folks are going to see racism/sexism in everything, and they will never be satisfied. So taking a representative poll of EAs, and perhaps comparing the results to some baseline, can help us come to agreement on the degree to which we have issues.
I like this idea. The results will be skewed towards people who aren’t turned off by the culture, since those who are will have less interest in the survey and in many cases may never even be exposed to it, but getting more systematic info on people’s feelings here would be very useful.
Some more thoughts:
I mentioned my concern that pro-diversity efforts in EA might “spiral” towards a mob mentality. I think one way in which this might happen is if the people working towards diversity in EA recruit people from underrepresented groups that they know through other pro-diversity groups, which, as you mention, frequently suffer from a mob mentality. If the pool of underrepresented people we draw from is not selected this way (e.g. if the majority of black people who are joining EA are against affirmative action, as is true for the majority of the black population in general), then I’m less worried.
I think some of your suggestions are not entirely consistent. For example, you mention that EA should not “throw around the term “AI” with no qualification or explanation”. From my perspective, if I were hearing about EA for the first time and someone felt the need to explain what “AI” was an acronym for, I would feel condescended to. I imagine this effect might be especially acute if I were a member of a minority group (“How dumb do these people think I am?”) Similarly, you suggest that we cut our use of jargon. In practice, I think useful jargon is going to continue getting used no matter what. So the way this suggestion may be interpreted in practice is: don’t use jargon around people who are members of underrepresented groups. I think people from underrepresented groups will soon figure out they are being condescended to. I think a better idea is to remember that we were once ignorant about jargon ourselves, and make an effort to explain jargon to newbies. Hopefully they feel like members of the ingroup after they’ve mastered the lingo.
Relatedly, there is a question which I think sometimes gets tied up with the diversity question, but perhaps should not get tied up, which is the question of whether EA should aim more to be a committed, elite core vs a broad church. My impression is lots of people privately favor the committed, elite core approach. I think we can have both diversity and a committed, elite core: consider institutions such as Harvard which are both elite and diverse. Furthermore, I think being more public about our elitism might actually help with diversity, because we’d be making our standards clearer and more transparent, and we could rely less heavily on subjective first impressions. (CC Askell on “buzz talk”.) To put it another way: although “diversity” and “inclusion” are often treated as synonyms, it’s actually possible to be both “diverse” and “exclusive” (and this seems likely ideal).
A benefit of diversity you didn’t mention: Insofar as the EA movement has world peace and global cooperation as part of our goals, it’s useful to have people from as many different groups as possible. This is also useful if we want to be able to speak authoritatively on topics like how AI should be used for the benefit of humanity and whatnot.
Unjustified hunch here, but I think maybe another failure mode that can come up when a movement tries to increase diversity is that people who are underrepresented start to receive more attention. Even if this attention is positive (e.g. “How can we cater to people like you better?”), I think this can result in an increased level of self-consciousness. (See my previous point about how people who look different may feel self-conscious by default even if they’re not discriminated against.) Further unjustified conjecture: the sort of black person who supports affirmative action tends to enjoy the power they get from this, whereas the sort of black person who doesn’t support affirmative action doesn’t like it, thereby enhancing the “spiral” effect.
Another possible failure mode: Diversity advocates see something they don’t like (e.g. a person suggesting that women do not contribute to society and are leeches if they don’t offer men sex), and they want to root the problem out. In order to rally support, they let everyone know about the problem (like you did in this post). But by letting everyone know about the problem, they’ve also made it in to a bigger problem: now every woman who reads this post knows that someone, at one point in an EA-related discussion somewhere, made this outrageous claim—which results in those women feeling less welcome and more on edge. The toxic echo of this person’s post continues to reverberate as it is held up as part of a broader trend within EA, even though their post itself was long ago deleted. (This could contribute to the “spiral” effect I described, if the women who stick around after hearing about posts like these are disproportionately those that enjoy engaging in flame wars with people who make outrageous statements.)
I mentioned the EA Survey. One thing you could do is look at existing EA survey data and try to understand whether our issues with underrepresentation seem to be getting better or worse over the years. My impression is that the gender balance, at least, has gotten much better since EA was founded. In any case, if things are already on a good path, I’m more skeptical about major diversity initiatives—“if it ain’t broke, don’t fix it”.
Incidentally, I realized some of the points I’m making here are redundant with this essay which was already posted. (But I highly recommend reading it anyway, because it has some great points I hadn’t thought of.)
This can get very dangerous as it opens a door for trolls to negatively impact the community and potentially damage its reputation. Maybe these kinds of discussions need to be gated in some way, or be had offline or something.
Risk does come with greater publicity of such behavior, but that’s part of the point of making it more public (in addition to the information value for people who want to avoid or address it). This is the first I’ve ever publicly said something about these issues in EA, after three years of many private conversations that seem to have resulted in limited or no impact. Greater publicity means greater accountability and motivation for action, both for the people who behave poorly and the people who let them do so without consequence.
Out of curiosity, have you tried anything besides private conversations?
Since I’m already working on inclusionary practices myself, there’s not much else to do but private or public discussion.
The private discussions I have had explicitly around the issue have varied a lot in their content and purpose and can be characterized as any of the following or a combination thereof: Listening to people’s experiences; sharing my own; discussing solutions; actively (beyond just listening) supporting people who were treated poorly; sharing information and concern about the issue with people in a better or still good position to do something about it; trying to discuss why this or more specific issues of exclusion are a problem with people who prefer the status quo; or endeavoring to show people why something they did was a problem and what they should do differently.
Dealing with a bewilderingly amateur situation myself, and working to privately help the people responsible understand the problem and improve, took a month out of my life, a month with a really important counterfactual use. And that’s counting strictly the time spent on the issue that I don’t think I would have lost in e.g. the animal advocacy community, and not accounting for the emotional toll. I have good reason for (cautious) optimism that the effort was fruitful, but also a red flag restraining that optimism, and regardless, only time will tell.
Basically I’ve spent a huge amount of time on those private and often solution-oriented conversations and have been hanging over the precipice of burnout with the community since day 1 several years ago. (The broader community at least, not the animal advocacy sub/intersected-community. And disclaimer that there are great individuals throughout the broader community who are my friends and/or whose presence in the community I am so happy for, etc.) And I’m definitely not alone in that.
I can do more to have private conversations with people in better positions than myself to make change here (such as people who are looked up to in the community by the people whose behavior could be more inclusionary, or donors to EA orgs), and I might if this post and the discussion here doesn’t inspire other people to take more action on this issue, which is my hope.
Thanks.
I’m also finding the voting in this thread frustrating.
Sorry about that.
Glad to hear it :)
I’m an excessive editor too, I’m not sure it’s something you need to apologize for :)
xccf, I’d be interested to hear examples of comments which you think were excessively downvoted.
If I recall correctly, this comment was at −2 when I first saw it, which frustrated me because I think people who publicly admit mistakes should get upvotes. Publicly admitting mistakes is really hard to do. I think we should take a moment to give people credit for this before demanding that they confess their sins even more thoroughly.
I don’t think it is, at all, any more than Daryl Bem’s research updates me towards thinking ESP is real. Like, who knows, the world is a crazy place, maybe the papers here are in the 36% of published psychology papers which hold up under replication. But I don’t think that it makes sense to update against your beliefs about this stuff based on the published science—if you think that the scientists would have published these papers regardless of their truth, as I do, you shouldn’t regard them as evidence.
I think you’re overstating your case.
This strikes me as a misunderstanding of how Bayesian updates work. The reason you still don’t believe in ESP is because your prior for ESP is very low. But I think hearing about Bem’s research should still cause you to update your estimate in favor of ESP a tiny amount. In a world with ESP, Bem finds it easier to discover ESP effects.
I don’t think social psychologists are that dishonest. Even 36% replicability suggests some relationship between paper-publishing and truth.
Furthermore, I think the fact that social psychologists are so liberal should cause some update in the direction that studying humans causes you to realize liberal views about human nature are correct.
I think you slightly misunderstand me. What I’m saying is that Bem’s work isn’t really a Bayesian update for me, because I think Bem is approximately as likely to publish papers in the world where (extremely weak) ESP works as the worlds where it doesn’t. The strength of my prior doesn’t feel relevant to me.
I think you’re right that I slightly overstated my case.
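To make the crux here precise, it helps to write Bayes’ rule in odds form (a sketch, with “pub” as shorthand for “Bem publishes these papers”):

\[
\underbrace{\frac{P(\text{ESP} \mid \text{pub})}{P(\neg\text{ESP} \mid \text{pub})}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(\text{pub} \mid \text{ESP})}{P(\text{pub} \mid \neg\text{ESP})}}_{\text{likelihood ratio}}
\;\times\;
\underbrace{\frac{P(\text{ESP})}{P(\neg\text{ESP})}}_{\text{prior odds}}
\]

If the likelihood ratio is exactly 1 (Bem publishes equally readily whether or not ESP exists), the posterior odds equal the prior odds no matter how strong the prior is; if it is even slightly above 1, a correspondingly tiny update towards ESP follows. The disagreement above is over which of these describes the situation.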
Christine Peterson’s life partner discussion is around 1:17:20 at the above link^^
It’s part of a broader discussion about supporting yourself while being altruistic over the long haul (starts around 1:15:00).
As a general note for the discussion: Given the current incentive landscape in the parts of society most EAs are part of, I expect opposition to this post to be strongly underrepresented in the comment section.
As a datapoint, I have many disagreements with this article, but based on negative experiences with similar discussions, I do not want to participate in a longer discussion around it. I don’t think there is an easy fix for this, but it seems reasonable for people reading the comments to be aware that they might be getting a very selective set of opinions.
So as a general principle, it’s true that discussion of an issue filters out (underrepresents) people who find or have found the discussion itself unpleasant*. In this particular case I think that somewhat cuts both ways, since these discussions as they take place in wider society often aren’t very pleasant in general, for either side. See this comic.
To put it more plainly, I could easily name a lot of people who will strongly agree with this post but won’t comment for fear of criticism and/or backlash. Like you I don’t think there is an easy fix for this.
*Ironically, this is part of what Kelly is driving at when she says that championing free speech can sometimes inhibit it.
I would agree that the comments will likely be from a small subset of real opinions, because this topic can be quite emotionally charged. From a look at the comments landscape right now (in particular, the number of posts that seem to question the existence of sexism), I think it’s plausible that a woman who had experienced sexism in EA would not be incentivized to comment.
An example of a particular practice that I think might look kind of innocuous but can be quite harmful to women and minorities in EA is what I’m going to call “buzz talk”. Buzz talk involves making highly subjective assessments of people’s abilities, putting a lot of weight on those assessments, and communicating them to others in the community. Buzz talk can be very powerful, but the beneficiaries of buzz seem to disproportionately be those who conform to a stereotype of brilliance: a white, upper-class male might be “the next big thing” when his black, working-class female counterpart wouldn’t even be noticed. These are the sorts of small, unintentional behaviors that I think it can be good for people to try to be conscious of.
I also think it’s really unfortunate that there’s such a large schism between those involved in the social justice movement and people who largely disagree with this movement (think: SJWs and anti-SJWs). The EA community attracts people from both of these groups, and I think it can cause people to see this whole issue through the lens of whatever group they identify with. It might be helpful if people tried to drop this identity baggage when discussing diversity issues in EA.
I strongly agree. Put another way, I suspect we, as a community, are bad at assessing talent. If true, that manifests as both a diversity problem and a suboptimal distribution of talent, but the latter might not be as visible to us.
My guess re the mechanism: Because we don’t have formal credentials that reflect relevant ability, we rely heavily on reputation and intuition. Both sources of evidence allow lots of biases to creep in.
My advice would be:
When assessing someone’s talent, focus on the content of what they’re saying/writing, not the general feeling you get from them.
When discussing how talented someone is, always explain the basis of your view (e.g., I read a paper they wrote; or Bob told me).
How do we know that we are not bad at assessing talent in the opposite direction?
Maybe voters on the EA forum should be blinded to the author of a post until they’ve voted!
Variant on this idea: I’d encourage a high status person and a low status person, both of whom regularly post on the EA Forum, to trade accounts for a period of time and see how that impacts their likes/dislikes.
Variant on that idea: No one should actually do this, but several people should talk about it, thereby making everyone paranoid about whether they’re a part of a social experiment (and of course the response of the paranoid person would be to actually vote based on the content of the article).
Problem is that the participants would not be blinded, so they would post differently. People act to play the role that society gives them.
I appreciate this comment for being specific!
I don’t understand what you mean by that; could you clarify?
So I think that if you identify with or against some group (e.g. ‘anti-SJWs’), then anything people say that pattern-matches to something that group would say triggers a reflexive negative reaction. This manifests in various ways: you’re inclined to attribute way more to the person’s statements than what they’re actually saying, or you set an overly demanding bar for them to “prove” that what they’re saying is correct. And I think all of that is pretty bad for discourse.
I also suspect that if we take a detached attitude towards this sort of thing, disagreements about things like how much of a diversity problem EA has or what is causing it would be much less prominent than they currently are. These disagreements only affect benefits we expect to directly accrue from trying to improve things, but the costs of doing these things are usually pretty low and the information value of experimenting with them is really high. So I don’t really see many plausible views in this area that would make it rational to take a strong stance against a lot of the easier things that people could try that might increase the number of women and minorities that get involved with EA.
Agreed. I’m not sure how we escape from that trap, except by avoiding loaded terms, even at the expense of brevity.
This used to be me… It wasn’t so much my beliefs that changed (I’m not a leftist/feminist/etc). It was more a change in attitude, related to why I rejected ultra-strict interpretations of utilitarianism. Not becoming more agreeable or less opinionated… just not feeling like I was on a life-or-death mission. Anyway, happy to discuss these things privately, including with people who are still on the anti-SJW mission.
I think that your link to Georgia Ray’s piece should make it clearer that her conclusion is that diversity does not have a clear overall positive or negative effect on team performance.
Your link implies that Georgia’s post is overall positive on the effect of diversity on the performance of teams or groups, which I think is incorrect.
Georgia here—The direct context, “Research also shows that diverse teams are more creative, more innovative, better at problem-solving, and better at decision-making,” is true based on what I found.
What I found also seemed pretty clear that diversity doesn’t, overall, have a positive or negative effect on performance. Discussing that seems important if you’re trying to argue that it’ll yield better results, unless you have reason to think that EA is an exception.
(E.g., it seems possible that business teams aren’t a good comparison for local groups or nonprofits, or that most teams in an EA context do more research/creative/problem-solving type work than business teams, so the implication “diversity is likely to help your EA team” would be possibly valid—but whatever premise that’s based on would need to be justified.)
That said, obviously there are reasons to want diversity other than its effect on team performance, and I generally quite liked this article.
As a relevant piece of data:
I looked into the 4 sources you cite in your article as evidence that diverse teams are more effective, and found the following:
One didn’t replicate, and the replication, which you link to in your article, found the opposite effect with a much larger sample size.
One is a Forbes article that cites a variety of articles; I looked into two of these, and neither said what the Forbes article claimed they say, with the articles usually reporting “we found no significant effects”.
One study you cited directly found the opposite result of what you seemed to imply it does, with its results table looking like this:
https://imgur.com/a/dRms0
And the results section of the study explicitly saying:
“whereas background diversity displayed a small negative, yet nonsignificant, relationship with innovation (−.133).”
(the thing that did have a positive relation was “job-related diversity” which is very much not the kind of diversity the top-level article is talking about)
The only study you cited that did seem to find some positive effects was one with the following results table:
https://imgur.com/a/tgS6q
Which found some effects on innovation, though overall it found very mixed effects of diversity, with its conclusion stating:
“Based on the results of a series of meta-analyses, we conclude that cultural diversity in teams can be both an asset and a liability. Whether the process losses associated with cultural diversity can be minimized and the process gains be realized will ultimately depend on the team’s ability to manage the process in an effective manner, as well as on the context within which the team operates.”
I find this troubling. If a small sample of the evidence cited has been misreported or is weak, this seems to cast serious doubt on the evidence cited in the rest of the piece. Also, my prior is that pointing to lots of politically amenable social psychology research is a big red flag.
So your research suggests that it improves creativity, innovation, problem solving and decision making, but not performance. That is a rather unexpected result. Do you have any thoughts on why this did not result in an improvement in total performance?
I didn’t mean to imply that — I just cited it as a source for the specific claims in that sentence. The other evidence I cite seems to imply it overall, and she doesn’t seem to account for all of that evidence.
I can’t tag here, but Georgia, if you see this, I’d be curious for your opinion on how the totality of the evidence weighs up, particularly in expectation, regardless of how robust it is.
It feels like a bad practice to take a post which concludes that the effects are mixed or small, then just cite the effects in that post which seem positive and not mention the ones that seem negative or that the post overall disagrees with what you’re trying to use it to argue for.
That doesn’t seem like what I’m doing. Georgia doesn’t seem to be disagreeing with my post’s overall argument (that EA would benefit from diversity; she actually seems to explicitly agree with that in her last paragraph), and she doesn’t explicitly agree or disagree with the argument of that specific paragraph (that diversity tends to be net beneficial for groups). The quote you cite is about a “clear” effect on groups, from the evidence she evaluates, and I might not have the same bar for robustness that she’s thinking of with that claim.
Moreover, her post argues that diversity has a mix of positive and negative effects on team performance, and goes on to explore these effects. The negative ones seem related to something like tribalism (e.g. less identification with the group), and I hope the EA community is able to overcome these avoidable downsides so it can on net benefit from diversity. I didn’t mention them in the post because I think we can overcome them, given our desire to de-bias ourselves and given the tools that Georgia mentions we have for overcoming them.
I linked to her whole post so readers could see all of that. Linking directly to the citations I was pointing to in her post would have felt like cherry-picking. I could have given more explanation of her whole post in my own, and if I had spent more time writing this post, I probably would have done that.
[Edit: Georgia made a comment above that suggests she believes the statement without the robustness qualification, so we do have disagreement here.]
To speak to the section about EA orgs hiring a diversity & inclusion officer:
That’s essentially my role at CEA as Community Liaison, with help from other staff. Some of my work is focused on helping CEA work well for lots of kinds of people, both internally as a workplace and externally in our events and projects for the community.
I also try to be a resource on these topics for other EA orgs, groups, and individuals. I’m very happy to be contacted (julia.wise@centreforeffectivealtruism.org) about anything in this area where I might be able to give information or advice. Some examples of things we’ve helped with:
How to run a “Living on Less” campaign in a way that’s respectful of people actually living in poverty
How to help a group member who has just experienced a mental health crisis
Designing policies for Facebook groups that balance competing needs/wishes from different group members
Serving as a contact point for people who have experienced abuse or harassment within the community, either simply to provide support or to also take next steps if the person wishes
Promoting pro-social norms through measures like the Guiding Principles of EA.
Edited to add: someone pointed out that I didn’t mention confidentiality. I will keep anything you tell me as confidential as you want it to be kept.
Julia,
I appreciate the work you’ve done and continue to do on community-building. It seems, though, that there is a lot more productive work that can be done than can be achieved by one part-time role, and that there are angles we’re not addressing.
For instance, we could bring in someone who can advise on all forms of communication from job postings to website UX to social media content and strategy; assist with speaker recruitment and selection and provide feedback for presentations at conferences; and conduct reviews of inclusionary performance in organizations’ hiring and management practices, in outreach efforts, and in local communities’ practices.
Many other commenters have already pointed out the problems with other pieces of evidence cited in the post, but I thought it was worth noting that this study also failed to replicate.
Thanks for taking the time to post this result four years later!
I believe that Toby Ord has talked about how, in the early days of EA, he had thought that it would be really easy to take people who are already altruistic and encourage them to be more concerned about effectiveness, but hard to take effectiveness minded people and convince them to do significant altruistic things. However, once he actually started talking to people, he found the opposite to be the case.
You mention “playing the long game” – are you suggesting that the “E first, A second” people are easier to get on board in the short run, but less dedicated and therefore in the long run “A first, E second” folks are more valuable? Or are you saying that my (possibly misremembered) quote from Toby is wrong entirely?
I only hold this view weakly, but yes, I’m worried that, as you put it, “E first, A second” people are less likely to stick around.
I don’t think “A first, E second” people are necessarily easier to get in the first place though, as they are more likely to already have a calling (and so to have less personally to gain) and to be committed to other altruistic pursuits that are hard for them to drop as “ineffective.”
That said, I’ve seen significant movement among heavily committed farmed animal advocates towards thinking more about and acting in the interest of maximizing impact… though farmed animal advocates are often already doing that advocacy because they’re already thinking about effectiveness: they see the issue as massively important and very tractable. So I suppose realistically I’m putting most of my investments in people who are A first, but still clearly already E.
From what I’ve heard, most of the people who are A first are already involved in causes. Now, unfortunately, there is a sense in which EA is unavoidably threatening, as the logical implication is often that the work they have done is less impactful than it could have been, and that their current work or the things they are working towards are less effective than they could be. We can phrase things as nicely as we want, and talk about how you can do EA plus other things, how all charity work is valuable even if it isn’t EA, how there are valuable causes we haven’t discovered yet, etc., but at the end of the day this is still the logical implication, and no matter what we do, it will make people uncomfortable. This effect is especially bad since, if everyone adopted EA, it is likely certain organisations would cease to exist.
Further, because we unavoidably threaten current power structures within charity, many people there have written incredibly unfair articles criticising EA and misrepresenting us (there has been valid criticism too, but this is a minority). This makes recruiting A people even harder.
I think this is a big deal, unfortunately. I try to talk about EA very carefully when talking to people who’re “A first”, but people can sense any implicit criticism a mile off. It’s really hard to avoid some variant of “So you think I’ve been wasting my time, then?”
Strangely, “E first” people may be easier to reach because they’re less likely to be already invested in something.
My gut reaction is that most of the people who have stuck around are “E first”, but I think there’s probably a higher base rate of those amongst early adopters, so hard to say.
It seems like we could gather some data on this, though. It’s a vague question, but I suspect most people would be able to answer some variant of “Were you E first or A first? E/A/Other”. Then we could see if that had any relationship to tenure in the community, or anything else. Perhaps an item for the next Effective Altruism survey?
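A minimal sketch of the analysis such a question would enable, assuming a hypothetical CSV export of survey responses with made-up column names first_orientation (“E”/“A”/“Other”) and years_in_community:

```python
# Sketch: does self-reported "E first" vs "A first" relate to tenure in the community?
# The file name and column names are hypothetical placeholders for real survey data.
import pandas as pd
from scipy import stats

df = pd.read_csv("ea_survey.csv")

# Summarize tenure by group.
print(df.groupby("first_orientation")["years_in_community"].describe())

# Compare the two groups of interest with a simple two-sample test.
e_first = df.loc[df["first_orientation"] == "E", "years_in_community"].dropna()
a_first = df.loc[df["first_orientation"] == "A", "years_in_community"].dropna()
result = stats.ttest_ind(e_first, a_first, equal_var=False)  # Welch's t-test
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

# Caveat: this only samples people still around to take the survey, so it can't
# detect "E first" (or "A first") people who drifted away entirely.
```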
Unfortunately, since the respondents would be members of the EA community, it would be hard to control that data for cultural fit in order to get at how robustly EA people from each group are. People have stuck around in the community for reasons other than how EA they are or can be, as I hope I’ve shed some light on.
Katja Grace gives a related [edited—said “the same”—see Katja’s comment below] argument here:
https://meteuphoric.wordpress.com/2013/07/09/effectiveness-or-altruism/
“When I was younger, I thought altruism was about the most promising way to make the world better. There were extremely cheap figures around for the cost to save a human life, and people seemed to not care. So prima facie it seemed that the highly effective giving opportunities were well worked out, and the main problem was that people tended to give $2 to such causes occasionally, rather than giving every spare cent they had, that wasn’t already earmarked for something more valuable than human lives.
These days I am much more optimistic about improving effectiveness than altruism, and not just because I’m less naive about cost-effectiveness estimates.”
She goes on to list several reasons, including greater past success and greater neglect.
It seems worth distinguishing ‘effectiveness’ in the sense of personal competence (as I guess is meant in the first case, e.g. ‘reasonably sharp’) and ‘effectiveness’ in the sense of trying to choose interventions by cost-effectiveness.
Also remember that selecting people to encourage in particular directions is a subset of selecting interventions. It may be that ‘E not A’ people are more likely to be helpful than ‘A not E’ people, but that chasing either group is less helpful than doing research on E that is helpful for whichever people already care about it. I think I have stronger feelings about E-improving interventions overall being good than about which people are more promising allies.
Easy money: https://userstyles.org/styles/150270/effective-altruism-form-anti-kibitzer
I’d tell you to keep it or donate it, but I want to encourage the norm that such offers represent a real cost, so I hereby commit to use this money entirely on hedonistic pleasures.
While I agree with a lot of this, it’s worth pointing out that the claims about gender diversity increasing ‘collective intelligence’ are controversial within psychology. For example, see this paper:
“We examined group-IQ in three independent studies.
• Gender balance and turn-taking were unrelated to group performance.
• Social sensitivity had no impact on latent group-IQ.
• Individual IQ emerged as the cause of group-IQ.
• Group-IQ almost exclusively reflects individual cognition.
What allows groups to behave intelligently? One suggestion is that groups exhibit a collective intelligence accounted for by number of women in the group, turn-taking and emotional empathizing, with group-IQ being only weakly-linked to individual IQ (Woolley, Chabris, Pentland, Hashmi, & Malone, 2010). Here we report tests of this model across three studies with 312 people. Contrary to prediction, individual IQ accounted for around 80% of group-IQ differences. Hypotheses that group-IQ increases with number of women in the group and with turn-taking were not supported. Reading the mind in the eyes (RME) performance was associated with individual IQ, and, in one study, with group-IQ factor scores. However, a well-fitting structural model combining data from studies 2 and 3 indicated that RME exerted no influence on the group-IQ latent factor (instead having a modest impact on a single group test). The experiments instead showed that higher individual IQ enhances group performance such that individual IQ determined 100% of latent group-IQ. Implications for future work on group-based achievement are examined.”
http://www.sciencedirect.com/science/article/pii/S0160289616303282
There’s also some evidence that a more critical style stimulates higher levels of creativity and improves the quality of ideas:
“Nemeth’s studies suggest that the ineffectiveness of brainstorming stems from the very thing that Osborn thought was most important. As Nemeth puts it, “While the instruction ‘Do not criticize’ is often cited as the important instruction in brainstorming, this appears to be a counterproductive strategy. Our findings show that debate and criticism do not inhibit ideas but, rather, stimulate them relative to every other condition.” Osborn thought that imagination is inhibited by the merest hint of criticism, but Nemeth’s work and a number of other studies have demonstrated that it can thrive on conflict.
According to Nemeth, dissent stimulates new ideas because it encourages us to engage more fully with the work of others and to reassess our viewpoints. “There’s this Pollyannaish notion that the most important thing to do when working together is stay positive and get along, to not hurt anyone’s feelings,” she says. “Well, that’s just wrong. Maybe debate is going to be less pleasant, but it will always be more productive. True creativity requires some trade-offs.””
https://www.newyorker.com/magazine/2012/01/30/groupthink
For obvious reasons any criticism should be done as politely as possible, and must remain focussed on improving ideas rather than attacking people.
Thanks! I added a note about the debate.
I’m not sure what your comments about critical discussion style are referring to in the post.
One defining feature of the #metoo movement has been its exposing of powerful men, often in leadership positions and protected by influential friends, who sexually harassed women.
As you know, Jacy Reese (previously known as Jacy Anthis), recently admitted to harassing women in the EA movement. While exactly what he did is not public, it was apparently severe enough that CEA has found it necessary to ban him from all CEA-associated events. His undergraduate college, Brown University, expelled him in 2012 after similar accusations, though he denied them at the time.
https://forum.effectivealtruism.org/posts/8XdAvioKZjAnzbogf/apology
In light of the above, does the Sentience Institute agree that CEA’s actions were reasonable to protect women in the EA movement?
Secondly, could you explain how the Sentience Institute is handling the issue? To my knowledge Jacy is currently Head of Research, and many organizations instinctively protect their leaders. Additionally, Jacy is in a relationship with you, the Executive Director, which seems like a conflict of interest. How are you reassuring people that he will not receive more lenient treatment as a result?
You report EA as being 70% male. How unusual is that for a skew? One comparison point for this, for which data is readily available, is the readerships of websites that are open to read (no entry criteria, no member fees). Looking at the distribution of such websites, 70% seems to be at the relatively low end of skews. For instance, Politico and The Hill, politics news sites, see 70-75% male audiences (https://www.quantcast.com/politico.com#demographicsCard and https://www.quantcast.com/thehill.com#demographicsCard) whereas nbc.com, a mainstream TV, entertainment, and celebrity site, sees a 70% female audience: https://www.quantcast.com/nbc.com#demographicsCard
(I’m not trying to pick anything too extreme, I’m picking things pretty close to the middle. A lot of topics have far more extreme skews, like programming, hardcore gaming, fashion, see https://www.wikihow.com/Understand-Your-Website-Audience-Profile#Understanding_the_gender_composition_and_index_of_your_website_sub for more details on how the gender skew of websites differs based on the topic).
Based on this, and similar data I’ve seen, a 70% skew in either gender direction feels pretty unremarkable to me in the context of today’s broader society and the domain-specific skews that are common across both mainstream and niche domains. I expect something similar to be true for race/ethnicity based on the Quantcast and similar data but I haven’t obtained that much familiarity with the numbers or their reliability.
Obligatory SlateStarCodex post for the graphs: http://slatestarcodex.com/2017/08/07/contra-grant-on-exaggerated-differences/
“We can relax the Permanent State Of Emergency around too few women in tech, and admit that women have the right to go into whatever field they want, and that if they want to go off and be 80% of veterinarians and 74% of forensic scientists, those careers seem good too.”
I take your point that skews can happen, but it seems a bit suspicious to me that desire to be effective and altruistic should be so heavily skewed towards white dudes.
Edit: I previously said “straight white dudes” but removed the “straight”. See below.
This reminds me of a pattern I see in social justice movements, which goes something like this: We are observing some kind of gender or race-based disparity, with a variety of different hypotheses for why it might be occurring. Some people think discrimination is the most likely hypothesis. Other people have other hypotheses. The people who think discrimination is the most likely hypothesis see the people suggesting other hypotheses and loudly decry those people as discriminatory. Those people get quieter. The gender or race-based disparity persists. The only hypothesis that anyone is allowed to talk about is the discrimination one. So it’s more clear than ever that discrimination is the only possible explanation. Given this clarity, the people pushing the discrimination hypothesis have the mandate to decry milder and milder instances of discrimination. Eventually, the community undergoes a schism over the issue of whether to be hypersensitive to mild instances of discrimination or not.
The Google memo Kelly references is a good case study. Kelly implies that the author is an “outright asshole”. I assume she makes this judgement solely based on the author’s willingness to explore hypotheses besides the discrimination one—in terms of communication style, it’s clear that the author takes pains to be as civil as possible.
The question for me is: How long do we need to test out the discrimination hypothesis before it’s disconfirmed? If it’s been 5 years since anyone talked about any hypothesis besides the discrimination one, and the disparity still persists, are we allowed to consider the possibility that the discrimination hypothesis is incorrect? What if it’s been 10 years?
Realistically, if there’s a disparity, there’s probably a combination of several things going on. So, how can we capture the low-hanging fruit from fighting discrimination without putting ourselves on a path towards a schism?
The histories of many forms of prejudice are histories of biological essentialism and biological determinism. Even if such claims are now made out of a “willingness to explore” alternative hypotheses, despite that long history amounting precisely to an unwillingness to explore the much newer hypothesis of prejudice, they tend to be over-simplistic, as in the memo, and tend to have the effect—if not also the intention—of dismissing that newer hypothesis of prejudice, which is robustly supported by data that the memo’s author fails to include.
That’s not to say it’s a black and white matter of total biological similarity or total culturally-imposed disparities and prejudice. That’s what the author of the memo implies, and I disagree. The evidence that prejudice is a major problem that is holding people back is substantial nonetheless.
Some of his suggestions for ways to reduce the gender gap are worth considering, and on a charitable reading he’s not exceptionally prejudiced and is able to analyze the information that has found its way to him, but is just very poorly informed and unwilling to explore the alternative explanation of prejudice. Even at its most charitable, this reading still leaves him enabling that prejudice.
Given the extent of my knowledge, which is just the words in the memo, I can agree he’s not an outright asshole, and I should have phrased my side note about this as an example of heavy-handed zero tolerance differently. It may even be a poor example, as I would say corrective action should have been taken in his case before he was fired, if it wasn’t; I don’t know either way whether it was.
Thanks for the reply, Kelly, and I’m sorry you’re getting downvoted. I really appreciate your willingness to be charitable and admit your mistakes, and I will strive to emulate your example.
Hm, that’s not how I read it. For example, in the first sentence, he says he doesn’t deny that sexism exists. Later, he writes: “Of course, men and women experience bias, tech, and the workplace differently and we should be cognizant of this...” My interpretation is that Google already has a ton of discussion of the impact of sexism, bias, etc. and Damore wanted to fill in the other side of the story, so he didn’t bother to repeat stuff that everyone already agrees on. Maybe that was a mistake in retrospect.
I agree that that qualification suggests his view on the contribution of biology to the gender gap is weaker than his otherwise definitive framings suggest. [Edit: Sentence here removed because I’m too tired and my thoughts are not in order, will get sleep before responding to any more comments. Replacement: He’s still presenting it as a black-and-white issue if he’s only presenting one side.]
Google may have had that conversation on prejudice going, but he is very oversimplistic and presents the essentialist view as so definitive that his solutions are the right ones, that Google is the “biased” party for talking about prejudice, and that it isn’t worth even mentioning that evidence demonstrating a bias against women exists (if he even knows or believes that). This is to say nothing of the fact that the evidence for the real-world effect of prejudice is far more vast and robust than his evidence for biological causes. And he does all this when the essentialist view has long been dominant, and when people are only talking so much about prejudice because they’re trying to overcome the essentialist thinking that so inhibits people. (Sure, there are differences, but there are even more misconceptions, as well as oversimplistic and deterministic assumptions about what real differences mean.)
[Edit for clarification and additional analysis: In a context of prejudice, presenting stereotypes is a delicate matter even if you think them sufficiently biologically valid and are content to make simplistic inferences about their real-world effects. Doing so without acknowledgement of the prejudices people experience which line up with these stereotypes and which harm them serves to reinforce those stereotypes and prejudices.]
So it’s not an appropriate way to contribute to the conversation—at best it’s reacting to perceived overshooting by retreating to a flawed status quo.
Can I suggest that the Damore issue be parked? Even though it is currently producing a high quality, civil conversation, I worry that talking about such a highly polarised topic is somewhat risky as you never know who might join the thread.
I think there’s a bit of an empathy gap in this community. When people are angry for what seems to be no reason, a good first step is to ask whether you’ve done something that made them feel unsafe/humiliated/demeaned/etc, even if that wasn’t your intention. It doesn’t take a lot of imagination to see how unsolicited exploration of “other hypotheses” (cough cough) for racial and gender disparities could be very distressing for the people who are being discussed as if they’re not there.
I actually think we should discuss other hypotheses.
Firstly, “other hypotheses” includes all kinds of inoffensive explanations like the primary cause of a difference being:
Broader society has instilled certain social norms in people, as opposed to it being anything specific about this group
Founder effects—a guy gets a few of his mates to start the group, they rope in their mates, etc.
That the message happens to resonate with groups of people that are currently disproportionately one gender (e.g. programmers)
But going further than this, I don’t think we should limit discussion of different intrinsic preferences either, especially if someone makes an argument that is dependent on this being false.
I think I’ve noticed a pattern where basically any hypothesis that’s not the discrimination hypothesis gradually leaves the Overton window.
Where do we draw the line? Are intrinsic abilities an acceptable topic of casual discussion? Do you think it would be humiliating for people who are being discussed as having less intrinsic ability?
I think it depends on the particular space. The rationality community should aim to have everything open to discussion, because that is its purpose. The EA community should minimise these discussions, in that they are rarely necessary and quite often a distraction. In most groups I’ve been in, social norms can prevent the need for formal rules, though.
Oh, I totally agree, and I don’t think we should explore them. [I edited my comment in an attempt to clarify this.]
But you don’t want discrimination hypotheses to be discussed either? I guess that could be an acceptable compromise, to not debate the causes of disparities but at the same time focus on improving diversity in recruitment.
Yeah. I’m also in favor of trying to grab low-hanging fruit from addressing discrimination, as long as we don’t get overzealous. But in terms of trying to make our demographics completely representative… there are already a lot of groups trying and failing to do that, sometimes in a way that crashes & burns spectacularly, so I would rather hang back and wait for a model that seems workable/reliable before aiming that high.
FWIW, I’m sympathetic to the google guy. However, it’s not clear to me this case is the same. It might be, but I’d want someone to give me a series of reasons, backed by evidence, before we conclude “oh, it turns out affluent white males are just a lot more moral than everyone else and there’s nothing to explain here”.
“oh, it turns out affluent white males are just a lot more moral than everyone else and there’s nothing to explain here”
Do you think it is possible that EA could be majority white, affluent, and male because programmers, philosophers, mathematicians, etc. are disproportionately white, affluent, and male, and EA has become good at recruiting these specific audiences?
I think that’s a huge part of the reason why we overrepresent the demographics we do. But offloading responsibility onto the part of the pipeline below us isn’t sufficient, least of all when we can source from other pipelines.
Interesting. Hadn’t put these together in my mind. Could well be something here.
I don’t actually believe that affluent white males are a lot more moral than everyone else, but anyway, let’s put aside the question of whether such evidence exists for a moment and ask: if such evidence did exist, would it be sensible for us to discuss it? My answer is no. I would rather take a compromise position of addressing clear cases of discrimination, being mildly worried about mild cases, and letting sleeping dogs lie.
The difficulty for movements against discrimination (between humans) in a lot of modern society lies in that definition of what constitutes “clear” discrimination. For instance, people don’t say explicitly discriminatory things as much as they used to, but they still hold discriminatory beliefs that make them e.g. mistrust, discredit and undervalue others, and we can for the most part only assess e.g. hiring bias by looking at whole samples, not at any one individual.
I don’t think we should police thoughts, only actions.
We don’t make it a crime to fantasize about killing someone—you only become a criminal when you act on those thoughts. This illustrates a useful and widely applied principle of our legal system. The willingness of some diversity advocates to disregard this principle is a good example of diversity advocates getting overzealous about diversity and sacrificing other values, as I complain about in this comment.
Furthermore, I don’t think condemning people for having beliefs we don’t want is an effective way to change those beliefs—a variety of research seems to indicate this doesn’t work (though I generally don’t put much stock in social psychology research, which includes the research behind those links, and I’m also not a good paper scrutinizer).
The problem is that those thoughts, as I noted, become actions, just actions we can usually only see as systematic trends. Just because someone does not say “women are incompetent” does not mean they aren’t underestimating women’s competence and e.g. hiring them less than they should. Taking action on this just requires a more systematic approach than explicit discrimination does.
I agree that in terms of what works, just pointing out bias doesn’t seem to help and can even backfire, as I mentioned, which is why I provided a list of other possible solutions.
The flip side of it being hard to discern whether people have bad thoughts and act biasedly, except by drawing inferences from broader patterns, is that it’s also hard to discern from those broader patterns whether people actually do have bad thoughts and acted biasedly. (Cf. the many fields where women dominate men in terms of prevalence and performance, as well as EA’s many other demographic skews which don’t receive the same treatment, e.g. a 14:1 left:right political skew and a 4:1 skew of people aged 20-35 relative to everyone over 35.)
“I assume she makes this judgement solely based on the author’s willingness to explore hypotheses besides the discrimination one”—This seems like a very uncharitable assumption to make. I can easily think of multiple other reasons why she might consider him an asshole.
What evidence causes you to think that heterosexuality is overrepresented in EA? That seems backwards to me.
I believe sexuality is a demographic we do well on.
We also have way more trans women than society at large.
I think there are some varied skews here. It seems that we do well on representation of trans people generally and queer women relative to the total number of women, but not on queer men relative to the number of men. I think there are probably more political queer men (rejection of gay/straight binary sort of thing) than in most communities, but not many men who regularly sleep or seek to sleep with men. I know I was in the community for years before meeting one.
So yes, I think the skew is toward straight, white dudes, and I’ll say I do find the machismo off-putting even as a fairly straight-passing gay man.
Hmm. Maybe EA is more inclusively represented on the sexual dimension. I hadn’t really noticed this either way, and more typed this out of habit. I stick by there being an oddly high number of white dudes, though.
If you aren’t going to defend the claim made in the original comment, I would suggest that it would be good practice to edit the word “straight” out of the comment. There are a lot of Cached Thoughts on both sides of the debate and I would like to encourage people to break out of them.
Okay, point taken.
“I take your point that skews can happen, but it seems a bit suspicious to me that desire to be effective and altruistic should be so heavily skewed towards straight, white dudes.”
(1) Where did “straight” come into this picture? The author says that EAs are well-represented on sexual diversity (and maybe even overrepresented on some fairly atypical sexual orientations), and my comment (and the data I used) had nothing to say about sexual orientation?
(2) “”“it seems a bit suspicious to me that desire to be effective and altruistic should be so heavily skewed towards straight, white dudes”””
I didn’t say that desire to be effective and altruistic is heavily skewed toward men. I just said that membership in a specific community, or readership of a specific website, and things like that, can have significant gender skews, and that is not atypical. The audience for a specific community, like the effective altruist community, can be far smaller than the set of people with desire to be effective and altruistic.
For instance, if a fashion website has a 90% female audience (a not atypical number), that is not a claim that the “desire to look good” is that heavily skewed toward female. It means that the specific things that website caters to, the way it has marketed itself, etc. have resulted in it getting a female audience. Men could also desire to look good, albeit in ways that are very different from those catered to by that fashion website (or more broadly by the majority of present-day fashion websites).
Your suspicions provide a small amount of Bayesian evidence, but could you explain why you believe none of the alternate explanations that have been proposed seem satisfactory?
Politics is rarely used as an example of a positive environment for women.
It’s not just the actual numbers that are concerning (though I disagree with you that a 70% skew can be brushed off). It’s the exclusionary behavior within EA.
I find it interesting that most of the examples given in the article conform to mainstream, politically correct opinion about who is and isn’t overrepresented. A pretty similar article could be written about e.g. math graduate students, with almost the exact same list of overrepresented and underrepresented groups. In that sense it doesn’t seem to get to the core of what unique blind spots or expansion problems EA might have.
An alternate perspective would be to look at minorities, subgroups, and geographical patterns that are way overrepresented in EAs relative to the world population, or even, say, the US population; this could help triangulate to blind spots in EA or ways that make it difficult for EA to connect with broader populations. A few things stand out.
Of these, I know at least (1) and (2) have put off people or been major points of concern.
(1) Heavy clustering in the San Francisco Bay Area and a few other population centers, excluding large numbers of people from being able to participate in EA while feeling a meaningful sense of in-person community. It doesn’t help that the San Francisco Bay Area is one of the most notoriously expensive areas in the world, and is located in a country (the United States) that is hard for most people to enter and live in.
(2) Overrepresentation of “poly” sexual orientations and behaviors relative to larger populations—so that even those who aren’t poly have trouble getting along in EA if they don’t like rubbing shoulders with poly folks.
(3) A large proportion of people of Jewish descent. I don’t think there’s any problem with this, but some people might argue that it makes the ethics of EA heavily influenced by traditional Jewish ethical approaches, to the exclusion of other religious and ethical traditions. [This isn’t just a reflection of the greater general success of people of Jewish descent; I think EAs are overrepresented among Jews even after controlling for education and income.]
(4) Overrepresentation of vegetarians and vegans. I’m not complaining, but others might argue that this reduces EAs’ ability to connect with the culinary habits and food-related traditions of a lot of cultures.
I can see 1-3 being problems to some extent (and I don’t think Kelly would disagree)… but “overrepresentation of vegetarians and vegans”?? You might as well complain about an overrepresentation of people who donate to charity.
I think about this a different way. I think it weird, given there’s so much mainstream discussion of inclusion, that it hasn’t seemed to penetrate into EA. That makes EA the odd one out. Hence it might be good to identify the generic blind spots, even if we haven’t yet homed in on EA-specific ones.
I think your approach of looking for over-represented people is interesting and promising. What I find surprising is that you didn’t zero in on the most obvious one, which is that EA is really heavily weighted with philosophers and maths-y types, such as software engineers.
I tried to avoid things that have already been discussed heavily and publicly in the community, and I think the math/philosopher angle is one that is often mentioned in the context of EA not being diverse enough. The post itself notes:
“”“people who are both that and young, white, cis-male, upper middle class, from men-dominated fields, technology-focused, status-driven, with a propensity for chest-beating, overconfidence, narrow-picture thinking/micro-optimization, and discomfort with emotions.”””
This is also mentioned in the post by Alexander Gordon-Brown that Kelly links to: http://effective-altruism.com/ea/ek/ea_diversity_unpacking_pandoras_box/
“”“EA is heavy on mathematicians, programmers, economists and philosophers. Those groups can get a lot done, but they can’t get everything done. If we want to grow, I think we could do with more PR types. Because we’re largely web-based, people who understand how to make things visually appealing also seem valuable. My personal experience in London is that we would love more organisers, though I can imagine this varying by location.”””
I’m really not sure why my comment was so heavily downvoted without explanation. I’m assuming people think discussion of inclusion issues is a terrible idea. Assuming that is what I’ve been downvoted for, that makes me feel disappointed in the online EA community and increases my belief this is a problem.
I think this may be part of the problem in this context. Some EAs seem to take the attitude (I’m exaggerating a bit for effect) that if there was a post on the internet about it once, it’s been discussed. This itself is pretty unwelcoming and exclusive, and it penalises people who haven’t been in the community for multiple years or haven’t spent many hours reading around internet posts. My subjective view is that this topic is under-discussed relative to how much I feel it should be discussed.
“I’m assuming people think discussion of inclusion issues is a terrible idea.”
This is a misreading. I’m almost sure you were downvoted because readers perceived this to be the reverse of the truth: “I think it weird, given there’s so much mainstream discussion of inclusion, that it hasn’t seemed to penetrate into EA.”
It’s a topic that has been discussed intensely, frequently and continuously in EA since its inception, both online and off. If someone had asked me to compile a list of the all-time most-discussed topics in EA, this would be near the top. That’s not to say we shouldn’t continue discussing it here of course and I appreciate Kelly’s quite comprehensive list of the possible ways we could try to increase diversity in the community.
So many different boxes to reply to! I’ll do one reply for everything here.
My main reflection is that either 1. I really haven’t personally had much discussion of inclusivity in my time in the EA movement (and this may just be an outlier/coincidence) or 2. I’m just much more receptive to this sort of chat than the average EA. I live among Oxford students and this probably gives me a different reference point (e.g. people do sometimes introduce themselves with their pronouns here). I forget how disconcertingly social justice-y I found the University when I first moved here.
Either way, the effect is that I really haven’t felt like I’ve had too many discussions in EA about diversity. It’s not like it’s my favourite topic or anything.
FWIW, I read your comments as a useful data point (“Huh. Here’s someone who’s been pretty involved in EA for a year or two [not certain that’s accurate] and hasn’t come across many discussions of diversity/inclusion.”)
It’s extremely hard to generalize here because different geographies have such different stories to tell, but my personal take is that the level of (public) discussion about diversity within EA has dipped somewhat over time.
When I wrote the Pandora’s Box post 2.5 years ago, I remember being sincerely worried that low-quality discussion of the issue would swamp a lot of good things that EA was accomplishing, and I wanted to build some consensus before that got out of hand. I can’t really imagine feeling that way now.
I’m not sure why you brought up the downvoting in your reply to my reply to your comment, rather than replying directly to the downvoted comment. To be clear, though, I did not downvote the comment, ask others to downvote the comment, or hear from others saying they had downvoted the comment.
Also, I could (and should) have been clearer that I was focusing only on points that I didn’t see covered in the post, rather than providing an exhaustive list of points. I generally try to comment with marginal value-add rather than reiterating things already mentioned in the post, which I think is sound, but for others who don’t know I’m doing that, it can be misleading. Thank you for making me notice that.
Also:
In my case, I was basing it on stuff explicitly, directly mentioned in the post on which I am commenting, and a prominently linked post. This isn’t “there was a post on the internet about it once”; this is more like “it is mentioned right here, in this post”. So I don’t think my comment is an example of the problem you highlight.
Speaking to the general problem you claim happens, I think it is a reasonable concern. I don’t generally endorse expecting people to have intricate knowledge of years’ worth of community material. People who cite previous discussions should generally try to link as specifically as possible to them, so that others can easily know what they’re talking about without having had a full map of past discussions.
But imo it’s also bad to bring up points as if they are brand new, when they have already been discussed before, and especially when others in the discussion have already explicitly linked to past discussions of those points.
Sorry. That was a user error.
This seems like a lot to infer from some downvotes.
FWIW I didn’t downvote your comment but it annoyed me. It was this:
I feel like I’ve seen quite a lot of discussion of diversity in EA, and I don’t think it’s been overly unsophisticated. This comment therefore feels frustrating, like the “why doesn’t EA talk about systemic change?” comments. I would guess this is a common feeling, given the positive response to http://effective-altruism.com/ea/1g3/why_how_to_make_progress_on_diversity_inclusion/c7n . That might explain the downvotes. On the other hand, this
feels much more positive to me. Okay, Michael Plant thinks we need to have a lot of discussion about this for some reason. Fair enough.
Hmm, I am surprised that people downvoted it so much as well. Perhaps people thought that the comment was naive, i.e. that even if you were in favour of taking some diversity measures, we shouldn’t do it just because everyone else is jumping on the bandwagon.
I suspect that you would have received a more positive response if you’d written something more measured like: “The fact that so many other groups have decided to implement diversity measures provides some degree of Bayesian evidence that it is a good idea”
I wouldn’t concern yourself much with downvotes on this forum. People use downvotes for a lot more than the useful/not useful distinction they’re designed for (most common other reason is to just signal against views they disagree with when they see an opening). I was recently talking to someone about what big improvements I’d like to see in the EA community’s online discussion norms, and honestly if I could either remove bad comment behavior or remove bad liking/voting behavior, it’d actually be the latter.
To put it another way, though I’m still not sure exactly how to explain this, I think no downvotes and one thoughtful comment explaining why your comment is wrong (and no upvotes on that comment) should do more to change your mind than a large number of downvotes on your comment.
I’m really still in favor of just removing downvotes from this forum, since this issue has been so persistent over the years. I think there would be downsides, but the hostile/groupthink/dogpiling environment that the downvoting behavior facilitates is just really really terrible.
I previously defended keeping down-votes, but I confess I’m not so sure now.
A fairly common trait is that people conflate some viewpoint-independent metric of ‘quality’ with ‘whether I like this person or the view they espouse’. I’m sure most users have voting patterns that line up with these predictors pretty strongly, although there is some residual signal from quality: I imagine a pattern where one has a pretty low threshold for upvoting stuff sympathetic to one’s view, and a very high one for upvoting the non-sympathetic, and vice versa for downvotes.
I’m not sure how the dynamic changes if you get rid of downvotes, though. Assuredly there’s a similar effect where people just refrain from upvoting your stuff and slavishly upvote your opponents’. There probably is some value in ‘nuking’ really low-quality remarks to save everyone time. Unsure.
Yeah, I’m totally onboard with all of that, including the uncertainty.
My view on downvoting is less that we need to remove it, and more that the status quo is terrible and we should be trying really hard to fix it.
The idea of introducing social justice into an existing movement has already been tried, and I think it’s worth going over the failures and problems that social justice has caused in the atheist movement before jumping headlong into it in the EA movement. This reddit page about why Atheism+ failed makes for interesting reading: https://www.reddit.com/r/atheism/comments/2ygiwh/so_why_did_atheism_plus_fail/
See also this: https://athefist.wordpress.com/2013/09/04/the-atheism-plusftb-problem/
The post by Kelly that I am responding to seems to contain several red flags indicating that EA+SJ is falling into the same traps that Atheism+SJ fell into:
suspending healthy skepticism of questionable claims,
advocating identity categories over competence,
supporting the silencing of dissenting opinions and abandoning free speech
As I said in another comment: don’t say you weren’t warned if this goes badly.
Regarding your “red flags”:
1) The post does not advocate for identity categories over competence, but competence over identity categories. As I’ve argued, we’re missing out on a lot of people because they don’t match irrelevant criteria.
2) No skepticism of questionable claims has been suspended. You are welcome, as others have, to point out what claims are too confident and why. You’ll note that I’ve edited the post to qualify a claim I made that a commenter pointed out is debated in the literature, and an implication I made that a commenter convinced me I made too confidently.
You are also welcome to provide arguments for the position you seem to take that the status quo (or an even more exclusive community, which we may be becoming) is better than a more inclusive community. Bringing up the risk is a valuable contribution to this discussion and I really appreciate it. Let’s go further with our analysis of tradeoffs and discuss specific steps we can take to become more inclusive while limiting the risks in either direction, and let’s have a healthy skepticism of the status quo.
3) A dismissal of the whole project of inclusion because of the risk that it will go too far is itself something of a silencing of dissenting opinions and an abandoning of free speech. As I said very explicitly in my comment about free speech, the term is often used to justify speech that pushes people out and reduces the diversity of opinions in the community and the freedom that people have to speak. The question is where the line is—and it’s probably a blurry, messy one—and how we should address transgressions of it to keep our debates as free and productive as possible.
I already commented this on your earlier, similar comment, but since you’re repeating this here I will too so it’s not missed:
I entirely appreciate the concern of going too far. Let’s just be careful not to assume that risks only come with action—the opposite path is an awful one too, and with inaction we risk moving further down it.
Someone who prefers to remain anonymous shared with me that there were multiple issues that made her and other women interns feel excluded at an EA organization, but she felt it was too intimidating to bring them up because the staff, including the women, seemed too tight-knit, and the interns felt too separate from them.
The same person, in response to the point “Don’t dismiss or trivialize the altruistic concerns ordinary people have,” said:
Agree – this is one of the most alienating parts of EA groups I have come across. Charity snobbishness has become quite extreme in some contexts I’ve been in, and I found it to be a somewhat closed-minded approach to altruism generally. At one point, I became persuaded by this attitude and even noticed myself becoming judgmental with the people around me. It was only when my mum told me she thought I had become more judgmental, and not for the better, that I took the initiative to really analyse why I was behaving like I was, and to understand that this is not a way to do the most good for the people around you, nor for trying to encourage people to give their time and money more effectively. I think many people in EA should take a step back and realise that in their attempt to do the most good, they are acting in a closed-minded way, which is actually preventing them from achieving the most good they can.
On one hand, it is technically better to change things if that motivates people to become involved in the community. But on the other hand, if someone is ethically motivated to do the right thing, and they find that EA is plausibly in the right lane for this purpose, then you would expect them to be involved in productive activities regardless of whether the personality types are similar to their own or not. That’s not any more of a sacrifice than we make in other sorts of things: I recruited for a finance career despite the fact that the personality types and culture are antithetical to my own; I am in the military despite the fact that the personality types and culture are antithetical to my own; I donate money despite the fact that I would have more personal happiness if I didn’t; and so on.
The kinds of people who would be doing EA things if and only if we were a little bit more appealing are the kinds of people who won’t take the ethically optimal career route, because the ethically optimal career route is not likely to be optimally appealing, and that is something that we can’t change. If someone can only be brought into the movement by catering to them, they’re not going to suddenly change and automatically act as forcefully and positively as the rest of us; they’ll still need to be catered to for additional steps, on and on into the future. You can see examples of this with activist groups on college campuses, where administrations make costly accommodations and concessions to activists and yet continually face additional demands and disruption.
This, of course, is not a statement that all people who are motivated in such a way are like this, nor is it a statement that such people would not bring substantial positive value to the movement on balance. And I’m not making any judgements about character, just observing the ethically relevant facets of human behavior. The point is that the positive impact of such expansion is more limited than you would naively think, and it therefore warrants less resource allocation than expansions which would attract similar numbers of other types of people.
I’m also skeptical that concessions like this actually do much. If you want an example, look at the early criticisms of EA, where people talked about how we ‘neglect systemic change’. Over and over and over again, we explained that you can do systemic change in EA, please come and do systemic change with us, we’re compatible, and so on. There were a couple articles saying “Can EA change the world? It already has” and “We love systemic change.” Now there have even been two peer-reviewed published philosophy papers driving this point home, by Joshua Kissel and Brian Berkey. Then 80,000 Hours said “hold up guys, only 15% of people should earn to give, we don’t want to be misunderstood.” And on and on and on. This was all a fine response, of course.
But has it actually changed the behavior of the people who raised those critiques? Have any of the critics recanted and said “okay, I’ll join EA now, and develop some EA-based systemic change”? Are EA’s ranks swelling with a new crop of excited leftist revolutionaries? No! This kind of movement growth is nowhere to be found! When was the last time you heard ANY Effective Altruist argue that poverty alleviation is neutral or harmful because it reduces the probability that capitalism will be superseded? Never! Yes, there are a few EA leftists whose main priority is to systemically reform capitalism, but not significantly more than there were in the first place, and they are a tiny group in comparison to the liberals, the conservatives, the vegans, the x-risk people, and so on. As far as I can tell, the impact of all these articles and comments in bringing leftists into active participation with EA was totally nonexistent.
So while there is a difference between the demographics under consideration here and the progressive-leftist political group in this example, don’t expect piling on disclaimers and bureaucracy and openness and other such things to work any wonders. You can look outside EA for other examples. The US military does many of the things that you suggest: SHARP/EO representatives, briefings, policies, and on and on. But that didn’t substantially change our demographics, culture, sexual assault rates, or anything of the sort. It didn’t stop PVT Manning from going sufficiently crazy and dissociated from military culture, on the basis of gender dysphoria, to decide to harm the organization.
So instead of engaging in the knee-jerk logic of “there’s a diversity problem—let’s start doing pro-diversity things!” we should be focusing on reason and evidence so that we only spend time and money on solutions where we have a significant expectation of something meaningful being accomplished.
Of course you are pretty clear in your post that, yes, things should be evidence based, there is weak-but-mounting evidence supporting interventions like this, and so on. But I want to emphasize a higher bar of skepticism than what people are likely to take away from your post, especially since the opportunity cost for EA resources is much higher than it is in other contexts.
E.g.:
These are both moderately costly additions to bureaucracy, and I don’t really see what their value is. I’m aware that lots of organizations put emphasis on these types of things, but what are the exact outputs and impacts?
While I’m not saying this is a bad thing, I don’t see what the motivation is. If the problem is gender bias, or ineffective marketing, or narrow appeal, or things of that nature, then they should be dealt with appropriately. What we should not do is lump together every gender-related problem as part of a monolithic reason to implement all gender-related solutions. It’s simply less efficient.
So I was the one who, more than anyone else, told that person that they were an idiot who ought to shut up. But it wasn’t just because they were sexist. I think one of the underlying problems here is not just sexism but people who just don’t care enough about the EA movement itself. I would expect any sexist or racist to at least be decent and intelligent enough to know that there are some pots you don’t stir, purely as a practical means of maintaining a productive movement. I see lots of people talk about making EA more appealing or more diverse, which is fine, but one of the underlying causes of all of these issues, both when it comes to sexists picking fights and when it comes to members of marginalized groups refraining from contributing, is that people care more about things like lifestyle, community and tribal affiliation than they do about sitting down to do productive ethical work. And that’s a super hard thing to change, but it warrants some attention. We can’t just sit around and rely on a shaky combination of atypical saints on one hand and clever marketing on the other.
Edit: also, I have to add that you are being a little bit uncharitable to the person who said that. They said something bad, but not quite how you describe it. I’m not saying this because I care about them, but just because it’s bad if people read this and think “omg, an effective altruist said this! Look how sexist EAs are!” and it gets repeated and spread as a false rumor.
I’m not sure whether I count or not. My work on autonomy can be seen as investigating systemic change. I’ve been to a couple of meetups and hung around this forum a bit, and I can tell you why the community is not very enticing or inviting from my point of view, if you are interested.
Edit to add:
I can only talk about EA London, where I went to a couple of the meetups. To preface things: I had generally good interactions with people; they were nice, and we chatted a bit about non-systemic EA interests (which I am also interested in). There was lots of conversation and not too much holding forth.
I was mainly trying to find people interested in discussing AI/future things as any systemic change has to take this into consideration and there is lots of uncertainty. I was asked what I was interested in by organisers and asked if anyone knew people primarily interested in AI, and I didn’t get any useful responses. At the time I didn’t know enough about EA to ask about systemic change (and wasn’t as clear on what I exactly wanted).
This slightly rambling point is to illustrate that it is hard to connect with people on niche topics (which AI seems to be in London). There probably needs to be a critical mass of people joining at once for a locality to support a topic.
I’ve joined a London EA facebook group focused on the future so I have my hopes.
That is pretty benign, a problem but not a large one. More could be done, but more could always be done.
The second, which I think might be more exclusionary, is EAG. I applied for tickets and to volunteer but I’ve heard nothing so far. I’m unsure why there is even selection on tickets.
I suspect I don’t look like lots of EAs on an application form: I don’t earn to give, but have taken a pay cut to work part time on my project, which I hope will help everyone in the long run. I may not have quite the same chipper enthusiasm.
I suspect other people interested in systemic change will look similarly different from lots of EAs, and the curation of EAG might be biased against them. If it is, then I probably have not lost out much by not going!
I mainly wrote this comment to try and give some possible reasons for the lack of a significant group interested in systemic change (despite articles/comments to the contrary). I’m not expecting EA to change, you can’t be a group for everyone and you do interesting and good things. But it is good to know some of the potential reasons why things are how they are.
Edit2: I got a polite email from Julia Wise telling me that the reason I didn’t get an invite was that London was a smaller event and that people were selected on the basis of “those who will benefit most from attending EA Global London.” It would be nicer if these things were a little more transparent, e.g. “you are applicant #X; we can only accept #Y applicants”, to give you a better idea of the chances. From my own perspective, for people interested in current niche EA topics, it is important to be able to potentially meet other people from around the world interested in those topics. EAG might not be the place for that, though.
I’d like to move towards an inclusive community that doesn’t damage the valuable aspects of EA. I think this post mostly did a good job of suggesting things in that vein (I was heartened to see “don’t stop being weird” as an item), but I’d like to push on the point a bit more.
For example, I’m hugely in favour of collaborative discussions over combative discussions, but I find it very helpful to have discussions that stylistically appear combative while actually being collaborative. For example: frequent, direct criticism of ideas put forward by other people is a hallmark of combative discussion, but can be fine so long as everyone is on an even footing and “you are not your ideas” is common knowledge. If we ban this, then we make some parts of our discourse worse. Overly zealous pursuit of formalized markers can destroy a lot of value.
Of course, the solution is “don’t do that”, but the most obvious approach to “have more X” is “pick some formal markers of X and select for them”. Doing better is harder, perhaps something like “have workshops/talks on good disagreement”, “praise people who’re known for being excellent at this” etc.
I agree with others that there are too many suggestions in this post. They’re also a bit of a grab bag. I can see a few categories:
Miscellaneous criticisms, many of which seem plausible, but aren’t obviously any more important for diversity than for their other benefits (collaborative discussions, humility, less hero-worship, better interpersonal interactions etc.).
Larger-scale shifts of uncertain effect (head vs heart, jargon, caution over “free speech”, etc.). A lot of these are unclear to me, and I think we’d want to take a clear-headed look at the costs and benefits.
More specific diversity-boosting measures (female speakers, try to counteract bias, mentor people etc.). These seem clearest to me, and hopefully we can look and see what’s worked well in other places vs the costs.
I think the miscellaneous improvements could (and should!) stand on their own; the larger-scale shifts are perhaps best discussed individually; and what I think a diversity criticism is uniquely placed to bring is more of the third kind of thing.
Regarding discussion style: I think several EAs are great at discussions where they’re fully critical of each other but aren’t combative (e.g. they don’t raise their voices, go ad hominem, tear apart one aspect of an argument to dismiss the rest, or downvote comments that signal an identity that theirs is constructed in opposition to). I think it’s possible to get all the benefit of criticism and disagreement without negative emotions clouding our judgement.
I think the key may be to work against the impulse to be right, or the impulse that someone who disagrees with you is your enemy. I’m much better than I used to be at seeing disagreement as the route to everyone in the discussion getting closer to the truth, though unfortunately that takes a constant drive to improve. (It does help a lot to just remind myself that the person I’m disagreeing with, in most cases at least, is on my team in the bigger picture.) Doing more to penalize combative behavior and reward constructive behavior—like how downvotes and upvotes are supposed to be used in this forum—seems like a feasible solution.
Regarding the grab-bag: That was my intention, to get the ball rolling. I hope for others to bring in their own thinking on prioritization and implementation.
As I said, I’m totally in favour of collaborative discussions, i.e. this stuff
(except possibly raised voices), but I wanted to argue that sometimes things that look like combative discussion aren’t. Imagine:
A: [states an idea]
B: I think that’s a pretty bad argument because [reason]. [Alternative] seems much better.
A: No, you didn’t understand what I’m saying, I said [clarification].
This could be a snippet of a tense, combative argument, or just a vigorous collaborative brainstorming session. A might feel unfairly dismissed by B, or might not even notice it. If we were trying to combat combativeness by calling out people for abruptly shooting down other people’s ideas, then we might prevent people from doing this particular style of rapid brainstorming.
(Sorry, this stuff is hard to talk about because it’s very contextual. I should probably have picked a better example :))
What I’m trying to say is that we just need to be a little bit careful how we shoot for our goals.
I see, we’re just thinking of “combative” differently.
Yeah, we have already gone too far with condemning combaticism on the EA forum in my opinion. Demanding that everyone stop and rephrase their language in careful flowery terms is pretty alienating and marginalizing to people who aren’t accustomed to that kind of communication, so you’re not going to be able to please everyone.
I do think that there should be higher bars for overtly signalling collaborativeness online, because so many other cues are missing.
I’m confused, you mean people should be expected to explicitly signal that they are being collaborative?
In my view the basic structure of a “combative” debate need not entail any negative connotation of hostility or interpersonal trouble. Point/counterpoint is just a standard, default, acceptable mode of discussion. So ideally, when you see people talking like that, as long as things are reasonably civil then you don’t feel a need to worry about it. It’s a problem that some people don’t see “combative” discussions in this way, but I don’t think there is any better solution in the long run. If you try to evolve norms to avoid the uncertainty and negative perceptions then you run along a treadmill—like the story with politically correct terminology. It’s okay to have a combative structure as long as you stick within the mainstream window of professional and academic discourse, and I think EA is mostly fine at that.
Whether a discussion proceeds as collaborative or combative depends on how the participants interpret what the other parties say. This is all heavily contextual, but as with many things involving conversational implicature, you can often spend some effort to clarify your implicature.
The internet is notoriously bad for conveying the unconscious signals that we usually use to pick up on implicature, and I think this is one of the reasons that internet discussions often turn hostile and combative.
So it’s worth putting in more signals of your intent into the text itself, since that’s all you have.
The right approach is to only look at actual points being made, and not try to infer implications in the first place.
When someone reacts to an implication, the appropriate response is to say “but I/they didn’t say anything about that,” ignore their complaints and move on.
You only have control over your own actions: you can’t control whether your interlocutor over-interprets you or not.
Your “right approach”, which is about how to behave as a listener, is compatible with Michael_PJ’s, which is about how to behave as a speaker: I don’t see why we can’t do both.
But I can control whether I am priming people to get accustomed to over-interpreting.
Because my approach is not merely about how to behave as a listener. It’s about speaking without throwing in unnecessary disclaimers.
That sounds potentially important. Could you give an example of a failure mode?
Consider how my question “Could you give an example...?” reads if I didn’t precede it with the following signal of collaborativeness: “That sounds potentially important.” At least to me (YMMV), I would be like 15% less likely to feel defensive in the case where I precede it with such a signal, instead of leaping into the question—which I would be likely (on a System 1y day) to read as “Oh yeah? Give me ONE example.” Same applies to the phrase “At least to me (YMMV)”: I’m chucking in a signal that I’m willing to listen to your point of view.
Those are examples of disclaimers. I argue these kinds of signals are helpful for promoting a productive atmosphere; do they fall into the category you’re calling “unnecessary disclaimers”? Or is it only something more overt that you’d find counterproductive?
I take the point that different people have different needs with regards to this concern. I hope we can both steer clear of typical-minding everyone else. I think I might be particularly oversensitive to anything resembling conflict, and you are over on the other side of the bell curve in that respect.
The failure mode where people over-interpret things that other people say, and then come up with wrong interpretations.
Well you should probably signal however friendly you are actually feeling, but I’m not really talking about showing how friendly you are, I’m talking about going out of your way to say “of course I don’t mean X” and so on.
https://www.overcomingbias.com/2018/05/skip-value-signals.html
It looks like we were talking at cross purposes. I was picking up on the admittedly months-old conversation about “signalling collaborativeness” and [anti-]”combaticism”, which is a separate conversation to the one on value signals. (Value signals are probably a means of signalling collaborativeness though.)
I think politeness serves a useful function (within moderation, of course). ‘Forcing’ people to behave more friendly than they feel saves time and energy.
I think EA has a problem with undervaluing social skills such as basic friendliness. If a community such as EA wants to keep people coming back and contributing their insights, the personal benefits of taking part need to outweigh the personal costs.
Not if people aren’t attracted to such friendliness. Lots of successful social movements and communities are less friendly than EA.
Can you say what you mean by ‘formal markers’? I’ve never heard this term before.
Sorry, that was me being unclear! The situation I’m envisaging is:
We want more X.
We can’t detect X directly, so we’ll pick some marker for X that looks like X (that’s what I was going at with “formal”, “relating to the form of”), and then aim for that.
Oops our markers don’t capture X, or even exclude some important bits of X.
I like Michael’s distinction between the style and core of an argument. I’m editing this paragraph to clarify the way in which I’m using a few words. When I talk about whether an argument is actually combative or collaborative, I mean to indicate whether it is more effective at goal-oriented problem-solving or at achieving political ends. By politics, I mean something like “social maneuvers taken to redistribute credit, affirmation, etc. in a way that is expected to yield selfish benefit”. For example, questioning the validity of sources would be combative if the basic points of an argument held regardless of the validity of those sources.
Claims like “EA would attract many additional high quality people if women were respected” or “social justice policing would discourage many good people from joining EA” are, while true, basically all combative, and the framing of effectiveness is just helping people self-deceive into thinking they’re motivated by impact or truth. They’re using a collaborative style (the style of caring about impact/truth) to do a combative thing (politics, in the wide definition of that word).
Ultimately, I can spin the observation that these things are combative into a stylistically collaborative but actually combative argument for my own agenda, so everything I’m saying is suspect. To illustrate: the EA phrase “politics is the mindkiller” is typically combatively used in this way, and I have the ability to do something similar here. “Politics is the mindkiller” is the mindkiller, but recognizing this won’t solve the problem, in the same way recognizing politics is the “mindkiller” doesn’t.
People can smell this, and they’d be right to distrust your movement’s ability to think clearly about impact, if you’re using claims of impact and clearer thinking to justify your own politics. People who are bright enough to figure this out are typically the ones I’d want to be working with.
Yeah, you all have a problem with how you treat women and other minority groups. Kelly did a lot of work in order to point out a real phenomenon, and I don’t see anyone taking her very seriously. You let people who want to disparage women get away with doing so by using a collaborative “impact and truth” discussion style to achieve combative, political aims. That’s just the way the social balance of power lies in EA. People would use “impact and inclusivity” as a collaborative style to achieve political aims if the balance of power were flipped. Plausibly there’s an intermediate sweet spot where this happens less overall, though shifting the balance of power to such spots is never a complete solution. I suspect a better approach would be to get rid of the politics first; this will make it easier to make progress on inclusivity.
The norm of letting people stylize politics with talk of impact and truth is deeply ingrained in EA. It’s best to work outside the social edifice of EA, if you want to think clearly about impact and truth. Which feels like a shame, but isn’t too bad if you take the view that good people will eventually be drawn to you if you’re doing excellent work. That was GiveWell’s strategy, and it worked.
“Kelly did a lot of work in order to point out a real phenomenon, and I don’t see anyone taking her very seriously.”—Kelly put in a lot of work, but there were a lot of issues with the original post. I think this was inevitable to an extent; unless you’re already a policy expert, producing high-quality work really needs a group of people or multiple feedback cycles. I think it is especially important to maintain high standards of evidence in regards to this issue because increasing political polarisation means that both sides of the spectrum are dropping their own standards.
I don’t see many people who want to figure out how much of a problem there is, and then apply e.g. utilitarianism to decide what to do about that. That would count as acting seriously.
You made an extremely long list of suggestions. Implementing such a huge list would mean radically overhauling the EA community. Is that a good idea?
I think it’s important to keep in mind that the EA community has been tremendously successful. GiveWell and OpenPhil now funnel large amounts of money towards effective global poverty reduction efforts. EA has also made substantial progress at increasing awareness of AI risk and promoting animal welfare. There are now many student groups in universities around the world. EA has achieved these things in a rather rapid timeframe.
It’s rather rare for a group to have success comparable to the current EA community’s. Hence I think it’s very dangerous to overhaul our community and its norms. We are doing very well. We could be doing better, but we are doing well. Making changes to the culture of a high-performing organization is likely to reduce performance. So I think you should be very careful about which changes you suggest.
In addition to being long, your list of changes includes many rather speculative suggestions. Here are some examples:
—You explicitly say we should be more welcoming towards things like “dog rescue”. Does this not risk diluting EA into just another ineffective community?
—You say that using the term “AI” without explanation is too much jargon. Is that really a reasonable standard? AI is not an obscure term. If you want us to avoid the term “AI”, your standards of accessibility seem rather extreme.
—You claim we should focus on making altruistic people effective instead of effective people altruistic. However, Toby Ord claims he initially had the same intuition, but his experience is that the latter is actually easier. How many of your intuitions are you checking empirically? (This has been mentioned by other commenters.)
In general I think you should focus on a much smaller list of core suggestions. It is easier to argue rigorously for a more conservative set of changes. And as I said earlier, EA is doing quite well, so we should be skeptical of dramatic culture shifts. Obviously we should be open to new norms, but those norms should be vetted carefully.
I second most of these concerns.
The core of EA is cause-neutral good-maximization. The more we cater to people who cannot switch their chosen object-level intervention, the less ability the movement will have to coordinate and switch tracks. They will become offended by suggestions that their chosen intervention is not the best one. As it is I wish more people challenged how I prioritize things, but they probably don’t for fear of offending others as a general policy.
I am in favor of non-dumbed-down language: having to keep running a check on whether a person understands a concept I am referring to adds a constraint on how I can communicate. I do agree that jargon generation is sometimes fueled by the desire for weird neologisms more so than the desire to increase clarity.
I once observed: “Effectiveness without altruism is lame; altruism without effectiveness is blind.” ‘Effectiveness’ seems to load most of the Stuff that is needed; to Actually Do Good Things requires more of the Actually than the Good. It seems that people caring about others takes less skill than being able to accomplish consequential things. I am open to persuasion otherwise; I’ve experienced most people as more apathetic and nonchalant about the fate of the world, an enormous hindrance to being interested in effective altruism.
“god rescue”—Well, maybe he is a utility monster.
But thanks for posting this comment. For several of the cultural changes suggested, diversity could be a relevant factor, but it would be unlikely to be the most significant consideration. Things like changing our marketing or reducing competitiveness are plausibly good changes; diversity is only one factor to consider.
As a guy who used to be female (I was AMAB), Kelly’s post rings true to me. Fully endorsed. It would be particularly interesting to hear about AFAB transmen’s experiences with respect to this.
The change in how you’re treated is much more noticeable when making progress in the direction of becoming more guyish; not sure if this is because this change tends to happen quickly (testosterone is powerful + quick) or because of the offsetting stigma re: people making transition progress towards being female. I could also see this stigma making up some of the positive effect that AMAB people feel on detransitioning, though it’s mostly possible to disentangle the effect of the misogyny from that of the transmisogyny if you have good social sense.
In anticipation of being harassed (based on past experience with this community), I’ll leave it at that. I’m not going to respond to any BS or bother with politics.
Your portrait of what the EA community could be is a beautiful one and made me tear up. You hit the nail on the head many times in this post on the subtle connections between things that I think can be hard to identify: the connection between heart and head, the E and the A, the overuse of jargon, and the hero worship, and so on. I have to say that as a fairly straight-passing gay man with immense amounts of privilege, even I feel many of these pressures and am often put off by the alpha-male machismo you often see in EA spaces.
I’ve witnessed discrimination and harassment, and heard of assault, in EA-ish spaces, and it seems pretty clear that this is contributing to the gender gap. I’ve definitely exhibited some of the combative and argumentative behaviors you mention. When I got into the EA community a few years ago, I began in global poverty and animal advocacy circles, and I found they were much better on these issues than the community is now, sadly. (That’s with both of those areas’ having plenty of problems.)
I think Kelly moved us toward a type of dialogue on this issue that is lacking in the world, and I hope we can have more of it. Right now, discussions around diversity and inclusion seem polarized between the sort of “rationalist” discussion that’s snarky and dismissive on the one hand and an ostracizing mob mentality on the other hand. I don’t want to say EA should chart a middle path, because I think we should lean toward being overly zealous on diversity and inclusion rather than away, but I think EA and its aligned movements (animal advocacy in my mind) would benefit from a conversation that is at the same time inclusive and data-based. I don’t think the world has that type of conversation very often.
The lack of conversations that are both inclusive and data-based seems to lead to pretty bad results, where diversity and inclusion may not be promoted in the most effective ways, and people opposed to diversity and inclusion harbor suspicions about the world (e.g. that discrimination does not exist) that continue to fester unaddressed.
From my exploration of these matters, I’ve come to see that generally, when one reads about data on discrimination, differences between groups, etc. one finds that (a) discrimination exists and can be quite powerful; (b) there are differences between genders, but the differences are subtle and go in varied directions (e.g. men are more combative, and women are more collaborative, as Kelly notes); and (c) these differences are not the reason for the vast majority of gaps that we see.
I think that because discussion about differences between genders is often consigned to the more diversity-hostile corners of the internet, though, ideas that would be proven wrong by the data go unchallenged. Again, I think if we were to have the right sort of conversation on these issues, we would find that discrimination is indeed the primary cause of the gender gap in EA, but without that conversation, people will not be convinced. (And if an honest conversation engaged with data and personal experiences came to the conclusion that this was not the case, that would probably be good information to have.)
For instance, I read the Damore memo, but then saw this graph which seems to be pretty good evidence that the vast majority of the gap in tech is not from biological differences (and so likely some iteration of discrimination, implicit or explicit). I don’t remember where I came across this graph, but it was very helpful to me. Without looking at the whole picture, though, one can look solely at the individual components of the picture (e.g. Damore’s arguments on specific gender differences) and come to conclusions that would be put in doubt with fuller information.
As an additional reason why I think EA is a movement that could have the right conversation on this, I think that EAs recognize a moral principle similar to equality of interests, where differences in personal traits do not lead to moral differences. It seems that in many diversity and inclusion conversations, both the right and the left consider personal trait differences to imply moral differences, and I think EAs can challenge and move beyond that assumption–though with care and only after we start improving on our demographics.
This is a very challenging issue because, as noted in a comment below, racism and sexism have long been motivated by biological essentialism, and it’s extremely disturbing to have people talk about a group you are a part of in this way. (As a Jew, I can say that I feel discomfort with the conversation about Jewish values below, for instance, though I don’t have a strong opinion on its propriety.) I think that the way to deal with this problem is to exercise caution when speaking about these sorts of things, to avoid casual discussion of them, and to have a higher evidence standard for when we talk about these things. I think that our community can learn the appropriate maturity to do that, though.
Anyway, all this is to say that I hope that as this conversation goes on, we can bring data to bear and recognize the implications of the way we speak for others in this community. Words and ideas do cause harm, and we should be utilitarians about the way we speak. With appropriate caution, though, I think that EAs can have a conversation that gets to the heart of the matter and offers a model for how these conversations can be had.
————————————
For those looking for examples of places where these discussions could be valuable, I have a few:
Gender and cosmopolitan values–The Better Angels of Our Nature cites feminism as one of the reasons for declines in all sorts of violence (war, sexual violence, torture), and I’ve seen enough data to match my intuition that feminism is also very good for animals. I think there are lots of things to explore empirically in this domain (that likely would have implications for the A vs. E debate), but they probably involve engaging with uncomfortable questions about where these gender differences arise.
On another note, animal advocates will often assert that if we focus on multiple causes, we will solve our diversity and inclusion problem. I think this is a very important claim to test, because focusing on multiple causes may be quite costly. I’m fully supportive of focusing on creating justice within our movements and groups, e.g. by aggressively fighting sexual assault and getting rid of income barriers, but I think the claim about movements’ outward focus is a debatable one that really needs to be empirically explored.
Similarly to the above note, animal advocates often work on issues to promote diversity and inclusion, including things like fighting urban food deserts, without looking into the evidence around them. This could not only hinder direct impacts but also create the impression that advocates’ diversity and inclusion efforts are an afterthought, without the same rigor applied to them that advocates apply to work for animals.
Just in reply to the graph section—this post made me think about possible reasons for the discrepancy between computer science and law/medicine.
http://slatestarcodex.com/2017/08/07/contra-grant-on-exaggerated-differences/
Yeah, I’ve read that and think there are very good points in there. I think I’d actually thought the graph said “physics” rather than “physical sciences,” so I now realize I misread it a bit. I do think that SSC piece leaves two questions open though:
First, do we think that EA should be more like physics or more like medicine? This probably speaks to the E vs. A question Kelly addressed above. I think EA could benefit from having more people in it emphasizing the A. This is something we should all talk about at length, though.
Second, even if there are gender differences in interest that mean that an equitable distribution in a field would be unequal, the gap may be larger than what the differences suggest. I think that’s actually what we should expect: in fields that men are more interested in, the higher concentration of men should breed more sexism, and the gap should be inflated.
“fighting urban food deserts without looking into the.” I think there’s a word or phrase missing.
I’m seeing a lot of comments questioning the literature around diversity improving performance. EA prizes accuracy, so that’s a good thing.
However, I’m concerned we’re falling into two very common traps: requiring women to prove themselves more competent than men, and status quo bias.
In general, I’d expect teams to be diverse unless a non-diverse team can be proven more effective. Because so many EA leaders are currently white men, I can imagine some reasons why we might have less-diverse teams in the short-term, but my baseline expectation would always be to prefer a more diverse team all other things being equal.
How do you think we are requiring women to prove themselves more competent than men? For example, if A says study X says P and B points out that it actually says Q, is this an example? Or were you thinking of something else?
Status quo bias exists, but there is also a counter-balancing action bias. When someone cares about an issue, they want to do something about it, and they often fail to consider all of the secondary effects and reasons why you might not want to take a particular action.
That said, I am doubtful that there is nothing we can do on diversity.
Yeah, this sort of thing is basically always in danger of becoming politics all the way down. One good heuristic is to keep the goals you hope to satisfy by engaging in mind—if you want to figure out whether to accept an article’s central claim, is the answer to your question decisive with respect to your decision? If you’re trying to sway people, are you being careful to make sure it’s plausibly deniable that you’re doing anything other than truthseeking? If you’re engaging because you think it’s impactful to do so, are you treating your engagement as a tool rather than an end?
EA prizes accuracy? Seriously? When? Where? I have zero experience of that being the case so far.
For what it’s worth, I think if you had instead commented with: “As a newcomer to this community, I see very little evidence that EA prizes accuracy more than average. This seems contrary to its goals, and makes me feel sad and unwelcome,” (or something similar that politely captures what you mean) that would have been a valuable contribution to the discussion.
That being said, you might have still gotten downvoted. People’s downvoting behavior on this forum is really terrible and a huge area for improvement in online EA discourse.
Thanks Kelly. I agree that this is a problem in EA in ways that people don’t realize. In retrospect, I feel stupid for not realizing how casual discussion of IQ and eugenics would be hurtful. Same thing with applying that classic EA skepticism to people’s lived experiences.
Culture isn’t the main reason I left EA, but it’s #3. And I think it contributes to the top two reasons I felt alienated: the mockery of moral views that deviate from strict utilitarianism, and what I believed were naive over-confident tactics.
“Same thing with applying that classic EA skepticism to people’s lived experiences”
I suppose this comes down to why the person is sharing their lived experience. If someone is just telling you their story, you want to try and keep an open mind. On the other hand, if someone is sharing their lived experiences in order to make a political argument, a certain amount of criticism, whilst not being unnecessarily insensitive, is fair game.
If people have to opt into it, we can assume the people who currently misuse their votes won’t.
Another concrete suggestion: I think we should stop having downvotes on the EA Forum. I might be not appreciating some of the downsides of this change, but I think they are small compared to the big upside of mitigating the toxic/hostile/dogpiling/groupthink environment we currently seem to have.
When I’ve brought this up before, people liked the idea, but it never got discussed very thoroughly or implemented.
Edit: Even this comment seems to be downvoted due to disagreement. I don’t think this is helpful.
Just for the record, I think this is a bad idea: I think it’s costly for the community when people make bad arguments, and I think that the community is pretty good at recognizing and downvoting bad arguments where they appear, and I don’t think it too often downvotes stuff it shouldn’t.
Yeah, I don’t think downvotes are usually the best way of addressing bad arguments in the sense that someone is making a logical error, mistaken about an assumption, missing some evidence, etc. Like in this thread, I think that’s leading to dogpiling, groupthink, and hostility in a way that outweighs downvoting’s benefit of flagging bad arguments when thoughtful people don’t have time to flag them via a thoughtful comment.
I think downvotes are mostly just good for bad comments in the sense that someone is purposefully lying, relying on personal attacks instead of evidence, or otherwise not abiding by basic norms of civil discourse. In these cases, I don’t think the downvoting comes off as nearly as hostile.
If you agree with that, then we must just disagree on whether examples (like my downvoted comment above) are bad arguments or bad comments. I think the community does pretty often downvote stuff it shouldn’t.
Hmm, part of the problem is that downvotes are overloaded. They can indicate either:
This is a bad comment OR
This is a bad policy
I don’t think that people think it is a bad comment; they just think it is a bad policy.
I agree with this. Contra Buck, I think people use downvotes to express things they ultimately disagree with, rather than because they genuinely find someone’s comments ‘unhelpful’, i.e. malicious, lazy, something like that. I might also prompt people to say what they didn’t like about the other person’s comment, rather than just voting anonymously (and snarkily) with karma points.
The problem is that this takes a lot of time, and people with good judgement are more likely to have a high opportunity cost of time; you want to make it as cheap as possible for people with good judgement to discourage bad comments; I think that the current downvoting system is working pretty well for that purpose. (One suggestion that’s better than yours is to only allow a subset of people (perhaps those with over 500 karma) to downvote; Hacker News for example does this.)
Please let’s not give people any more incentives to game the karma system than they already have.
Another (possibly bad, but want to put it out there) solution is to list names of people who downvoted. That of course has downsides, but it would have more accountability, especially when it comes to my suspicion that it’s a few people doing a lot of the downvoting against certain people/ideas.
Another is to have downvotes ‘cost’ karma, e.g. if you have 500 total karma, that allows you to make 50 downvotes.
This would make it harder for people to downvote on topics like this one where it’s really risky to admit disagreeing with people.
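To make the mechanics concrete, here is a minimal sketch of such a budget rule in Python (my own illustration, not an existing forum feature; the 10:1 karma-to-downvote ratio is just the example above, not a tested parameter):

```python
class ForumUser:
    """Hypothetical user model under a karma-budgeted downvote rule."""

    # Assumed parameter: one downvote allowed per 10 points of karma,
    # matching the 500-karma / 50-downvote example above.
    KARMA_PER_DOWNVOTE = 10

    def __init__(self, karma: int):
        self.karma = karma
        self.downvotes_used = 0

    def downvote_budget(self) -> int:
        # Budget grows with karma, so downvoting stays available to
        # established users but is rationed for everyone.
        return self.karma // self.KARMA_PER_DOWNVOTE

    def downvote(self) -> None:
        if self.downvotes_used >= self.downvote_budget():
            raise PermissionError("Downvote budget exhausted.")
        self.downvotes_used += 1

# Example: a 500-karma user gets 50 downvotes to spend.
user = ForumUser(karma=500)
assert user.downvote_budget() == 50
```

An open design question is whether spent downvotes should replenish over time or only as karma grows; either way, the point is to make each downvote a scarce resource rather than a free anonymous sanction.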
That’s a fascinating suggestion.
Thank you for the interesting post Kelly. I was interested in your comment:
And followed the link through to Forbes. I think the part you are citing is this:
Unfortunately, the link there is broken. Do you know what the original source is?
I read the Forbes article, which links to an empty page on ‘talentlens.co.uk‘. It becomes clear that the Forbes article is only referring to results on a test called the ‘Cognitive Style Index.’ If you read the Talent Lens ‘Technical Manual and User Guide’ for the Cognitive Style Index, you can find the same reference to there being 32 studies, 13 of which found women to be more analytic than men, and 8 of which found women to have lower scores than men (I assume the remaining studies’ results were not statistically significant and so weren’t counted).
The CSI seems to be used almost exclusively in a business and management/marketing context (which is what it was originally designed for); hence, of the 32 studies reported, almost all are on business students or managers. In psychology, we tend to use the Rational-Experiential Inventory http://www.sjdm.org/dmidi/Rational-Experiential_Inventory_-_revised_40_item.html which tends to find results in the opposite direction, though inconsistently, and the Cognitive Reflection Test (which, unlike the others, is not based on self-report) finds even larger differences. Personally I don’t put much stock in any of these results, but I don’t think that saying the view that women tend to be more intuitive and men more analytical/deliberative “does not seem to be borne out and in fact the opposite may be more likely” is an appropriate summary of the evidence.
I couldn’t find it unfortunately, and was just relying on the veracity of that rather detailed extract and the credibility of the source publication. I considered not putting that in at all since what matters is that the prejudiced view doesn’t seem to have backing, but I figured this was still information, worth a “may be more likely” even if I couldn’t confirm that it’s been demonstrated.
Moving you to my answer to the same question above for further discussion :)
Thanks for putting in the effort to write such a detailed post, I imagine that this would have taken a lot of time and effort. I also appreciate that you offered to have a discussion on the extent to which this is a problem. I have had negative experiences in the past with people who were opposed to this line of questioning, so I am really glad to see you actually invite this discussion.
Firstly, in regard to the communities we draw from, I’ve seen a lot of articles about how problematic the IT industry is, but I am rather skeptical of these claims and hence of the claim that EA is especially problematic. Clearly there are a lot of bad things that have happened in certain places, but people are acting as though it is an established fact that the industry is worse than other industries without there being any research backing that up. I think we need to be careful about anecdotal evidence because of the Chinese Robber Fallacy—ie. neglect of base rates. I feel that many people look at the industry with the preconception that it is sexist because it is majority male, and it is hardly surprising that they find confirmatory evidence. This isn’t to say that the industry might not be disproportionately sexist; my suspicion is that we do not know, because no-one has conducted the research.
Secondly, I would suggest that, similarly, it is not clear whether the problem in EA is larger than in the rest of society. It might be larger or it might be smaller, but I haven’t seen strong evidence either way. I’m not saying that we should therefore not try to be more inclusive; I’m just pushing back against a tendency that I’ve seen to declare a group especially bad without sufficient evidence to justify claiming this as an established fact.
Thirdly, I would be wary about relying too much on evidence about correlations. Does diversity increase the success of companies, or do more successful and wealthier companies have an easier time attracting women and minorities? I would not be surprised if it was a mix, although I won’t guess at the split.
Fourth, I wouldn’t say that the effects of quotas are clear from the one study (http://slatestarcodex.com/2014/12/12/beware-the-man-of-one-study/). As it says, “Competence was measured by comparing the private incomes across people with the same education, occupation, age, and residence in the same geographical region. Those with higher incomes were deemed more competent”. That’s an interesting result, but it needs to be combined with other research to be reliable.
Fifth, “some people in the community have made other thoroughly unreasonable claims to justify the status quo, such as that women would be a distraction in the workplace”. Where were these claims made and how many people made them?
Sixth, I’m in favour of people being able to anonymously share experiences, but I believe it would be of the utmost importance to ensure that we did not create a site that enabled people to anonymously spread rumours that harmed the reputation of other people and could possibly lead to divisions within the movement. If such a site is made, I would suggest that there be rules against identifying information being shared in the general discussion area. If someone wants the kind of advice that requires them to identify the specific individual, such as if they were intending to lodge a formal complaint with an organisation, then there could be specific persons tasked with advising them. There’s a huge debate about anonymous vs. identified complaints that I don’t want to get involved in, but anonymous, public, unscreened accusations are a recipe for disaster.
Lastly, you write, “Unfortunately I suspect some people in the community are content, implicitly or explicitly, to assume that women and people of color are inherently so much worse than white men at thinking about altruism effectively that the constitution of the community is merely an effect of this presumed difference”. I believe there are two main issues with recruiting women: a) Women are less likely to be in the groups we have been successful in recruiting from (ie. maths, philosophy, computer science) b) In our society, women are encouraged to embrace their emotions more and men are encouraged to be objective. Given this socialisation, men are more likely to find EA matches their thought process.
I made a survey for all the suggestions to try to sort them out, cause it seemed like people thought there were too many.
Unfortunately only 3 people responded, but you can also answer or share it around more if you wish.
https://docs.google.com/forms/d/e/1FAIpQLSd0QsGqbyVN49mtAqgN-z8t7P6dkTpNugZ7tVPShFIRvs5dig/viewform?usp=sf_link
Since I’ve not seen it mentioned here, unconferences seem like an inclusive type of event as described above. I’m not sure how EAG compares.
Thank you for writing this! You’ve made a lot of good points in here, some of which I’ve been thinking of myself.
A note on this point though:
I generally agree with your position re: the potential harm of perpetuating harmful behaviors in the status quo by ignoring the moral issue of how diverse groups are treated. However, the vegetarian example used conflates declining to be vegetarian with a “selfish and harmful” refusal to stop participating in a major moral problem. The problem is, there are a number of people in the world who have limited access to vegetarian substitutes and/or food in general, diverse cultural pressures, medical conditions, etc. which make changing their diet difficult or in some cases impossible. Therefore it may be more respectful and inclusive to caveat this statement by indicating that not all choices to abstain from vegetarianism are made for simplistic or selfish reasons, or out of disregard for the moral problem.
I saw some comments countering this, but I think such a position would add value and is necessary for a philosophy that risks being perceived as pervasively ‘white male’.
I would be keen for more diversity and women’s EA events to be arranged more regularly, perhaps at university societies etc.
I think this could be useful, but I feel as though the women’s EA Facebook group serves this purpose to an extent. My worry with making something like this a public website is that non-EAs looking in might come to believe discrimination and harassment are especially common in EA circles and be put off altogether, when these problems are actually widespread everywhere. I would be in favour of some sort of platform where we can share experiences, issues, and guidance, but perhaps one not referencing EA explicitly, so that it brings individual experiences to light without ruining the brand of the philosophy itself.
I think citing this article weakens your overall argument. The study has n=30 and is likely more of the same low-quality non-preregistered social psychology research that is driving the replication crisis. Your argument is strong enough (to think about examples of men being snarky, insulting others, engaging in pissing contests) without needing to cite some flimsy study. Otherwise, people start questioning whether your other citations are trustworthy.
Is it true that men score higher than women in ‘thinking’ vs ‘feeling’? If so, the EA community (being dominated by men) might be structured in ways that appeal to ‘thinkers’ and deter ‘feelers’. To reduce the gender gap in EA, we would have to make the community be more appealing to ‘feelers’ (if women are indeed disproportionately ‘feelers’).
I think we score quite a bit worse on “feeling” than most altruistically-driven communities and individuals, men included.
[Edit: Point being, yes we’re lacking in feeling, but “thinking vs. feeling” is not a tradeoff we have to make to increase our A (or our gender parity, which isn’t an inherent problem but is tightly related to our problems). EA’s whole purpose is to combine both and we should aim to recruit people who score high on both, not just one or the other. Sorry for the excessive edits.]
My understanding of Myers Briggs is that ‘thinking’ and ‘feeling’ are mutually exclusive, at least on average, in the sense that being more thinking-oriented means you’re less feeling-oriented. The E vs. A framing is different, and it seems you could have people who score high in both. Is there any personality research on this?
Doesn’t personality psychology use the Big Five instead of Myers Briggs? AFAIK there isn’t enough research to determine the validity and usefulness of the ‘thinking’ / ‘feeling’ categories (and Myers Briggs in general).
I’d like to point out that the main post is written in a somewhat “culture war”-y style, which is why it has attracted so many comments/criticisms (and within 3 days it already has more comments than any other thread on these forums, ever, as far as I can tell). Here’s a somewhat similar thread that makes some good suggestions about diversity without getting too much into politics: http://effective-altruism.com/ea/mp/pitfalls_in_diversity_outreach/ (also take a look at the top comment).
Yeah, the original post was much more culture war-y, but fortunately Kelly edited it to make it less so.
The most efficient point of intervention on this issue is for confident insiders to point out when a behavior has unintended consequences or is otherwise problematic.
The post mentions this. It’s hard to get stable, non-superficial buy-in for this from the relevant parties; everyone wants to talk the talk. But when you do, you’ll get a much different effect than you will from hiring another diversity & inclusion officer.
I know of a few Fortune 500 companies that take the idea that this stuff affects their bottom line seriously enough that people in positions of power act on it, but EA seems more like a social club.
Great post.
I think there’s an “in” missing between “people” and “broader”
Danke schoen :)
Thank you very much for bringing this up. Discussion about inclusivity is really conspicuous by its absence within EA. It’s honestly really weird we barely talk about it.
Three thoughts.
I’d like to emphasise how important I think it is that members of the community try to speak in as jargon-free a way as possible. My impression is this has been getting worse over time: there seems to be something of a jargon arms race as people (always males, typically those into ‘rationality’-type stuff) actively try to drop in streams of unnecessary, technical, elitist, in-group-y words to make themselves look smart. I find this personally annoying and I assume it’s unwelcoming to outsiders.
You gave loads of suggestions (thanks!). There were so many suggestions though, I can’t possibly remember them all. Do you think you could pick out what you think the most important 2 or 3 are and highlight them somewhere?
On a personal note
Ouch. I find this a painful and mostly accurate description of myself. Except emotions. Those are fine.
Are you sure? Here are some previous discussions (most of which were linked in the article above):
http://effective-altruism.com/ea/1ft/effective_altruism_for_animals_consideration_for/ http://effective-altruism.com/ea/ek/ea_diversity_unpacking_pandoras_box/ http://effective-altruism.com/ea/sm/ea_is_elitist_should_it_stay_that_way/ http://effective-altruism.com/ea/zu/making_ea_groups_more_welcoming/ http://effective-altruism.com/ea/mp/pitfalls_in_diversity_outreach/ http://effective-altruism.com/ea/1e1/ea_survey_2017_series_community_demographics/ https://www.facebook.com/groups/effective.altruists/permalink/1479443418778677/
I recall more discussions elsewhere in comments. Admittedly this is over several years. What would not barely talking about it look like, if not that?
All these threads are framed in a very non-”culture war”-y style, and there is little disagreement or criticism expressed in the comments, which is why they feel inconspicuous. This one, on the other hand, has already amassed 200+ comments within 3 days, which is more than any other thread on this forum, as far as I can tell (the only one that gets anywhere close is an II/Gleb drama thread).
I guess I’m basing my subjective judgement of ‘conspicuous by its absence’ on a comparison of how much inclusivity gets discussed in wider society vs how much it gets discussed in EA. I don’t think a few posts over a few years really cuts the mustard, not when it’s not obvious how much is being done on this issue.
With the exception of groups which specifically exist for the purpose of promoting inclusivity, I can’t think of any groups which discuss it more than EA.
Heck, even groups like that—e.g., BLM or anti-GamerGate groups or other leftist cultural movements—don’t spend significantly more time worrying about their own inclusivity than EA does.
Animal advocates definitely discuss inclusion in their movement(s) more, or at least more productively. A small organization was even established in the space recently to increase racial inclusion in the movement. EA discussion on the issue has led to far less action and results in a lot more pushback and hostility. If EAs do discuss it more, I’d say the excess is in people expressing frustration and that not going anywhere.
(My source is observation—I have been heavily involved in both communities for several years.)
In terms of wider society, it’s an issue that people and institutions from governments to non-profits that exist to solve the issue to tech companies are putting a lot of discussion and action into. BLM isn’t something separate, it’s part of the discussion in wider society. And IIRC US companies spend $8bn on diversity programs annually. (How effectively they’re spending it is another matter, but the point is it’s getting a lot of attention.)
I haven’t thought about prioritization yet, and was hoping other people would discuss that here. Since a lot of these are actions individuals can take, it will vary a lot by what roles an individual plays and what they have the most room for improvement in.
That said… toning down jargon, I suspect you’d agree, is probably pretty cost-effective, as I would think is toning up the visibility of people from underrepresented groups. A Diversity & Inclusion Officer who could review and advise on social media communications, ads, community recruitment, website UX, conference speakers, talk content and descriptions, job postings and hiring processes, etc, and who could establish metrics and goals for and conduct annual reviews on inclusionary practices, sounds easily worth their salary, at the very least as an experiment for a year.
I have no doubt that there are things worth doing in this field, but I do worry about the potential for this to take attention away from even higher priorities. One reason why big organisations move slowly is that they have to get approval/input from so many people before they can actually do anything. Secondly, I worry that this is an example of dispersed costs and concentrated benefits, in that optimising on a second factor tends to mean making some sacrifice or compromise on the first.
There is likely to be an adverse selection effect in that the kinds of people who would want to be a diversity officer tend to be the kinds of people who also take the strongest stance and hence are more likely to push us toward prioritising this more than we should.
I am not saying that this is necessarily a bad idea, just it isn’t as obviously good as it looks at first glance.
I feel that many of the words or terms that we have created are useful because they allow us to create common knowledge. However, if you want to reduce the amount of jargon, I would suggest writing up a list of words that could be replaced with non-jargony words or phrases (for reference, I posted in the LW fb group suggesting using Co-ordination problems or Race to the Bottom instead of Moloch and I got a large number of upvotes).
Don’t just try to be more inclusive, include!
ie. go to where women and BME people already are, find those who also want their altruism to be effective, and see how they are ALREADY organising themselves, and support that....
....rather than imagining that we can co-opt THEIR skills and talent into OUR network!
It may be that we are not the ones who are best placed to shift the EA network to something more welcoming to women and BME people. I’m not saying effort isn’t worthwhile, and valuable in itself; it is. But forming an alliance may be much more viable than trying to “include people in”. Be our best selves, rather than pretend we can be a level playing field. Humility doesn’t suck! Presumption does.
I can’t speak to where women with a likely interest in EA goals are, but for people from other cultures, two movements stand out: for South America, the Freirean movement, inspired by Paulo Freire and others; in India and Africa (and Canada and the UK), the participatory movement and especially Participatory Evaluation—something we could learn a lot from!
I just played with Parable of the Polygons recently http://ncase.me/polygons/ and I think it illustrates a simple general strategy for building more diversity which may underlie many of the strategies in this article. The simple strategy is to have a preference against high levels of sameness (homogeneity), in addition to a preference for more diversity. I think it is important not to be okay with a demographically homogenous EA movement, manifested in more general strategies, e.g. ‘I will try to notice, feel bad/disapproving about, and do something about an EA meeting that is over 80% white, male, tech, etc.’, along with the many great general and specific strategies in this article.
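To make that dynamic concrete, here is a minimal Schelling-style simulation sketch in Python (my own illustration, not the demo’s actual code; the MIN_SAME and MAX_SAME thresholds are assumed parameters). It compares the classic rule, where agents only avoid having too few similar neighbors, with the Polygons variant that adds a small preference against too much sameness:

```python
import random

SIZE = 20         # 20x20 toroidal grid
EMPTY_FRAC = 0.2  # fraction of cells left empty so agents can move
MIN_SAME = 0.33   # unhappy if under a third of neighbors are like me
MAX_SAME = 0.90   # anti-sameness rule: also unhappy if over 90% are

def make_grid():
    return [[None if random.random() < EMPTY_FRAC else random.choice("AB")
             for _ in range(SIZE)] for _ in range(SIZE)]

def like_fraction(grid, x, y):
    # Fraction of this agent's occupied neighbors that share its type.
    me = grid[y][x]
    neighbors = [grid[(y + dy) % SIZE][(x + dx) % SIZE]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dx, dy) != (0, 0)]
    occupied = [n for n in neighbors if n is not None]
    return 1.0 if not occupied else sum(n == me for n in occupied) / len(occupied)

def step(grid, cap_sameness):
    # Unhappy agents relocate to a random empty cell.
    def unhappy(x, y):
        f = like_fraction(grid, x, y)
        return f < MIN_SAME or (cap_sameness and f > MAX_SAME)
    movers = [(x, y) for y in range(SIZE) for x in range(SIZE)
              if grid[y][x] is not None and unhappy(x, y)]
    empties = [(x, y) for y in range(SIZE) for x in range(SIZE)
               if grid[y][x] is None]
    random.shuffle(movers)
    for x, y in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ey][ex], grid[y][x] = grid[y][x], None
        empties.append((x, y))

def segregation(grid):
    # Average fraction of like neighbors across all agents.
    fracs = [like_fraction(grid, x, y) for y in range(SIZE)
             for x in range(SIZE) if grid[y][x] is not None]
    return sum(fracs) / len(fracs)

for cap in (False, True):
    random.seed(0)
    grid = make_grid()
    for _ in range(60):
        step(grid, cap_sameness=cap)
    print("anti-sameness rule", "on: " if cap else "off:",
          round(segregation(grid), 2))
```

This is only a toy, but the qualitative result should match the demo: with only the “not too different” rule, mild individual preferences compound into heavy segregation, while adding the “not too same” preference keeps the grid noticeably more mixed. That is the strategy described above, applied to meetings and groups rather than grid cells.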
Thanks for the article Kelly.
Also, parable of the polygons is fun to play around with, aside from its instructive benefits. :)
This will only work if you get the 80% to have a preference for more diversity first (as explained in the Parable of the Polygons), otherwise it will be ineffective even if you are pushing for more diversity from the top.
Thanks so much for this thoughtful and well-researched write-up, Kelly. The changes you recommend seem extremely promising and it’s very helpful to have all of these recommendations in one place.
I think that there are some additional reasons that go beyond those stated in this post that increase the value of making EA a more diverse and inclusive community. First, if the EA movement genuinely aspires to cause-neutrality, then we should care about benefits that accrue to others regardless of who these other people are and independent of what the causal route to these benefits is. As such, we should also care about the benefits that becoming a diverse and inclusive movement would have for women, people of color, and disabled and trans people in and outside of the community. If, as you argue and as is antecedently quite plausible, the EA movement is essentially engaging in the very same discriminatory practices in our movement-building as people tend to engage in everywhere else, then as a result we are artificially boosting the prestige, visibility, and status perception of white, cis, straight, able-bodied men, we are creating a community that is less sensitive to stereotype threat and to micro- and macroaggressions than it otherwise could be, and we are giving legitimacy to stereotypes and to business and nonprofit models which arbitrarily exclude many people. All of this causes harm or a reduction in the status or power of women, people of color, and disabled and trans people and furthers discrimination against them—which is a real and significant cost to organizing in this way.
Second, even if one thinks that this effect size will be very small compared to the good that the EA movement is doing (which is less obvious than EAs sometimes assume without argument), 1) these are still pure benefits, which strengthens the case for improving the EA community in the respects you argue, and 2) if the EA community fails to become more diverse and inclusive we’ll suffer reputation costs in the media, in academia, among progressives, and in the nonprofit world for being a community that is exclusionary. This would come at a significant cost to our potential to build a large and sustainable movement and to create strong, elite networks and ties. And at this point, this worry is very far from a mere hypothetical:
https://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai
https://www.lrb.co.uk/v37/n18/amia-srinivasan/stop-the-robot-apocalypse
https://www.youtube.com/watch?v=ICKxhc5VOr8&feature=youtu.be&t=33m24s
I think we have our work cut out for us if we want to build a better reputation with the world outside of our (presently rather small) community, and that the courses of action you recommend will go quite a long way to getting us there.
I just want to quickly call attention to one point: “these are still pure benefits” seems like a mistaken way of thinking about this—or perhaps I’m just misinterpreting you. To me “pure benefits” suggests something costless, or where the costs are so trivial they should be discarded in analysis, and I think that really underestimates the labor that goes into building inclusive communities. Researching and compiling these recommendations took work, and implementing them will take a lot of work. Mentoring people can have wonderful returns, but it requires significant commitments of time, energy, and often other resources. Writing up community standards about conduct tends to be emotionally exhausting work which demands weeks of time and effort from productive and deeply involved community members who are necessarily sidelining other EA projects in order to do it.
None of this is to say ‘it isn’t worth it’. I expect that some of these things have great returns to the health, epistemic standards, and resiliency of the community, as well as, like you mentioned, good returns for the reputation of EA (though from my experience in social justice communities, there will be articles criticizing any movement for failures of intersectionality, and the presence of those articles isn’t very strong evidence that a movement is doing something unusually wrong). My goal is not to say ‘this is too much work’ but simply ‘this is work’ - because if we don’t acknowledge that it requires work, then work probably will not get done (or will not be acknowledged and appreciated).
Once we acknowledge that these are suggestions which require varying amounts of time, energy and access to resources, and that they impose varying degrees of mental load, then we can start figuring out which ones are good priorities for people with limited amounts of all of the above. I’ve seen a lot of social justice communities suffer because they’re unable to do this kind of prioritization and accordingly impose excessively high costs on members and lose good people who have limited resources.
So I think it’s a bad idea to think in terms of ‘pure benefit’. Here, like everywhere else, if we want to do the most good we need to keep in mind that not all actions are equally good or equally cheap so we can prioritize the effective and cheap ones.
I’m also curious why you think the magnitude of the current EA movement’s contributions to harmful societal structures in the United States might outweigh the magnitude of the effects EA has on nonhumans and on the poorest humans. To be clear about where I’m coming from, I think the most important thing the EA community can do is be a community that fosters fast progress on the most important things in the world. Obviously, this will include being a community that takes contributions seriously regardless of their origins and elicits contributions from everyone with good ideas, without making any of them feel excluded because of their background. But that makes diversity an instrumental goal, a thing that will make us better at figuring out how to improve the world and acting on the evidence. From your phrasing, I think you might believe that harmful societal structures in the western world are one of the things we can most effectively fix? Have you expanded on that anywhere, or is there anyone else who has argued for that who you can point me to?
While I thoroughly appreciate your thoughts here and I’m glad you voiced them, I think you started on a miscommunication:
I don’t think the fact that there are costs to this, as anything, is controversial (though I know its cost-effectiveness is), and it sounds to me like Tyler just meant “intrinsic benefits,” in addition to the instrumental benefits to EA community-building. If he thought improving diversity and inclusion in the community had no cost, I would think he’d say its case is irrefutable, not that these benefits merely “strengthen” its case.
Hi KelseyPiper, thanks so much for a thoughtful reply. I really agree with most of this—I was talking in terms of these benefits as “pure” benefits because I assumed the many costs you rightly point out up front. That is, assuming that we read Kelly’s piece and we come away with a sense of the costs and benefits that promoting diversity and inclusion in the Effective Altruism movement will have, these benefits I’ve pointed out above are “pure” because they come along for free with that labor involved in making the EA community more inclusive, and don’t require additional effort. But I understand how that could be misleading, and so I take all of your criticism on board. I also agree that this will involve priority-setting—even if we think that all of these suggestions are important and some people should be doing all of them to some extent (and especially if not), there are some that we ought to spend more time on than others as a community.
I also agree that the EA community should focus on identifying and working on the very most important things. Although I might disagree slightly with how you’ve characterized that. I don’t think that we should be a community doing work that fosters “fast progress on the most important things,” because I think that we should be doing whatever does the most good in the long run, all-things-considered, and fostering “fast progress” on the most important things does not necessarily correlate with doing the most good in the long run, all-things-considered—unless we define “fosters fast progress” in a way that makes this trivial. But if, for example, we could perform one of two different interventions, one which added an additional +5 well-being to all of the global poor, on average, over twenty years, for one generation, and one which added an additional +5 well-being to all of the global poor, on average, over one hundred years, for all generations, we should choose the latter intervention, even though the former intervention is in a sense fostering faster progress. I make this point not to be pedantic, but because I think some EAs sometimes forget that what we (or many of us) are trying to do is to produce the most benefits and avert the most harm all-things-considered, and not simply make a lot of progress on some very important projects very quickly, and I think that this is quite relevant to this conversation.
To your question as to why “the magnitude of the current EA movement’s contributions to harmful societal structures in the United States might outweigh the magnitude of the effects EA has on nonhumans and on the poorest humans,” I unfortunately haven’t written something on this and perhaps I should. But I can say a few things. I should first say that I certainly don’t think it’s obvious that the EA movement’s contributions to such harmful structures clearly will outweigh the magnitude of the effects we have on nonhumans and on the poorest humans. I only claimed that it was non-obvious that the effect size was “very small” compared to the positive effects we have. It’s something more EAs should treat as non-negligible more often than they do.
Still, here are some of the basic reasons why I think that the EA movement’s contributions to harmful social structures could well be of sufficient magnitude that we should keep constant accounting of them in our efforts to do good in the world, apart from reputation costs and instrumental epistemic benefits of inclusion and diversity work.
First, the fundamental structure of society and its social, legal, and political norms profoundly shape the kinds and quality of life of all beings, as well as profoundly shaping cultural and moral mores, and so ensuring that the fundamental structure of society and these norms are good ones is crucial to ensuring that the long-run future is good, and shaping these structures for the better may make the trajectory of the future far better than the counterfactual where we shape these structures for the worse (for reasons of legal precedent, memetics, psychological and value anchoring, and more).
Second, norms against harming others are very sticky—much stickier than norms favoring helping others except in certain particular cases (e.g. within one’s own family). They are psychologically sticky, whether for innate biological reasons which fix this, or for entirely cultural reasons. Which of these is true makes a difference to how much staying power this stickiness has. But whichever is true, ensuring that we set good norms in place around not causing harm to others and ensuring that these norms are stringently upheld and not violated so that we internalize them as commonsense norms seems like a good way to shape how the future goes. They are also easier to enforce through sanction, blame, and punishment, whereas norms of aid (especially effective aid) are more difficult to enforce. And our human legal and political history suggests that they are much easier to codify into law. So for all these reasons, ensuring that we have good norms in these areas and not violating them looks like a very important intervention for shaping the social and legal institutions of future societies.
Third, there are reasons to think that our moral and political attitudes towards others are psychologically intertwined in complex ways. How we treat and think about some groups, and the norms we have around harming and helping them, seems to have an impact on how we treat and think about other groups. This seems especially important if we are interested in expanding our human moral circle to include nonhuman animals and silicon-based sentient life. If our negative attitudes, norms, laws, and practices around other humans have negative downstream effects on our attitudes, norms, laws, and practices around other animals and other, inorganic sentient beings, then the benefits of prioritizing moral development and averting harmful social structures which favor some sentient beings over others may be very important. If AI value alignment is decided as a result of a political arms race, then it seems that having a broader moral circle may significantly shape the impact of intelligent and superintelligent AI for better or worse. (Here I’m out of my depth, and my impression is that this is a matter of significant disagreement, so I certainly won’t come down hard on this.)
The main point is that the downstream effects of our norms, attitudes, laws, and practices around humans, and who our society decides is worthy of full moral consideration, may have significant downstream effects in complicated and to some extent unpredictable ways.
The more skeptical we are about how much we know about the future, the greater our uncertainty should be about these effects. I think it’s reasonable to be concerned that this may be too speculative or too optimistic about the downstream consequences of our norm-shaping on the far future, but we should be careful to remember that there are also skeptical considerations cutting in the opposite direction—measurability bias may lead us to exclude less measurable, long-term effects in favor of more measurable, short-term effects of our actions irrationally.
I am not arguing that actively averting oppressive social structures and hierarchies of dominance should be a main cause area for EAs (although that could be an upshot of this conversation, too, depending on the probabilities we assign to the hypotheses delineated above). But given the psychological, social, and legal stickiness of norms against harming, failing to make EA a more diverse and inclusive community will raise the probability of EAs harming marginalized communities and failing to create and uphold norms around not harming them. And the more influential the EA community is as a community, the more this holds true. So it seems to me that there’s a plausible case to be made that entrenching strong norms against treating marginalized communities inequitably within the EA community is an effective cause area that we should spend some of our time on, even if we should spend the majority of our time advocating for farmed and wild animals and the global poor.
Thank you so much for writing this.
Aside from the direct question of cause prioritization which has already been mentioned, I think it’s bad to be explicitly self-serving. Even if the concept would technically work out in the grand calculus, it’s better for social-moral reasons to not treat ourselves as ultimate ends. It runs counter to the idea of an altruist movement.
The people who get bothered along these lines to such a degree—as in, they think negatively of EA for being “exclusionary” just because we don’t do enough catering and decide to condemn it—are not a substantial proportion of media, academia, or the broad liberal political sphere. They are a small group of people who care more about tribal politics than they do about ethical work, and they won’t turn around and cooperate just because you want to get along with them. In the long run, it’s bad to fall victim to these kinds of heckler’s vetoes. (The phrase “negotiating with terrorists” comes to mind.)
“Even if one thinks that this effect size will be very small compared to the good that the EA movement is doing”—I would like to hear why you believe that the effects that you mention in your first paragraph might be comparable to the direct good that we do. I mean, I would be rather surprised if this was the case, but I haven’t heard your argument.
Many of these are good arguments, and I really appreciate the honesty and detail Kelly has put forth here. But zoinks, some are also quite controversial, and not accepted by many scholars in academia.
Here are some sharp thinkers who argue against the notion that diversity and inclusion can be enforced without severe repercussions:
Prof. Jonathan Haidt’s presentation on Truth and Social Justice.
Prof. Jordan Peterson’s presentation on equity and authoritarianism.
Prof. and EA Geoff Miller’s argument about the neurodiversity case for free speech.
Prof. Steven Pinker, who endorses MacAskill’s book, and EA, and identifies as utilitarian, said these words in his book on the decline of violence:
“Politically correct sensibilities may bridle at the suggestion that a group of people, like a variety of fruit, may have features in common, but if they didn’t, there would be no cultural diversity to celebrate and no ethnic qualities to be proud of. Groups of people cohere because they really do share traits, albeit statistically. So a mind that generalizes about people from their category membership is not ipso facto defective. African Americans today really are more likely to be on welfare than whites, Jews really do have higher average incomes than WASPs, and business students really are more politically conservative than students in the arts — on average.
The problem with categorization is that it often goes beyond the statistics. For one thing, when people are pressured, distracted, or in an emotional state, they forget that a category is an approximation and act as if a stereotype applies to every last man, woman, and child. For another, people tend to moralize their categories, assigning praiseworthy traits to their allies and condemnable ones to their enemies. During World War II, for example, Americans thought that Russians had more positive traits than Germans; during the Cold War they thought it was the other way around.”
EA supporter Peter Thiel’s comments on diversity from twenty years ago.
Professor Thomas Sowell gives his perspective on social justice. Sowell’s book Vision of the Anointed has set ground for much of the discussion that came later.
There is the Heterodox EA Facebook group, inspired by Haidt’s Heterodox Academy.
Thanks so much for taking the time to write this. As a man, the crux of my feeling disaffected from EA has been this part: “● Take up that humility more generally. Don’t judge that you’re right and another party is wrong before ensuring you know their reasoning — ask someone why they hold the position they do, maybe they’ve thought of something you haven’t just as you may be assuming you’ve thought of things they haven’t.” As a rule, I have found that EA people believe that they are the world leading expert on literally every single topic. A fellow, for example, said he was starting a blog and wanted submissions on certain topics which I have presented academic papers related to at international conferences. I offered to provide articles for his blog, free. He responded that he would have to see my previous work so he could review it. He had a Bachelors in computer programming and wanted to review my academic work in evolutionary psychology and political science that had been presented at leading international conferences. Because he is the leading expert in every single thing. Just this week I suggested that EA should focus more on threats to bees and other insect pollinators. Another fellow responded that all the claimed problems are false and the issue that does exist is easily solved. Amazing that he knows more than scores of professional entomologists publishing in peer reviewed journals, despite not being an entomologist or in science at all. But again, he is clearly the world leading expert in literally every topic.
At the same time, people like myself, who have put serious effort and several decades into developing our knowledge of certain topics, and who have lengthy records of achieving altruistic results dating from long before your group existed, find that our posts are not approved by the "moderators" of the main Facebook group, that we are not invited to conferences, and that our views are not respected as a rule.
Whatever data and techniques you have, as currently constituted, EA is more counter-effective than effective and on the road to irrelevance. However intelligent you are, being sure that you literally know everything makes you one of the dumbest people alive.
The reason we don't blindly follow credentials is not that we think we're better than everyone else. It's that we are often the first to bring a rigorous, interdisciplinary framework to new kinds of problems that haven't been approached that way before. With those goals and ingredients, new people can make progress just as well as traditional experts can.
I think thoughtful, rationality-focused people (not just EAs, but even, say, young software engineers) can often outperform the average "expert," where expertise is measured by traditional credentials like a PhD. Many biases pervade academia and other fields (e.g. publication bias, status quo bias, publish-or-perish incentives), and thoughtful outsiders have often done far more than traditional experts to understand and overcome those biases. They also get the benefit of entering a field with fewer preconceptions and personal investments, allowing them to synthesize the literature in a less biased way.
I don't have many examples on hand (and would really appreciate it if someone else could provide some), but I feel there's a solid track record of thoughtful, rationality-focused people disagreeing strongly with traditional experts and being vindicated. Only two come to mind right now:
One is Eliezer Yudkowsky, a self-educated blogger who advocated for a focus on safety that most traditional AI experts thought was crazy; the AI community has since shifted heavily towards Yudkowsky's position.
The other is the superforecasters studied by Phil Tetlock, who did very well at predicting future events (e.g. whether there would be a civil war in a certain country) while traditional experts did little better than chance.
This smells a lot like a Social Justice Warrior takeover of effective altruism. The idea of restricting free speech is particularly worrying. I would write a full rebuttal, but it might not be worth my time or that of others, as the movement might already be unsalvageable. (Does anyone agree or disagree with that?)
EDIT: Replying to XCCF below: I don’t think there’s much in this post that doesn’t qualify as generic SJW ideology and talking points.
EDIT: Regarding the noncentral fallacy: I think this is a pretty central example of an SJW takeover from a pretty central SJW, but I’m open to new information.
I found this comment frustrating because I see it making the mistake described here: "rounding to the nearest outgroup" instead of trying to understand what Kelly in particular is trying to communicate.
Anyway, I wrote a long reply here where I took a first stab at differentiating between "SJWs" and "diversity advocates I can get behind."
I am also worried about something similar: the social justice community has certain epistemic problems, and I do not want to see EA make the same mistakes. So I'd like to encourage you to keep commenting on this issue, but in a less combative way, as you might then find more success. In particular, I would note that several people here have critiqued parts of the argument and been upvoted.
So it is “social justice warrior” ideology. So what? Maybe some kinds of social justice warrior ideology are good.
See: http://lesswrong.com/lw/e95/the_noncentral_fallacy_the_worst_argument_in_the/