As another resource on effective D&I practices, HBR just published a new piece on “Diversity and Inclusion Efforts that Really Work.” It summarizes a detailed report on this topic, which “offers concrete, research-based evidence about strategies that are effective for reducing discrimination and bias and increasing diversity within workplace organizations [and] is intended to provide practical strategies for managers, human resources professionals, and employees who are interested in making their workplaces more inclusive and equitable.”
Very interesting to see this data; thanks so much for collecting it and writing it up! I hope future versions of the EA Survey will adopt some of your questions, to get a broader perspective.
Thanks Ben! That’s an interesting reference point. I don’t think there are any perfect reference points, so it’s helpful to see a variety of them.
By way of comparison, 1.8% of my sample was black (.7%) or Hispanic (1.1%).
I don’t think placing no value on diversity is a PR risk simply because it’s a view held by an ideological minority. Few people, either in the general population or the EA community, think mental health is the top global priority. But I don’t think EA incurs any PR risk from community members who prioritize this cause. And I also believe there are numerous ways EA could add different academic backgrounds, worldviews, etc. that wouldn’t entail any material PR risk.
I want to be very explicit that I don’t think EA should seek to suppress ideas simply because they are an extreme view and/or carry PR risks (which is not to say those risks don’t exist, or that EAs should pretend they don’t exist). That’s one of the reasons why I haven’t been downvoting any comments in this thread even if I strongly disagree with them: I think it’s valuable for people to be able to express a wide range of views without discouragement.
Glad this is something you’re tracking. For reference, here’s the relevant section of the annual review.
To clarify, my comment about EA’s political skew wasn’t meant to suggest Larks doesn’t care about viewpoint diversity. Rather, I was pointing out that the position of not caring about racial diversity is more extreme in a heavily left-leaning community than it would be in a heavily right-leaning community.
Thanks Ben! Great to see 80K making progress on this front! And while I haven’t crunched the numbers, my impression is that 80K’s podcast has also been featuring a significantly more diverse set of guests than when it first started; this also seems like a very positive development.
Given the nature of your work, 80K seems uniquely positioned to influence the makeup of the Longtermist ecosystem as a whole. Do you track the demographic characteristics of your pipeline: people you coach, people who apply for coaching, people who report plan changes due to your work, etc.? If not, is this something you’d ever consider?
Thanks Sky! I’ll be in touch over email.
Agreed—though many of the more successful diversity efforts are really just efforts to make companies nicer and more collaborative places to work (e.g. cross-functional teams, mentoring).
Agreed. This makes those sorts of policies all the more attractive in my opinion, since improving diversity is just one of the benefits.
I’m also a little sceptical of the huge gains the HBR article suggests—do diversity task forces really increase the number of Asian men in management by a third? It suggests looking at Google as an example of “a company that’s made big bets on [diversity] accountability… We should know in a few years if that moves the needle for them”—it didn’t.
I’m also skeptical that particular programs will lead to huge gains. But I don’t think it’s fair to say that Google’s efforts to improve diversity haven’t worked. The article you cited on that was from 2017. Looking at updated numbers from Google’s site, the mix of new hires (which shifts faster than the composition of total employees) does seem to have changed between 2014 (when Google began its initiatives) and 2018 (the most recent data available). These aren’t enormous gains, but new hires do seem to have become notably more diverse. I certainly wouldn’t look at this data and say that Google’s efforts didn’t move the needle.
| Group | 2014 | 2018 | Diff (pp) | % change |
|---|---|---|---|---|
| Women | 30.7% | 33.2% | +2.5 | +8% |
| Asian+ | 37.9% | 43.9% | +6.0 | +16% |
| Black+ | 3.5% | 4.8% | +1.3 | +37% |
| Latinx+ | 5.9% | 6.8% | +0.9 | +15% |
| Native American+ | 0.9% | 1.1% | +0.2 | +22% |
| White+ | 59.3% | 48.5% | −10.8 | −18% |
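For anyone who wants to double-check the arithmetic, here’s a throwaway sketch that recomputes the percentage-point differences and relative changes from the 2014/2018 new-hire shares quoted above:

```python
# Sanity check of the new-hire figures quoted above: percentage-point
# difference and relative (percent) change between 2014 and 2018.
hires = {
    "Women": (30.7, 33.2),
    "Asian+": (37.9, 43.9),
    "Black+": (3.5, 4.8),
    "Latinx+": (5.9, 6.8),
    "Native American+": (0.9, 1.1),
    "White+": (59.3, 48.5),
}

for group, (y2014, y2018) in hires.items():
    diff_pp = y2018 - y2014             # percentage points
    rel_change = 100 * diff_pp / y2014  # percent change vs 2014 baseline
    print(f"{group}: {diff_pp:+.1f} pp, {rel_change:+.0f}%")
```

Note the distinction the script makes explicit: a 1.3 percentage-point rise from a small 3.5% base is a large (+37%) relative change, while the same-sized move from a large base would barely register.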
The HBR study you cite actually says the evidence shows that some types of programs do effectively improve diversity, but many companies employ outdated methods that can be counterproductive.
Despite a few new bells and whistles, courtesy of big data, companies are basically doubling down on the same approaches they’ve used since the 1960s—which often make things worse, not better. Firms have long relied on diversity training to reduce bias on the job, hiring tests and performance ratings to limit it in recruitment and promotions, and grievance systems to give employees a way to challenge managers. Those tools are designed to preempt lawsuits by policing managers’ thoughts and actions. Yet laboratory studies show that this kind of force-feeding can activate bias rather than stamp it out. As social scientists have found, people often rebel against rules to assert their autonomy. Try to coerce me to do X, Y, or Z, and I’ll do the opposite just to prove that I’m my own person.
In analyzing three decades’ worth of data from more than 800 U.S. firms and interviewing hundreds of line managers and executives at length, we’ve seen that companies get better results when they ease up on the control tactics. It’s more effective to engage managers in solving the problem, increase their on-the-job contact with female and minority workers, and promote social accountability—the desire to look fair-minded. That’s why interventions such as targeted college recruitment, mentoring programs, self-managed teams, and task forces have boosted diversity in businesses. Some of the most effective solutions aren’t even designed with diversity in mind.
The rest of the article has some good examples and data on which sort of programs work, and would probably be a good reference for anyone looking to design an effective diversity program.
However, my sense is that, despite EA’s problems with diversity, those problems have been recognized, and the majority view is actually that diversity is important and needs to be improved (see for instance CEA’s stance on diversity).
Also supporting this view, most of the respondents in 80K’s recent anonymous survey on diversity said they valued demographic diversity. The people who didn’t mention this explicitly generally talked about other types of diversity (e.g. epistemic and political) instead. And nobody expressed Larks’ view that they “do not place any value on diversity.” I agree with Hauke that this perspective carries PR risk, and in my opinion seems especially extreme in a community that politically skews ~20:1 left vs. right.
A few points…
First, I’d very much like to see EA and/or Longtermist organizations hire people with “different academic backgrounds, different world views and different ideologies.” But I don’t think that would eliminate the need for improving diversity on other dimensions like race or gender, which can provide a different set of perspectives and experiences (see, for example, “when I find myself to be the only person of my group in the room I want to leave”) than could be captured by, for example, hiring more white males who studied art history.
Second, I’m not advocating for quotas, which I have a lot of concerns about. I’d prefer to look at interventions that could encourage talented minorities to apply. My prior is that there are headwinds that (on the margins) discourage minority applicants. As multiple respondents to 80K’s recent survey on diversity noted, there’s “a snowball effect there, where once you have a sufficiently non-diverse group it’s hard to make it more diverse.” If that effect is real, claims like “we hired a white male because he was the best candidate” become less meaningful since there might have been better minority candidates who didn’t apply in the first place.
Third, some methods of increasing minority applicants are extremely low cost. For example, I saw one recent job posting from a Longtermist organization that didn’t include language one often sees in job descriptions along the lines of “We’re an equal opportunity employer and welcome applications from all backgrounds.” It’s basically costless to include that language, so I doubt any minorities see it and think “I’m going to apply because this organization cares about diversity.” But it’s precisely because this language is costless that omitting it signals an organization doesn’t care about diversity, which discourages minorities from applying (especially if they see that the organization’s existing team is very homogeneous).
How do you think cohorts like the self-identified conservatives in western democracies or the US intelligence community would view ideas coming from that hypothetical think tank? I’m pretty sure there’d be some skepticism, and that that skepticism would make it harder for the think tank to accomplish its goals. (I’m not arguing that they should be skeptical; I’m arguing that they would be skeptical.)
I agree we should expect a Chinese think tank to be largely staffed with Chinese people because of the talent pool it would be drawing from. I’ve provided a variety of possible reference classes for the Longtermist community; do you have views on what the appropriate benchmark should be?
Thanks for sharing your experience! It’s valuable to get the perspective of someone who’s been involved in the Longtermist community for so long, and I’m glad you haven’t felt excluded during that time.
Thanks for sharing the survey data! I’ll update the post with those numbers.
it seems somewhat risky to compare this to numbers “based on the pictures displayed on the relevant team page”, since it seems like this will inevitably under-count mixed race people who appear white.
This is a fair point. For what it’s worth, I classified a handful of people who could very well be white as POC since it looked like they might be mixed race. But these people probably accounted for something like 1% of my sample, far short of the 6.5% mixed race share of EA survey respondents. So it’s plausible that, because of this issue, diversity at Longtermist organizations is pretty close to diversity in the EA community (though that’s not exactly a high bar).
On the other hand, I’d also note Asians are by far the largest category of POC in both my sample and the EA Community, so presumably a large share of the mixed white/non-white population is part white and part Asian. It seems reasonable to assume that ~1/2 of this group would have last names that suggest Asian heritage, but there weren’t many (any?) people in my sample with such names who looked white. This might indicate that my sample had fewer mixed race people than the EA Survey, which would make the issue you’re raising less of a problem.
Interestingly, the EA survey data also has a surprisingly high (at least to me) number of mixed race respondents relative to the number of non-mixed POC: among respondents who aren’t solely white, 33% are mixed race. For comparison, this figure is 15% at Stanford and 13% at Harvard. So I think the measurement issue you’re pointing out is much less of a problem for benchmarks other than the EA community.
Thanks for pointing this out, good to know!
DeepMind was on my original list of organizations to include, but doesn’t have a team page on its website. In an earlier draft I mentioned that I would have otherwise included DeepMind, but one of the people I got feedback from (who I consider very knowledgeable about the Longtermist ecosystem) said they didn’t think it should be counted as a Longtermist organization so I removed that comment. And the same is true for OpenAI, FYI.
I think the discrepancy is related to mixed race people, a cohort I’m including in my POC figures. Since the 2019 survey question allowed for multiple responses, I calculated the percentage of POC by adding all the responses other than white, rather than taking 1 - % of white respondents (which results in the 13% you mention).
Thinking more about this in response to your question, it’d probably be more accurate to adjust my number by dividing by the sum of total responses (107%). That would bring my 21% figure down to 19%, still well above the figure for Longtermist organizations. But I think the best way of looking at this would be to go directly to the survey data and calculate the percentage of respondents who did not describe themselves as entirely white. If anyone with access to the survey data can crunch this number, I’d be happy to edit my post accordingly.
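A minimal sketch of that adjustment, using the 21% and 107% figures above:

```python
# Because the 2019 survey race question allowed multiple responses,
# the category shares sum to more than 100% (107% here). Dividing the
# non-white response share by the total response share converts
# "share of responses" into an approximate "share of respondents",
# so mixed-race people who ticked several boxes aren't double-counted.
poc_responses_pct = 21.0     # sum of all non-white response shares
total_responses_pct = 107.0  # sum of all response shares (incl. white)

adjusted_poc_pct = 100 * poc_responses_pct / total_responses_pct
print(f"Adjusted POC share: {adjusted_poc_pct:.1f}%")
```

This is only an approximation, of course; as noted above, the precise figure requires the respondent-level survey data.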
I’d also add that this concern applies in a domestic context as well. Efforts to influence US policy will require broad coalitions, including the 23 congressional districts that are majority black. The representatives of those districts (among others) may well be skeptical of ideas coming from a community where just 3 of the 459 people in my sample (.7%) are black (as far as I can tell). And if you exclude Morgan Freeman (who is on the Future of Life Institute’s scientific advisory board but isn’t exactly an active member of the Longtermist ecosystem), black representation is under half a percent.
My strong prior (which it sounds like you disagree with), is that we should generally expect funding needs to increase over time. If that’s true, then EA Funds would need to grow by more than enough to offset EA Grants in order to keep pace with needs. More reliance on EA Funds would shift the mix of funding too: for instance, relatively more funding going to established organizations (which EA Grants doesn’t fund) and no natural source of funding for individuals working on Global Poverty (as that fund doesn’t grant to individuals).
I agree it would be helpful for Fund management teams to explicitly make it known if they think there are a lot of strong opportunities going unfunded. Similarly, if Fund managers think they have limited opportunities to make strong grants with additional funds, it would be good to know that too. I’ve been operating on the assumption that the funds all believe they have room for more funding; if that’s not the case, seems like an important thing to share.