“EA institutions should select for diversity with respect to hiring”
Paraphrased from https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique
I am hesitant to agree. Proponents of this position often justify it by emphasizing the value of different outlooks in decision making, but the actual implemented policies select for diversity in a narrow subset of demographic characteristics, which is a different kind of diversity.
I’m sceptical of this proposal, but to steelman it against your criticism, I think we would want to say that the focus should be diversity of a) non-malleable traits that b) correlate with different life experiences—a) because that ensures genuine diversity rather than (eg) quick opinion shifts to game the system, and b) because it gives you a better protection against unknown unknowns. There are experiences a cis white guy is just far more/less likely to have had than a gay black woman, and so when you hire the latter (into a group of otherwise cisish whiteish mannish people), you get a bunch of intangible benefits which, by their nature, the existing group are incapable of recognising.
The traits typically highlighted by proponents of diversity tend to score pretty well on both counts—ethnicity, gender, and sexuality are very hard to change and (perhaps in decreasing order these days) tend to go hand in hand with different life experiences. By comparison, say, a political viewpoint is fairly easy to change, and a neurodivergent person probably doesn’t have that different a life experience than a regular nerd (assuming they’ve dealt with their divergence well enough to be a remotely plausible candidate for the job).
Thanks for the reply! I had not considered how easily game-able some selection criteria based on worldviews would be. Given that on some issues the worldview of EA orgs is fairly uniform, and the competition for those roles, it is very conceivable that some people would game the system!
I should however note that the correlation between opinions on different matters should a priori be stronger than the correlation between those opinions and, e.g., gender. I.e. I would wager that the median religious EA differs more from the median EA in their worldview than the median woman differs from the median EA.
Your point about unknown unknowns is valid. However, it must be balanced against known unknowns, i.e. cases where an organization knows that its personnel is imbalanced in some characteristic that is known or likely to influence how people perform their job. For example, it is fairly standard to hire a mix of mathematicians, physicists, and computer scientists for data science roles, since these majors are known to emphasize slightly different skills.
My vague sense, though, is that for most roles the backgrounds that influence performance are fairly well known, because the domain of the work is relatively fixed.
Exceptions are jobs where you really want decisions to be anticorrelated and where the domain is constantly changing, like maybe an analyst at a venture fund. I am not at all certain, however, and if people disagree I would very much like links to papers or blog posts detailing such examples.
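The intuition behind wanting anticorrelated decisions can be sketched numerically. The toy simulation below (made-up numbers, not data about any real fund or org) shows that averaging two forecasters whose errors are negatively correlated reduces the variance of the combined judgment much more than averaging independent or positively correlated ones; in theory the variance of the mean of two unit-variance errors with correlation rho is (1 + rho) / 2.

```python
import random
import statistics

random.seed(1)

def avg_error_variance(rho, n=100_000):
    """Variance of the mean of two unit-variance forecast errors
    with correlation rho (errors generated via Cholesky-style mixing)."""
    averaged = []
    for _ in range(n):
        z1 = random.gauss(0, 1)
        z2 = random.gauss(0, 1)
        e1 = z1
        e2 = rho * z1 + (1 - rho**2) ** 0.5 * z2  # error correlated with e1
        averaged.append((e1 + e2) / 2)
    return statistics.pvariance(averaged)

# Theory predicts (1 + rho) / 2: roughly 0.9, 0.5, and 0.1 below.
for rho in (0.8, 0.0, -0.8):
    print(f"rho={rho:+.1f}: variance of averaged error ~ {avg_error_variance(rho):.3f}")
```

So a pair of analysts whose mistakes tend to offset each other is, on this simple model, worth more than two clones, which is the case for diversity in exactly those judgment-heavy, shifting-domain roles.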
If you want different life experiences, look first for people who had a different career path (or are parents), come from a foreign country with a completely different culture, or are 40+ years old (rare in EA).
I think these things cause much more relevant differences in life experience compared to things like getting genital surgery, experiencing microaggressions, getting called a racial slur, etc.
My sense is that EA orgs should look at some appropriate baseline for different communities and then aim to be above it through blind hiring, advertising outside the community, etc.
It’s hard to be above baseline on multiple dimensions at once, and eventually it becomes impossible.
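A toy Monte Carlo sketch of this point (all base rates below are made-up assumptions, not real community statistics, and traits are treated as independent for simplicity): the chance that a randomly drawn team lands strictly above baseline on every one of k dimensions simultaneously shrinks rapidly as k grows, so hitting all baselines requires increasingly aggressive selection rather than neutral hiring.

```python
import random

random.seed(0)

# Hypothetical base rates of five traits in the wider community.
BASELINES = [0.30, 0.20, 0.15, 0.10, 0.05]

def above_all_baselines(team_size, k):
    """Sample one random team; True if its share of each of the first
    k traits strictly exceeds that trait's community base rate."""
    counts = [0] * k
    for _ in range(team_size):
        for i in range(k):
            if random.random() < BASELINES[i]:  # traits drawn independently
                counts[i] += 1
    return all(counts[i] > BASELINES[i] * team_size for i in range(k))

def success_rate(k, team_size=20, trials=20_000):
    """Fraction of random teams that beat the baseline on all k dimensions."""
    return sum(above_all_baselines(team_size, k) for _ in range(trials)) / trials

for k in (1, 3, 5):
    print(f"{k} dimension(s): P(above every baseline) ~ {success_rate(k):.3f}")
```

Each extra dimension roughly halves the chance (each is beaten by luck a bit under half the time), so a small org trying to exceed baseline on many dimensions at once is fighting steep combinatorics.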
Agreed with the specific reforms. Blind hiring and advertising broadly seem wise.
Further question: if EA has diversity hires, should this be explicitly acknowledged? And what should the demographic targets be?