Thanks for the reply! I had not considered how easily game-able some selection criteria based on worldviews would be. Given that on some issues the worldview of EA orgs is fairly uniform, and the competition for those roles, it is very conceivable that some people would game the system!
I should note, however, that a priori the correlation between opinions on different matters should be stronger than the correlation between those opinions and, e.g., gender. That is, I would wager that the median religious EA differs more in worldview from the median EA than the median woman does.
Your point about unknown unknowns is valid. However, it must be balanced against known unknowns, i.e. cases where an organization knows its personnel is imbalanced in some characteristic that is known or likely to influence how people perform their jobs. For example, it is fairly standard to hire a mix of mathematicians, physicists and computer scientists for data science roles, since these majors are known to emphasize slightly different skills.
My vague sense is that for most roles, the backgrounds that influence performance are fairly well known, because the domain of the work is relatively fixed.
Exceptions are jobs where you really want decisions to be anticorrelated and where the domain is constantly changing, such as an analyst at a venture fund.
I am not at all certain, however, and if people disagree I would very much like links to papers or blog posts detailing such examples.
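To make the anticorrelation point concrete, here is a minimal simulation (my own sketch, not from any of the posts discussed; it assumes each judge's error is Gaussian with a shared correlated component). A committee averaging judgments with correlated errors retains most of that error, while one with uncorrelated errors averages much of it away:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_judges, sigma = 10_000, 5, 1.0

def committee_mse(rho):
    """Mean squared error of a committee's averaged judgment when each
    judge's error has pairwise correlation rho."""
    # Each judge's error = shared (correlated) component + idiosyncratic part.
    shared = rng.normal(0, sigma, size=(n_trials, 1))
    own = rng.normal(0, sigma, size=(n_trials, n_judges))
    errors = np.sqrt(rho) * shared + np.sqrt(1 - rho) * own
    # The true value is 0, so the committee's error is the mean of the
    # individual errors; averaging only helps on the idiosyncratic part.
    return np.mean(errors.mean(axis=1) ** 2)

# Highly correlated judges barely improve on a single judge; independent
# judges cut the error roughly by a factor of n_judges.
assert committee_mse(0.9) > committee_mse(0.0)
```

Analytically, the variance of the committee mean is rho·sigma² + (1−rho)·sigma²/k for k judges, which is why hiring for decorrelated worldviews matters most when decisions are aggregated.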