As a toy example to illustrate my different targets point, let’s consider just technical AI safety. [...] Therefore, I’m saying—as an independent impression—that it is a fact about technical AI safety as a field that we should expect most of the best researchers to be male and white (around 80% and 65%, respectively, within my example). There’s the separate problem of promoting diversity within the relevant populations (e.g., CS undergrads) that AI safety is drawing from, but I don’t think that problem falls within AI safety field-builders’ purview.
Here are a few things I’d say, which we may agree or disagree on:
1.) There are diminishing marginal returns, and difficult trade-offs involved, in hitting more ambitious targets.
2.) The correct target shouldn’t be perfect representation of the global population. (There’s no internal consensus at RP on what our target should be, but we’ve been thinking of trying to match STEM PhDs, or the stats at RAND, or the stats at Brookings.)
3.) There is a large “pipeline problem” (e.g., AI safety recruits from fields that have their own diversity issues and thus inherits these problems).
4.) There is more than just the “pipeline problem”: e.g., there are areas where AI safety and adjacent fields are less diverse than their pipelines, and this is in part due to systemic issues that are worth fixing.
5.) Determining that there is a “pipeline problem” does not mean our work is over. It is still valuable to do some work to fix the pipeline, or to find other ways to do better than the pipeline. Thus some (but not all) of the pipeline problem should still fall within AI safety field-builders’ purview.