surely there’s way more that can be done to fix this other than “distort the bar” to let the presumably less good people in?
Reading this comment makes me realize that perhaps a significant fraction of the awkwardness and not-facing-the-issue that Alice was met with was due to people in the room realizing, upon hearing Alice’s thoughts, that it was too late to promote diversity, and that this made them feel icky. And that all that could be done was to do better next time around.
(It probably wasn’t clear in the post, but the context was that all the applications were in, and they’d all been evaluated and ranked already. At that point, it does seem that there’s not much—if anything—that can be done other than distorting the bar?)
For what it’s worth, I believe Alice and co. are working on things for their next internship round, like targeted outreach to underrepresented groups at the bottom of the pipeline (to help more of these people get into x-risk reduction and/or EA, e.g., by participating in seminar programmes), and especially encouraging these folks to apply to internships/fellowships (to help more of these people move up the EA-aligned research pipeline).
I think this is the core problem with regard to diversity that we should work to address. Why are your best applicants male and white? Surely this isn’t a fact about x-risk as a field—that there’s something about being male and white that makes you better at addressing x-risk.
I agree that this is a problem to be worked on. I also notice that this discussion might be prone to talking past one another because we’re setting different targets. (For instance, I’m not sure whether I fully or only partially agree with you.)
As a toy example to illustrate my different targets point, let’s consider just technical AI safety. Now, I don’t know the exact numbers, but within this toy example let’s say that, of the undergraduate population as a whole that studies relevant subjects (like computer science), 80% are male and 65% are white. To me, this would imply that trying to go much above 20% female and 35% non-white working in technical AI safety is going to be difficult, and that, just by the nature of the statistics at play, there’ll be increasingly diminishing returns to the resources (e.g., effort, time) put towards achieving >20% and >35%. Getting to 50% and 50%, for instance, would be extremely costly, I think, and so 50% would not be an appropriate target.
Therefore, I’m saying—as an independent impression—that it is a fact about technical AI safety as a field that we should expect most of the best researchers to be male and white (around 80% and 65%, respectively, within my example). There’s the separate problem of promoting diversity within the relevant populations (e.g., CS undergrads) that AI safety is drawing from, but I don’t think that problem falls within AI safety field-builders’ purview.
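To make the “nature of the statistics” point concrete, here’s a minimal simulation sketch, assuming (as a toy assumption, not a claim) that ability is identically distributed across groups. The pool size, slot count, and the simulate helper are hypothetical names made up for this illustration:

```python
import random

random.seed(0)

POOL = 10_000   # hypothetical applicant pool size
P_MALE = 0.80   # toy pool composition from the example above
K = 100         # hypothetical number of internship slots

def simulate(trials: int = 200) -> float:
    """Average male share of the top-K when ability is i.i.d. across groups."""
    shares = []
    for _ in range(trials):
        # (ability, is_male) pairs: the same ability distribution for everyone
        applicants = [(random.gauss(0, 1), random.random() < P_MALE)
                      for _ in range(POOL)]
        top_k = sorted(applicants, reverse=True)[:K]
        shares.append(sum(is_male for _, is_male in top_k) / K)
    return sum(shares) / trials

print(f"Male share of top {K}: {simulate():.2f}")  # ~0.80, mirroring the pool
```

Under these toy assumptions, the top of the ranking mirrors the pool (~80% male), and hitting 50/50 in the top 100 would mean taking the top 50 of ~2,000 women (roughly the 97.5th percentile) alongside the top 50 of ~8,000 men (roughly the 99.4th percentile), a gap that widens the further the target departs from the pool rate.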
As a toy example to illustrate my different targets point, let’s consider just technical AI safety. [...] Therefore, I’m saying—as an independent impression—that it is a fact about technical AI safety as a field that we should expect most of the best researchers to be male and white (around 80% and 65%, respectively, within my example). There’s the separate problem of promoting diversity within the relevant populations (e.g., CS undergrads) that AI safety is drawing from, but I don’t think that problem falls within AI safety field-builders’ purview.
Here are a few things I’d say, which maybe we agree or disagree on:
1.) There are diminishing marginal returns to hitting more ambitious targets, and difficult trade-offs involved in doing so.
2.) The correct target shouldn’t be perfect representation of the global population. (There’s no internal consensus at RP on what our target should be, but we’ve been thinking of trying to match STEM PhDs, or the stats at RAND or Brookings.)
3.) There is a large “pipeline problem” (e.g., AI safety recruits from fields that have their own diversity issues and thus inherits these problems).
4.) There is more than just the “pipeline problem”: e.g., there are areas where AI safety and adjacent fields are less diverse than their pipelines, and this is in part due to systemic issues that are worth fixing (see the sketch after this list).
5.) Determining there is a “pipeline problem” does not mean our work is over. It is still valuable to do some work to fix the pipeline, or to find other ways to be better than the pipeline. Thus some (but not all) of the pipeline problem should still fall within AI safety field-builders’ purview.
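To make the distinction between points 3 and 4 concrete, here’s a minimal sketch; the function name and all numbers are hypothetical, for illustration only, and are not RP’s actual figures. The idea is to benchmark a field against its own pipeline rather than against the global population:

```python
def representation_ratio(field_share: float, pipeline_share: float) -> float:
    """Ratio < 1.0 means the field is less diverse than its own pipeline,
    i.e., there is a problem beyond the pipeline itself (point 4)."""
    return field_share / pipeline_share

# Hypothetical numbers: 20% of the relevant pipeline is female,
# but only 12% of the field is.
ratio = representation_ratio(field_share=0.12, pipeline_share=0.20)
print(f"{ratio:.2f}")  # 0.60 -> the gap isn't explained by the pipeline alone
```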
Reading this comment makes me realize that perhaps a significant fraction of the awkwardness and not-facing-the-issue that Alice was met with was due to people in the room realizing, upon hearing Alice’s thoughts, that it was too late to promote diversity, and that this made them feel icky. And that all that could be done was to do better next time around.
(It probably wasn’t clear in the post, but the context was that all the applications were in, and they’d all been evaluated and ranked already. At that point, it does seem that there’s not much—if anything—that can be done other than distorting the bar?)
Sorry, yeah, that wasn’t very clear to me, and I thought you were basically arguing (1) that diversity work isn’t worth it and (2) that diversity advocates aren’t sincere (but are instead engaging in applause lights). I think I then felt a little bit personally attacked by my imagined version of what you were saying—which I see now is different from what you were actually saying. This contributed to my response.
I can say that I’ve definitely been in this position before: facing the applications and feeling the dread that nothing can be done. I totally empathize with all your feelings here; I’ve felt every single one of them myself. By the time the applications are in, there’s really not much you can do, and it is very uncomfortable.
To the extent you think diversity of backgrounds within AI safety can be useful in contributing to alignment for a more representative or wider range of humanity, I think you can definitely make a case that it does fall within field-builders’ purview to promote diversity, no? Whose job are you suggesting it should be otherwise?