I appreciate the effort, but as someone who has attempted similar analyses in the past, I think it is very hard to extract useful information with this sort of methodology. I think you are mainly picking up essentially random facts about how research is formatted and tagged, rather than the underlying reality.
As such, I think you basically can’t draw much in the way of conclusions from this data. In particular, you definitely cannot infer that the University of Louisville is the fifth most productive existential risk organization. Nor do I think you can infer much about sex; the exclusion of women like Ajeya, whose contributions are definitely more significant than those of many people on the list, is due to flaws in the data, not social dynamics.
“Can’t draw much in the way of conclusions from this data” seems like a really strong claim to me. I would certainly agree that this does not tell you everything there is to know about existential risk research, and it especially does not tell you anything about x-risk research outside traditional academia (like much of Ajeya’s work).
But it is based on the classifications of a lot of people on what they think is part of the field of existential risk studies, and therefore I think it gives a good proxy for what people in the field consider part of their field. Also, this is not meant to be the ultimate list; as stated at the beginning of the post, it is meant to give people an overview of what is going on.
Finally, I think this surely tells you something about the participation of women in the field: 1 out of 25 is really, really unlikely to happen by chance.
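To make the “by chance” claim concrete, here is a minimal back-of-the-envelope sketch. The 50% and 20% base rates are assumed null hypotheses for illustration, not estimates of the actual field composition:

```python
from math import comb

# Probability of seeing at most 1 woman among 25 researchers if each
# slot independently goes to a woman with probability p (assumed null).
def prob_at_most_one(n: int, p: float) -> float:
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(2))

print(f"parity (p=0.5): {prob_at_most_one(25, 0.5):.1e}")  # ~7.7e-07
print(f"20% base rate:  {prob_at_most_one(25, 0.2):.3f}")  # ~0.027
```

Even under an assumed 20% base rate, a list this skewed would only occur about 3% of the time.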
It presumably tells you something about the participation of women in the field, but it’s not clear exactly what. For instance, my honest reaction to this list is that several of the people on it have a habit of churning out lots of papers of mediocre quality – it could be that this trait is more common among men in the field than among women in the field.
This is just another data point suggesting that the existential risk field (like most EA-adjacent communities) has a problem when it comes to gender representation. It fits really well with other evidence we have. See, for example, Gideon’s comment under this post: https://forum.effectivealtruism.org/posts/QA9qefK7CbzBfRczY/the-25-researchers-who-have-published-the-largest-number-of?commentId=vt36xGasCctMecwgi
On the other hand, there seems to be no evidence for your “men just publish more, but worse, papers” hypothesis.
Well, let’s have a look at some data that would include Ajeya. If I go to the OpenPhil website and look at the people on the ‘Our team’ page associated with either AI or biosecurity, then of the 11 people I counted, 1 is a woman (this is based on a quick count, so it may be wrong). If I also count the EA Community Growth (Longtermism) people, the ratio is slightly better, but my impression is that this team’s work is somewhat further from x-risk research, although I may be wrong.
If I look at Rethink Priorities, their AI Governance team has 3 women out of 13 people, while their existential security team has 1 out of 5.
For FHI, 7 of the 31 people listed on their team page are women. If I only include research staff (i.e. exclude DPhil students and affiliates), then 2 of 12 are women.
For CSER, 12 of the 35 current full-time staff are women (note this includes administrative staff). Of research staff, 5 of 28 are women. If I also include the alumni listed (counting only research staff), then 15 of 44 are women.
So according to these calculations, women make up roughly 9% of the relevant OpenPhil staff, 22% at Rethink Priorities, 16.7% of FHI’s research staff, and 17.9% of CSER’s research staff.
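For transparency, a small sketch recomputing those percentages from the raw counts above (the counts themselves are the quick manual tallies from this comment, so treat them as approximate):

```python
# Recompute the percentages from the raw counts quoted above.
# The counts are quick manual tallies, so they may be slightly off.
counts = {
    "OpenPhil (AI + biosecurity)": (1, 11),
    "Rethink Priorities (AI Gov + existential security)": (3 + 1, 13 + 5),
    "FHI (research staff only)": (2, 12),
    "CSER (research staff only)": (5, 28),
}
for org, (women, total) in counts.items():
    print(f"{org}: {women}/{total} = {women / total:.1%}")
# OpenPhil 9.1%, RP 22.2%, FHI 16.7%, CSER 17.9%
```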
This obviously doesn’t look at seniority (Florian’s analysis may actually be better for this), although I think it is pretty indicative that there is a serious problem.
FWIW, I think your analysis is more representative than FJehn’s. 10-20% (or maybe very slightly higher) seems more accurate to me than 4% if, e.g., I think about the people I’m likely to have technical discussions with or cite results from. Obviously this is far from parity (and also worse than other technical employers like NASA or Google), but 17%, say, is meaningfully different from 4%.
I’m honestly rather confused about how people can disagree-vote with this. Did I get these stats wrong?
I assume “indicative of a serious problem” is what they’re disagreeing with.
In my personal experience, you always get downvotes or disagree-votes for even mentioning problems with gender balance or representation in EA, no matter what your actual point is.
I agree with this.
“Number of publications” and “impact per publication” are separate axes, and leaving the latter out gives a poorer picture of the x-risk research landscape.
Yes, especially given that the impact of x-risk research is (very) heavy-tailed.
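As a purely illustrative sketch of why heavy tails matter here (the lognormal parameters below are assumptions, not fitted to real citation data): if per-paper impact is heavy-tailed, a small fraction of papers accounts for most of the total impact, so raw publication counts say little about total impact.

```python
import random

# Illustrative only: draw per-paper "impact" from a heavy-tailed
# lognormal distribution (parameters are assumptions, not estimates
# from real citation data) and see how concentrated total impact is.
random.seed(0)
impacts = sorted((random.lognormvariate(0, 2) for _ in range(1000)), reverse=True)
top_share = sum(impacts[:10]) / sum(impacts)
print(f"Top 1% of papers hold {top_share:.0%} of total impact")
```

With these assumed parameters, the top few papers typically carry a large share of the total, which is why counting publications alone can be misleading.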