Well, let's have a look at some data that would include Ajeya. If I go to the OpenPhil website and look at the people on the ‘our team’ page associated with either AI or Biosecurity, then out of the 11 people I counted, 1 is a woman (this is based on a quick count, so may be wrong). If I count the EA Community Growth (Longtermism) people then this ratio is slightly better, but my impression is that this team's work is slightly further from research into XRisk, although I may be wrong.
If I look at Rethink Priorities, their AI Governance team has 3 women out of 13 people, whilst their existential security team has 1 woman out of 5.
For FHI, of the 31 people listed as part of their team on the website, 7 are women. If I only include research staff (i.e. remove DPhil students and affiliates), then 2 out of 12 are women.
For CSER, of the 35 current full-time staff (note this includes administrative staff), 12 are women. Of research staff, 5 out of 28 are women. If I include the listed alumni as well (and only include research staff), then 15 out of 44 are women.
So according to these calculations, 9% of OpenPhil, 22% of RP, 16.6% of FHI (research staff), and 17% of CSER (research staff) are women.
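(For transparency, here is a minimal Python sketch of the arithmetic behind those percentages. The raw counts come straight from the figures above; the assumption that the FHI and CSER figures use the research-staff-only counts (2/12 and 5/28) is mine, since those are the denominators that roughly reproduce the quoted percentages, with 5/28 coming out closer to 18% than 17%.)

```python
# Back-of-the-envelope check of the percentages quoted above.
# Counts are taken directly from the comment; which denominator each
# organisation uses (whole listed team vs. research staff only) is an
# assumption on my part.
counts = {
    "OpenPhil (AI + Biosecurity)": (1, 11),                    # 1 woman of 11 counted
    "Rethink Priorities (AI Gov + existential security)": (3 + 1, 13 + 5),
    "FHI (research staff only)": (2, 12),
    "CSER (research staff only)": (5, 28),
}

for org, (women, total) in counts.items():
    print(f"{org}: {women}/{total} = {100 * women / total:.1f}%")
# OpenPhil ~9.1%, RP ~22.2%, FHI ~16.7%, CSER ~17.9%
```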
This obviously doesn't look at seniority (Florian's analysis may actually be better for this), although I think it is pretty indicative that there is a serious problem.
FWIW I think your analysis is more representative than FJehn’s. 10-20% (or maybe very slightly higher) seems more accurate to me than 4%, if (eg) I think about the people I’m likely to have technical discussions with or cite results from. Obviously this is far from parity (and also worse than other technical employers like NASA or Google), but 17% (say) is meaningfully different from 4%.
I'm honestly rather confused by how people can disagree-vote with this. Did I get these stats wrong?
I assume “indicative of a serious problem” is what they’re disagreeing with.
In my personal experience you always get downvotes/disagree votes for even mentioning any problems with gender balance/representation in EA, no matter what your actual point is.