Thanks Michael! This is a great comment. (And I fixed the link, thanks for noting that.)
My anecdotal experience with hiring is that you are right asymptotically, but not practically. E.g. if you want to hire for some skill that only one in 10,000 people has, you get approximately linear returns to growth at the community sizes EA is considering.
And you can get to very low probabilities easily: most jobs are looking for candidates with a combination of a somewhat rare skill, willingness to work in an unusual cause area, willingness to work in a specific geographic location, etc., and multiplying these probabilities together gets small quickly.
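A quick sketch of this arithmetic (the individual probabilities below are illustrative assumptions, not figures from the comment): multiplied filters shrink the candidate pool fast, and for small pools the chance of finding at least one match grows roughly linearly with community size before eventually saturating.

```python
# Candidate filters multiply: the fraction of people passing all of
# them gets tiny quickly. These rates are illustrative assumptions.
skill = 1 / 10_000      # has the rare skill
cause = 1 / 10          # willing to work in an unusual cause area
location = 1 / 5        # willing to work in the required location

p = skill * cause * location
print(f"fraction passing all filters: {p:.0e}")  # 2e-06

# P(at least one match) = 1 - (1 - p)^N.
# For small N*p this is approximately N*p (linear returns to growth);
# only for large N does it saturate (diminishing returns).
for n in (10_000, 100_000, 1_000_000, 10_000_000):
    at_least_one = 1 - (1 - p) ** n
    print(f"community of {n:>10,}: P(>=1 candidate) = {at_least_one:.3f}")
```

The saturation at large N is where the "diminishing returns asymptotically" intuition comes from; at EA-scale community sizes the curve is still in its near-linear regime.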
It does feel intuitively right that there are diminishing returns to scale here, though.
I would guess that for the biggest EA causes (other than EA meta/community), you can often hire people who aren't part of the EA community. For animal welfare, there's a much larger animal advocacy movement and far more veg*ns, although it's probably harder to find people to work on invertebrate welfare, and there may be few economists. For technical AI safety, there are many ML, CS (and math) PhDs, although the most promising ones may not be cheap. Global health and biorisk are not unusual causes at all. Invertebrate welfare is pretty unusual, though.
However, for more senior/management roles, you'd want some value alignment to ensure they prioritize well and avoid causing harm (e.g. by significantly advancing AI capabilities).