I would guess that for the biggest EA causes (other than EA meta/community), you can often hire people who aren't part of the EA community. For animal welfare, there's a much larger animal advocacy movement and far more veg*ns, although it's probably harder to find people willing to work on invertebrate welfare, and there may be few economists. For technical AI safety, there are many ML, CS (and math) PhDs, although the most promising ones may not be cheap. Global health and biorisk are not unusual causes at all. Invertebrate welfare is pretty unusual, though.
However, for more senior/management roles, you’d want some value alignment to ensure they prioritize well and avoid causing harm (e.g. significantly advancing AI capabilities).