My hunch is (as implied elsewhere) that ‘talent-constraint’, with ‘talent’ not further specified, is apt to mislead. My impression for longtermist orgs (I understand from Peter and others this may apply less to orgs without this as the predominant focus) is that there are two broad classes of role, which imperfectly line up with ‘senior’ versus ‘junior’.
The ‘senior’ class probably does fit (commonsensically understood) ‘talent-constraint’, in that orgs or the wider ecosystem want to take everyone who clears a given bar. Yet these bars are high even when conditioned on the already able cohort of (longtermist/)EAs. It might be things like ‘ready to run a research group’, ‘can manage operations for an org’ (cf. Tara’s and Tanya’s podcasts), or ‘subject matter expertise/ability/track record’.
One common feature is that these people add little further load on current (limited) management capacity, either because they are managing others or are already ‘up to speed’ to contribute themselves without extensive training or supervision. (Aside: I suspect this is an under-emphasised bonus of ‘value-aligned operations staff’: their tacit knowledge of the community/mission/wider ecosystem may permit looser management than bringing on able professionals ‘from outside’.) From the perspective of the archetypal ‘pluripotent EA’ a few years out from undergrad, these are skills which are hard to develop and harder to demonstrate.
More ‘junior’ roles are those where the criteria are broader (at least in terms of legible ones: ‘what it takes’ to be a good generalist researcher may be just as rare as ‘what it takes’ to be a good technical AI safety researcher, but more people can easily ‘rule themselves out’ of the latter than the former), where ‘upskilling’ is a major objective, or where there’s an expectation of extensive ‘hands-on’ management.
Returns to getting a slightly better top candidate might still be convex (e.g. an ‘excellent’ hire might be worth 3x a ‘very good’ one, rather than 1.3x). Regardless, there will not be enough positions for all the talented candidates available: even if someone at an org decided to spend their time only managing and training junior staff (and haste considerations might lead them to spend more of their time doing work themselves rather than investing in the ‘next generation’), they can’t manage dozens at a time.
I think confusing these two broad classes is an easy way of burning a lot of good people (cf. Denise’s remarks). Alice, a 23-year-old management consultant, might reason from current messaging: “EA jobs are much better for the world than management consultancy, and they’re after good people. I seem to fit the bill, so I should switch career into this.” She might then forsake her promising early career for an unedifying and unsuccessful period as ‘EA perennial applicant’, ending up worse off than when she started. EA has a vocational quality to it; it is key that it does not become a siren song.
There seem to be a few ways to do this better, as alluded to in prior discussions here and elsewhere:
0) If I’m right, it’d be worth communicating the ‘person spec’ for cases where (common-sense) talent constraint applies, and where we really would absorb basically as many as we could get (e.g. “We want philosophers to contribute to GPR, and we’re after people who either already have a publication record in this area, or have signals of ‘superstar’ ability even conditioned on philosophy academia. If this is you, please get in touch.”).
1) Concurrently, it’d be worth publicising typical applicant-to-place ratios or similar measures of competition for hiring rounds for more junior roles, to help applicants calibrate and to emphasise the importance of a plan B. (e.g. “We have early-career roles for people thinking of working as GPR researchers, which serve the purpose of talent identification and development. We generally look for XYZ. Applications for these are extremely competitive (~12:1). Other good first steps for people who want to work in this field are these”.) (MIRI’s research fellows page does a lot of this well.)
2) It would be good for there to be further work aimed at avoiding ‘EA underemployment’, as I would guess growth in strong candidates for EA roles will outstrip intra-EA opportunities. Some possibilities:
2.1) There are some areas I’d want to add to the longtermist portfolio which might be broadened into useful niches for people with comparative advantage in them (macrohistory, productivity coaching and nearby versions, EA-relevant bits of psychology, etc.). I don’t think these are ‘easier’ than the existing ‘hot’ areas, but they are hard in different ways, and so broaden opportunities.
2.2) Another option would be ‘pre-caching’ human capital in areas which are plausible candidates for becoming important as time goes on. I can imagine something like international relations turning out to be crucial (or, contrariwise, relatively unimportant), but rather than waiting for this to be figured out, it seems better for people to coordinate and invest themselves across the portfolio of plausible candidates. (Easier said than done from the first-person perspective, as such a strategy potentially involves making an uncertain bet with many years of one’s career, and if it turns out to be a bust ex post, the good ex ante EV may not be complete consolation.)
2.3) There seem to be a lot of stakeholder organisations it would be good for EAs to enter for the second-order benefits, even if their direct work is of limited relevance (e.g. having more EAs in tech companies looks promising to me, even if they aren’t doing AI safety). (Again, not easy from the first-person perspective.)
2.4) A lot of the skills for more ‘senior’ roles can be, and have been, attained outside the EA community. Grad school is often a good idea for researchers, and professional/management aptitude is often transferable. So some of the options above can be seen as a holding-pattern/bet-hedging approach: they hopefully make one a stronger applicant for such roles, while in the meanwhile one is doing useful things (and also potentially earning to give, although I think this should be a minor consideration for longtermist EAs given the field is increasingly flush with cash).
If the framing is changed to something like: “These positions are very valuable but very competitive. It is definitely worth applying, as in expectation you increase the quality of the appointed candidate, and the returns to a slightly better candidate are very high. But don’t bet the farm (or quit the day job) on your application, and if you don’t get in, here are things you could do to slant your career towards having a bigger impact”, then I’d hope the burn risk falls dramatically: in many fields there are lots of competitive, oversubscribed positions which don’t impose huge costs on unsuccessful applicants.
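To gesture at why ‘worth applying even at ~12:1 odds’ can hold, here is a toy simulation. All assumptions are mine, not from any org’s hiring data: applicant quality is i.i.d. standard normal, and the org appoints the top scorer. Each marginal applicant nudges up the expected quality of the appointee, with diminishing but nonzero returns.

```python
import random
import statistics

# Toy model of 'applying raises the expected quality of the appointed
# candidate'. Assumptions (purely illustrative): applicant quality is
# i.i.d. N(0,1), and the org appoints the highest-quality applicant.

def expected_top_quality(n_applicants: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate of E[max] of n i.i.d. N(0,1) quality draws."""
    return statistics.fmean(
        max(random.gauss(0.0, 1.0) for _ in range(n_applicants))
        for _ in range(trials)
    )

if __name__ == "__main__":
    random.seed(0)
    # Around the ~12:1 ratio mentioned above: how much does one more
    # applicant move the expected quality of the appointee?
    for n in (11, 12, 13):
        print(f"{n:>2} applicants: expected top quality ~ {expected_top_quality(n):.3f}")
```

Going from 12 to 13 applicants raises the expected top quality by only a few hundredths of a standard deviation; whether that justifies applying depends on how convex the value function is (the 3x-versus-1.3x point above) and how cheap an unsuccessful application is, which is exactly what the reframed message tries to keep low.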