For 2, what’s “easiest to build and maintain” is determined by human efforts to build new technologies, cultural norms, and forms of governance.
For 11, there isn’t necessarily a clear consensus on what “exceptional” means or how to measure it, and ideas about what it is are often not reliably predictive. Furthermore, organizations are extremely risk-averse in hiring, and there are understandable reasons for this—they’re thinking about how best to fill a specific role with someone they will take a costly bet on. But this is rather different from thinking about how to make the most impactful use of each applicant’s talent. So I wouldn’t be surprised if many talented people go a long time without finding roles, for a variety of reasons: 1) the right orgs don’t exist yet, 2) funder market lag, 3) difficulty finding opportunities to prove their competence in the first place (doing well on work tests is a positive sign, but it’s often not enough for hiring managers to hire on that alone), etc.
On top of that, there’s a bit of a hype cycle for different things within causes like AI safety (there was an interp phase, followed by a model evals phase, etc.). Someone who didn’t fit prevailing ideas of what was needed during the interpretability phase may have been a much better fit for model evals work once it started catching on, or for developing some new area entirely.
For 12, I think it’s a mistake to bound everyone’s potential here. There are certainly some people who live far more selflessly, and people who come much closer to that through their own efforts. Foreclosing that possibility is pretty different from accepting where one currently is and doing the best one can each day.