Relatedly, I hope someone is actively working to ensure that people who weren't able to get funding still feel welcome and engaged in the community (whether they decided to take a non-EA job or are attempting to upskill). Rejection can be alienating.
Not only do we not want these folks working on capabilities, it also seems likely to me that there will be a burst of funding at some point (either because of new money, or because existing donors feel it's crunch time and accelerate their spending). If AI safety can bring on 100 people per year, and a funding burst increased that to 300 (made-up numbers), we'd likely really want to get some of the people who were ranked 101-200 in prior years.
To use a military metaphor, these people are in something vaguely like the AI Safety Reserves. Are we inviting them to conferences, keeping them engaged in the community, and giving them easy ways to keep their AI Safety knowledge up to date? At that big surge moment, we'd likely be asking them to take big pay cuts and interrupt promising career trajectories in whatever they decided to do. People make those kinds of decisions in large part with their hearts, not just their heads, and a sense of belonging (or non-belonging) in the community is often critical to the decisions they make.
That sounds like it would be helpful, but I would also want people to have a healthier relationship with having an impact and with intelligence than I see some EAs having. It’s also okay to not be the type of person who would be good at the types of jobs that EAs currently think are most important or would be most important for “saving the world”. There’s more to life than that.
Really good question!