I do not think it is crunch time. I think people in the reference class you're describing should go with some "normal" plan such as getting into the best AI PhD program you can, learning how to do AI research, and then working on AI safety.
(There are a number of reasons you might do something different. Maybe you think academia is terrible and PhDs don't teach you anything, and so instead you immediately start working independently on AI safety. That all seems fine. I'm just saying that you shouldn't make a change like this because of a supposed "crunch time": I would much rather have significantly better help in 5 or 10 years than not-very-good help now.)
That being said, I feel confident that there are other AI safety researchers who would say it is crunch time, or very close to it. I expect they would be a minority (i.e., < 50%).
I do think it is probably crunch time, but I agree with what Rohin said here about what you should do for now (and about my minority status). Skilling up (not just in technical specialist stuff, but also in your understanding of the problem we face, the literature, etc.) is what you should be doing. For what I think should be done by the community as a whole, see this comment.
Update: A friend of mine read this as me endorsing doing PhDs and was surprised. I do not generally endorse doing PhDs at this late hour (though there are exceptions). What I meant to say is that skilling up / learning is what you should be doing, for now at least. Maybe a PhD is the best way to do that, but maybe not; it depends on what you are trying to learn. I think working as a research assistant at an EA org would probably be a better way to learn than doing a PhD, for example. If you aren't trying to do research, but instead are trying to contribute by e.g. building a movement, maybe you should be out of academia entirely and instead be gaining practical experience building movements or running political campaigns.