Regarding AI safety, of course.
The detailed version of this question follows: “Is AI safety sufficiently talent-constrained, and on an imminent enough timeline,
that I and any friends I can convince (ordinary non-Ivy-League, non-top-tier computer scientists, mathematicians, and programmers, including students)
should drop everything (including things you are personally uncomfortable telling people to drop out of)
and apply for every grant out there?”
I would also accept “If you are not very talented, go do something else and let us handle it”, or “If you’re unsure of your talent, here is a link to an online test or open application thing that will give you good evidence one way or the other”.
I do not think it is crunch time. I think people in the reference class you’re describing should go with some “normal” plan such as getting into the best AI PhD program you can get into, learning how to do AI research, and then working on AI safety.
(There are a number of reasons you might do something different. Maybe you think academia is terrible and PhDs don’t teach you anything, and so instead you immediately start to work independently on AI safety. That all seems fine. I’m just saying that you shouldn’t make a change like this because of a supposed “crunch time”—I would much prefer having significantly better help in 5 or 10 years, rather than not-very-good help now.)
That being said, I feel confident that there are other AI safety researchers who would say it is crunch time or very close to it. I expect this would be a minority (i.e. < 50%).
I do think it is probably crunch time, but I agree with what Rohin said here about what you should do for now (and about my minority status). Skilling up (not just in technical specialist stuff, but also in your understanding of the problem we face, the literature, etc.) is what you should be doing. For what I think should be done by the community as a whole, see this comment.
Update: A friend of mine read this as me endorsing doing PhDs and was surprised. I do not generally endorse doing PhDs at this late hour. (However, there are exceptions.) What I meant to say is that skilling up / learning is what you should be doing, for now at least. Maybe a PhD is the best way to do that, but maybe not—it depends on what you are trying to learn. I think working as a research assistant at an EA org would probably be a better way to learn than doing a PhD, for example. If you aren't trying to do research, but instead are trying to contribute by e.g. building a movement, maybe you should be out of academia entirely and instead gaining practical experience building movements or running political campaigns.
Yes, I think it’s crunch time.
But I’d be very hesitant to advocate, in general, for people to sacrifice more stuff to work hard on the most urgent problems. People vastly overestimate the stability of their motivation and mental life. If you plan your life on the assumption that you’ll always be as motivated as you are right now, you’ll probably achieve less than if you take some precautions.
I’d say plan for at least 20 years of productivity. This means you want to build relationships with people who support you, invest in finding good down-time activities to keep you refreshed, and don’t burn yourself out. Be ambitious! Test your limits until you crash, but make sure you can recover and learn from it rather than taking permanent damage.
On average do I think EAs would do better with more self-sacrifice or less? It varies, and it’s important enough that I think advice should be more granular than just “do more”.
When considering self-sacrifice, it is also important to weigh the effects on other people. That is, every person who “sacrifices something for the cause” increases the perception that “if you want to work on this, you need to give up stuff”. This might in turn put people off joining the cause in the first place. So even if the sacrifice increases the productivity of that one person, the total effect might still be negative.
The rest was helpfully calibrating, thank you.
My answer to the detailed version of the question is “unsure... probably no?”: I would be extremely wary of reputation effects and of how AI safety is perceived as a field. As a result, getting as many people as we can to work on this might prove not to be the right approach.
For one, getting AI to be safe is not only a technical problem—apart from figuring out how to make AI safe, we also need to get whoever builds it to adopt our solution. Second, classical academia might prove important for safety efforts. If we are being realistic, we need to admit that the prestige associated with a field affects which people get involved with it. Thus, there may be a point where the costs of bringing more people in on the problem outweigh the benefits.
Note that I am not saying anything like “anybody without an Ivy League degree should just forget about AI safety”. Just that there are both costs and benefits associated with working on this, and everybody should consider these before making major decisions (and in particular before doing outreach).