Hello everyone,
I’m a 19-year-old student deciding whether to spend 4 more years in dental school so that I can earn to give as a dentist to reduce AI s-risks.
The decision is complicated, but I’d like to isolate one sub-question. 80,000 Hours has said that earning to give usually isn’t the best option. However, perhaps only 10–20% (I’m uncertain; this may be wrong) of aspiring EAs in AI risk get hired or funded to do AI safety research, meaning EA people (including me) have roughly an 80% chance of not being hired or funded.
Many donors say they “don’t know where to spend the plentiful money,” while 80% of applicants still can’t get funding. That seems intuitively contradictory: if AI safety isn’t severely funding-constrained, why not lower the bar and fund more “average” projects?
Maybe average projects contribute far less than top projects. But if they aren’t funded, they can’t do research at all, and their impact is close to zero (unless they can contribute effectively outside the EA world, which seems unlikely for many average people).
I’ve thought of three possible justifications:
Patient philanthropy: Saving money to achieve higher leverage in the future.
Backfire risks: many projects may have near-zero or negative expected value.
Career capital: junior researchers should build skills before being funded. (However, many senior AI risk people also seem to get rejected.)
I’m eager to know what reasons I’m missing.
So my question is: what are the other reasons why, “even if there is a lot of money, the funding bar should remain high and most projects should still be rejected”?
As a 19 y/o who’s spent 300 hours thinking about this alone, I’m hitting diminishing returns from isolated thinking, so any outside perspective would be genuinely valuable.
Please DON’T aim for a rigorous answer. Quantity-over-quality brainstorming may be better—I’d prefer one-minute half-baked thoughts, or even scattered biases, over silence. Even a reply of one short sentence, “I think one reason is X,” would be extremely helpful. (Also, feel free to criticize any of my thoughts.) Thank you very much.
Many people said they wanted to work for METR. I made what I thought was a good offer: take one of the benchmarks we give AIs; if you get a good score then I guarantee that I will fly you out for an interview, even if you have no work history, have no money to pay for the trip, or any other barrier one might have to employment.
Exactly zero people took me up on this.[1]
How is it possible for there to be sky-high rejection rates yet also zero people sending me applications?
I think the answer is that raw rejection rates aren’t a very useful metric. After all, an 80% rejection rate would mean that AI safety jobs are one-tenth as selective as Walmart!
I would suggest ignoring raw rejection rates in favor of looking at the criteria for the jobs you want. Particularly for something like s-risks, the criteria are going to be unusual and specific, meaning that even generically qualified people will often have to dedicate substantial time to skilling up—but if you’re able to do so, your odds are pretty good.[2]
I wouldn’t be surprised to learn that some people tried this, failed, and then were too embarrassed about failing to tell me. But, to the best of my recollection, literally zero people have told me that they even attempted this task.
I say this even with the knowledge that you are 19. I don’t want to pretend that the deck isn’t stacked against younger people—it totally is—but we employ some 19 year olds, as do other AI safety orgs. If a 19 year old had sent me a good solution to that METR challenge, for example, I would have been happy to hire them.
Is this offer still open? I’ll try it next weekend.
I no longer work at METR. I would guess that they’d be excited about applicants who have done this, but don’t want to speak for them.
I’ve been orienting to the field myself intensely during this year, after following it in the background for years.
Some takes:
Research that has impact is difficult.
AI twitter and arXiv surface a high number of AI safety papers every week, many of which suffer from shoddy statistics, bad experimental design, or other basic problems. Many good AIS orgs seem to aim for a high quality standard, and this may well be worth the cost.
Research output is also heavy-tailed. If a small fraction of people produce most of the useful work, a high bar can be correct.
Many organizations are growing as fast as they can. But because research is difficult, and because they need to avoid organizational drift and keep quality high, “as fast as they can” is still not that fast.
From outside, it’s hard to tell someone moving toward good research apart from someone treading water. So the bar is partly about legible evidence, not raw quality. “We can’t tell” is a common reason for rejection, distinct from “we can tell and the answer is no.”
This is one reason “skill up first” is common advice: it’s a way to generate signal. Some things that can work: serious attempts at open challenges, replications or extensions of safety papers, benchmark work, public writeups, or programs like MATS, ARENA, SPAR, AI Safety Camp, etc., where the output of the program can itself become a credential. A nice thing about several of these routes is that they don’t require permission to start.
In any case, since you’ve thought about this for 300 hours, I definitely recommend getting some outside feedback! 80,000 Hours career advising can be one good option, since they have focused on AI-safety-related advising recently.