I’ve been intensely orienting to the field myself this year, after following it in the background for years.
Some takes:
Research that has impact is difficult.
AI Twitter and arXiv surface a large number of AI safety papers every week, many of which suffer from shoddy statistics, poor experiment design, or other basic problems. Many good AIS orgs seem to aim for a high quality standard, and this may well be worth the cost.
Research output is also heavy-tailed. If a small fraction of people produce most of the useful work, a high bar can be correct.
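To make the heavy-tail point concrete, here is a toy simulation, not data: the lognormal shape and sigma = 2 are assumptions I picked purely for illustration. It shows what fraction of total output the top 10% of people account for under such a distribution:

```python
import numpy as np

# Toy illustration, not data: assume per-person research output follows a
# lognormal distribution with sigma = 2 (an assumed parameter, chosen only
# to make the heavy tail visible).
rng = np.random.default_rng(0)
output = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

# Share of total output produced by the top 10% of people.
cutoff = np.quantile(output, 0.9)
top_share = output[output >= cutoff].sum() / output.sum()
print(f"Top 10% produce {top_share:.0%} of total output")
# With sigma = 2 this comes out around 75%; with a thinner tail
# (sigma = 0.5) it drops to roughly 20%, the regime where a high
# funding bar would make much less sense.
```

If something like the first regime holds, funding only the top slice captures most of the expected value; if the second holds, a high bar mostly just excludes people.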
Many organizations are growing as fast as they can. But because research is difficult, and because they must avoid organizational drift and keep quality high, “as fast as they can” is still not very fast.
From the outside, it’s hard to tell someone moving toward good research apart from someone treading water. So the bar is partly about legible evidence, not raw quality. “We can’t tell” is a common reason for rejection, distinct from “we can tell and the answer is no.”
This is one reason “skill up first” is common advice: it’s a way to generate signal. Some things that can work: serious attempts at open challenges, replications or extensions of safety papers, benchmark work, public writeups, or programs like MATS, ARENA, SPAR, AI Safety Camp, etc., where the output of the program can itself become a credential. A nice thing about several of these routes is that they don’t require permission to start.
In any case, since you’ve thought about this for 300 hours, I definitely recommend getting some outside feedback! 80,000 Hours career advising could be a good option, since they have focused on AI-safety-related advising recently.
Thanks very much for your answer; I’m very grateful for it. I think your idea is basically “the impact of AI risk research is fat-tailed.” But there’s still a question: if there is money left over and 80% of people aren’t funded, why not fund them even though they have little impact? Maybe you’ll say we should save the money, but will AI risk researchers in the future be much more capable than they are now? In other words, if the funding bar stays this high, is it probable that ten years from now, 80% of people still won’t reach the funding bar and do direct work?
It seems many unfunded s-risk researchers are already senior (5+ years of experience), which suggests that even with 10 more years they probably wouldn’t become capable enough to pass the funding bar. But I’m uncertain about this and welcome criticism of the idea.