I think the better question might be, "who are some professors/academic research groups in AI Safety to work with?"

Two meta-points I feel might be important:
For PhDs, the term "best university" doesn't mean much (there are some cases in which infrastructure makes a difference, but R1 schools, private or public, generally seem to have good research infrastructure). Your output as a graduate student heavily depends on which research group/PI you work with.
Specifically for AI safety, the sample size of academics is really low. So, I don't think we can rank them from best-to-eh. Doing so becomes more challenging because their research focus might differ, so a one-to-one comparison would be unsound.
With that out of the way, three research groups in academia come to mind:
Stuart Russell's group at the Center for Human Compatible AI (CHAI), UC Berkeley
David Krueger's group at Cambridge
The Algorithmic Alignment Group at MIT EECS
Others:
Center for Human Inspired AI (CHIA) is a new research center at Cambridge; I don't know if their research would focus on subdomains of Safety; someone could look into this more.
I remember meeting two lovely folks from Oregon State University working on Safety at EAGx Berkeley. I cannot find their research group, and I forget exactly what they were working on; again, someone who knows more about this could perhaps comment.
An interesting route for a Safety-focused Ph.D. could be working with a really good professor at a university who agrees to bring on an outside Safety researcher as a co-advisor. I am guessing that more and more academics will want to start working on the Safety problem, so such collaborations would be pretty welcome, especially if the professors are also new to the domain.
One thing to watch out for: which research groups get funded by this NSF proposal. There will soon be new research groups that Ph.D. students interested in the Safety problem can gravitate toward!
Also Jacob Steinhardt, Vincent Conitzer, David Duvenaud, Roger Grosse, and in my field (causal inference), Victor Veitch.
Going beyond AI safety, you can get a general sense of departmental strength from CSRankings (ML) and ShanghaiRanking (CS).