It is possible but unlikely that such a person would be a TA. Someone with little prior ML experience would be a better fit as a participant.
No concrete plans one way or the other.
From the post: “We plan to have some researchers arrive early, with some people starting as soon as possible. The majority of researchers will likely participate during the months of December and/or January.”
Best of luck with your new gig; excited to hear about it! Also, I really appreciate the honesty and specificity in this post.
Disclaimer: I joined OP two weeks ago as a Program Associate on the Technical AI Safety team. I’m leaving some comments with the questions I wanted answered when assessing whether I should take the job (which, obviously, I ended up doing).
Is it much easier for researchers to do AI safety research within AI scaling labs, given their more capable and diverse models, easier access to those models (no rate limits or usage caps), better infrastructure for running experiments, possible network effects from other researchers at those labs, and freedom from the logistical hassle of being a professor or independent researcher?
Does this imply that the research ecosystem OP is funding (which is ~all external to these labs) isn’t that important/cutting-edge for AI safety?
What does OP’s TAIS funding go to? Don’t professors’ salaries already get paid by their universities? Can PhD students in AI get no-strings-attached funding, at least at prestigious universities?
How much do the roles on the TAIS team involve engagement with technical topics? How do the depth and breadth of “keeping up with” AI safety research in this role compare to doing so as an AI safety researcher?
How inclined are you, and how inclined would OP’s grantmaking strategy be, towards technical research with theories of impact other than “researcher discovers technique that makes the AI internally pursue human values” → “labs adopt this technique”? Some examples of other theories of change that technical research might have:
- Providing evidence for the dangerous capabilities of current/future models (should such capabilities emerge) that can more accurately inform countermeasures/policy/scaling decisions.
- Detecting/demonstrating emergent misalignment from normal training procedures. This evidence would also serve to more accurately inform countermeasures/policy/scaling decisions.
- Reducing the ease of malicious misuse of AIs by humans.
- Limiting the reach/capability of models instead of ensuring their alignment.
What sorts of personal/career development does the PA role provide? What are the pros and cons of this path compared to, e.g., technical research, which offers relatively clear professional development in the form of published papers, academic degrees, and high-status job titles that bring public credibility?
We intended that sentence to be read as: “In addition to people who plan on doing technical alignment, MLAB can be valuable to other sorts of people (e.g. theoretical researchers)”.