Disclaimer: I joined OP two weeks ago in the Program Associate role on the Technical AI Safety team. I’m leaving some comments answering questions I wanted answered when assessing whether I should take the job (which, obviously, I ended up doing).
How much do the roles on the TAIS team involve engagement with technical topics? How do the depth and breadth of “keeping up with” AI safety research in this role compare to what’s required of an AI safety researcher?
The way I approach the role, it involves thinking deeply about what technical research we want to see in the world and why, and then articulating that to potential grantees (in one-on-one conversations, posts like this one, RFPs, talks at conferences, etc.) so they can form a fine-grained understanding of how we’re thinking about the core problems and where their research interests overlap with Open Phil’s philanthropic goals in the space. To do this well, it’s really valuable to have a good grip on the existing work in the relevant area(s).