Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team

As part of my interview series, I’m considering interviewing AI safety technical researchers at several of the main organizations on what they would recommend newcomers do to excel in the field. If you would like to see more interviews on this topic, please let me know in the comments.

Ryan Carey is an AI Safety Research Fellow at the Future of Humanity Institute. Ryan also sometimes coaches people interested in getting into AI safety research for 80,000 Hours. The following takeaways are from a conversation I had with Ryan Carey last June on how to transition from being a software engineer to a research engineer at a safety team.


A lot of people talk to Ryan and ask “I’m currently a software engineer, and I would like to quit my job to apply to AI safety engineering jobs. How can I do it?”

To these people, Ryan usually says the following: For most people transitioning from software engineering into AI safety, becoming a research engineer at a safety team is often a realistic and desirable goal. The bar for safety engineers seems high, but not insanely so. For example, if you’ve already been a Google engineer for a couple of years and have an interest in AI, you have a fair chance of getting a research engineer role at a top industry lab. If you have a couple of years of somewhat less prestigious industry work, there’s a fair chance of getting a valuable research engineer role at a top academic lab. If you don’t make it, there are a lot of regular machine learning engineering jobs to go around.

How would you build your CV in order to make a credible application? Ryan suggests the following:

  1. First, spend a month trying to replicate a paper from the NeurIPS safety workshop (see the sketch after this list for the rough shape such a replication script takes). It’s normal to take 1-6 weeks full-time to replicate a paper when starting out. Some papers take more or less time than that, but if it’s taking much longer, you probably need to build those skills further before you can work in the field.

  2. You might simultaneously apply for internships at AI safety orgs or a MIRI workshop.

  3. If you’re not able to get an internship and replicate papers yet, maybe try to progress further in regular machine learning engineering first. Try to get internships or jobs at any of the big companies/trendy startups, just as you would if you were pursuing a regular ML engineering career.

  4. If you’re not there yet, maybe consider a master’s degree in ML if you have the money. People commonly want to avoid formal studies by self-studying and then carving a path to a less-orthodox safety org like MIRI. If you’re very bright and mathematically inclined, this can work, but it is a riskier path.

  5. If you can’t get (2-4), one option is to take three months to build up your GitHub of replicated papers. Maybe go to a machine learning conference. (6 months of building your GitHub is much more often the right answer than 6 months of math.) Then repeat steps 2-4.

  6. If you’re not able to get any of the internships, reasonably good ML industry jobs, or places in master’s programs (top 50 in the world), then it may be that ML research engineering is not going to work out for you. In this case, you could look at other directly useful software work, or at earning to give.
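
To make step 1 more concrete: whatever the paper, a replication tends to reduce to the same scaffolding. Below is a minimal sketch of that scaffolding in PyTorch, using a toy model and synthetic data as stand-ins (this is my illustration, not something from the conversation): fix the seeds, train, then log the exact metric the paper reports so you can compare your numbers against theirs.

```python
# Minimal replication scaffold (illustrative only): seed everything, train,
# and report the same metric the paper reports. The model and data here are
# toy stand-ins for the real experiment.
import random

import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def set_seed(seed: int) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)


def run(seed: int = 0, epochs: int = 5, lr: float = 1e-3) -> float:
    set_seed(seed)

    # Stand-in data; in a real replication, use the paper's dataset and splits.
    x = torch.randn(1024, 20)
    y = (x.sum(dim=1) > 0).long()
    loader = DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)

    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    for _ in range(epochs):
        for xb, yb in loader:
            loss = nn.functional.cross_entropy(model(xb), yb)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Report the metric you will compare against the paper's table.
    with torch.no_grad():
        acc = (model(x).argmax(dim=1) == y).float().mean().item()
    print(f"seed={seed} accuracy={acc:.3f}")
    return acc


if __name__ == "__main__":
    run()
```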

While doing these steps, it’s reasonably useful to be reading papers. Rohin Shah’s Alignment Newsletter is amazing if you want to read things. The sequences on the Alignment Forum are another good option.

As for textbooks, reading the Goodfellow et al. Deep Learning textbook is okay. Read Understanding Machine Learning: From Theory to Algorithms by Shai Shalev-Shwartz and Shai Ben-David if you want to work at MIRI or do mathematical research.

There are no great literature reviews of safety research yet. Tom Everitt’s paper on observation incentives is good if you’re trying to do theoretical research. If you’re trying to do experimental research, Paul Christiano’s Deep Reinforcement Learning from Human Preferences paper is good.
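
For readers leaning experimental, the core mechanism in the Deep Reinforcement Learning from Human Preferences paper is a reward model fit to pairwise comparisons between trajectory segments. Here is a minimal sketch of that loss in PyTorch; the class name, network shape, and tensor layout are my own illustrative choices, not the paper’s code.

```python
import torch
import torch.nn as nn


class RewardModel(nn.Module):
    """Maps a (state, action) pair to a scalar reward estimate."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def preference_loss(reward_model, seg_a, seg_b, prefs):
    """Cross-entropy loss on pairs of trajectory segments.

    seg_a, seg_b: (obs, act) tensors of shape (batch, segment_len, dim).
    prefs: shape (batch,), 1.0 if the human preferred segment A, else 0.0.
    """
    # Sum the predicted rewards over each segment.
    ret_a = reward_model(*seg_a).sum(dim=1)
    ret_b = reward_model(*seg_b).sum(dim=1)
    # The model's P(A preferred over B) is a softmax over the two summed
    # rewards; compare it against the human label.
    return nn.functional.binary_cross_entropy_with_logits(ret_a - ret_b, prefs)
```

In the paper, the learned reward model is then used as the reward signal for a standard RL algorithm; the sketch above only covers the preference-fitting step.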

Good schools for doing safety research:

  • Best: Berkeley

  • Amazing: Oxford, Toronto, Stanford

  • Great: Cambridge, Columbia, Cornell, UCL, CMU, MIT, Imperial, other Ivies

General advice:

People shouldn’t take crazy risks that would be hard to recover from (e.g., don’t quit your job unless it would be easy to get a new one).

If you are trying to do research on your own, get feedback early, e.g. by sharing drafts on the Alignment Forum or LessWrong, or by sharing Google Docs with people. Replications are fine to share; they pad CVs but aren’t of great interest otherwise.


We ran the above past Daniel Ziegler, who previously transitioned from software engineering to working at OpenAI. Daniel said he agrees with this advice and added:

“In addition to fully replicating a single paper, it might be worth reading a variety of papers and at least roughly reimplementing a few of them (without trying to get the same performance as the paper). e.g. from https://spinningup.openai.com/en/latest/spinningup/keypapers.html.”
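
To give a sense of scale for the kind of rough reimplementation Daniel describes, here is a deliberately minimal REINFORCE agent on CartPole, roughly the exercise the early Spinning Up material covers. It is my sketch, assuming gymnasium and PyTorch; it is untuned and skips discounting and a baseline, because the point is to reimplement the core idea rather than match anyone’s numbers.

```python
# Rough REINFORCE on CartPole: reimplement the core idea without chasing
# any paper's reported performance. Assumes gymnasium and PyTorch.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        dist = torch.distributions.Categorical(
            logits=policy(torch.as_tensor(obs, dtype=torch.float32))
        )
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Undiscounted reward-to-go returns; no baseline, deliberately minimal.
    returns = torch.tensor(rewards, dtype=torch.float32).flip(0).cumsum(0).flip(0)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```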


If you liked this post, I recommend you check out 80,000 Hours’ podcast with Catherine Olsson and Daniel Ziegler.

This piece is cross-posted on my blog here.