Is there any public information on the AI Safety Retraining Program other than the MIRI Summer Update and the Open Phil grant page?
I am wondering:
1) Who should apply? How do they apply?
2) Have there been any results yet? I see two grants were given as of Sep 1st; have either of those been completed? If so, what were the outcomes?
I don’t think there’s any other public information.
To apply, people should email me asking about it (buck@intelligence.org). The three people who’ve received one of these grants were all people who I ran across in my MIRI recruiting efforts.
Two grants have been completed and a third is ongoing. Both people who completed grants successfully replicated several deep RL papers, and one of them ended up getting a job working on AI safety (the other took a data science job and hopes to work on AI safety at some point in the future).
I’m happy to answer more questions about this.
So, to clarify: this program is for people who are already mostly sure they want to work on AI safety? That is, a person who is excited about ML, and would maaaaybe be interested in working on safety-related topics, if they found those topics interesting, is not who you are targeting?
Yeah, I am not targeting that kind of person. Someone who is excited about ML and skeptical of AI safety but interested in engaging a lot with AI safety arguments for a few months might be a good fit.