Well, you might be getting toward the frontiers of where published AI Safety-focused books can take you. From here, you might want to look to AI Safety agendas and specific papers, and AI textbooks.
MIRI has a couple of technical agendas for more foundational and more machine learning-based research on AI Safety. Dario Amodei of OpenAI, and some other researchers also put out a machine learning-focused agenda. These agendas cite and are cited by a bunch of useful work. There’s also great unpublished work by Paul Christiano on AI Control.
To understand and contribute to current research, you will also want to do some background reading. Jan Leike (now of DeepMind) has put out a good syllabus of relevant reading materials through 80,000 Hours. For a math student like yourself wanting to start with theoretical computer science, George Boolos' book Computability and Logic might be useful. Learning Python and TensorFlow is also great in general.
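As a rough illustration of the kind of Python fluency that helps here, this is a tiny sketch of gradient descent, the core optimization idea that libraries like TensorFlow automate at scale. (The function and parameters are made up for the example, not taken from any particular curriculum.)

```python
# Minimize f(x) = (x - 3)^2 by hand with gradient descent -- the
# same idea TensorFlow applies automatically to large models.

def grad(x):
    """Derivative of f(x) = (x - 3)^2."""
    return 2.0 * (x - 3.0)

def gradient_descent(x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient from starting point x0."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

minimum = gradient_descent(x0=0.0)
print(round(minimum, 4))  # converges toward 3.0, the true minimum
```

Being comfortable writing and debugging small loops like this, before reaching for a framework, makes the framework itself much easier to learn.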
To increase your chances of entering this career, you might want to look into the entry requirements of some specific grad schools. You might also want to go for internships at these groups (or at other groups doing similar work).
In academia, some labs analyzing safety problems are:
UC Berkeley (especially Russell)
Cambridge (Adrian Weller)
ANU (Hutter)
Montreal Institute for Learning Algorithms
Oxford
Louisville (Yampolskiy)
In industry, DeepMind and OpenAI both have safety-focused teams.
Working on grad school or internships at any of these places (though you won't necessarily end up on a safety-focused team) would be a sweet step toward working on AI Safety as a career.
Feel free to reach out by email at (my first name) at intelligence.org with further questions, or for more personalized suggestions. (The same offer goes to similarly interested readers.)
I like Ryan's suggestions. (I also work at MIRI.) As it happens, we also released a good intro talk by Eliezer last night that discusses 'what does alignment research look like?': link.
Any details on safety work in Montreal and Oxford (other than FHI I assume)? I might use that for an application there.
At Montreal, all I know is that PhD student David Krueger is currently in discussions about what safety work could be done. At Oxford, I have in mind the work of folks at FHI like Owain Evans and Stuart Armstrong.
If you are interested in applying to Oxford but not FHI, then Michael Osborne is very sympathetic to AI safety, though he doesn't currently work on it. He might be worth chatting to. Also, Shimon Whiteson does lots of relevant-seeming work in the area of deep RL, but I don't know whether he is at all sympathetic.