Working at EA organizations series: Machine Intelligence Research Institute
[Cross-posted on Lesswrong]
This is the fourth post in the Working At EA Organizations series. The posts so far:
The Machine Intelligence Research Institute (MIRI) does “foundational mathematical research to ensure that smarter-than-human artificial intelligence has a positive impact”. AI alignment is a popular cause within the effective altruism community, and MIRI has made the case for their cause and approach. The following are my notes from an interview with Malo Bourgon (program management analyst and generalist at MIRI), which he reviewed prior to publication.
Current talent needs
Since the research team has just doubled in size, MIRI is not actively looking for new researchers for the next 6 months. However, if you are interested in working at MIRI you should still express your interest.
In the foreseeable future, MIRI is planning to further grow the research team. The main ingredients for a good fit are interest in the problems that MIRI works on and strong talent in math and other quantitative subjects. Being further on the academic career path will therefore naturally help by teaching you more math, but absolutely isn’t necessary.
One of the best ways to evaluate your fit is to take a look at MIRI’s research guide. Look at MIRI’s problem areas and study the one that looks most interesting to you. In tandem, grab one of the textbooks to get acquainted with the relevant math. Contrary to what some EAs think, it’s not necessary to understand all of the research guide in order to start engaging with MIRI’s research. It’s a good indicator if you can develop an understanding of a specific problem in the guide and even more so if you can start contributing new ideas or angles of attack on that problem.
It has turned out to be very hard to find a fundraiser who is both very good at talking to donors and deeply understands the problems MIRI works on. If such a person comes along, MIRI would potentially be open to hiring them.
How can you get involved on a lower commitment basis?
Although there are presently no official volunteer or intern positions, hires always go through a phase of lower-commitment involvement via one of the following channels.
MIRIx workshops are independently run workshops on MIRI’s research, held around the world. Check here to see if there’s one in your area; if not, you can run one yourself. Organizing a MIRIx workshop is as easy as organizing an EA or Lesswrong meetup. It’s fine to just meet at home and study MIRI’s problems in a group.
You organize the logistics and get in contact with MIRI beforehand via this form. If you organize the workshop, MIRI will cover the expenses participants incur by attending (e.g. snacks and drinks).
MIRIx workshops can take the form of a study group or a research group centered on MIRI (or related) problems. All you need to take care of is advertising it; it helps if you already know someone in your city who would be interested. The group will be listed on the MIRIx page, and you could advertise it to a relevant university department, on Lesswrong, or to your local EA chapter.
MIRIx workshops not only let you learn about MIRI’s problems but also potentially provide an opportunity to contribute to MIRI’s research (this varies between groups). If you’re doing well at a MIRIx workshop, it will be noticed.
MIRI also runs workshops in the Bay Area, which are an absolutely essential part of the application process. Even if you don’t plan to work at MIRI, you are encouraged to apply. MIRI pays for all expenses, including international flights. This could also be a great chance to visit an EA hub.
Want to write a thesis on some problem related to MIRI research? Researching an adjacent area in math? Get in contact here to apply for research assistance. If you have a research idea that’s not obviously within MIRI’s research focus but could be interesting, or you have an interest in type theory, do get in contact as well.
Contribute to the research forum at agentfoundations.org by sharing a link to a post you made on Lesswrong, GitHub, your personal blog, etc. If it gets at least two upvotes, your post will appear on the agentfoundations.org website. Read the How to Contribute page for more information.
How competitive are the positions?
MIRI is looking for top research talent and can only hire a few people, so do have a backup plan. That said, Malo wants to encourage more people to have a go at the math problems. Learning more math can contribute to your backup plan as well, and you may be able to apply that knowledge to AI safety research in other contexts. (From another source I heard that being among the best of your year in a quantitative subject at a top university is a good indicator that you should give it a shot.)
What’s the application process like?
The application process is very different from that of most organizations: it is less formalized and involves a period of progressively deeper engagement. Usually you start off by working on MIRI-style problems via one of the channels named above and find that you develop an interest in one or more of them. By then you may have been in contact with someone at MIRI in some way.
Attending a MIRI workshop is usually an essential step. It’s a good opportunity for MIRI researchers to get to know you, and for you to get to know them. If the workshop goes well, you could work remotely or on-site as a research contractor, spending some share of your time on MIRI research. If both sides remain interested, this could then lead to a full-time engagement.
At what yearly donation would you prefer marginal hires to earn to give for you instead of directly working for you?
The people who get hired as researchers are hard to replace, so hundreds of thousands or even millions of dollars per year would be appropriate, depending on the person. It’s hard to imagine that someone who could join the research team and wants to work on AI safety could make a bigger impact by earning to give.
Anything else that would be useful to know?
In the long term, not everyone who wants to work on the mathematical side of AI safety should work for MIRI. The field is set to grow. Being familiar with MIRI’s problems may prove useful even if you don’t think there will be a good fit with MIRI in particular. All in all, if you’re interested in AI safety work, try to familiarize yourself with the problems, get in contact with people in the field (or the community) early and build flexible career capital at the same time.