AI Alignment Research Engineer Accelerator (ARENA): call for applicants
TL;DR
Apply here for the second iteration of ARENA!
Introduction
We are excited to announce the second iteration of ARENA (Alignment Research Engineer Accelerator), a 6-week ML bootcamp with a focus on AI safety. Our mission is to prepare participants for full-time careers as research engineers in AI safety, e.g. at leading organizations or as independent researchers.
The program will commence on May 22nd, 2023, and will be held at the Moorgate WeWork offices in London. This will overlap with SERI MATS, who are also using these offices. We expect this to bring several benefits, e.g. facilitating productive discussions about AI safety & different agendas, and allowing participants to form a better picture of what working on AI safety can look like in practice.
ARENA offers a unique opportunity for those interested in AI safety to learn valuable technical skills, engage in their own projects, and make open-source contributions to AI safety-related libraries. The program is comparable to MLAB or WMLB, but extends over a longer period to facilitate deeper dives into the content, and more open-ended project work with supervision.
For more information, see our website.
Outline of Content
The 6-week program will be structured as follows:
Chapter 0 - Fundamentals
Before getting into more advanced topics, we first cover the basics of deep learning, including basic machine learning terminology, what neural networks are, and how to train them (see the short code sketch after the topic list below). We will also cover some subjects we expect to be useful going forward, e.g. using GPT-3 and GPT-4 to streamline your learning, good coding practices, and version control.
Topics include:
PyTorch basics
CNNs, Residual Neural Networks
Optimization
Backpropagation
Hyperparameter search with Weights and Biases
Model training & PyTorch Lightning
Duration: 5 days
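To give a rough flavour of this chapter, below is a minimal sketch (illustrative only, not actual course material) of the kind of PyTorch training loop participants build up to: a small model, an optimizer, a forward pass, backpropagation, and a gradient update.

```python
# A minimal PyTorch training loop, of the kind Chapter 0 builds up to.
# (Illustrative sketch only, not actual course material.)
import torch
import torch.nn as nn

# Toy regression data: learn y = 3x + 1 from noisy samples
x = torch.randn(256, 1)
y = 3 * x + 1 + 0.1 * torch.randn(256, 1)

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()        # clear gradients from the previous step
    loss = loss_fn(model(x), y)  # forward pass
    loss.backward()              # backpropagation
    optimizer.step()             # gradient update

print(f"final loss: {loss.item():.4f}")
```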
Chapter 1 - Transformers & Mechanistic Interpretability
In this chapter, you will learn all about transformers, and build and train your own (see the sketch after the topic list below). You’ll also learn about Mechanistic Interpretability of transformers, a field which has been advanced by Anthropic’s Transformer Circuits sequence and by open-source work from Neel Nanda.
Topics include:
GPT models (building your own GPT-2)
Training and sampling from transformers
TransformerLens
In-context Learning and Induction Heads
Indirect Object Identification
Superposition
Duration: 9 days
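As a small taster of the tooling above, here is a hedged sketch (not actual course material) of loading GPT-2 with the TransformerLens library and caching its internal activations, using the classic Indirect Object Identification prompt:

```python
# Loading GPT-2 with TransformerLens and inspecting cached activations.
# (Illustrative sketch only; the course exercises go much deeper.)
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # GPT-2 small

prompt = "When Mary and John went to the store, John gave a drink to"
logits, cache = model.run_with_cache(prompt)

# Attention pattern of every head in layer 0: [batch, n_heads, seq, seq]
attn_patterns = cache["pattern", 0]
print(attn_patterns.shape)

# The model's most likely next token (GPT-2 small typically predicts " Mary")
next_token = logits[0, -1].argmax().item()
print(model.tokenizer.decode([next_token]))
```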
Chapter 2 - Reinforcement Learning
In this chapter, you will learn about some of the fundamentals of RL, and work with OpenAI’s Gym environments to run your own experiments (see the sketch after the topic list below).
Topics include:
Fundamentals of RL
Vanilla Policy Gradient
PPO
Deep Q-learning
RLHF
Gym & Gymnasium environments
Duration: 6 days
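For illustration, here is the basic agent-environment loop in Gymnasium (the maintained successor to OpenAI’s Gym), sketched with a random policy standing in for a real agent; the course exercises build algorithms like PPO and DQN on top of this loop:

```python
# The basic agent-environment loop in Gymnasium, with a random policy.
# (Illustrative sketch only; the course replaces the random policy with real agents.)
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random action, standing in for a policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

env.close()
print(f"episode return: {total_reward}")
```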
Chapter 3 - Training at Scale
There are a number of techniques that are helpful for training large-scale models efficiently. Here, you will learn more about these techniques and how to use them, with a focus on hands-on learning rather than just theoretical understanding (see the sketch after the topic list below).
Topics include:
GPUs
Distributed computing
Data/tensor/pipeline parallelism
Finetuning LLMs
Duration: 4 days
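As a hedged sketch of one such technique, here is roughly what data parallelism looks like using PyTorch’s DistributedDataParallel. This is illustrative only (it assumes one process per GPU, launched via `torchrun --nproc_per_node=N`), and is not actual course material:

```python
# Data parallelism with PyTorch DistributedDataParallel (DDP).
# (Illustrative sketch; assumes launch via `torchrun --nproc_per_node=N this_file.py`.)
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")  # torchrun supplies rank/world-size env vars
    rank = dist.get_rank()
    torch.cuda.set_device(rank)
    device = f"cuda:{rank}"

    model = DDP(nn.Linear(10, 1).to(device), device_ids=[rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

    # In real training each rank loads its own shard of the data;
    # random tensors are used here purely for illustration.
    x = torch.randn(32, 10, device=device)
    y = torch.randn(32, 1, device=device)

    for _ in range(10):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()    # triggers the cross-GPU gradient all-reduce
        optimizer.step()   # every rank applies the same averaged update

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```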
Chapter 4 - Capstone Projects
We will conclude the program with capstone projects, where participants get to dig into something related to the course. These projects should draw on the skills and knowledge participants will have accumulated over the previous 5 weeks.
Duration: 6 days[1]
Below is a diagram of the curriculum as a whole, and the dependencies between sections.
Here is some sample material from the course, which you will be able to fully understand once you reach that point in the program. This notebook is on Indirect Object Identification (from the chapter on Transformers & Mechanistic Interpretability); it represents one of a set of optional 2-day mini projects which participants can choose from towards the end of that 9-day period.
Call for staff
As well as inviting applications from participants, we’re also interested in applications from teaching assistants (TAs). You can apply to be a TA for specific chapters of content if you have particular expertise in them. TAs will be well compensated for their time. We are also looking for people who can help design parts of the curriculum in the RL and training-at-scale sections (this will also be compensated, and can be done virtually). Please contact me (callum@arena.education) with any questions, or comment on this post and I will try to respond in a timely manner.
We’re also interested in people who can provide DevOps support, particularly during the first week (e.g. setting people up on virtual machines, and resolving technical problems).
Lastly, if there are some chapters of the course you’re highly knowledgeable in, and others you would like to skill up in, we’d be open to a hybrid system of part-TA-ing, part participating. If you’re interested in something like this, you should put “staff” rather than “participant” in the application form (linked at the end of this post), and in the form you’ll have the opportunity to specify exactly what you’re interested in.
FAQ
If you have a question not in this list which you think it would be valuable for others to have answered, please comment it below and we will respond. If you have a question you don’t want to make public, you can message us directly (or ask it in your application form).
Q: Who is this program suitable for?
A: We welcome applications from people who fit most or all of the following criteria:
Care about AI safety and making the future development of AI go well
Have relatively strong math skills (e.g. about one year’s worth of university-level applied math)
Are strong programmers (e.g. have a CS degree, work experience in SWE, or personal projects involving a lot of coding)
Have experience coding in Python
Would be able to travel to London for 6 weeks, starting 22nd May
We expect some participants to be university students, and others to have already graduated.
Note—these criteria are mainly intended as guidelines. If you’re uncertain whether you meet these criteria, or you don’t meet some of them but still think you might be a good fit for the program, please do apply! You can also reach out to me directly, at callum@arena.education.
Q: What will an average day in this program look like?
A: At the start of the program, most days will involve pair programming, working through structured exercises designed to cover all the essential material in a particular chapter. The purpose is to get you more familiar with the material in a hands-on way. There will also usually be a short selection of required readings in the morning.
As we move through the course, some chapters will transition into more open-ended material. Much of this will still be structured (e.g. in the Mechanistic Interpretability section there will be a large set of structured exercises you can choose from), but you’ll have more choice over which things you want to study in more depth. You’ll also hopefully be able to do some independent projects, e.g. experiments, large-scale implementations, paper replications, or other bonus content. There will still be TA supervision during these sections, but the goal is for you to develop your own research & implementation skills. You may also want to work on group projects with other participants during this time instead, if that is your preference.
The program will run on weekdays. Each day will be roughly the length of a normal working day (9am-5pm), although there will be more flexibility in working hours during the more open-ended project days. There will be no compulsory attendance on weekends, but we might organize AI safety discussion groups or social events during this time. The office space will be available 24/7 for anyone who wants to use it outside regular hours.
Q: How many participants will there be?
A: We’re expecting between 15 and 25 participants, although this depends on several factors, e.g. the strength of applications.
Q: Will there be prerequisite material?
A: Yes, we will be sending you prerequisite reading & exercises covering material such as PyTorch, einops and some linear algebra (this will be in the form of a Colab notebook). We expect that these will take approximately 1-2 days to complete.
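For a sense of what those exercises involve, here is a short illustrative sketch (not the actual prerequisite notebook) of the kind of einops tensor manipulation covered:

```python
# The style of einops manipulation covered in the prerequisite exercises.
# (Illustrative sketch only.)
import torch
from einops import rearrange, reduce

images = torch.randn(8, 3, 32, 32)  # a batch of 8 RGB images, each 32x32

# Flatten each image into a single vector: shape (8, 3*32*32)
flat = rearrange(images, "b c h w -> b (c h w)")

# Average over the spatial dimensions, keeping batch & channel: shape (8, 3)
channel_means = reduce(images, "b c h w -> b c", "mean")

print(flat.shape, channel_means.shape)
```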
Q: When is the application deadline?
A: The deadline for submitting applications is May 5th, 2023 (i.e. Friday the week after next), although we will be interviewing and making offers to candidates on a rolling basis.
We expect to get back to all applicants by May 7th (although we will try to get back to you sooner if you apply earlier).
Q: What will the application process look like?
A: There will be three steps:
Fill out an application form (this is designed to take ~20 minutes).
Complete a coding assessment.
Interview virtually with one of us, so we can find out more about your background and interests in AI safety & this course.
Q: Can I join for some sections, but not others?
A: We don’t recommend this, because it could make for a somewhat disjointed experience. However, if we invite you to the program but you have other commitments, we would be willing to discuss possible arrangements (e.g. if you are attending the AI Safety Hub research program and want a week free between ARENA and it, you might skip the capstone project week). Unless something like this is specified in your application, we will assume you intend to join for the full 6 weeks.
Q: Will you pay stipends to participants?
A: Yes, we plan to pay stipends. We have not settled on the exact amount yet, but we expect the total will come to something like $3-4.5k per participant for the whole 6-week period. We hope that money will not be a barrier for promising candidates who want to attend this program.
Q: Which costs will you be covering?
A: The stipends will be enough to cover the cost of accommodation (we won’t be directly providing accommodation, although we will give support to people who are struggling to find a place to stay). Any reasonable travel costs will be reimbursed, and we will be providing lunch and dinner in-office during the program (the office will also be kept well-stocked with snacks).
Q: Is the program available remotely?
A: We won’t be able to guarantee support for people who want to study the material virtually, and we don’t think this would offer a comparable experience to attending in-person. However, we will likely be making at least some of this material available for self-study, and we may connect applicants who are interested in studying the material virtually so they can form study groups. Strong applicants may also be added to the Slack group we’ll be creating for the course, so that they can discuss the material further.
Q: I’m interested in trialling some of the material, or recommending material to be added. Is there a way I can do this?
A: If either of these is the case, please feel free to reach out directly via a LessWrong message—we’d love to hear from you!
Q: Do you plan to run more bootcamps in the future?
A: Possibly! If you can’t make these dates, then we encourage you to submit an application anyway (the form is designed to be relatively low-effort to fill out). We would be excited to continue running these bootcamps if this iteration is also well-received.
Link to Apply
Here is the link to apply (it is the same for participants and staff). You shouldn’t need to spend longer than 20-30 minutes on it.
We look forward to receiving your application!
[1] There may be the possibility of extending your capstone project past the end date.