Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22]
We (Redwood Research and Lightcone Infrastructure) are organizing a bootcamp to bring people interested in AI Alignment up to speed with the state of modern ML engineering. We expect to invite about 20 technically talented effective altruists to Berkeley for three weeks of intense learning, taught by engineers working at AI Alignment organizations. The curriculum is designed by Buck Shlegeris (Redwood) and Ned Ruggeri (App Academy co-founder). We will cover all expenses.
We aim to have a mixture of students, young professionals, and people who already have a professional track record in AI Alignment or EA, but want to brush up on their Machine Learning skills.
Dates are Jan 3, 2022 - Jan 22, 2022. The application deadline is November 15th. We will make application decisions on a rolling basis, but aim to get back to everyone by November 22nd.
The curriculum is still in flux, but this list might give you a sense of the kinds of things we expect to cover (it’s fine if you don’t know all these terms):
Week 1: PyTorch — learn the primitives of one of the most popular ML frameworks and use them to reimplement common neural net components, optimization algorithms, and data parallelism
Week 2: Implementing transformers — reconstruct GPT-2 and BERT from scratch, and play around with their sub-components and associated algorithms (e.g. nucleus sampling; a short illustrative sketch follows this list) to better understand them
Week 3: Training transformers — set up a scalable training environment for running experiments, train transformers on various downstream tasks, implement diagnostics, and analyze your experiments
(Optional) Week 4: Capstone projects
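If a term like "nucleus sampling" is unfamiliar, here is a minimal sketch, in PyTorch, of the kind of algorithm Week 2 touches on. It is only an illustration for this post: the function name, the `top_p` parameter, and the toy vocabulary are our own choices, not excerpts from the curriculum.

```python
# A minimal sketch of nucleus (top-p) sampling, one of the algorithms mentioned above.
# Names and parameters here are illustrative, not taken from the bootcamp materials.
import torch

def nucleus_sample(logits: torch.Tensor, top_p: float = 0.9) -> int:
    """Sample a token id from `logits` (shape [vocab_size]):
    keep the smallest set of tokens whose cumulative probability exceeds `top_p`,
    renormalize, and sample from that set."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Drop tokens once the cumulative probability (excluding the current token)
    # has already passed top_p; the most likely token is always kept.
    cutoff = cumulative - sorted_probs > top_p
    sorted_probs[cutoff] = 0.0
    sorted_probs /= sorted_probs.sum()
    choice = torch.multinomial(sorted_probs, num_samples=1)
    return sorted_idx[choice].item()

# Example: sample from a toy 5-token vocabulary.
logits = torch.tensor([2.0, 1.0, 0.5, -1.0, -3.0])
print(nucleus_sample(logits, top_p=0.9))
```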
We’re aware that people start school and other commitments at various points in January, so we’re flexible about you attending whatever prefix of the bootcamp works for you.
Logistics
The bootcamp takes place at Constellation, a shared office space in Berkeley for people working on long-termist projects. People from the following organizations often work from the space: MIRI, Redwood Research, Open Philanthropy, Lightcone Infrastructure, Paul Christiano’s Alignment Research Center and more.
As a participant, you’d attend communal lunches and events at Constellation and have a great opportunity to make friends and connections.
If you join the bootcamp, we’ll provide:
Free travel to Berkeley, for both US and international applicants
Free housing
Food
Plug-and-play, pre-configured desktop computer with an ML environment for use throughout the bootcamp
You can find a full FAQ and more details in this Google Doc.
Have you thought of recording the sessions and putting them online afterwards? I’d be interested in watching, but couldn’t apply (on a honeymoon in Tahoe, which is close enough to Berkeley, but I imagine my partner would kill me if I went missing each day to attend an ML bootcamp).
Not addressing video recordings specifically, but we might run future iterations of this bootcamp if there’s enough interest, it goes well, and it continues to seem valuable. So feel free to submit the application form while noting that you’re only interested in future cohorts.
Should I reapply if I already filled in the interest form earlier? I notice that the application form is slightly updated.
No, the previous application will work fine. Thanks for applying :)
Is there any sort of confirmation email sent after submitting the application? I’ve just submitted one, and didn’t receive anything via email. Thanks!
Sorry, there’s no confirmation email currently! Feel free to send me a PM with your real name, and I can confirm that your application went through (though if you saw the “Thank you” screen, I would be quite surprised if your application got lost).