Introducing the ML Safety Scholars Program
Program Overview
The Machine Learning Safety Scholars program is a paid, 9-week summer program designed to help undergraduate students gain skills in machine learning with the aim of using those skills for empirical AI safety research in the future. Apply for the program here by May 31st.
The course will have three main parts:
Machine learning, with lectures and assignments from MIT
Deep learning, with lectures and assignments from the University of Michigan, NYU, and Hugging Face
ML safety, with lectures and assignments produced by Dan Hendrycks at UC Berkeley
The first two sections are based on public materials, and we plan to make the ML safety course publicly available soon as well. The purpose of this program is not to provide proprietary lessons but to better facilitate learning:
The program will have a Slack, regular office hours, and active support available for all Scholars. We hope that this will provide useful feedback over and above what’s possible with self-studying.
The program will have designated “work hours” where students will cowork and meet each other. We hope this will provide motivation and accountability, which can be hard to get while self-studying.
We will pay Scholars a $4,500 stipend upon completion of the program. This is comparable to undergraduate research roles and will hopefully provide more people with the opportunity to study ML.
MLSS will be fully remote, so participants will be able to do it from wherever they’re located.
Why have this program?
Much of AI safety research currently focuses on existing machine learning systems, so it’s necessary to understand the fundamentals of machine learning to be able to make contributions. While many students learn these fundamentals in their university courses, some might be interested in learning them on their own, perhaps because they have time over the summer or their university courses are badly timed. In addition, we don’t think that any university currently devotes multiple weeks to AI Safety.
There are already sources of funding for upskilling within EA, such as the Long Term Future Fund. Our program focuses specifically on ML, so in addition to funding we are able to provide Scholars with a curriculum and support, letting them focus on learning the content.
Our hope is that this program can contribute to producing knowledgeable and motivated undergraduates who can then use their skills to contribute to the most pressing research problems within AI safety.
Time Commitment
The program will last 9 weeks, beginning on Monday, June 20th, and ending on August 19th. We expect each week of the program to cover the equivalent of about 3 weeks of the university lectures we are drawing our curriculum from. As a result, the program will likely take roughly 30-40 hours per week, depending on speed and prior knowledge.
Preliminary Content & Schedule
Machine Learning (content from the MIT open course)
Week 1 - Basics, Perceptrons, Features
Week 2 - Features continued, Margin Maximization (logistic regression and gradient descent), Regression
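To give a rough sense of the level of the early material, here is a minimal sketch of the perceptron update rule covered in Week 1. This is our own illustration, not code taken from the MIT materials:

```python
# Illustrative only: a minimal perceptron training loop.
# Labels are assumed to be in {-1, +1}.
import numpy as np

def perceptron(X, y, epochs=10):
    """Train a perceptron on data X with labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only on misclassified (or boundary) points.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += yi * xi
                b += yi
    return w, b

# Linearly separable toy data: positive if x0 + x1 > 1.
X = np.array([[0.0, 0.0], [2.0, 2.0], [0.0, 2.0], [2.0, 0.0]])
y = np.array([-1, 1, 1, 1])
w, b = perceptron(X, y)
```

The key idea, which the Week 1 lectures develop in full, is that the weights are only adjusted when a point is misclassified, nudging the decision boundary toward that point's correct side.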
Deep Learning (content from a University of Michigan course as well as an NYU course)
Week 3 - Introduction, Image Classification, Linear Classifiers, Optimization, Neural Networks. ML Assignments due.
Week 4 - Backpropagation, CNNs, CNN Architectures, Hardware and Software, Training Neural Nets I & II. DL Assignment 1 due.
Week 5 - RNNs, Attention, NLP (from NYU), Hugging Face tutorial (parts 1-3), RL overview. DL Assignment 2 due.
ML Safety
Week 6 - Risk Management Background (e.g., accident models), Robustness (e.g., optimization pressure). DL Assignment 3 due.
Week 7 - Monitoring (e.g., emergent capabilities), Alignment (e.g., honesty). Project proposal due.
Week 8 - Systemic Safety (e.g., improved epistemics), Additional X-Risk Discussion (e.g., deceptive alignment). All ML Safety assignments due.
Week 9 - Final Project (edit May 5th: If students have a conflict in the last week of the program, they can choose not to complete the final project. Students who do this will receive a stipend of $4000 rather than $4500.)
Who is eligible?
The program is designed for motivated undergraduates who are interested in doing empirical AI safety research in the future. We will accept Scholars who will be enrolled as undergraduate students after the conclusion of the program (this includes graduated or soon-to-graduate high school students about to enroll in their first year of undergrad).
Prerequisites:
Differential calculus
At least one of linear algebra or introductory statistics (e.g., AP Statistics). Note that if you only have one of these, you may need to make a conscious effort to pick up material from the other during the program.
Programming. You will be using Python in this course, so ideally you should be able to code in that language (or at least be able to pick it up quickly). The courses will not teach Python or programming.
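As a rough, informal self-check (this is our own illustration, not an official placement test), you should be comfortable reading, running, and modifying a short Python function like this one:

```python
# If you can follow this code and predict its output without help,
# your Python is likely at the level the course assumes.
def mean_squared_error(predictions, targets):
    """Average squared difference between two equal-length lists."""
    assert len(predictions) == len(targets)
    squared_errors = [(p - t) ** 2 for p, t in zip(predictions, targets)]
    return sum(squared_errors) / len(squared_errors)

print(mean_squared_error([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # prints 1.3333333333333333
```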
We don’t assume any ML knowledge, though we expect that the course could be helpful even for people who have some knowledge of ML already (e.g., fast.ai or Andrew Ng’s Coursera course).
Questions
Questions about the program should be posted as comments on this post. If the question is only relevant to you, it can be addressed to Thomas Woodside ([firstname].[lastname]@gmail.com).
Acknowledgement
We would like to thank the FTX Future Fund regranting program for providing the funding for the program.
Application
You can apply for the program here. Admission is rolling, but you must apply by May 31st to be considered for the program. All decisions will be released by June 7th.
You may already have this in mind but—if you are re-running this program in summer 2023, I think it would be a good idea to announce this further in advance.
I completely agree! Summer plans are often solidified quite early, so promoting earlier is better. I’m no stranger to the idea of doing things early!
In this case, we saw the need for this program only a few weeks ago and are now trying to fill it. If we run it again next year, we'll announce it earlier, though there's definitely still some benefit to having applications open fairly late (e.g., for people who may not have gotten other positions because they lacked ML knowledge).
Are you running this program this year (2023)? I haven't seen any posts yet.
No, it is not being run again this year, sorry!
Do you know of any equivalent programs with applications open in the fall or even next year? I think this is a valuable program and have been searching for something equivalent. Thanks if you have a sec to share resources.
There is an equivalent for the ML safety component only. It's very different from this program in time commitment (much lower), stipend (much lower), and prerequisites (much higher; it requires prior ML knowledge). There are many online courses that just teach ML, though, so you could take one of those on your own and then this.
https://forum.effectivealtruism.org/posts/uB8BgEvvu5YXerFbw/intro-to-ml-safety-virtual-program-12-june-14-august-1
Regarding the prerequisites (differential calculus, linear algebra or statistics, programming) do you have any recommended resources for people who have studied these but are a bit rusty and want to revise the content before MLSS? Or a list of concepts or tasks that participants can use to check whether their understanding is at the required level? (e.g. “If you know the concepts X, Y and Z in linear algebra, and can write a program to solve task A and B, then you probably have enough background for MLSS.”)
Document provided by MLSS:
Preparation Materials and Curriculum [public]
https://docs.google.com/document/d/1jKAeq6Sm9HTuA8N3545fZrhTxIw-gFYa4tBmNZPJAXE/edit
Sorry, missed replying to this comment as we were working on this doc, this is indeed the resource we recommend!
Can undergraduates who already know ML skip weeks 1-2? Can undergraduates who already know DL skip weeks 3-5?
We’ll consider this if there’s enough demand for it! But especially for the latter option, it might make sense for students to work through the last three weeks on their own (ML Safety lectures will be public by then).
Besides the designated work hours, will the program mostly be synchronous? I am heavily interested in participating, but will be based outside the U.S. for the summer.
Thanks so much for your time!
It will be mostly asynchronous, with a few hours of synchronous content per week. We also expect to have sections at different times for people in different timezones so there should be one that works for you.
I’m wondering whether all admission notices have been sent.
No, we’re still working on it! All decisions will be sent by tomorrow, June 7th, as indicated in this post.
Hello! I have checked this website many times, and unfortunately it seems I was not accepted into this program, but I still want to learn from the courses. Beyond the links already posted for most of the program's courses, will you establish an open community to share assignment content during the program? I would also like to learn from the final project and the ML Safety course (the third part of the program). Thank you!
This document will include all of that information (some of it isn’t ready yet).
May I know how many places are there available for this program?
This will depend on the number of TAs we can recruit, our yield rate, and other variables, so I can't give a good figure right now, sorry.
It’d be really great if there were programs like these for Information Security / Cybersecurity fields as well :(
Where do you plan to publicize the ML safety course? I have a lot of interest in this topic area but cannot commit to the full program.
We’ll have a website, and probably also make a forum post here when we release it.
Is there any chance to participate even if I got my bachelor’s degree last year? I am in my first year as a grad student and I am extremely interested in these topics.
If not, is there any way to access the ML safety part material? I feel the need for structured material to study from about this topic and I would gladly look at it also by myself.
Best regards
We’re prioritizing undergraduates, but depending on the strength/size of our undergraduate applicant pool, we may also admit graduate students. Feel free to apply!
The ML Safety curriculum is not yet fully ready, but when it is we will release it publicly, not just for this program. We’ll post again when we do.
Relatedly, is there any chance you’d consider people who have recently completed undergrad, but aren’t grad students (e.g. are currently working in the corporate sector)?
We may consider people in this situation, but it’s not the focus of our program and we will prioritize undergraduates.
This is great news! I’ll apply soon. Thanks
After submitting my application, should I be getting a form confirmation email? Or is it fine not to have received anything? I wasn’t sure if my application was processed correctly. Thanks!
A confirmation email is not expected. We received your application!
Hi, sadly just saw this post now but I’d definitely apply if you’ll be hosting this program again next summer! Is there any way to get notified in case?
I submitted my application yesterday, but haven't received any confirmation email or anything of the like. Should we expect something confirming our application has been received, or is there any way I can make sure that all my materials have been submitted? Thanks!
Same question here.
Does the “30-40 hours/week” estimate refer to the hours of lectures per week, or the total study time (lectures plus assignments and review) we should expect to devote each week?
Total time including assignments. Don’t worry, there will not be 30-40 hours of lecture videos every week!
Are there select hours that we must be available? I have commitments at certain times of the day, and timezones may not match up. Would this be a problem?
That shouldn’t be a problem. For synchronous activities, we will have multiple sessions you can attend (we will have people from all over the world so we need to do this anyway).
Am I able to get a reference letter from one of the professors or add this experience to my resume if I complete the program?
You can certainly add it to your resume, but you wouldn’t be able to get a reference letter.
The program uses public recorded online classes, and while we have TAs, none of them are professors.
Someone referred me to apply to be a TA for this program. How would you like such people to contact you—should I email you, or is there another form for that?
It's not clear right now whether we will need more TAs, but if we do, we'll make a post soon with an application. I'll reply to this if/when that happens. Thanks for your interest!
Could we still apply if we have exams between Jun 20-Jun 24 and will be available afterwards?
Yes, but please note this on your application. In general, short periods of unavailability are fine, but we won’t give any extensions for them so you will likely have to complete the material at an accelerated pace at the times when you are available.