Announcing an Empirical AI Safety Program
Last summer, the Center for AI Safety ran the ML Safety Scholars program (MLSS) to help students interested in AI Safety gain technical skills and learn about empirical safety topics. This fall, we are running an abridged version of this program called Introduction to ML Safety.
Apply to be a participant by September 21st.
Apply to be a facilitator by September 18th.
Website (for advertising the program to others): mlsafety.org/intro-to-ml-safety
About the Course
Introduction to ML Safety is an 8-week course that aims to introduce students with a deep learning background to empirical AI Safety research. The program is designed and taught by Dan Hendrycks, a UC Berkeley ML PhD and director of the Center for AI Safety, and provides an introduction to robustness, alignment, monitoring, systemic safety, and conceptual foundations for existential risk.
Each week, students will be assigned readings, lecture videos, and required homework assignments. The materials are publicly available at course.mlsafety.org.
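To give a concrete flavor of the empirical style of the course, here is a small sketch (our illustration, not course material) of the kind of experiment the robustness unit studies: how much a classifier's accuracy drops when its inputs are perturbed. The `model`, `images`, and `labels` below are placeholders for your own classifier and evaluation batch.

```python
import torch


def accuracy_under_noise(model, images, labels, sigma=0.1):
    """Compare clean accuracy with accuracy under Gaussian input noise.

    A toy robustness probe: a large gap between the two numbers means
    the model is brittle to even this mild distribution shift.
    """
    model.eval()
    with torch.no_grad():
        clean_preds = model(images).argmax(dim=-1)
        noisy_images = images + sigma * torch.randn_like(images)
        noisy_preds = model(noisy_images).argmax(dim=-1)
    clean_acc = (clean_preds == labels).float().mean().item()
    noisy_acc = (noisy_preds == labels).float().mean().item()
    return clean_acc, noisy_acc
```

The course's robustness benchmarks are considerably richer than this (structured corruptions, adversarial perturbations, distribution shift), but the measure-the-gap pattern is representative.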
There are two tracks:
Introductory track: for people who are new to AI Safety. This track aims to familiarize students with the AI X-risk discussion alongside empirical research directions.
Advanced track: for people who already have a conceptual understanding of AI X-risk and want to learn more about existing empirical safety research so they can start contributing.
Introduction to ML Safety will be virtual by default, though students can participate in person if a section is facilitated at their local university.
Time Commitment
The program will last for 8 weeks, beginning on September 26th and ending on November 18th. Participants are expected to commit at least 5 hours per week. This includes ~1 hour of recorded lectures (which will take more than one hour to digest), ~1-2 hours of readings, ~1-2 hours of written assignments, and 1 hour of discussion.
We understand that 5 hours is a large time commitment, so to make our program more inclusive and remove financial barriers, we will provide a $1000 stipend upon completion of the course. For logistical reasons, we can only pay participants who are eligible to work in the US. Non-US residents are welcome to apply and participate, but we cannot offer them a stipend. We will make an effort to provide international students in the US with stipends, but we cannot guarantee that we will be able to.
Eligibility
Anyone can apply as long as they have the following prerequisites:
Deep learning: you can gauge the background knowledge required by skimming the week 1 slides (deep learning review) or the short sketch below this list.
Linear algebra or introductory statistics (e.g., AP Statistics)
Multivariate differential calculus
If you are not sure whether you meet these criteria, err on the side of applying.
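As a rough, unofficial bar for the deep learning prerequisite: if you can read the following minimal PyTorch training step without difficulty (an illustrative sketch of ours, not taken from the course materials), you likely have enough background.

```python
import torch
import torch.nn as nn

# A minimal classifier and a single optimization step on random data.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 784)         # stand-in batch of flattened images
y = torch.randint(0, 10, (32,))  # stand-in integer class labels

logits = model(x)                # forward pass
loss = loss_fn(logits, y)        # cross-entropy loss on the logits
optimizer.zero_grad()
loss.backward()                  # backpropagation
optimizer.step()                 # gradient update
```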
Facilitating a Section
To be a facilitator, you must have a strong background in deep learning and AI Safety. For the latter, taking AGISF (AGI Safety Fundamentals) or participating in a similar program is sufficient. Much of the course content is not covered in AGISF, so if a given week's material is new to you, you will need to learn it in advance.
The time commitment for running one section is ~2-4 hours per week: 1 hour of discussion and 1-3 hours of preparation, depending on your prior familiarity with the material. Discussion times are flexible.
We will pay facilitators a stipend corresponding to ~$30 per hour (subject to legal constraints).
You can apply to be a facilitator here (applications are due by September 18th).
Questions
Questions about the program should be posted as comments on this post. For questions that are only relevant to you, please reach out to introcourse@mlsafety.org.
Can someone who is not a student participate?
Yes! Thanks for asking.
I’m only pointing out the obvious here but like… the deadline for facilitator applications is 5 days from the date of this post?? And the participant deadline is 8 days away??
And this page https://github.com/centerforaisafety/Intro_to_ML_Safety seems to suggest that many of the course notes are currently incomplete…?
I feel like a reasonable reaction is basically: what’s going on with that? Why announce and run it now? It feels and looks rushed or amateurish (which isn’t purely a cosmetic issue, imo).
These are reasonable concerns, thanks for voicing them. As a result of unforeseen events, we became responsible for running this iteration only a couple of weeks ago. We thought that getting the program started quickly — and potentially running it at a smaller scale as a result — would be better than running no program at all or significantly cutting it down.
The materials (lectures, readings, homework assignments) are essentially ready to go and have already been used for MLSS last summer. Course notes are supplementary and are an ongoing project.
We are putting a lot of hours into making sure this program starts on time and runs smoothly. We are sorry the deadlines are so aggressive and agree that it would have been better to launch earlier. If you have trouble getting your application in on time, please don’t hesitate to contact us about an extension. We also plan to run another iteration in the spring and to announce it further in advance.
I’m not involved with running this course, but I’ve watched the online lectures and there’s a decent amount of content, albeit at a high level. If the course is run with rolling cohorts, the inconvenience of the short notice is offset by the option to participate or facilitate in a later cohort.
Personally, I think developing courses while running them is a good way to make sure you’re creating value and updating based on feedback as opposed to putting in too much effort before testing your ideas.
The question [quoted in the original comment, not reproduced here] does not allow negative inputs.
Fixed, thanks!