Just to make this a little more accessible to people who aren’t familiar with SERI-MATS: MATS is the Machine Learning Alignment Theory Scholars Program, a training program for young researchers who want to contribute to AI alignment research.
Thanks Joseph! Adding to this, our ideal applicant has:
an understanding of the AI alignment research landscape equivalent to having completed the AGI Safety Fundamentals course;
previous experience with technical research (e.g., ML, CS, maths, physics, or neuroscience), ideally at a postgraduate level;
strong motivation to pursue a career in AI alignment research, particularly to reduce global catastrophic risk.
MATS alumni have gone on to publish safety research (LW posts here), join alignment research teams (including at Anthropic and MIRI), and found alignment research organizations (including a MIRI team, Leap Labs, and Apollo Research). Our alumni spotlight is here.