Thanks Joseph! Adding to this, our ideal applicant has:
an understanding of the AI alignment research landscape equivalent to having completed the AGI Safety Fundamentals course;
previous experience with technical research (e.g., ML, CS, maths, physics, or neuroscience), ideally at a postgraduate level;
strong motivation to pursue a career in AI alignment research, particularly to reduce global catastrophic risk.
MATS alumni have gone on to publish safety research (LW posts here), join alignment research teams (including at Anthropic and MIRI), and found alignment research organizations (including a MIRI team, Leap Labs, and Apollo Research). Our alumni spotlight is here.