Co-Director at ML Alignment & Theory Scholars Program (2022-present)
Co-Founder & Board Member at London Initiative for Safe AI (2023-present)
Manifund Regrantor (2023-present)
Advisor, Catalyze Impact (2023-present)
Advisor, AI Safety ANZ (2024-present)
Ph.D. in Physics at the University of Queensland (2017-2023)
Group organizer at Effective Altruism UQ (2018-2021)
TL;DR: MATS is fundraising for Summer 2025 and could support more scholars at $35k/scholar
Ryan Kidd here, MATS Co-Executive Director :)
The ML Alignment & Theory Scholars (MATS) Program is a twice-yearly independent research and educational seminar program that aims to provide talented scholars with talks, workshops, and research mentorship in the fields of AI alignment, interpretability, and governance, and to connect them with the Berkeley AI safety research community. The Winter 2024-25 Program will run Jan 6-Mar 14, 2025, and our Summer 2025 Program is set to begin in June 2025. We are currently accepting donations for our Summer 2025 Program and beyond. We would love to include additional interested mentors and scholars at $35k/scholar. We have substantially benefited from individual donations in the past and were able to support ~11 additional scholars thanks to Manifund donations.
MATS helps expand the talent pipeline for AI safety research by empowering scholars to work on AI safety at existing research teams, found new research teams, and pursue independent research. To this end, MATS connects scholars with research mentorship and funding, and provides a seminar program, office space, housing, research management, networking opportunities, community support, and logistical support. MATS supports mentors with logistics, advertising, applicant selection, and research management, greatly reducing the barriers to research mentorship. Immediately following each program is an optional extension phase in London, where top-performing scholars can continue research with their mentors. For more information about MATS, please see our recent reports: Alumni Impact Analysis, Winter 2023-24 Retrospective, Summer 2023 Retrospective, and Talent Needs of Technical AI Safety Teams.
You can see further discussion of our program on our website and Manifund page. Please feel free to AMA in the comments here :)