Executive summary: MATS ran a series of AI safety discussion groups for their Summer 2024 Program, covering key topics like AI capabilities, timelines, training challenges, deception risks, and governance approaches to help scholars develop critical thinking skills about AI safety.
Key points:
Curriculum covered 5 weekly topics: AI intelligence/power, transformative AI timelines, training challenges, alignment deception risks, and AI governance approaches.
Core and supplemental readings were provided for each topic, along with discussion questions to facilitate critical analysis.
Curriculum aimed to increase scholars’ knowledge of AI safety ecosystem and potential catastrophe scenarios.
Changes from the previous version included reductions made after the discussion series concluded.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.