Applications open for AI Safety Fundamentals: Governance Course

Link post

Apply to participate or facilitate, before 25th June 2023.

We are excited to support participants who are curious about working in AI governance, or who already do so. If you have networks that might be interested, we would appreciate you sharing this course with them.

Full announcement

There has been increasing interest in how AI governance can mitigate extreme risks from AI, but it can be difficult to get up to speed on research and ideas in this area.

The AI Safety Fundamentals (AISF): Governance Course is a completely free online class designed to efficiently introduce key ideas in AI governance, with a focus on risks from future AI systems. We offer:

  • A widely recommended curriculum that provides a structured guide to the field

    • The course is designed with input from a wide range of relevant experts. The curriculum will be updated before the course launches in mid-July.

  • Weekly facilitated small-group discussions, for accountability and sharing ideas

  • Our course community—opportunities to engage in relevant online discussions, learn about professional opportunities, and attend Q&A sessions with experts

The course is run by BlueDot Impact, a nonprofit project founded by members of the organising team behind the course’s previous iteration.

Note that we have renamed the website from “AGI Safety Fundamentals” to “AI Safety Fundamentals”. We’ll release another post within the next week to explain our reasoning, and we’ll respond to any discussion about the rebrand there.

Apply here, by 25th June 2023.

Time commitment

The course will run for 12 weeks from July-September 2023. It comprises 8 weeks of reading and virtual small-group discussions, followed by a 4-week project.

The time commitment is around 5 hours per week. The split will be ~1.5-2 hours of reading, ~1.5 hours of discussion, and a ~1-hour expert Q&A session.

Course structure

Participants will be grouped depending on their current policy expertise. Discussion facilitators will be knowledgeable about AI governance; they can help answer participants’ questions and point them to further resources.

Participants can use project time to synthesise their views on the field and how they can put these ideas into practice, and/or to start building knowledge or writing samples that will help them with their career.

Target audience

Due to capacity constraints, we don’t expect to be able to accept all applicants. We think this course will be particularly helpful if any of the following apply to you:

  • You have policy experience, and are keen to apply your skills to reducing risk from AI.

  • You have a technical background, and want to learn how you can use your skills to contribute to the AI governance agenda.

  • You are a student or early in your career, and are interested in exploring a career in governance to reduce risks from advanced AI.

We expect at least 25% of participants will not fit any of these descriptions. There are many skills, backgrounds and approaches to AI governance we haven’t captured here, and we will consider all applications accordingly.

If we don’t have the capacity to include you in the organised course, you can still read through our public curriculum.

Apply now!

If you would like to be considered for the next round of the course, starting in July 2023, please apply here by 25th June 2023. More details can be found here. We aim to let you know the outcome of your application by late June 2023.

If you already have experience working on AI governance or feel well-versed in the content, we’d be excited for you to join our community of facilitators. Please apply to facilitate here. (This is the same form; you will be offered an option to select “facilitator”.)