List of AI safety courses and resources

By: Daniel del Castillo, Chris Leong, and Kat Woods

We made a spreadsheet of resources for learning about AI safety. It was originally for internal use here at Nonlinear, but we thought it might be helpful to anyone interested in becoming a safety researcher.

If you notice anything we’re missing or anything that needs updating, please let us know by commenting below. We’ll update the sheet in response to comments.

Highlights

There are a lot of courses and reading lists out there. If you’re new to the field, of the ones we investigated, we recommend Richard Ngo’s curriculum for the AGI Safety Fundamentals program. It strikes a good balance: shorter, more structured, and broader than most alternatives. You can register interest for the next round of the program, or simply work through the reading list on your own.

We’d also like to highlight the remote AI safety reading group, which might be worth looking into if you’re feeling isolated during the pandemic.

About us: Nonlinear is a new AI alignment organization founded by Kat Woods and Emerson Spartz. We are a means-neutral organization, so we’re open to a wide variety of interventions that reduce existential and suffering risks. Our current top two research priorities are multipliers for existing talent and prizes for technical problems.

PS: Our autumn Research Analyst Internship is open for applications. The deadline is September 7th at midnight EDT. The application should take around ten minutes if your CV is already written.