No, the same set of ~28 authors completed all of the readings.
The order of the readings was indeed specified:
1. Concise overview (Stuart Russell, Sam Bowman; 30 minutes)
2. Different styles of thinking about future AI systems (Jacob Steinhardt; 30 minutes)
3. A more in-depth argument for highly advanced AI being a serious risk (Joe Carlsmith; 30 minutes)
4. A more detailed description of how deep learning models could become dangerously “misaligned” and why this might be difficult to solve with current ML techniques (Ajeya Cotra; 30 minutes)
5. An overview of different research directions (Paul Christiano; 30 minutes)
6. A study of what ML researchers think about these issues (Vael Gates; 45 minutes)
7. Some common misconceptions (John Schulman; 15 minutes)
Researchers had the option to read transcripts where they were available; we said that consuming the content in either form (video or transcript) was fine.