The primary purpose of the sequences was to communicate the generators behind AI risk and to teach the tools Eliezer considered necessary for making progress on it, so references to AI risk appear throughout, and it is the second most central theme of the essays.
Later essays in the sequences tend to reference AI risk more often than earlier ones. Here is a somewhat arbitrary selection of essays that seemed crucial when looking over the list, though it is very unlikely to be comprehensive:
Ghosts in the Machine
Optimization and the Intelligence Explosion
Belief in Intelligence
The Hidden Complexity of Wishes
That Alien Message (I think this one is particularly good)
Dreams of AI Design
Raised in Technophilia
Value is Fragile
There are lots more. Indeed, toward the latter half of the sequences it is hard to go more than two or three essays without encountering one quite straightforwardly about AI alignment.