Here are some relevant books from my ranked list of all EA-relevant (audio)books I’ve read, along with a little bit of commentary on them.
The Precipice, by Ord, 2020
See here for a list of things I’ve written that summarise, comment on, or take inspiration from parts of The Precipice.
I recommend reading the ebook or physical book rather than the audiobook, because the footnotes contain a lot of good content and aren’t included in the audiobook.
Superintelligence may have influenced me more, but only because I read it very soon after getting into EA, whereas I read The Precipice after I’d already learned a lot. I’d now recommend The Precipice first.
Superintelligence, by Bostrom, 2014
The Alignment Problem, by Christian, 2020
This might be better than Superintelligence and Human-Compatible as an introduction to the topic of AI risk. It also seemed to me to be a surprisingly good introduction to the history of AI, how AI works, etc.
But I’m not sure this’ll be very useful for people who’ve already read or listened to a decent amount (e.g., the equivalent of four books) about those topics.
This is more relevant to technical AI safety than to AI governance (though obviously the former is relevant to the latter anyway).
Human-Compatible, by Russell, 2019
See also this interesting Slate Star Codex review.
The Strategy of Conflict, by Schelling, 1960
See here for my notes on this book, and here for some more thoughts on this and other nuclear-risk-related books.
This is available as an audiobook, but a few Audible reviewers suggest using the physical book due to the book’s use of equations and graphs. So I downloaded this free PDF into my iPad’s Kindle app.
Destined for War, by Allison, 2017
See here for some thoughts on this and other nuclear-risk-related books, and here for some thoughts on this and other China-related books.
The Better Angels of Our Nature, by Pinker, 2011
See here for some thoughts on this and other nuclear-risk-related books.
Rationality: From AI to Zombies, by Yudkowsky, 2006-2009
I.e., “the sequences”
Age of Ambition, by Osnos, 2014
See here for some thoughts on this and other China-related books.
I’ve also now listened to Victor’s Understanding the US Government (2020) due to my interest in AI governance, and made some quick notes here.
I’m also going to listen to Tegmark’s Life 3.0, but haven’t done so yet.