Does anyone know (from experience) of good articles or books on technological risks in general (not necessarily AI), or on non-AI technological risk specifically?
Is “Global Catastrophic Risks” by Bostrom worth reading in this context? It’s from 2008; my concern is that it might be outdated.
There’s this policy report from September 2014, Unprecedented Technological Risks, signed by Beckstead, Bostrom, Bowerman, Cotton-Barratt, MacAskill, Ó hÉigeartaigh, and Ord. Not a long read, but I’d expect the references to be among the best available.
I thought it was excellent when I read it (in 2010), and I expect it has held up pretty well. I can’t think of a better replacement.
I’d suggest Global Catastrophic Risks as a good primer. (The essays aren’t written by Bostrom; he co-edited the book.)