“We think AI poses an existential risk to humanity”
I’m struggling to understand why someone would believe this. What are some good resources for understanding why one might?
Nick Bostrom’s “Superintelligence” is an older book, but still a good overview. Stuart Russell’s “Human Compatible” is a more recent treatment. I touch on some of the main issues in my talk here, and Paul Christiano’s excellent essay “What Failure Looks Like” tackles the argument from another angle.