Annoyingly, I’m not going to answer your question, but I’m going to ask you a question: having read all of those books, which would you most recommend to a person who was only going to read one book about AI?
If your answer is ‘depends what they’re looking for’, imagine I’m the one person. My priorities are:
-a very clear case for why AI might be dangerous, with all the steps laid out and strongly argued for, such that I can easily pick out parts where I’m confused or disagree
-relatable everyday examples, both because they will help me understand, and because I’d like some of these at my fingertips so that I can more easily explain AI risk to non-EAs who aren’t familiar with it (or aren’t familiar with the sorts of risks that EAs worry about).
Not annoying at all. I’m always happy to share book recommendations. :)
Of all the books I’ve read related to AI, I think that The Alignment Problem: Machine Learning and Human Values would be the best. I found it far easier to digest than the others. It was written by a journalist rather than an academic, and that shows: the writing style is much smoother than some of the other books on the above list. While it certainly has less detail than Human Compatible or Superintelligence, I think that the increase in “digestibility” far outweighs the decrease in “rigor.”
If you are already somewhat aware of AI safety/alignment/risk and that whole hodgepodge of ideas, then I think that Human Compatible or Superintelligence would be fine, as they are both a bit more in-depth. But if you haven’t encountered these ideas before, I think that The Alignment Problem would be a better introduction. Another benefit of Human Compatible and Superintelligence is that they are respected and well known within EA, so if other people’s perceptions of you matter, then reading one of them might make more sense.
Thanks, this is really helpful!