FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community.

(Cross-posted from LessWrong)

The TLDR is the title of this post.

Hi all, long-time EA/Rationalist, first-time poster (apologies if formatting is off). I’m posting to mention that I’m 30,000 words into a draft of a book about the threat of AI, written for a general audience.
People who read this forum would likely learn little from the book; it is aimed at their friends and the larger group of people who do not follow these issues.

Brief FAQ:
Q: What’s the book about? What’s the structure?
A: Summarizing the book in a few short sentences: Artificial Superintelligence is coming. It is probably coming soon. And it might be coming for you.
Structurally, the initial chunk makes the case that AGI/ASI will happen at all, will happen soon, and will not obviously be controllable. In short, the usual suspects.
The next chunk will be a comprehensive list of the objections and criticisms of these positions, with responses to each. The final chunk explores what we can do about it. My goal is to be thorough and exhaustive (without being exhausting).

Q: Why should this book exist? Aren’t there already good books about AI safety?

A: Yes, there are! Superintelligence, Human Compatible, The Precipice, The Alignment Problem, Life 3.0, etc. all provide high-quality coverage in different ways. But most of them are not intended for a general audience. My goal is to explain key concepts in the most accessible way possible (e.g., discussing the orthogonality thesis without using the word “orthogonal”).
Second, the market craves new content. While some people read books that are 2-10 years old, many don’t, so new works need to keep coming out. Additionally, given how many advances there have been recently, coverage quickly becomes out of date. I think we should have more books coming out on this urgent issue.

Q: Why you?

A: a) I have 14 years of experience explaining concepts to a general audience through writing and presenting hundreds of segments on my podcast, The Reality Check;
b) I also have 14 years of experience as a policy analyst, again learning to explain ideas in a simple, straightforward manner;
c) I’m already writing it and I’m dedicated to finishing it. I waited until I was this far along to prove to myself that I could see it through. This public commitment will provide further incentive for completion.

Q: Are you concerned about this negatively impacting the movement?

A: This is a concern I take seriously. While it is possible that increasing awareness of the problem of AI will make things worse overall, I think the more likely outcome is neutral to good. I will strive to do justice to the positions and concerns of people in the community (while acknowledging that there is disagreement within it).

Q: Do you need any help?

A: Sure, thanks for asking. See the breakdown of possibilities below.
a) If anyone is keen to volunteer as a research assistant, please let me know.
b) I’ll soon start looking for an agent. Does anyone have connections to John Brockman (perhaps through Max Tegmark)? Or to other agents?
c) If smart and capable people want to review some of the content in the future when it is more polished, that would be great.
d) I’m waiting to hear back about possible funding from the LTFF. If that falls through, funding to pay for research assistance, editors/reviewers, and book promotion, or even to free up more of my time (as this is currently a side project), would be useful.

Q: Most books don’t really have much impact; isn’t this a long shot?

A: Yes. Now is the time for long shots.