Seeking Input for an AI Safety Book for a Non-Technical Audience

TL;DR: You are invited to suggest ideas, information, or framing for an upcoming AI safety book aimed at a non-technical audience.

Context:
I’m writing an accessible book about AI safety/risk for a non-technical audience, intended to serve both the AI safety cause and the community. The book’s intended readers are likely not people who read this forum, but rather your friends, family, policymakers, and non-science people who are curious about the topic. I started last June, the book is pretty far along, and I hope to have it available within the next three months (I received an LTFF grant last year to help bring it into existence).

Briefly, the purpose of the book is to communicate that intelligence is extremely powerful, that AI progress is happening fast and AI systems are becoming more intelligent and capable, that advanced AI poses a risk to humanity because it may not be aligned with our values and may be uncontrollable, and that we should therefore act now to reduce the risk.

Opportunity:
You can share ideas, facts, framing, or anything else you think would be important in such a book, either in the comments below or by sending me a message.
If interested, you may still be wondering what I’ve already included. Broadly, as a heuristic, if your idea is very obvious, I’m probably already including it. But it could still be useful for you to suggest it, so I can see that others think it is important.
If your idea is highly technical, I have likely chosen not to include it. But it could still be worth suggesting if it is a key consideration that can be made more accessible. I’m trying to be open-minded but efficient with people’s time.
I am also trying to minimize the chance of someone saying “I really wish he had mentioned X” after the book comes out. No promises of inclusion, but your suggestions will at least be considered.

Finally, I’m more than happy to have people become more involved, as feedback from a range of knowledgeable people is useful for a variety of reasons.

(Cross-posted from LessWrong)