[Question] How/When Should One Introduce AI Risk Arguments to People Unfamiliar With the Idea?

Note: Since there may be a quick and easy answer to my question (e.g., a cache of sources I simply hadn't encountered), I'll keep this post fairly short, especially relative to my own notes/thoughts on the question.

For some time, neither I nor the people I've asked have known of good guides or resources for personally introducing and discussing AI risk as a legitimate "this-century" concern (as opposed to just linking people to articles/videos). Given how far outside the Overton window this claim seems to be, I've struggled to figure out how best to introduce the topic without coming across as obsessive or wacky, and the result has been some seemingly ineffective or half-baked conversations with co-workers and friends.

Especially given the apparent or potential increase in media and popular attention on EA, better communication about AI risk seems like a good idea. While I personally think Rob Miles's videos and Cold Takes are good, I'd prefer to have a firmer grasp of the arguments myself so that I don't have to rely so heavily on "here are a bunch of links for you to check out." (To be clear, that's not what I've led with thus far.)

For me, there seem to be at least two key parts to this:

  1. What are the "minimum viable arguments" or basic chains of points that go from "what is AI" to "AI poses a non-trivial existential risk this century"? This is just the bare epistemic foundation for the claims.

  2. What kinds of quotes, ideas, arguments, analogies, examples, etc. are fairly easy to introduce and at least somewhat effective at getting people to be open to the (admittedly quite bold) claim that "AI risk this century" should be taken seriously?

I would love to get into deeper questions of epistemics and persuasion theory (e.g., how to adapt to different audiences or discussion goals, and how to reduce the time cost or cognitive difficulty of evaluating AI risk arguments). For now, though, I'll leave my question at that and see whether anyone knows of resources that address these initial questions.