Note: Since there may be a quick and easy answer to my question (e.g., a cache of sources I simply haven't encountered), I'll keep the text of this post fairly short (especially relative to my own notes/thoughts on this question).
For some time, I and others I've asked have been unaware of good guides or resources on personally introducing and discussing AI risk as a legitimate "this-century" concern (as opposed to just linking people to articles/videos). Given how far outside the Overton window this claim seems to be, I've personally struggled to figure out how best to introduce the topic without coming across as a bit obsessive or wacky; the result has been some seemingly ineffective or half-baked conversations with co-workers and friends.
Especially given the apparent or potential increase in media and popular attention on EA, better communication about AI risk seems like a good idea. While I personally think Rob Miles's videos and Cold Takes are good, I would prefer to have a better personal grasp of the arguments so that I don't have to rely so heavily on "here are a bunch of links for you to check out." (To be clear, that's not what I've led with thus far.)
For me, there seem to be at least two key parts to this:
What are the "minimum viable arguments," or basic chains of points, that go from "what is AI" to "there is a non-trivial chance that AI poses an existential risk this century"? This is just the bare epistemic foundation for the claims.
What kinds of quotes, ideas, arguments, analogies, examples, etc. are fairly easy to introduce and at least effective at getting people to be open to the (admittedly quite bold) claim that "AI risk this century" should be taken seriously?
I would love to get into deeper aspects of epistemics and persuasion theory (e.g., how you should adapt to different audiences or discussion goals, or how to reduce the time cost or cognitive difficulty of evaluating AI risk arguments), but for now I'll leave my question at that and see if anyone knows of resources that might help answer these initial questions.
I think AI Risk Intro 1: Advanced AI Might Be Very Bad is great.
I think Is Power-Seeking AI an Existential Risk? is probably the best introduction, though it's probably too long as a first introduction if the person isn't yet that motivated. It's also written as a list of propositions with probabilities attached, which might not appeal to many people.
I also listed some shorter examples in this post for the AI Safety Public Materials Bounty we're running, which might be more suitable as a first introduction. Here are the ones most relevant to people not versed in machine learning:
AI risk executive summary (2014)
Robert Miles’ YouTube channel (2017-present)
AGI Safety From First Principles (2020)
The case for taking AI risk seriously as a threat to humanity (2020)
The bounty is also intended to generate more such materials, because I think a lot more can be done.
I'm also interested in seeing better knowledge translation in this area, particularly in the form of storytelling to make the material less theoretical and give it more narrative traction.