[Question] What is the best article to introduce someone to AI safety for the first time?

If they aren’t already familiar with AI risk, then they probably won’t read Nick Bostrom’s Superintelligence (2014) or “AI as a Positive and Negative Factor in Global Risk” (2008). For people for whom a single article is more appropriate, and keeping in mind the Lessons Learned from Talking to Academics about AI Safety, what is the best resource for introducing someone to AI safety?
What is their level of familiarity with machine learning and/or computer science?
High familiarity with computer science, very low with ML.
I just updated my post on this: https://forum.effectivealtruism.org/posts/8sAzgNcssH3mdb8ya/resources-i-send-to-ai-researchers-about-ai-safety
I have different recommendations for ML researchers, the public, and proto-EAs (these groups are more or less skeptical to begin with, rely on different kinds of evidence, and are willing to entertain weirder or more normative hypotheses), but that post covers some introductory materials.
If they’re a somewhat skeptical ML researcher looking for introductory material, my top recommendation at the moment is “Why I Think More NLP Researchers Should Engage with AI Safety Concerns” by Sam Bowman (2022), a ~15-minute read. (Note: stop at the section “The new lab”.)