Just updated my post on this: https://forum.effectivealtruism.org/posts/8sAzgNcssH3mdb8ya/resources-i-send-to-ai-researchers-about-ai-safety
I have different recommendations for ML researchers, the general public, and proto-EAs (these audiences differ in how skeptical they are to begin with, which kinds of evidence they rely on, and how willing they are to entertain weirder or more normative hypotheses), but that post covers some introductory materials.
If they’re a somewhat skeptical ML researcher looking for introductory material, my top recommendation at the moment is “Why I Think More NLP Researchers Should Engage with AI Safety Concerns” by Sam Bowman (2022), ~15 min (note: stop at the section “The new lab”).