List of AI safety newsletters and other resources
This is a list of AI safety newsletters (and some other ways to keep up with AI developments).[1]
If you know of any that I’ve missed, please comment![2]
And I’d love to hear about your experiences with or reflections on the resources listed.
Thanks to everyone who puts together resources like the ones I’m collecting here!
Just to be clear: the list includes newsletters and resources that I haven’t engaged with much.
Podcasts & video channels
Dwarkesh Podcast—deeply researched interviews
AI Explained (YouTube) - ~weekly recaps of recent news and research in AI and AI safety
The AI Policy Podcast (CSIS) - biweekly discussions of AI policy and related topics.
Robert Miles AI Safety (YouTube)
AI X-risk Research Podcast with Daniel Filan (AXRP)
The 80,000 Hours podcast often has episodes related to AI safety
You can listen to many Forum posts via podcast feeds, including a “Curated and Popular” feed. See more info here.
There are also the following, although I haven’t personally engaged with them:
The AI Safety Podcast
The AI Daily Brief (YouTube) (not safety-focused)
Future of Life Institute Podcast
Newsletters
General-audience, safety-oriented newsletters on AI and AI governance
Note that the EA Newsletter, which I currently run, also often covers relevant updates in AI safety.
AI Safety Newsletter (Center for AI Safety) (🔉)
Stay up-to-date with the latest advancements in the world of AI and AI safety with our newsletter, crafted by the experts at the Center for AI Safety. No technical background required.
Audio available via the Forum cross-posts.
Transformer (Shakeel Hashim)
A weekly briefing covering AI and AI policy updates and notable media coverage.
AI Policy Weekly (Center for AI Policy)
Each week, this newsletter provides summaries of three important developments that AI policy professionals should know about, especially folks working on US AI policy. Visit the archive to read a sample issue.
More in-the-weeds safety-oriented newsletters
GovAI newsletter (Centre for the Governance of AI)
Includes research, annual reports, and occasional updates about programmes and opportunities. They also have a blog.
ChinAI (Jeffrey Ding)
This weekly newsletter, sent out by Jeff Ding, a researcher at the Future of Humanity Institute, covers the Chinese AI landscape and includes translations from Chinese government agencies, newspapers, corporations, and other sources.
AI safety takes (Daniel Paleka)
Summaries of news and research in AI safety (once every month or two).
The EU AI Act Newsletter (Future of Life Institute (FLI))
A biweekly newsletter on the latest developments in and analyses of the proposed EU AI law.
The Autonomous Weapons Newsletter (Future of Life Institute (FLI))
Monthly updates on the technology and policy of autonomous weapons.
Other AI newsletters (not necessarily safety-oriented)
EuropeanAI newsletter (Charlotte Stix)
This bimonthly newsletter covers the state of European AI and the most recent developments in AI governance across the EU Member States.
Import AI (Jack Clark)
This is a weekly newsletter about artificial intelligence, covering everything from technical advances to policy debates, as well as a weekly short story.
Policy.ai (Center for Security and Emerging Technology (CSET))
A biweekly newsletter on artificial intelligence, emerging technology and security policy.
The AI Evaluation Substack
A monthly digest covering the latest developments, research trends, and critical evaluations in the field of artificial intelligence.
TLDR AI
Daily email about new AI tech.
Newsletters on related topics, or which often cover AI or AI safety
RAND newsletters (and research you can get on RSS feeds)
E.g. Policy Currents
GCR Policy Newsletter
A twice-monthly newsletter that highlights the latest research and news on global catastrophic risk.
Forecasting newsletter (and Alert/Sentinel minutes)
Covers prediction markets and forecasting platforms as well as some changes in recent forecasts.
Crypto-Gram (Schneier on Security)
Crypto-Gram is a free monthly email digest of posts from Bruce Schneier’s blog, Schneier on Security.
Oxford Internet Institute
This newsletter, which is distributed eight times a year, provides information about the Oxford Internet Institute, a multidisciplinary research and teaching department of the University of Oxford dedicated to the social science of the Internet.
Statecraft (Santi Ruiz)
Interviews with policymakers and others.
Don’t Worry About the Vase (Zvi Mowshowitz)
A Substack newsletter: “Doing both speed premium short term updates and long term world model building. Currently focused on weekly AI updates. Explorations include AI, policy, rationality, medicine and fertility, education and games.”
Other resources: collections, programs, reading lists, etc.
Getting involved
AI Safety Training—A database of training programs, conferences, and other events for AI existential safety, collected by AI Safety Support
80,000 Hours lists “collections” of opportunities for getting involved, like internships in ML, fellowships, and Master’s options (see also the EA Opportunity Board and the overall 80,000 Hours job board).
Emerging Technology Policy Careers compiles information about policy and public service careers in emerging tech policy.
Recurring courses & programs
AGI Safety Fundamentals (AGISF) - courses by BlueDot Impact on AI alignment (101 and 201) and AI governance
MATS—the ML Alignment & Theory Scholars Program (previously SERI MATS: Stanford Existential Risks Initiative ML Alignment Theory Scholars)
Intro to ML Safety by the Center for AI Safety (CAIS)
I’m not sure how recurring or standardized these are:
MLAB: Upskill in machine learning (advanced)
ML Safety Scholars: Upskill in machine learning (beginners) (not running this year)
Philosophy Fellowship: For grad students and PhDs in philosophy
PIBBSS: For social scientists and natural scientists
Lists/collections (see also reading lists from the above)
Lots of Links by AI Safety Support
A collection of AI Governance-related Podcasts, Newsletters, Blogs, and more (Alex Lintz, 2 Oct 2021)
Resources that (I think) new alignment researchers should know about (LessWrong post by Akash, 29 Oct 2022)
Resources I send to AI researchers about AI safety (LessWrong post by Vael Gates, 14 Jun 2022)
List of AGI safety talks gathered by BlueDot Impact
Forums
AI Alignment Forum—quite technical, restricted posting
LessWrong—lots of AI content, but also focuses on other topics
Effective Altruism Forum—this platform
A few highlighted blogs
Cold Takes by Holden Karnofsky
Planned Obsolescence by Ajeya Cotra and Kelsey Piper
AI Impacts
Epoch AI
Closing notes
Please suggest additions by commenting!
Please post reflections and thoughts on the different resources (or your personal highlights).
Links to no longer active newsletters can be found in this footnote.[3]
Thanks again to everyone.
[1] Thanks to folks who directed me to some of the resources listed here! Note also that in some cases, I’m quoting near-verbatim from assorted places that directed me to these or from the descriptions of the resources listed on their websites.
[2] The closest thing to this that I’m aware of is Lots of Links by AI Safety Support, which is great, but you can’t comment on it to add more and share reflections, which I think is a bummer. There’s probably more. (Relevant xkcd.)
[3] No longer active:
AI Safety Support (AI Safety Support)
Opportunities in AGI safety (BlueDot Impact)
ML Safety Newsletter (Center for AI Safety)
Alignment Newsletter (Rohin Shah)
This week in security (@zackwhittaker)