What’s going on in video in AI Safety these days? (A list)
My write-up as of September 2025 (updated December 2025) - feel free to ask me for a more updated Source of Truth if you’re interested in this space—and please let me know what I’m missing!
Key Players Making Video:
Organizations
Palisade Research:
Does research on AI for policymakers and communicators.
Has started its own video program—check out their videos! Run by Petr Lebedev, an ex-producer/scriptwriter for Veritasium
Future of Life Institute (FLI): Has an accelerator program to fund creators making AI-related videos as well as a “Keep the Future Human” contest
BlueDot Impact: Contracting with Mateus de Sousa to make shortform videos
AI Digest: Produces explainers on AI; as far as I know, they might be interested in someone running a video program for them
Control AI is getting great clips from podcasts and putting them up on Twitter. I’ve suggested they create a spreadsheet of these so everyone can use them in their own videos. They’ve also been supporting existing creators in making videos about AI Safety, like this SciShow video
Conor Axiotes: making an AI documentary
Michael Trazzi: Made the SB 1047 Documentary and makes TikToks
80,000 Hours Video Program (AI in Context):
Two full-time people (Chana Messinger, Aric Floyd) and contractors.
Our main effort is the YouTube channel “AI in Context.”
Eventually we might look to expand—more videos, more channels, maybe pitching to streaming services
Scriptwriting is our current biggest bottleneck
We contract out production and film crew (lights, camera work, sound)
The 80,000 Hours podcast is also released as videos
Martin Percy is an experienced filmmaker making interactive films about AI such as AI Basics: Thrills or Chills
Giving What We Can is not an AI safety organization, but is starting up a video program headed by Justin Portela
“Fancier Crowd” that I’m going to be vague about: People with experience in Netflix pitches and successful political social media campaigns are becoming more involved in communicating AI safety through video, often independent of EA organizations
Organizations that might be interested in doing more video
Giving What We Can
Seismic Foundation: Aims to raise public awareness of and engagement with AI safety through broad-reach, relatable content for targeted groups of the general public, such as series and documentaries. They’re also starting up a content engine: they’re looking for creator partnerships, amplifying content and ideas from other AI Safety organizations, and hunting for a host / talent for some of their projects.
YouTubers / TikTokers
Not a complete list of people in this space
Longform
Rob Miles
Siliconversations: Makes AI Safety videos; has received funding from the Future of Life Institute (FLI)
John Leaver: Runs Digital Engine, Pindex, and Inside AI; has been very helpful, is looking to start a new channel, and is often seeking hosts.
Drew Spartz runs a very successful YouTube channel on AI Safety and had a multi-million-view video on AI 2027
Rational Animations
Explainable AI pairs with AI organisations that want to increase the public exposure of their research / work. So far they have run campaigns for CAIS, Apollo Research, and Control AI papers. They hope both to grow their network of creators making content and to talk with any AI orgs who may be interested in having their work reach a wider audience. See: https://www.instagram.com/explainable__ai/
Mithuna Yoganathan, a physics YouTuber, recently made a video on AI 2027
Computerphile has posted a lot of videos about AI Safety
Spencer Greenberg is ramping up making YouTube and shortform videos and has a team of people working with him, including Liam Elkins
There are also points of connection to CGP Grey, Veritasium and Kurzgesagt
Kris Gasteratos is making excellent videos about cultivated meat; I think he’s learned a lot that I’d like to learn from
(Copied from Matrice’s excellent comment below): There’s a decent amount of French-language ~AI safety content on YouTube:
@Shaïman Thürler’s channel Le Futurologue
@Gaetan_Selle 🔷’s channel The Flares
@len.hoang.lnh’s channel Science4All and Thibaut Giraud’s channel Monsieur Phi, the two channels named by 9 of the 17 respondents who cited a YouTube channel as where they first heard of EA in the 2020 EA Survey
The CeSIA advised on the production of this video, which had reached nearly five million views as of this writing (i.e. plausibly >10% of the French adult population)
The CeSIA also gave an interview to the popular scientific-skeptic channel La Tronche en Biais, which has recently expressed interest in posting more on AI safety
David Louapre, who runs Science Étonnante, one of the most popular French-language science channels, announced just this week that he is pivoting to work as an AI safety researcher at Hugging Face, so it’s possible more will come from that direction too
TikTok
Zac of The Neural Guide recently pivoted to talking about AI Safety at Explainable
parthknowsai: daily videos explaining AI news, sometimes AI safety relevant papers
nicole_flowing: talks about the implications of AI, and sometimes AI Safety related stuff
_dbrogle: daily technical AI explainers + some other stuff
PhilosopherGames: A recent addition (around December 2025); won a prize in the Keep the Future Human contest
Filmmakers
Kate Hammond, of https://www.softcutfilms.com/, is making an AI Safety documentary
Elizabeth Cox, of Should We Studios, is producing Ada, about pressing problems
Other points of connection to Hollywood
Harrison Wood is a video producer who has been working with EA organizations
Retreats / coordination / upskilling
In conjunction with Mox, Explainable is putting on the Frame Fellowship
I’ve heard about two other people who are considering putting on coordination retreats for people interested in this space.
Lots of people are interested in working on this / helping out:
e.g.
Another ex-producer/scriptwriter from a big channel might be interested in getting involved
We know several people from the ex-Veritasium crowd
Some of the people who worked on Michael Trazzi’s documentary
People in the “EA Youtuber” WhatsApp group chat
I know of two videographers interested in x-risk looking for work
Someone who has worked on both sides of the camera, has scriptwriting experience, and is poking around the space
Someone making tools for video creators
And others!
I also have a long list of folks from the expression of interest forms I have out
Links where you can find more information
Where can I find videos about AI safety? (tries to be quite thorough and is a great resource, but didn’t satisfy my personal use case: the kind of video content becoming more common now, with more urgency and higher production values)
How cost-effective are AI safety YouTubers? — EA Forum
Reach out if you’d like to be involved, have experience in video, or know people who do!