Michaël Trazzi
Things I Learned Making The SB-1047 Documentary
Finishing The SB-1047 Documentary
Israeli Prime Minister, Musk and Tegmark on AI Safety
Collin Burns on Alignment Research And Discovering Latent Knowledge Without Supervision
Victoria Krakovna on AGI Ruin, The Sharp Left Turn and Paradigms of AI Alignment
David Krueger on AI Alignment in Academia and Coordination
On a related note, has someone looked into the cost-effectiveness of funding new podcasts vs. convincing mainstream ones to produce more impactful content, similarly to how OpenPhil funded Kurzgesagt?
For instance, has anyone tried to convince people like Lex Fridman, who has already interviewed MacAskill and Bostrom, to interview more EA-aligned speakers?
My current analysis gives roughly an audience of 1-10M per episode for Lex, and I’d expect that something around $20-100k per episode would be enough of an incentive.
In comparison, when giving $10k to start a podcast, the potential reach is maybe 100-10k listeners per episode after 10 episodes, but maybe the EV is higher because most of the impact comes after those first 10 episodes. Also, the core audience is more willing to update their models and benefit from the podcast than, e.g., the average Lex Fridman listener.
Another counterargument would be that Lex already interviews people like MacAskill and Bostrom, so the marginal impact of an additional one is low, and EA-aligned impactful speakers already manage to get on mainstream media to do outreach anyway (e.g. Will going on cable TV for WWOTF).
I’m flattered for The Inside View to be included here among so many great podcasts. This is an amazing opportunity and I am excited to see more podcasts emerge, especially video ones.
If anyone is on the edge of starting and would like to hear some of the hard lessons I’ve learned and other hot takes I have on podcasting or video, feel free to message me at michael.trazzi at gmail or (better) comment here.
Note: if you want to discuss some of the content of this episode, or one of the above quotes, I’ll be at EAG DC this weekend chatting about AI Governance–feel free to book a meeting!
Shahar Avin on How to Strategically Regulate Advanced AI Systems
Agreed!
As Zach pointed out below there might be some mistakes left in the precise numbers, for any quantitative analysis I would suggest reading AI Impacts’ write-up: https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/
Thanks for the corrections!
Can you tell me exactly which numbers I should change and where?
Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk
Markus Anderljung On The AI Policy Landscape
Alex Lawsen On Forecasting AI Progress
Sorry about that! The AI generating the transcript was not conscious of the pain created by its terrible typos.
Materializing AI 2027 with board game pieces was such a simple yet powerful idea. Brilliantly executed.
Congrats to Phoebe, Aric, Chana and the rest of the team.
Looking forward to the upcoming videos.