It’s great to see the podcast expanding. I think the ship has already sailed on this, but it feels important for me to flag two experiences I’ve had since the podcast’s “shift to AI.”
I listen much less than I used to. This is partly because I end up thinking plenty about AI at work, but also because podcasts occupy a middle ground between entertainment and informativeness for me. Though I think AI is critically important, it is not something I get a real kick out of thinking and hearing about.
I share episodes with non-EAs much less than I used to. Most normies I know are sick of hearing about AI and, moreover, there’s no longer any content to engage people who don’t want to listen to a three-hour podcast about AI. I think that’s a shame, since many of those people would have happily listened to a three-hour podcast about e.g. vaccines, subscribed, and then learned about AI at a later date.
This also applies to the 80k brand as a whole. I used to recommend it to people interested in having an impact with their career, but ever since 80k pivoted to an AI career funnel, I recommend it to fewer people, and always with the caveat “They focus only on AI now, but there is some useful content hidden beneath.”
“Though I think AI is critically important, it is not something I get a real kick out of thinking and hearing about.”
-> Personally, I find a whole lot of non-technical AI content to be highly repetitive. It seems like a lot of the same questions are being discussed again and again with fairly little progress.
For 80k, I think I’d really encourage the team to focus a lot on figuring out new subtopics that are interesting and important. I’m sure there are many great stories out there, but I think it’s very easy to get trapped into talking about the routine updates or controversies of the week, with little big-picture understanding.
My suggestion along these lines would be to try to get guests on who come with a different perspective on transformative AI or AGI than most of the 80,000 Hours Podcast’s past guests or most people in EA. Toby Ord’s episode was excellent in this respect; he’s as central to EA as it gets, yet he was dumping cold water on the scaling trends many people in EA take for granted.
Some obvious big names that might be hard to get: François Chollet, Richard Sutton, and Yann LeCun (the links go to representative podcast clips for each one of them).
A semi-big name who will probably be easier to get: Jeff Hawkins of Numenta.
A less famous person who might be a good stand-in for Richard Sutton’s perspective on AI is Edan Meyer, an academic AI researcher.
With some research and asking around, you could probably generate more ideas for guests along these lines.
I think one good way to get more clarity on the big picture and stimulate more creative thinking is to bring people into the conversation who have more diverse viewpoints. Even if you were to come at it from the perspective of being 95% certain that LLMs will scale to AGI within 10 years (which AFAIK is a big exaggeration of the 80,000 Hours team’s real views), one really useful part of having guests like these would be prompting the hosts and the audience to think about why, exactly, these guests are wrong in their LLM skepticism.
I think even in cases where you are 95% sure you’re right, talking to brilliant, eloquent experts who disagree can only serve to sharpen your thinking and put you in a better position to think about and articulate your case. Conversely, I think when you’re only talking to people who agree with you, you don’t develop an ability to make a persuasive case to people who don’t already agree. You take for granted things other people don’t take for granted, and you’re maybe not even aware of other people’s objections, qualms, and concerns. Maybe the most important part of persuasion is showing people you know what they have to say and that you have an answer to it.
A lot of the stated goals in the Google Doc come down to persuasion, so this seems in line with your goals.
Thanks for all the suggestions!
Thanks for the nudge. I agree it seems crucial to try to find things that are actually different to cover—both for the sake of being interesting and more importantly to actually have an impact. I’d love to hear any particular suggestions you have about things that seem underexplored and important to you!
I had a similar experience. I recommended the podcast to dozens of people over the years, because it was one of the best at hosting fascinating interviews with great guests on a very wide range of topics. However, since it switched to AI as the main topic, I have recommended it to zero people, and I don’t expect this to change if the focus stays this way.
Useful to know, thanks
Thanks Matt and others commenting here. I have independently started worrying about the show being too narrow and repetitive this year, and will be factoring the issues people have raised here into planning for next year!
(Unfortunately I can’t say we’ll probably get back to being as interesting for an EA Forum audience as we once were, as we’re working with a different theory of change now and I think for better or worse the times we’re living in call for shifting strategy.)
To be 100% clear about what I see as the main issue (by what I think are 80k’s lights), it’s not that the podcast is less interesting for an EA Forum audience, but rather that it’s less interesting in general. It’s a niche podcast for people who already think AI is very important.
I’m sort of confused by how this interacts with the goals laid out in the Google Doc. I think it’s great to target elite decision-makers — but I would have assumed the greatest impact is on the extensive margin, among people who (1) have decision-making power but aren’t AI specialists or (2) don’t already have well-developed views.
By not offering content that will allow the podcast to grow along this margin, I would worry that you are preaching to various existing choirs! I certainly can’t imagine anyone becoming interested in working in AI as a result of this podcast—they’ll never listen!
But surely you have thought about this already; I’m interested in the answer.
Hey Matt. Obviously there’s a tonne one could say here; just to offer some quick thoughts:
In playing down the chances that people here will enjoy the show as much as they used to, I wasn’t responding to you in particular or making a claim about impact; I just didn’t want to set readers up to be disappointed.
The optimal strategy may well involve the show being less interesting in general. I’d say for instance that the AI Policy Podcast is much less interesting than the 80k podcast used to be, while nevertheless being very influential (and probably beneficial). It depends on whether the primary mechanism is to persuade people who aren’t bought in, or provide resources that are useful to people and enable them to have impact as they become persuaded by whatever means. (Or simply improve understanding of difficult issues within the AI ecosystem itself.) Different interviewers might reasonably adopt different strategies.
People’s behaviour on audio apps vs YouTube is very different, in some ways totally opposite. An interview repeating points about AI/AGI/intelligence explosion that would be familiar and tedious to you, but doing it very well, could nevertheless reach and indeed persuade people by being fed to them by the YouTube algorithm (while adding little value to regular subscribers).
I have to leave it there just due to time constraints, but my current bottom line is that I need to aim to have more unexpected guests / POVs and a wider range of topics. So basically I’m agreeing with you that we’ve swung too far away from what people liked about the show before this year, and that general interest is indeed important for the reason you give.
Thanks! This is very helpful/informative — particularly the thing about YouTube!
Thanks for letting us know! That’s useful data.
Do you see other podcasts filling the long-form, serious/in-depth, EA-adjacent/aligned niche in areas other than AI? E.g., GiveWell has a podcast, but I’m not sure it’s the same sort of thing. There’s also Hear This Idea, and Clearer Thinking or Dwarkesh Patel often cover relevant stuff.
(As an aside, I’ve been thinking of potentially trying to do a podcast involving researchers and research evaluators linked to The Unjournal, if I thought it could fill a gap and we could do it well, which I’m not sure of.)
No, I really don’t. Sometimes you see things in the same territory on Dwarkesh (which is very AI-focused) or EconTalk (which is shorter, and less and less interesting to me lately). Rationally Speaking was wonderful but appears to be done. Hear This Idea is intermittent and often more narrowly focused. You get similar guests on podcasts like Jolly Swagman, but the discussion is often at too low a level, with worse questions asked. I have little hope of finding episodes like those with Hannah Ritchie, Christopher Brown, Andy Weber, or Glen Weyl anywhere else anytime soon. It’s actually a big loss in my life, and (IMO) it leaves many future potential EAs and AI people on the table.
Hi Matt. Since you mentioned “vaccines”, you may be interested in the podcast Hard Drugs.
Hard Drugs is a show by Saloni Dattani and Jacob Trefethen about medical innovation: how to speed it up, how to scale it up, and how to make sure lifesaving tools reach the people who need them the most. It is brought to you by Works in Progress and Open Philanthropy.
Here are some suggestions from 6 minutes of ChatGPT thinking. (Not all are relevant; e.g., I don’t think “Probable Causation” is a good fit here.)