I’d love to hear in more detail about what this shift will mean for the 80,000 Hours Podcast, specifically.
The Podcast is a much-loved and hugely important piece of infrastructure for the entire EA movement. (Kudos to everyone involved over the years in making it so awesome—you deserve huge credit for building such a valuable brand and asset!)
Having a guest appear on it to talk about a certain issue can make a massive real-world difference, in terms of boosting interest, talent, and donations for that issue. To pick just one example: Meghan Barrett’s episode on insects seems to have been super influential. I’m sure that other people in the community will also be able to pick out specific episodes which have made a huge difference to interest in, and real-world action on, a particular issue.
My guess is that to a large extent this boosted activity and impact for non-AI issues does not “funge” massively against work on AI. The people taking action on these different issues would probably not have alternatively devoted a similar level of resources to AI safety-related stuff. (Presumably there is *some* funging going on, but my gut instinct is that it’s probably pretty low(?)) Non-AI-related content on the 80K Podcast has been hugely important for growing and energizing the whole EA movement and community.
Clearly, though, within the 80k team there’s an opportunity cost to producing that content rather than only doing AI content.
It feels like it would be absolutely awful—perhaps close to disastrous—for the non-AI parts of EA, and adjacent topics, if the Podcast were to feature only AI-related content in future. The effect won’t be completely obvious or salient, but counterfactually I think having no new non-AI content on the Podcast would be really, really bad going forward.
It would be great to hear more about plans here. My guess (hope?!) is that it might still be advantageous to keep producing a range of content, in order to keep a broader listenership/wider “top-of-funnel”?
If the plan is to totally discontinue non-AI related content, I wonder if it would be possible to consider some steps that might be taken to ameliorate the effects of this on other issues and cause areas. For example, in a spirit of brainstorming, maybe 80k could allow other groups to record and release podcast episodes onto the 80k Podcast channel (or a new “vertical”/sub-brand of it)? (Obviously 80k would have a veto and only release stuff which they thought meets their high quality bar.) This feels like it could be really useful in terms of allowing non-AI groups to access the audience of the Podcast, whilst allowing 80k’s in-house resources to pivot to an AI focus.
Perhaps there are other cooperative options that could be considered along these lines if the plan is to only make AI content going forward.
I should stress again my admiration and gratitude for the 80k team in creating such a cool and valuable thing as the Podcast in the first place—I’m sure this sentiment is widely shared!
Piggybacking on this comment because I feel like the points have been well-covered already:
Given that the podcast is going to have a tighter focus on AGI, I wonder if the team is giving any consideration to featuring more guests who present well-reasoned scepticism toward 80k’s current perspective (broadly understood). While some sceptics might be so sceptical of AGI, or so hostile to EA, that they wouldn’t make good guests, I think there are many thoughtful experts who could present a counter-case that would make for one or more useful episodes.
For me, this comes down to epistemic hygiene, especially given the prominence the 80k podcast has. Without credible demonstrations that the team actually understands opposing perspectives and can respond to the obvious criticisms, 80k’s recent pivot might appear to outside observers less as “evidence-based updating” and more as “surprising and suspicious convergence”. I don’t remember the podcast featuring many guests who present a counter-case to 80k’s AGI-bullishness (as opposed to marginal critiques), and I don’t particularly remember those arguments/perspectives being given much time or care.
Even if the 80k team is convinced by the evidence, I believe many in both the EA community and 80k’s broader audience are not. From a strategic persuasion standpoint, even if you believe the evidence for transformative AI and x-risk is overwhelming, interviewing primarily people within the AI safety community who are already convinced will likely fail to persuade those who don’t already find that community credible. Finally, there’s also significant value in “pressure testing” your position through engagement with thoughtful critics, especially if your theory of change involves persuading people who are either sceptical themselves or simply unconvinced.
Some potential guests who could provide this perspective (note: I don’t 100% endorse the people below, but they point in the direction of guests who might do a good job at the above):
Melanie Mitchell
François Chollet
Kenneth Stanley
Tan Zhi-Xuan
Nora Belrose
Nathan Lambert
Sara Hooker
Timothy B. Lee
Rohit Krishnan
Thanks for your comment and appreciation of the podcast.
I think the short story is that yes, we’re going to be producing much less non-AI podcast content than we previously were — over the next two years, we tentatively expect ~80% of our releases to be AI/AGI focused. So we won’t entirely stop covering topics outside of AI, but those episodes will be rarer.
We realised that in 2024, only around 12 of the 38 episodes we released on our main podcast feed were focused on AI and its potentially transformative impacts. On reflection, we think that doesn’t match the urgency we feel about the issue or how much we should be focusing on it.
This decision involved very hard tradeoffs. It comes with major downsides, including limiting our ability to help motivate work on other pressing problems, along with the fact that some people will be less excited to listen to our podcast once it’s more narrowly focused. But we also think there’s a big upside: more effectively contributing to the conversation about what we believe is the most important issue of this decade.
On a personal level, I’ve really loved covering topics like invertebrate welfare, global health, and wild animal suffering, and I’m very sad we won’t be able to do as much of it. They’re still incredibly important and neglected problems. But I endorse the strategic shift we’re making and think it reflects our values. I’m also sorry it will disappoint some of our audience, but I hope they can understand the reasons we’re making this call.
There’s something I’d like to understand here. Most of the individuals an AGI will affect will be animals, including invertebrates and wild animals, simply because they are so numerous, even if one grants them lower moral value (though artificial sentience could be up there too). AI is already being used to make factory farming more efficient (the AI for Animals newsletter covers this in more detail).
Is this an element you considered?
Some people in AI safety seem to consider only humans in the equation, while others assume that an aligned AI will, by default, treat animals well. Conversely, some people push for an aligned AI that takes into account all sentient beings (see the recent AI for Animals conference).
I’d like to know what 80k’s position on that topic will be (if this is public information).
Thanks for asking. Our definition of impact includes non-human sentient beings, and we don’t plan to change that.
Great! Good to know.
Thanks for the rapid and clear response, Luisa—it’s very much appreciated. I’m incredibly relieved and pleased to hear that the Podcast will still be covering some non-AI stuff, even if less frequently than before. It feels like those episodes have huge impact, including in worlds where we see a rapid AI-driven transformation of society—e.g. by increasing the chances that whoever/whatever wields power in the future cares about all moral patients, not just humans.
Hope you have fun making those, and all, future episodes :)
This is probably motivated reasoning on my part, but the more I think about this, the more I think it genuinely does make sense for 80k to try to maintain as big and broad an audience for the Podcast as possible, whilst also ramping up its AI content. The alternative would be to turn the Podcast into an effectively AI-only thing, which would presumably limit the audience quite a lot(?). I’m genuinely unsure what the best strategy is here from 80k’s point of view, if the objective is something like “maximise listenership for AI-related content”. Hopefully, if it’s a close call, they’ll err on the side of broadness, in order to be cooperative with the wider EA community.