Thanks for your comment and appreciation of the podcast.
I think the short story is that yes, we’re going to be producing much less non-AI podcast content than we previously were — over the next two years, we tentatively expect ~80% of our releases to be AI/AGI focused. So we won’t entirely stop covering topics outside of AI, but those episodes will be rarer.
We realised that in 2024, only around 12 of the 38 episodes we released on our main podcast feed were focused on AI and its potentially transformative impacts. On reflection, we think that doesn’t match the urgency we feel about the issue or how much we should be focusing on it.
This decision involved very hard tradeoffs. It comes with major downsides: it limits our ability to help motivate work on other pressing problems, and some people will be less excited to listen to our podcast once it’s more narrowly focused. But we also think there’s a big upside: contributing more effectively to the conversation about what we believe is the most important issue of this decade.
On a personal level, I’ve really loved covering topics like invertebrate welfare, global health, and wild animal suffering, and I’m very sad we won’t be able to do as much of it. They’re still incredibly important and neglected problems. But I endorse the strategic shift we’re making and think it reflects our values. I’m also sorry it will disappoint some of our audience, but I hope they can understand the reasons we’re making this call.
There’s something I’d like to understand here. Most of the individuals an AGI will affect will be animals, including invertebrates and wild animals, simply because they are so numerous, even if one grants them lower moral weight (though artificial sentience could rival them in numbers). AI is already being used to make factory farming more efficient (the AI for Animals newsletter covers this in more detail).
Is this something you considered?
Some people in AI safety seem to consider only humans in the equation, while others assume that an aligned AI will, by default, treat animals well. Conversely, some push for an aligned AI that takes all sentient beings into account (see the recent AI for Animals conference).
I’d like to know what 80k’s position on that topic will be (if this is public information).
Thanks for asking. Our definition of impact includes non-human sentient beings, and we don’t plan to change that.
Great! Good to know.
Thanks for the rapid and clear response, Luisa; it’s very much appreciated. I’m incredibly relieved and pleased to hear that the podcast will still cover some non-AI topics, even if less frequently than before. It feels like those episodes have huge impact, including in worlds where we see a rapid AI-driven transformation of society, e.g. by increasing the chances that whoever (or whatever) wields power in the future cares about all moral patients, not just humans.
Hope you have fun making those, and all, future episodes :)