Piggybacking on this comment because I feel like the points have been well-covered already:
Given that the podcast is going to have a tighter focus on AGI, I wonder if the team is giving any consideration to featuring more guests who present well-reasoned skepticism toward 80k’s current perspective (broadly understood). While some skeptics might be so sceptical of AGI or hostile to EA that they wouldn’t make good guests, I think there are many thoughtful experts who could present a counter-case that would make for a useful episode (or episodes).
To me, this comes from a case for epistemic hygiene, especially given the prominence the 80k podcast has. Without credible demonstrations that the team actually understands opposing perspectives and can respond to the obvious criticisms, 80k’s recent pivot might appear to outside observers less as “evidence-based updating” and more as “surprising and suspicious convergence”. I don’t remember the podcast featuring many guests who presented a counter-case to 80k’s AGI-bullishness, as opposed to marginal critiques, and I don’t particularly remember those arguments/perspectives being given much time or care.
Even if the 80k team is convinced by the evidence, I believe many in both the EA community and 80k’s broader audience are not. From a strategic persuasion standpoint, even if you believe the evidence for transformative AI and x-risk is overwhelming, interviewing primarily those in the AI Safety community who are already convinced will likely fail to persuade those who don’t already find that community credible. Finally, there’s also significant value in “pressure testing” your position through engagement with thoughtful critics, especially if your theory of change involves persuading people who are either sceptical themselves or just unconvinced.
Some potential guests who could provide this perspective (note: I don’t 100% endorse the people below, but they point in the direction of guests who might do a good job at the above):
Melanie Mitchell
François Chollet
Kenneth Stanley
Tan Zhi-Xuan
Nora Belrose
Nathan Lambert
Sarah Hooker
Timothy B. Lee
Krishnan Rohit