We’d love ideas for 1) who to interview and 2) what topics to cover (even if you don’t have a particular expert in mind for that topic) on The 80,000 Hours Podcast.
Some prompts that might help generate guest ideas:
What ‘big ideas’ books have you read in the last 5 years that were truly good? (e.g. The Secret of Our Success)
Who gave that EAG(x) talk that blew your mind? (you can find recorded EAG(x) talks here)
What’s an important EA Forum post that you’ve read that could be more accessible, or more widely known?
Who’s an expert in your field who would do a great job communicating about their work?
What’s an excellent in-the-weeds nonfiction book you’ve read in the last few years that might be relevant to a pressing world problem? (e.g. Chip War)
Who’s appeared on our show in the past that you’d like to hear more from? (complete list of episodes here)
Who’s spearheading an entrepreneurial project you want to learn more about?
Who has a phenomenal blog on ideas that might be relevant to pressing problem areas?
Who’s a journalist that consistently covers topics relevant to EA, and nails it?
Who are the thought leaders or contributors in online forums or platforms (like LessWrong, EA Forum, or others) whose posts consistently spark insightful discussions?
We might be interested in covering the following topics on the show: suffering-risks, how AI might help solve other pressing problems, how industries like aviation manage risk, whether we should prioritize well-being over health interventions, what might bottleneck AI progress. Who’s an expert you know of who can speak compellingly on one of those topics?
Some prompts that might help generate topic ideas:
What’s a problem area that you’ve always wanted to know more about?
What’s a great video essay you’ve seen in the last few months? What was it on?
What’s a policy idea that you think more policymakers should hear?
What’s a problem area that seems seriously underrated in EA?
What’s a philosophy concept that’s important to your worldview that people aren’t talking enough about?
What’s a major doubt you have about whether a particular problem (e.g. AI / biosecurity) is actually especially pressing, or a particular solution is especially promising?
What’s an issue in the EA community that you’d like to hear someone address? (e.g. FTX, burnout, deference)
We’d love to get lots and lots of ideas — so please have a low bar for suggesting! That said, we get many more suggestions than we could plausibly turn into episodes, so in practice we follow through on <5% of the proposals people put to us.
Jeff Sebo on his work launching multiple EA-aligned programs at NYU that advance and legitimize “fridge” causes, including the Wild Animal Welfare Program and the Mind, Ethics, and Policy Program. Jeff is also a great presenter, and many students describe him as their favorite professor.
He also has cold takes, presumably
What is a fridge cause?
I’m guessing Rockwell meant “fringe”
Seconding this… I once saw Jeff give a 30 minute talk, completely unprepared, without using a filler word even once. Easy podcast guest to edit!
Sampled from my areas of personal interest, and not intended to be at all thorough or comprehensive:
AI researchers (in no particular order):
Prof. Jacob Steinhardt: author of multiple fascinating pieces on forecasting AI progress and contributor/research lead on numerous AI safety-relevant papers.
Dan Hendrycks: director of the multi-faceted and hard-to-summarize research and field-building non-profit CAIS.
Prof. Sam Bowman: has worked on many varieties of AI safety research at Anthropic and NYU
Ethan Perez: researcher doing fascinating work to display and address misalignments in today’s AIs.
Toby Shevlane: Model Evaluations for Extreme Risks
Jess Whittlestone: head of AI policy at Center for Long-Term Resilience, much research here
Plenty of others: Jade Leung (AI governance and evaluations at OpenAI), Prof. David Krueger (varied AI safety research), Prof. Percy Liang (evaluating models), Prof. Roger Grosse (influence functions for interpretability), many others listed here.
Economists who have written on AI’s economic impact (especially, but not only, deflationary arguments contra Davidson):
Chad Jones (see here)
Ben Jones (see e.g. this, but also all his research)
Matt Clancy (see this debate, though an episode with him should also address his non-AI work as well!)
Daron Acemoglu (see Power and Progress)
Maybe other reviewers here?
Ethicists:
Iason Gabriel: has worked on critiques of effective altruism, AI evaluations (extreme risks, representation), and normative questions related to AI alignment. This excellent FLI interview had so many ideas that would be great to explore in more depth.
David Thorstad: has written critiques of existential risk reduction and longtermism.
Emma Curran: author of contractualist reply to longtermism
The three I would personally be most excited to listen to: Toby Shevlane, Matt Clancy, Iason Gabriel.
+1 on David Thorstad
Nate Silver on his takes on EA
Charity Entrepreneurship
Following the episode with Mustafa, it would be great to interview the founders of leading AI labs—perhaps Dario (Anthropic) [again], Sam (OpenAI), or Demis (DeepMind). Or alternatively, the companies that invest / support them—Sundar (Google) or Satya (Microsoft).
It seems valuable to elicit their honest opinions[1] about “p(doom)”, timelines, whether they believe they’ve been net-positive for the world, etc.
I think one risk here is either:
a) not challenging them firmly enough—lending them undue credibility / legitimacy in the minds of listeners
b) challenging them too strongly—reducing willingness to engage, less goodwill, etc
I don’t know, I feel like there are serious questions for these people to answer that they’re probably not going to be drawn on unless in a particular arena (such as a US Senate hearing),[1] and otherwise interviewing these people further might not be that high value? Especially since Rob/Luisa are very generous towards their guests![2]
There was a great interview with Dario recently by Dwarkesh, but even that was on the ‘soft’ side of things to me. I still don’t have a clear, honest answer as to why, if you truly think AIXR is high and AI safety is so important, you would want billions of dollars to create an AI 10x as capable as your current leading model.
And even then it took Gary Marcus nudging the senators to point out that Sam hadn’t answered their question, and even then Sam white-lied it by saying he feared AI could cause “significant harm” whereas he should have said “human extinction”.
Btw which is not a bad thing if you’re reading Rob/Luisa. <3 you both you have no idea how much value the podcast has given me over the last few years
Anna Christina Thorsheim, ED and co-founder of Family Empowerment Media.
Good to interview on charity incubation, reproductive choice, working with local partners as an outsider.
Yes !!!
Some of the scholars who’ve worked on insect or decapod pain/sentience (Jonathan Birch, Meghan Barrett, Lars Chittka, etc.)
Bob Fischer on comparing interspecies welfare
+1 for Bob Fischer!
+1 on Meghan Barrett. Her talk on insect welfare at EAG London last year was one of the best talks I’ve seen. Great argument, conceptually clear, changed my mind.
Someone who runs or has built a medium to large location-identified EA community but isn’t based in the UK or the Bay Area (e.g. Germany, New York, the Netherlands, Norway, Denmark, Sweden, France, Switzerland, Poland, Australia, Israel, UAE, Mexico)
Someone from Doneer Effectief, Effektiv Spenden, De Tien Procent Club or another local Effective giving org
Jason Matheny
Big +1
Owain Evans on AI alignment (situational awareness in LLM, benchmarking truthfulness)
Ben Garfinkel on AI policy (best practices in AI governance, open source, the UK’s AI efforts)
Anthony Aguirre on AI governance, forecasting, cosmology
Beth Barnes on dangerous capability evals (GPT-4’s and Claude’s evals)
+1 to Beth Barnes on dangerous capability evals
Carl Shulman (again), his interviews on the Dwarkesh Patel podcast were incredible, and there seems to be potential for more
Vaclav Smil, who appears to be very knowledgeable, with a comprehensive model of the entire world. His books are filled with facts.
Lukas Finnveden about his blogposts on the altruistic implications of Evidential Cooperation in Large Worlds
Some employee of MIRI who is not Yudkowsky. I suggest
Tsvi Benson-Tilsen (blog), who has appeared on at least one podcast which I liked. Has looked into human intelligence enhancement and a variety of other problems such as communication. Generally has longer AI timelines.
Or Scott Garrabrant, but I don’t know how interesting his interview would be for a nontechnical audience.
Another interview on wild animal welfare, perhaps with someone from Wild Animal Initiative.
Perhaps invite Brian Tomasik on the podcast?
Romeo Stevens (blog), mainly for his approach to his career: founded a startup to support himself early on, and is now independent. He doesn’t tend to write his ideas down; here’s an interview that details some of them.
Also from MIRI: Abram Demski has lots of interesting ideas
John Nkengasong, US Global AIDS Coordinator, former first Director of the African CDC, and professional virologist.
Good to interview on PEPFAR (which is a Big Deal), efforts to address the global burden of disease by local, bilateral, and multilateral funders and other actors.
Johannes Haushofer (and / or colleagues), President and CEO at Malengo, and Professor of Economics at Stockholm University.
Good to interview on cash transfers, supporting immigration, experience of moving from academic work to social entrepreneurship. Broadly familiar with effective altruism and may have useful reflections on that too.
Matt Clancy, Research Fellow in Metascience at Open Philanthropy, Senior Fellow at the Institute for Progress, and author of New Things Under the Sun, a living literature review of what academia knows (and doesn’t know) about innovation
Good to interview on making science better, progress studies, and frontier growth (whilst having an understanding of longtermist concerns with technological safety)
More country-specific content could be really interesting. I’d be interested in broad interviews covering:
China—economic projections, expert views and disagreement on stability of CCP, tech progress, info on public opinion about US/West, demographic challenges, entrepreneurship, etc. (not sure he’d be the best person to cover all this, but maybe Kaiser Kuo?)
India—whether high growth rates can be sustained, Sino-Indian relations, complexity of India’s diplomatic relationships with Russia and US, challenges and stability of world’s largest democracy, intra-country variation in culture and economic structure, Indian human capital and tech talent + plausibility of India becoming an AI power in next few decades
Same for other emerging powers—maybe Nigeria and Indonesia
A whole episode on semiconductors and supply chains, including role of countries like South Korea, Japan, and Singapore
Yeah I think this is underrated. Also ways to think well about these countries. What should our mental models contain?
It would be cool to have someone with experience in startups who also knows a decent amount about EA because many insights from running a successful startup might apply to people working to ambitiously solve neglected and important problems. Maybe Patrick Collison?
The 80k podcast has good reach inside and outside the EA Community. It can be used to signal-boost lesser known ideas, co-ordinate on important ones, but I think it’s also important to use it to expose EAs to good faith criticisms of our commonly-held positions too.
As such, I’d like to nominate Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute. Forum users might know her for taking the Con side in the recent Munk Debate on Artificial Intelligence. Melanie is a highly qualified AI scientist (she literally wrote the textbook on genetic algorithms) and is a sceptic of the likelihood of strong AI emerging soon, and thus of the case for AI x-risk.
I expect many EAs, and many users of the Forum, to disagree with her. That’s absolutely your right! But I think it’s important for the AIXR community to listen to its sceptics, identify cruxes with them, and be a willing participant in that dialogue. For example, while I’m more concerned about AIXR than Melanie, I found her recent article summarising the debate on LLM understanding to be an absolutely fantastic framing of the issue, well worth reading, and it would be a great starting point for a discussion between her sceptical perspective and Rob’s (who I believe is much more AIXR-concerned).
I know that the episode with Glen Weyl might have limited the desire for this kind of thing, but I still think good-faith dialogue like this is critically important for truth-seeking.
Could you elaborate on the episode with Glen Weyl?
short answer: it seemed the 80k conversation with Glen didn’t end up causing any useful updates, though it was perhaps good in a ‘broadening intellectual horizons’ sense. My intuitive take is that few EA critics (even relatively informed ones like Glen) are going to get an 80k invite sometime soon.
long answer: I’ll try my best, though I’m trying to parse past events without really experiencing the context as-it-happened.
In February 2019 Weyl appears on the 80k podcast, but in the ‘critiques of effective altruism’ section they kinda seem to talk past each other
In 2019 Weyl posts Why I Am Not A Technocrat, a pluralistic critique of rationality and EA movements (among others)
In Jan 2021, Scott Alexander reviews this critique but is again unconvinced. Weyl responds, and they get into it in the comments, making very little progress
In March 2021, on the Rationally Speaking podcast, Galef and Buterin discuss Weyl’s critiques and are similarly confused/unconvinced
Remmelt tried to provide some context on Weyl’s perspective. Some commenters liked it, others didn’t. I’m not sure what if anything Galef took from the exchange
In October 2021, Weyl posted a more conciliatory, though still critical, post. I wasn’t even aware of it until I did this deep dive, so I’m not sure how much others noticed it
In 2022 Weyl appeared on Hear This Idea, still a critic of EA; here I still find him unclear, while Fin makes some good counterpoints. But three years on, very little progress on understanding seems to have been made on either side.
I got a lot of this info from a Twitter thread by Remmelt earlier this year, which includes a claim that he discussed Weyl’s interviews with Wiblin & Scott directly, and both said they found Weyl ‘all over the place’ and implicitly unconvincing.
Basically, I don’t think Rob and the 80k team think they got a lot out of the conversation with Glen (note—this is just my impression, and I’m very willing to be corrected). I suspect that they updated away from ‘conversing with critics’ to ‘inform EAs of EA-related people/work’. I guess that’s fair, it’s their podcast. I just think that these conversations are incredibly important even if difficult. If anything, they should be more explicitly about the disagreements. A really good example of how to do this well, in my view, is Nick Anyos and his Critiques of EA Podcast[1]
Nick, if you’re reading, please make some more if you have the capacity/willingness to do so!
Me
What do you propose you would talk about?
Possibly, the state of quantification in EA.
Kevin Esvelt
David Milliband, CEO of IRC, for an EA-adjacent view on how to be most effective in global health.
David can speak to why he doesn’t just follow EA orthodoxy in running a very large development org with a massive budget. These reasons might prove to be good or bad or just thought-provoking
Yes I would love to hear a little more from “mainstream” aid and development orgs and have that discussion around how they see EA ideas, and how EA-compatible ideas are growing (or not) within those spaces. Also USAID, World Bank etc. although I don’t have a specific name.
Oliver Habryka
David Pearce on the seriousness of suffering, paradise engineering, and negative utilitarianism.
I second David Pearce, and I’d add digital sentience to the topic list. (Pearce appears to have a sophisticated view on consciousness, and his bottom line belief is that digital consciousness—at least, the type that would run on classical computers—is not possible.)
David Goldberg. He’s the founder of one of the most successful EA organisations, but I’ve never seen him talk about it in public.
Why did someone downvote this?
I don’t know. Which EA organisation did he found?
Founders Pledge
Thank you! His name was somewhat hard to google, because of another (apparently more Google-famous) David Goldberg.
Lars Doucet! He’s a winner of an ACX Book Review Contest based on his review of 19th century economist Henry George’s works on economic incentives for fair natural resource allocation. A stable society makes every x-risk much more tractable 😉
https://www.gameofrent.com/
I’d love an episode on s-risks (although I’m not sure who would be best to invite on).
Daron Acemoglu (economics legend, recent book Power and Progress on big picture technological progress)
Jerusalem Demsas, staff writer at the Atlantic focused on housing and infrastructure development and visiting Fellow at Center for Economy and Society.
Good to interview on YIMBY movement and American infrastructure.
Tom Chivers, science writer at Semafor and author of The Rationalist’s Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity’s Future
Good to interview as someone who is broadly familiar with rationalist / x-risk / EA communities but not an active party.
A debate format on whether nuclear winter is possible / likely (Robock v Reisner?)
Someone knowledgeable about
Wild Animal Welfare (Please help me suggest a name)
Animal Sentience (Rethink Priorities? - please help me suggest a name)
Intersection between Animal Welfare and AI Alignment/governance (Please help me suggest a name)
Hi Imma,
Maybe Brian Tomasik?
Ooh what about Bob Fischer? He’s a philosophy professor who ran Rethink’s moral weights project and is now on their new Worldview Investigations team! [edit: just saw him suggested in a different comment]
David Thorstad (Reflective Altruism/GPI/Vanderbilt) Tyler John (Longview) Rory Stewart (GiveDirectly)
+1 on Rory Stewart: as well as being the President of GD, he was the Secretary of State for International Development in the UK, has started and run his own charity in the developing world (I believe with his wife), has mentioned EA previously, is known to be an enjoyable person to listen to (judging by the success of his podcast), and has just released a book, so he might be more likely than usual to engage with popular media.
Rory Stewart is always a good time, surprised he hasn’t been interviewed already!
+1 David Thorstad
Also +1 David Thorstad, assuming we are interested in the best critiques of longtermism/X-risk reduction existing out there. I don’t see anyone remotely as appropriate as him on the topic.
I’d love to hear novelist and YouTuber John Green talk about how he decided to start fundraising for Partners in Health’s maternity hospital, his campaign to reduce the cost of TB vaccines and tests, and his thoughts on EA
Could be good outreach for EA too
I would love to see more catastrophic resilience interview after a string of AI safety interviews. Perhaps volcanologists and nuclear security folks:
Mike Cassidy: Associate Professor in Volcanology, authored the best fallout post here.
Follow up with ALLFED and David Denkenberger.
Emad Kiyaei and Kolfinna Tómasdóttir: setting up a civil consortium on AI risk in nuclear weapons...
Dominic Roser on Christians in Effective Altruism
I’m atheistic/agnostic and am unfamiliar with Dominic Roser… but I think that discussing the compatibility of EA with religious views would be very interesting and appealing to a large number of people.
Katja Grace on why AI is not an arms race, the case for and against pushing for slower AI progress, and the AI Impacts survey.
Connor Leahy—CEO of Conjecture, an AI Safety startup. You’re gonna get a lot of valuable hot takes from him.
Rob Miles—AI Safety YouTuber, although recently he’s been busy with other stuff.
Dawn Drescher—CEO of ImpactMarkets, so knowledgeable about funding in the EA/AI Safety space, but also a great person to talk to about a lot of stuff.
Oliver Habryka—LW Admin
Perrin Walker—aka SolenoidEntity, the voice behind SSC/ACX podcast and many other projects
Karl von Wendt aka Karl Olsberg—German sci-fi writer, educating the German public about AI X-risks
Kelsey Piper—no intro needed
Geoffrey Hinton and/or Yoshua Bengio—likewise
On the related theme I see running through many of these suggestions (slowing down AI / moratorium on AGI): Joep Meindertsma of Pause AI.
Or Holly Elmore—she was just on The Filan Cabinet podcast.
Awww, thank you! ☺️
Christian Ruhl on nuclear risk and analyzing nuclear risk from an EA perspective (full disclosure: Christian is a colleague of mine).
Christine Korsgaard on Kantian Approaches to animal welfare/ about her recent-ish book ‘Fellow Creatures’
An updated interview with Rachel Glennerster! Even though she’s obviously a legend in international development, I had a fascinating conversation with her earlier this year about longtermism and how she holds space/concern for both immediate needs and longtermism.
also doing interesting work on market shaping mechanisms, esp for pandemics and climate change!
Richard Fisher could be an interesting one, author of the recent ‘The Long View’
Andrew Youn, founder of One Acre Fund and co-founder of D-Prize.
Good to interview on social entrepreneurship, working in low-income contexts as an outsider, experience of being a (small) GiveWell grantee, engaging billionaires and other donors with working to support the world’s poorest people, and agricultural productivity improvement.
Madhu Pai, MD and epidemiology professor focused on tuberculosis
Good to interview on tuberculosis, why it hasn’t been addressed to the same extent as other health conditions, what individuals, funders, and governments could do to reduce the burden; and on what effective altruists or others focused on cost-effectiveness might be missing in their current models of doing good.
Stefan Dercon, author of Gambling on Development, IMV the best book on development economics published this year, former DFID Chief Economist and FCDO Policy Advisor, currently at Blavatnik School in Oxford.
Good to interview on why some countries grow and some don’t, and what both insiders and outsiders might be able to do about it
Joel McGuire from HLI (gave two great talks at EAGxBoston and EAGxNYC)
Someone on atomically precise manufacturing: A “big if true” thing that is floating around EA but never really tackled head on. I don’t know how good Eric Drexler is on podcasts, but he’d be an obvious candidate.
Or whatever person wrote this report.
Someone with both deep knowledge of colonial history and a good understanding of EA.
Especially in EA work on global health and poverty, I think it would be interesting to see if there are lessons from history we might want to look into more. Some, or perhaps many, narratives during colonial times sounded a bit like current sentiments about modernization and free markets. I think an interview with such a person would broaden EA discussions and perspectives, as I do not think many EAs are aware of the differences and similarities between our work and development work dating back to colonial times.
I find Jaron Lanier’s thoughts on the impacts of tech development interesting and probably controversial in EA circles.
Daron Acemoglu is a known growth economist, and he has recently published a new book and had an interesting episode with Rhys Lindmark.
I also remember following Vinay Gupta a while back, and that he had interesting opinions on civilizational resilience (say, here). He was also involved with Ethereum, and is doing blockchain stuff, not sure what.
Generally, I’m interested in more discussion with people in communities or ideas that could be EA-adjacent but there isn’t a lot of (visible) overlap at the moment. Say, leftist economists, blockchain, metascience, or virtue-ethicists.
I’d also be very interested in both Lanier and Gupta’s views. But, could you elaborate on why you’d expect Lanier to be “probably controversial in EA circles”?
Maybe controversial is too strong a word. I think EAs tend to think on the margin and are also less concerned with inequality or with non-existential risks from advanced tech.
Some weak (not necessarily endorsed) suggestions:
Jonathan Baron about rational thinking and decision making
James Gleick about the history of information and information theory
Victoria Krakovna about AI alignment
Melanie Mitchell about complex systems and AI
Harold James about economic crises
Gregory Allen about US export controls
David Reich about population genetics
Kathryn Paige Harden about behaviour genetics
Larry Temkin about global aid and EA skepticism
Johan Norberg about global capitalism and liberalism
Elizabeth Barnes about critical disability theory and the use of DALYs
Chris Miller about semiconductors
Nate Silver about forecasting and media
Julia Wise about early EA and community health
Agree that early EA would be a very fun topic
Not sure I buy that it’s a top priority though, unless we’re talking about our understanding of ourselves.
Agree that critical disability theory would be a really interesting and challenging topic.
Lawrence Newport to discuss his shockingly successful campaign to get the Bully XL dog banned in the UK.
Tetlock. Yes, again.
Seconded, especially to discuss his findings from the Forecasting Existential Risks tournament (might especially be interesting if any of the early 2023 predictions have since resolved)
Wei Dai on meta questions about metaphilosophy, the intellectual journey that led him there, and implications for AI safety.
Sebastian Schwiecker of Effektiv Spenden on entrepreneurial EA meta work.
Tyler Cowen again
Adrian Hill, Director of the Jenner Institute and Professor of Vaccinology in Oxford, co-leader of the group that created the Oxford-AstraZeneca Covid-19 vaccine, leader of the group who developed the R21 malaria vaccine.
Good to interview on Covid-19, on getting vaccines into the world (cf. R21 vs RTS,S in terms of country approval processes), vaccines in general, global health R&D.
Heidi Williams, Director of Science Policy at Institute for Progress and professor of Economics at Dartmouth.
Good to interview on economics of science and progress, the economics of pharma and health R&D, and innovation
Michael Huemer, who blogs at Fake Noûs (see top posts), and has been on Dwarkesh Podcast.
Cory Doctorow: author of fiction and nonfiction, and activist on competitive compatibility, fixing the internet, resisting monopoly, and influencing through fiction and nonfiction. https://pluralistic.net/
Daniel Suarez, hard sci-fi author, on AI risk and space risk (Kessler Syndrome), and a counterpoint on why and how we should colonize space ASAP, versus Will MacAskill’s concern that we should maybe wait for better values first. https://www.daniel-suarez.com/
James Stavridis, retired admiral and author of 2034, on great-power war risk and using fiction to warn about it. https://www.goodreads.com/review/show/3813314838
Annalee Newitz, author of Scatter, Adapt, and Remember: How Humans Will Survive a Mass Extinction, on surviving various x-risks. https://www.goodreads.com/book/show/15798335
Scott Santens on UBI! https://www.scottsantens.com/
Rob Thelen, entrepreneur and founder of rownd.io; former IBMer, Googler, and USAF combat veteran, on career exploration and pivots and the case for relentless optimism (biased: he’s a good friend of mine and a hugely positive influence). https://rownd.io/about-rownd
+1 to Cory Doctorow
+1 to James Stavridis
I just finished Unrivaled by Michael Beckley (h/t Stefan Schubert) who has a lot of interesting views on China https://www.michaelbeckley.org/articles
His main points are that China does not and will not come close to the US in terms of power, and that the US should be pretty hawkish to contain Chinese power. I found it a pretty well balanced book that’s well supported afaict, and doesn’t glorify the US. I haven’t read his 2023 book yet, which is about “the coming conflict with China”.
Unrivaled also discussed the details of a possible Taiwan invasion and why it would likely fail.
(Not sure if this is within the scope of what you’re looking for. )
I’d be excited about having something like a roundtable with people who have been through 80,000 Hours advising – talking about how their thinking about their career has changed, advice for people in a similar situation, etc. I’d imagine this could be a good fit for 80k After Hours?
Amrita Ahuja, senior philanthropic staffer (Douglas B Marshall Foundation, CRI Foundation), co-founder and current Board Chair of Evidence Action.
Good to interview as someone with great experience in philanthropy, leadership in social entrepreneurship, and familiarity-but-not-identity with effective altruism.
Nick Bostrom on his new book Deep Utopia, and/or his extensive other work.
John Halstead on his report on climate change and longtermism, and maybe some other posts on his blog.
Jennifer Pahlka, who recently wrote the book “Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better”. Loved her interview on the Ezra Klein Show (https://podcasts.apple.com/us/podcast/the-book-i-wish-every-policymaker-would-read/id1548604447?i=1000615839464)! And, given government/policy work has been expanded upon in the recent career guide, I think this would make a great 80,000 Hours interview!
I’d love to see Johann Frick (Philosophy, UC Berkeley) on the podcast. Johann is a nonconsequentialist who defends the procreation Asymmetry and thinks longtermism is deeply misguided. Imo, his recent paper on the Asymmetry is one of the best; he’ll be able to steel-person many philosophical views that challenge common EA commitments; and he’s an engaging speaker.
Ken Caldeira on climate
Eliot Higgins, investigative journalist and founder of bellingcat. He also wrote this (which I haven’t read), and featured in (and maybe hosted?) this podcast series, which I thought was interesting.
I would like to hear a conversation with some prominent EA critics—specifically people who’ve engaged with but ultimately rejected the EA movement. I think Jess Whittlestone is in that category (though maybe it’s not something she wants to talk about); also Carla Zoe Cremer, Luke Kemp, and similar.
Strongly endorse this idea
Perhaps some sort of middle ground, if it’s not something the 80k team wants to do, or feels would be too adversarial, etc., would be a mini-series of ~4-5 episodes with specific EA critics? Similar to Nick Anyos’ podcast perhaps?
Toby Jolly on Impactful Government Careers
Some great suggestions here already.
I’d add in Owen Barder. Former DFID chief economist, Centre for Global Development… and was involved with setting up the advance market commitment for the pneumococcal vaccine.
Currently CEO of Precision Agriculture for Development; he could also comment on the process of GiveWell’s assessment of his charity.
Would be great to have someone who is exceptional at convincing high net worth individuals to donate for specific causes. I’m sure some people in the AI Safety community would find that valuable given the large funding gap despite the exceptional amount of attention the field is receiving. I’m sure other cause areas would also find it valuable.
EDIT: I’ve gotten a few disagree-votes, which is totally fine! Though, I’m curious why some people disagree. Is it because they wouldn’t find this interesting, because they don’t think it would be appropriate for the podcast, or…?
Richard Chappell on consequentialism, theories of well-being, and reactive vs goal-directed ethics.
Ege Erdil on AI forecasting, economics, and quantitative history.
Chad Jones on AI, risk tolerance, and growth.
Phil Trammell on growth economics (the part of his work more directly focused on philanthropy was covered in his previous appearance).
Steven Pinker (there are a lot of things that he has written that are relevant to one aspect or another of EA).
Amanda Askell on AI fine-tuning, AI moral status, and AIs expressing moral and philosophical views (she talks some about this in a video Anthropic put out).
Pablo Stafforini on the history of EA and translations of EA content.
Finally, I think it would be good to have an episode with a historian of the world wars, similar to the episode with Christopher Brown. Anthony Beevor or Stephen Kotkin, maybe.
Some functionary involved in malaria vaccine distribution to tell us how they could expand and accelerate.
Someone to explain to us how that Danish pharmaceutical firm’s governance structure works, and whether it’s better for continuous investment in innovation than the structure where “founder mode” ends and lawyers take the reins of firms crucial to human progress.
I liked your interview with a professor who talked about defense methods against pandemics and potential gene drive efficacy against malaria, New World screwworm, Lyme disease, and maybe one other nasty enemy. Works in Progress also had an article about gene drives’ promise against diseases like these in its most recent issue. I would also like to know about Jamaica’s and Uruguay’s attempts to open new fronts against the New World screwworm.
I liked an interview, which I believe was on the 80,000 Hours podcast, about efforts to reduce air pollution in India. I would like to know what effect could be expected from allowing exports of natural gas from countries like Turkmenistan, Iran, and Venezuela to India.
I am interested in learning about the importance of fertilizer prices and natural gas prices to global nutrition. I think there is a woman at the Breakthrough Institute who studies this topic. I suppose oil prices may be an important input, too.
I would like to know more about how USD interest rates and oil prices impact global poverty, so as to better evaluate the importance of factors like home rental inflation and economic sanctions in determining poverty rates.
Dr Jessica Eccles, a postdoc at Brighton Medical School, has an amazing research pipeline looking at the overlaps and interdependencies between hypermobility, neurodiversity, and psychological symptoms. She is systematically exploring the interface between mind and body, not least interoception. This work has recently also overlapped with long Covid. I am confident that her work will revolutionize our attitudes to many ‘medically unexplained symptoms’, including in relation to PoTS, ME/CFS, and chronic pain syndromes such as fibromyalgia. An outstanding example of shattering the McNamara fallacy, i.e. making the important measurable, rather than the measurable important!
Prof. Lorna Harries, Exeter University. An expert on cellular ageing, passionate about her work, and a great communicator. She is also a great advocate for using alternatives to animals to research human disease, having established the second Animal-Free Research Centre of Excellence in the UK. She has several YouTube videos.
Details at: https://medicine.exeter.ac.uk/people/profile/index.php?web_id=Lorna_Harries
Leif Wenar
I’ve been an avid listener to the 80k Podcast for much of the past decade. I’ve heard Rob’s, Luisa’s, and Kieran’s voices more than my annoying uncle who just never stops talking. I wanted to suggest you invite Michael Simm on to make the case for the charity he founded. I think there’s been a shortage of young innovators working in GHD, and his counterintuitive focus on US poverty challenges the black-and-white dichotomy between poor and rich countries.
Phaedra Boinodiris—AI for the Rest of us (book about AI Ethics)
https://phaedra.ai/
She is currently the business transformation leader for IBM’s Responsible AI consulting group and serves on the leadership team of IBM’s Academy of Technology. Boinodiris is the author of the book ‘AI for the Rest of Us’ and a co-founder of the Future World Alliance, a non-profit dedicated to curating K–12 education in AI ethics. She is pursuing her PhD in AI and ethics at University College Dublin’s Smart Lab.
Christopher Mason (author of The Next 500 Years: Engineering Life to Reach New Worlds) on how bioengineering can help us explore and inhabit other planets to mitigate extinction. Familiar with effective altruism but brings different views to the table on longtermist priorities.
Luke Kemp on:
Climate Change and Existential Risk
The role of Horizon Scans in Existential Risk Studies
His views on what EA gets wrong about XRisk
Deep Systems Thinking and XRisk.
Alternatively, for another guest on climate change and XRisk who would be narrower and less controversial/critical of EA than Luke is, Constantin Arnscheidt would be good.
Why is this comment getting downvotes (rather than just disagree votes)?
(It had −12 karma when I wrote this comment)
I think the most likely thing is that on a post like this the downvote vs disagree-vote distinction isn’t very strong. It’s suggestions, so one would upvote the suggestions one likes most and downvote those one likes least (to contribute to visibility). If this is the case, I think it’s pretty fair, to be honest.
If not, then I can only posit a few potential reasons, but these all seem bad enough to me that I would assume the above is true:
People think 80K platforming people who think climate change could contribute to XRisk would be actively harmful (eg by distracting people from more important problems)
People think 80K platforming Luke (due to his criticism of EA, which I assume they think is wrong or in bad faith) would be actively harmful, so it shouldn’t be considered
People think having a podcast specifically talking about what EA gets wrong about XRisk would be actively harmful (perhaps it would turn newbies off, so we shouldn’t have it)
People think suggesting Luke is trolling because they think there is no chance that 80K would platform him (this would feel very uncharitable towards 80K, imo)
Ajay Agrawal. He researches the potential economic impact of AI and is one of the authors of Power and Prediction, which offers a unique and surprisingly simple perspective on this subject.
If you enjoyed the Michael Webb episode, then I think you would enjoy this interview too.
Kelly Wanser, ED of SilverLining, an NGO focused on advocating for safe research into solar radiation management to address near-term climate risks.
Good to interview on climate change and safe technological research and development.
I thought her Volts interview was well conducted.
Oh classic, she already appeared on the podcast in 2021. I no longer endorse this suggestion, since I don’t think the context for SRM has changed enough since she last appeared.
I think another discussion presenting SRM in the context of GCRs might be good; there has now been a decent amount of research on this, which probably proposes actions rather different from what SilverLining presents.
SilverLining is also decently controversial in the SRM community, so some alternative perspectives would probably be better than Kelly’s.