Adding some weight to others’ comments: since 80k went whole-hog for AI-more-AI-nothing-but-AI, what was initially interesting & compelling AI content for me to listen to as part of a broader repertoire of distinctly EA takes on things has come to feel like a firehose, and there isn’t interesting content I look to the podcast for now. I miss the other areas of content a lot.
Encountering those episodes, which I’d listen to in 30–45 min chunks over a few days, was indescribably useful. The ones with Ajeya Cotra on worldview diversification, Rachel Glennerster on market shaping, Karen Levy on program development & evaluation, and Hugh White on Donald Trump/US change were so genuinely novel and informative to me that the perspectives they shared are now baked into how I think about things. The podcast’s shift since then to 1,000 angles of AI risk has nowhere near this value.
Editing to add something less crabby:
Some AI risk content that would be substantially interesting and useful, and would re-engage me, would involve building out an actual understanding of AI risk. The AI discourse given any attention here has represented a dangerously homogeneous group for something prioritized for its existential level of risk, global impact, etc. (mostly white men, almost entirely from W.E.I.R.D. countries, middle-class, with narrowly technical interests, etc.): more or less a mirror of the same people causing the risk. For novel + valuable content, I want to hear perspectives that can help fill out even a bit more of the ENTIRE REST OF HUMANITY’s perspectives on this one: countries/regions, ethnicities, life stages, genders, walks of life, socio-economic statuses, faiths, sectors, families, education experiences. I have a sense we can’t possibly have a good grasp of what the major risks are if our understanding is based exclusively on what’s most valued by the most narrow group of people. It would also open up so much rich space for new problem frames → new solutions. I would avidly listen to this kind of content. Podcast team expansions would ideally include people with the abilities to build this out...
Thanks for letting me know about your experience with the podcast. I’m sorry you’re finding it less valuable right now. To be clear, episodes like the Ajeya and Hugh White ones are centrally the kinds we’ll still be putting out a lot of (in fact, we’ve recorded another episode with Ajeya that we’re currently preparing to release). We’ll likely still do occasional episodes like the Levy and Glennerster ones, but fewer.
I agree that AI risk is an incredibly large, complex area and that it’s going to take a lot more work for us to build out a full understanding of it!
Thanks for the reply :)