Hi there,
Interesting idea. I think there are a lot of possible answers here, so I'll share some quick thoughts based on ~1 month of reading/upskilling in AI- and bio-related x-risk, which I began after receiving an FTX Future Fund regrant.
Does anyone have any advice before I start this project?
Do experiments. These can inform your estimates of how valuable a podcast would be to others, how useful it would be to you, and how much effort it would require. This post is itself a great experiment, so kudos!
In particular, are there any resources you recommend for teaching myself about machine learning, genomics, or politics?
There are lots of materials online for learning about these general topics. I would highly suggest starting with a thorough understanding of the relevant x-risk cause areas without getting into technical details, then learning the technical topics if/when they seem most relevant.
I'm interested in whether this advice is contentious (the other perspective being "go learn lots of general skills before getting more context on x-risks"). That said, I think that approach could be costly, involving a lot of time spent learning extraneous detail with no clear payoff.
For AI:
I think the best place to start is the Cambridge AGI Safety Fundamentals course (which has technical and governance tracks). You don't need much deep learning expertise to take the course, and the materials are available online, so you don't have to wait until they next run it.
For Bio:
Tessa curated A Biosecurity and Biorisk Reading+ List, which covers several domains, including genomics.
And are there any hidden risks I’m not considering that might make this idea worse than it seems?
Other than not achieving your goals or the project being costly (both mitigated by starting small and doing experiments), the most significant potential risk is an information hazard. If you focus on prerequisite skills, info hazards are less likely to come up. There are also costs to being overly cautious about info hazards, so perhaps a good middle ground is to first share episodes with a small group of info-hazard-aware community members to check.
Good luck! And please feel free to reach out if you’d like to discuss this further.
Sorry for the slow reply.
Talking about the allocation of EAs to cause areas.
I agree that confidence intervals between x-risks are more likely to overlap. I haven't really looked into supervolcanoes or asteroids, and I think that's because what I currently know about them doesn't lead me to believe they're worth working on over AI or biosecurity.
Possibly, a suitable algorithm would be to defer to or check with prominent EA organisations like 80k to see whether they are allocating 1 in every 100 or every 1,000 EAs to rare but possibly important x-risks. Without a coordinated effort by a central body, I don't see how you'd calibrate adequately (use a random number generator, and if the number falls below some threshold, work on a neglected but possibly important cause? See the sketch below.)
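To make the random-number idea concrete, here is a minimal sketch of what such a lottery could look like. The cause list and weights are illustrative assumptions I've made up for the example, not figures from 80k or any other organisation.

```python
import random

# Illustrative weights only: roughly 1 in 20 draws lands on each "rare" cause.
ALLOCATION = {
    "AI safety": 0.60,
    "Biosecurity": 0.30,
    "Supervolcanoes": 0.05,
    "Asteroid impacts": 0.05,
}

def pick_cause(allocation=ALLOCATION):
    """Draw a uniform random number and walk the cumulative distribution
    to select a cause area in proportion to the weights above."""
    draw = random.random()
    cumulative = 0.0
    for cause, weight in allocation.items():
        cumulative += weight
        if draw < cumulative:
            return cause
    # Guard against floating-point leftovers summing to just under 1.0.
    return next(iter(allocation))

if __name__ == "__main__":
    print(pick_cause())
```

Of course, the hard part is choosing the weights, which is why I suspect some central coordination would still be needed.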
My thoughts on EA allocation to cause areas have evolved quite a bit recently (partly due to talking to 80k and others, mainly in biosecurity). I'll probably write a post with my thoughts, but the bottom line is that the sentiment expressed here is basically correct, and that it's socially easier to show humility by saying you have high uncertainty.
Responding to the spirit of the original post, my general sense is that plenty of people are not highly uncertain about AI-related x-risk; you might have gotten that email from 80k titled "A huge update to our problem profile — why we care so much about AI risk". That said, they still use phrases like "we're very uncertain". Maybe their uncertainty about the relevant facts sits below whatever threshold their decision rule requires. For example, in the problem profile, they write:
Different Views under Near-Termism
This seems tempting to believe, but I think we should substantiate it. Which current x-risks are not ranked higher than non-x-risk causes from a near-term perspective (or how much smaller is their lead)?
I think this post provides a somewhat detailed summary of how your views may change when shifting from a longtermist to a near-termist perspective. Scott says:
His arguments here are convincing to me because I find an AGI event this century likely; if you didn't, you would disagree. Still, I think that even if AI timelines were not short, other existential risks like engineered pandemics, supervolcanoes, or asteroids might have milder, merely catastrophic variants that near-termists would prioritise just as highly, leading to little practical difference in what people work on.
Talking about different cultures and EA
Can you spell out your reasoning for why "there would be an effect"?