Interesting idea! There are many possible angles here, so I'll share some quick thoughts based on roughly a month of reading and upskilling in AI- and bio-related x-risk, which I began after receiving an FTX Future Fund regrant.
Does anyone have any advice before I start this project?
Do experiments. They can inform your estimates of how valuable a podcast would be to others, how useful it would be to you, and how much effort it would require. This post is itself a great experiment, so kudos!
In particular, are there any resources you recommend for teaching myself about machine learning, genomics, or politics?
There are lots of materials online for learning these general topics. I would suggest first building a thorough understanding of the relevant x-risk cause areas without getting into technical details, and then learning the technical topics if and when they seem most relevant.
I'm curious whether that advice in the previous paragraph is contentious (the opposing view being "go learn lots of general skills before getting more context on x-risks"). Still, I think that approach could be costly: you might spend a lot of time learning extraneous detail with no clear payoff.
For AI:
I think the best place to start is the Cambridge AGI Safety Fundamentals course (which has technical and governance tracks). You don't need much deep learning expertise to do the course, and the materials are available online even when a cohort isn't running.
For Bio:
Tessa curated A Biosecurity and Biorisk Reading+ List, which covers several domains, including genomics.
And are there any hidden risks I’m not considering that might make this idea worse than it seems?
Beyond not achieving your goals or the project being costly (both mitigated by starting small and doing experiments), the most significant potential risk is an information hazard. If you focus on prerequisite skills first, info hazards become less likely. That said, there are also dangers in being too cautious about info hazards, so perhaps the best approach is to share episodes with a small group of info-hazard-aware community members first as a check.
Good luck! And please feel free to reach out if you’d like to discuss this further.