New Podcast: X-Risk Upskill
After attending EAG in San Francisco last month, I realized that the most important thing for me to do right now is upskill in the fields of AI, biorisk, and policy. I recently graduated from Tulane with a Bachelor’s in Engineering and Philosophy, so I have a general background in technology and the humanities. However, I am interested in a career in DC as a scientific advisor to policymakers, and to do that I need a better technical understanding of these fields.
To that end, I had the idea to make a podcast about my journey learning about these topics. I plan to learn using a variety of self-teaching materials: books, video series, podcasts, online classes, etc. Once I finish something, I’ll write and record a thirty-ish minute summary of what I learned. I’m thinking of calling it “X-Risk Upskill.”
Even if this project doesn’t become popular, it will have three main advantages for me: it will motivate me to actually do the work of upskilling; it will help me better absorb the material, since the best way to learn is to teach; and it will give me a chance to practice my science communication skills. If the podcast does get popular, it could bolster my career by getting my name out there and help other aspiring EAs upskill as well.
Does anyone have any advice before I start this project? In particular, are there any resources you recommend for teaching myself about machine learning, genomics, or politics? Should I find someone to edit my scripts before I record them? And are there any hidden risks I’m not considering that might make this idea worse than it seems?
This sounds like an absolutely amazing idea, and I bet the feedback you get will make you well over 10x more efficient in how you learn.
If you’re worried about PR risks, you could do something like this in a more closed group (like a forum post that you keep replying to?), or, as a middle ground, you could avoid using buzzwords like “Effective Altruism”. If you want a more official answer about PR, I recommend messaging Lizka; she’ll know who’s in charge of that stuff.
Hi there,
Interesting idea. There’s a lot of possible commentary here, so I’ll share some quick thoughts based on ~1 month of reading/upskilling in AI- and bio-related x-risk, which I began after receiving an FTX Future Fund regrant.
Do experiments. These can inform your estimates of how valuable a podcast would be to others, how useful it would be to you, and how much effort it would require. This post is itself a great experiment, so kudos!
There are lots of different materials online for learning about these general topics. I’d strongly suggest you first build a thorough understanding of the relevant x-risk cause areas without getting into technical details, then learn the technical topics if and when they seem most appropriate.
I’m curious whether that advice is contentious (the opposing perspective being “go learn lots of general skills before getting more context on x-risks”). Still, I think that approach could be costly: you might spend a lot of time learning extraneous detail with no apparent payoff.
For AI:
I think the best place to start is the Cambridge AGI Safety Fundamentals course (which has technical and governance variants). You don’t need much deep learning expertise to take it, and the materials are available online even if you aren’t enrolled in a cohort.
For Bio:
Tessa curated A Biosecurity and Biorisk Reading+ List, which covers several domains, including genomics.
Other than not achieving your goals or the project being costly (both mitigated by starting small and doing experiments), the most significant potential risk is an information hazard. If you focus on prerequisite skills, info hazards are less likely to come up. That said, there are dangers in being too careful about info hazards, so perhaps the best approach is to share episodes with a small group of info-hazard-aware community members first as a check.
Good luck! And please feel free to reach out if you’d like to discuss this further.
If you think there’s a risk you’ll start but not continue the project, you could just publish the transcripts rather than record the podcast.
I did this with nuclear weapons: I made a big website full of basic knowledge so that I would have to learn it myself. I could have just quietly read the information, but engaging my web-building creativity, and pretending I was a teacher :-), seemed to help things along.
A bachelor’s in Engineering and Philosophy sounds quite promising. I love the idea of those two together.
I’ve been arguing that existential risk is, at its heart, largely a philosophical problem more than a technical one. In short, the “more is better” relationship with knowledge that is driving the technology is an outdated, simplistic, and increasingly dangerous philosophy left over from the 19th century and earlier. It was a great philosophy during the long era of knowledge scarcity, but we no longer live in that era.
Existential risk is largely a failure to adapt our knowledge philosophy to the new conditions created by the success of the knowledge explosion. Revolutionary new conditions, same old philosophy.
For more...
https://forum.effectivealtruism.org/posts/kbfdeZbdoFXT8nuM6/our-relationship-with-knowledge