Nonlinear Library has machine-read (but still pretty good) versions of a large and increasing number of posts from the EA Forum, LessWrong, and the Alignment Forum. See https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library. This is probably the podcast I’ve listened to most often since it came out, and it will probably remain the podcast I listen to most often for the indefinite future.
The Nonlinear Fund also now has additional podcast feeds:
Feeds for top posts of all time (or something like that) from the Alignment Forum, from the EA Forum, and from LessWrong
A feed called “the alignment section”, which they appear to curate themselves (and which seems to me a pretty good selection)
I imagine they might release more feeds in future, so it may be worth occasionally searching for podcasts by “The Nonlinear Fund” in podcast apps to see if others come up.
I’ve been hitting these feeds pretty hard and really valuing them. Examples of cool things this has allowed me to do: easily fit into my schedule Richard Ngo’s sequence on AGI Safety from First Principles, some old Luke Muehlhauser posts on science/rationality-based self-help approaches, and all core readings in the AGI Safety Fundamentals governance course.