And if hours went into carefully picking the original ten episodes and deciding how to sequence them, I’d like to see modifications made via a process of re-listening to different podcasts for hours and experimenting with their effects in different orders, seeing what “arcs” they form, etc., rather than via quick EA Forum comments and happy recollections of isolated episodes.
I agree; that’s how I want the eventual decision to be made too. I’m not sure exactly what the intended message of this paragraph was, but at least one reading is that you want to discourage comments like Brian’s, or to discourage extensive discussion of the contents of the podcast list more generally. In case anyone reads it that way, I strongly disagree.
This has some flavor of ‘X at EA organisation Y has probably thought about this for much longer than me / works on this professionally, so I’ll defer to them’, which I think EAs generally say/think/do too often. It’s very easy to miss things even when you’ve worked on something for a while (especially if the time spent is in the range of some months rather than many years), and outsiders often can contribute something important. I think this is already surprisingly often the case in research, and it’s much more so the case with something like an intro resource, where people’s reactions are explicitly part of what you’re optimizing for. (Obviously what we care about most are new people’s reactions, but I still think the reactions of people within EA are pretty informative about those. And either way, people within EA are clearly stakeholders in what 80,000 Hours does.)
As with everything, there’s some risk of erring in the opposite direction (not expecting enough of professionals?), but I think EA is currently too far toward the deferential end, at least when it comes to deferring within EA; I could imagine it’s the opposite with experts outside of EA.
Meta: Rereading your comment, I think it’s more likely that it was meant either as a message to 80,000 Hours about how you want them to eventually make their decision, or as something else entirely. Still, I think it’s good to leave thoughts on possible interpretations of what people write.
Yeah, I endorse all of these things:
Criticizing 80K when you think they’re wrong (especially about object-level factual questions like “is longtermism true?”).
Criticizing EAs when you think they’re wrong even if you think they’ve spent hundreds of hours reaching some conclusion, or producing some artifact.
(I.e.: try to model how much thought and effort people have put into things, and keep in mind that no amount of effort makes you infallible. Even if it turns out the person didn’t make a mistake, raising the question of whether they messed up can help make it clearer why a choice was made.)
Using the comment section on a post like this to solicit interest in developing a competing podcast-episode intro resource.
Loudly advertising your competitor episode list here, so people can compare the merits of 80K’s playlist to yours.
The thing I don’t endorse is what I talk about in my comments.