LW4EA: Paper-Reading for Gears
Link post
Written by LW user johnswentworth.
This is part of LessWrong for EA, a LessWrong repost & low-commitment discussion group (inspired by this comment). Each week I will revive a highly upvoted, EA-relevant post from the LessWrong Archives, more or less at random.
Excerpt from the post:
My goal is usually not to evaluate a single black-box claim in isolation, but rather to build a gears-level model of the system in question. I care about whether hydroxyhypotheticol reduces malignant examplitis only to the extent that it might tell me something about the internal workings of the system. I’m not here to get a quick win by noticing an underutilized dietary supplement; I’m here for the long game, and that means making the investment to understand the system.
Please feel free to:
- Discuss in the comments
- Subscribe to the LessWrong for EA tag to be notified of future posts
- Tag other LessWrong reposts with LessWrong for EA
- Recommend additional posts
Initially I talked about hosting a Zoom discussion for those who were interested, but I think it’s a bit more than I can take on right now (not so low-commitment). If anyone wants to organize one, comment or PM me and I will be happy to coordinate for future posts.
For now I will include an excerpt from each post, but if anyone wants to volunteer to do a brief summary instead, please get in touch.