LW4EA: Shoulder Advisors 101
Link post
Written by LW user Duncan_Sabien.
This is part of LessWrong for EA, a LessWrong repost & low-commitment discussion group (inspired by this comment). Each week I will revive a highly upvoted, EA-relevant post from the LessWrong Archives, more or less at random.
Excerpt from the post:
Motivation for post: As a former CFAR instructor, longtime teacher, and rationality pundit, I find myself giving lots of advice in lots of different contexts. I also try to check in from time to time to find out which bits of advice actually proved helpful to people. Over the years, I’ve heard from a genuinely surprising number of people that my (offhand, very basic, not especially insightful) thoughts on “shoulder advisors” were quite useful to them, and remained useful over time. So: a primer.
“There’s a copy of me inside your head?” Hermione asked.
“Of course there is!” Harry said. The boy suddenly looked a bit more vulnerable. “You mean there isn’t a copy of me living in your head?”
There was, she realized; and not only that, it talked in Harry’s exact voice.
“It’s rather unnerving now that I think about it,” said Hermione. “I do have a copy of you living in my head. It’s talking to me right now using your voice, arguing how this is perfectly normal.”
“Good,” Harry said seriously. “I mean, I don’t see how people could be friends without that.”
The term “shoulder advisor” comes from the cartoon trope of a character attempting to make a decision while a tiny angel whispers in one ear and a tiny devil whispers in the other. (Full Post on LW)
Please feel free to:
Discuss in the comments
Subscribe to the LessWrong for EA tag to be notified of future posts
Tag other LessWrong reposts with LessWrong for EA.
Recommend additional posts
Initially I talked about hosting a Zoom discussion for those who were interested, but I think it’s a bit more than I can take on right now (not so low-commitment). If anyone wants to organize one, comment or PM me and I will be happy to coordinate for future posts.
For now I will include an excerpt from each post, but if anyone wants to volunteer to do a brief summary instead, please get in touch.