[Link-post] Beware of Other-Optimizing
This is a link-post for https://www.lesswrong.com/posts/6NvbSwuSAooQxxf7f/beware-of-other-optimizing
I feel like many of the people I interact with in the EA/rationalist communities think they have more insight into other people’s lives than they really do, kind of like an inverse illusion of transparency. This seems especially true to me of community builders, e.g. the types that hear “I’m in a PhD” and immediately think, “what do I say to get this person to drop out and move to Berkeley to work on AI safety?”[1]
There are many reasons why this behavior is often bad. This post explains one of the reasons: we tend to overestimate how much we know about others’ lives.
A more cooperative suggestion: “how can I help this person fulfill their potential in a way they feel good about, while presenting any information they might find useful?”