I really like this mindset as a way to avoid the “elitism” that a lot of people (rightly or wrongly) perceive in EA thinking.
When I encounter someone who’s working on a project, my first thought isn’t “what’s the impact of this project?”. Instead, I ask:
“Is this person doing something with a reasonably positive impact on the world, and doing it for good reasons?”
“Are they open to doing something else, if they come to believe that doing so will get them even more of what they value?”
For almost everyone I’ve met within EA, both of those answers are “yes”, and that seems to me like one of the most important facts about our community, however different our “relative impacts” might be.
Even outside of EA, I think that a lot of people still share our core goal of helping others as much as possible, and that this goal manifests in the way they think about their work. In my view, this makes them “allies” in a certain fundamental sense. As long as we share that goal, we can find ways to work together, in an alliance against the world’s more… unhelpful forces.
Example: I have a friend who’s curious about EA, but whose first love is ecology, and who works in a science education nonprofit (but wants to keep looking for better opportunities). I don’t know what her actual impact is, but I do know that she really cares about helping people, and I ask her about her work with genuine interest when I see her.
I wouldn’t recommend this friend for 80,000 Hours consulting, but I think that as EA grows to incorporate more causes and more people, she’ll eventually find a place in the community. And even if she never takes a new job, it’s good to just have a lot of people in the world who hear the phrase “effective altruism” and think “yes, those people are on my side, they’re trying to help just like I am, I want them to succeed”. If we want to make that happen, we should be careful to notice when someone is doing something good, even if it isn’t “optimized”.