EA gives permission to care about many things
A short post to put a feeling down in words...
One of the aspects of EA thinking I love the most is the way it sanctions caring, and caring broadly.
In a world faced with so many woes, caring is scary and often isolating. But EA says it’s okay to care, and creates an entry point to a wonderful, wide world of other people who do, too.
People who care and make it known often become the target of pushback, especially when what they care about isn’t widely or legibly sanctioned by society. In my days of street advocacy, this pushback often looked like passersby questioning why I cared about the cause I had turned up for and not [fill-in-the-blank alternative].
This was frustrating. Often, I did care about that other cause too; I was just not focusing that hour of my day on it.
EA asks a similar question, but it feels very different.
The question of focus is less:
“Why are you working on that instead of something that actually matters?”
And more:
“Your cause has value, but have you considered whether this alternative might warrant more of your attention?”
Because EA is a question. And in asking the question, we acknowledge there are many things worth caring about, and many ways we might try to improve the world.
EA says it’s okay to care about more than one thing. It’s okay to dedicate your life, your resources, and your headspace to one thing, but you can do that in coordination with other people who are focusing elsewhere.
No one person has to carry every cause.
That’s where the community comes in.
If you zoom out, the EA ecosystem starts to look less like a group of people arguing about which cause matters most, and more like a portfolio of people caring deeply about different parts of the problem of creating a better world.
Prioritization doesn’t mean caring about less.
It means caring together, in a coordinated way.
And that structure can make it feel safer to care about more things, not fewer, because you know you’re not carrying the weight of the world alone.
I like the framing of this, Rocky, especially the idea of a portfolio of people/causes. And in that portfolio is an inherent diversification which should be valued. I understand the overwhelming tide pushing our community towards AI security and safety. And while I’m not invalidating that general movement, I know my skills and network lend themselves better to focusing on other efforts. Language like this is useful when I’m expected to have an “answer” to all things EA when I face the outside world.