some further & updated thoughts, written in ~30 min, are below. canonical version lives here.
Here’s a frame I’ve found helpful for thinking about effective altruism:
When I look inside myself, I notice that I care about a lot of things.
You could also reasonably replace “caring” with “wanting,” “preferring,” “valuing,” “desiring,” “having goals,” etc. I’m okay being loose here.
Some examples of things I care about:
I want my sister to have an excellent career.
I’m hungry, and want some food.
I want to be valued by people I respect.
I want my dogs to have enjoyable lives.
(And many, many more.)
(It’s often useful to be introspective/clear-eyed about what you care about, what that ontology looks like, which values are instrumental to which other values, etc., but I won’t be doing that here, and indeed I think it might be anti-helpful in this particular frame at this particular time. Stay with me until the end.)
Sort of by definition, I want more of the things I care about. I see my life as a difficult, high-level optimization problem: making decisions which, given my resources at various times, better satisfy my values over time.
Some of the things I care about — like wanting food because I’m hungry — are fundamentally oriented at myself. And I take actions to do better along these axes.
Some examples of actions:
Reading a book on tax strategies
Learning how to cook
Asking people for feedback on my sartorial choices
etc
And in general, I try to be effective at getting what I want here — that is, I aim to achieve these kinds of goals/values/preferences to as great a degree as possible.
But other things I care about — like wanting my sister to have an excellent career, or my dogs to have enjoyable lives — are fundamentally oriented at others-by-their-lights. And I take actions to do better along these axes, too.
These two kinds of motivation often look starkly different in practice.
For some of these altruistic motivations, it just so happens that some lovely dynamics have coalesced: there’s an existing group of people, infrastructure, etc. who have worked & are working quite hard toward helping me get what I want w/r/t the things I care about that are oriented at others-by-their-lights. In particular, I haven’t found any community more effective at helping me achieve those things than this one.
(The group of people / infrastructure / etc I’m referring to is effective altruism.)
Why do I like this frame?
Because it’s apparent that I care about quite a few things. It quickly becomes evident that totalizing stances toward EA are just not worth it: a bad trade, getting me less of what I want.
In particular, I think this kind of frame can be validating for folks who’ve gone quite far in the totalizing direction and repressed the values that they in fact have in other areas of their life. (I think I was in this camp ~two years ago.)
There are interesting subproblems that come into clearer view, e.g.:
How should my resources, on the margin, be allocated across the different things I care about?
What actions would get me the things I want with greater robustness (i.e., moving me closer to many different things I want, all at once)?
etc
UC Berkeley EA is hosting a west coast uni student EA retreat on April 10–12, with ~50 attendees from Berkeley, Stanford, UCLA, UCI, UCSD, & more, as well as special guests like Matt Reardon, Jake McKinnon, Jesse Gilbert, Julie Steele, Adam Khoja, Richard Ren, & more...
...but we only know to reach out to people who’re involved with their uni’s clubs. so: if you’re interested in attending, book a 5-10 minute chat with alex or aiden :)
some examples of gaps in our outreach:
unis that don’t have an EA club
students who haven’t joined their uni’s EA club
transfers to west-coast unis
students who’re on leave from their uni and presently living on the west coast
high-schoolers who’ll soon be starting at west coast unis
we won’t be able to take everyone, but reading the EA Forum is a pretty positive indicator that you’d be a good fit!