You have more than one goal, and that’s fine
This version of the essay has been lightly edited. You can find the original here.
When people come to an effective altruism event for the first time, the conversation often turns to projects they’re pursuing or charities they donate to. They often have a sense of nervousness around this, a feeling that the harsh light of cost-effectiveness is about to be turned on everything they do. To be fair, this is a reasonable thing to be apprehensive about, because many youngish people in EA do in fact have this idea that everything in life should be governed by cost-effectiveness. I’ve been there.
Cost-effectiveness analysis is a very useful tool. I wish more people and institutions applied it to more problems. But like any tool, this tool will not be applicable to all parts of your life. Not everything you do is in the “effectiveness” bucket. I don’t even know what that would look like.
I have lots of goals. I have a goal of improving the world. I have a goal of enjoying time with my children. I have a goal of being a good spouse. I have a goal of feeling connected in my friendships and community. Those are all fine goals, but they’re not the same. I have a rough plan for allocating time and money between them: Sunday morning is for making pancakes for my kids. Monday morning is for work. It doesn’t make sense to mix these activities, to spend time with my kids in a way that contributes to my work or to do my job in a way that my kids enjoy.
If I donate to my friend’s fundraiser for her sick uncle, I’m pursuing a goal. But it’s the goal of “support my friend and our friendship,” not my goal of “make the world as good as possible.” When I make a decision, it’s better if I’m clear about which goal I’m pursuing. I don’t have to beat myself up about this money not being used for optimizing the world — that was never the point of that donation. That money is coming from my “personal satisfaction” budget, along with money I use for things like getting coffee with friends.
I have another pot of money set aside for donating as effectively as I can. When I’m deciding what to do with that money, I turn on that bright light of cost-effectiveness and try to make as much progress as I can on the world’s problems. That involves looking at the research on different interventions and choosing what I think will do the most to bring humanity forward in our struggle against pointless suffering, illness, and death. The best cause I can find usually ends up being one that I didn’t previously have any personal connection to, and that doesn’t nicely connect with my personal life. And that’s fine, because personal meaning-making is not my goal here. I can look for personal meaning in the decision afterward, but that’s not what drives the decision.
When you make a decision, be clear with yourself about which goals you’re pursuing. You don’t have to argue that your choice is the best way of improving the world if that isn’t actually the goal. It’s fine to support your local arts organization because their work gives you joy, because you want to be active in your community, or because they helped you and you want to reciprocate. If you also have a goal of improving the world as much as you can, decide how much time and money you want to allocate to that goal, and try to use those resources as effectively as you can.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Stuff I’d change if I were rewriting this now:
- Not include the reference to “youngish” EAs wanting to govern everything by cost-effectiveness. I think that’s more a result of being new to the idea than of being young.
- Make clearer that I do think significant resources should go toward improving the world. Without context, I don’t think that’s clear from this post.
This post is pushing against a kind of extremism, but it might push in the wrong direction for some people who aren’t devoting many resources to altruism. It’s not that I think people in general should be donating more to their friend’s fundraiser or their community arts organization; I’d rather see them putting more resources toward things that are more important and cost-effective. But I would like people to examine whether they’re doing things for more self-regarding personal reasons or for optimizer-y improve-the-world reasons, to enjoy the resources they put toward themselves and their friends, and also to take seriously the project of improving the world and put significant resources toward that, rather than being confused about which project they’re pursuing, which I think is suboptimal both for their own enjoyment and for improving the world.
There is a difference between cost-effectiveness as a methodology and utilitarianism (or other impartial philosophies).
You could just as easily use cost-effectiveness for personal daily goals, and some people do for things such as health and fitness, but generally speaking our minds and society happen to be sufficiently well-adapted to let us achieve these goals without needing to think about cost-effectiveness. Even if we are only concerned with the global good, it’s not worthwhile or effective to run explicit cost-effectiveness evaluations of everything in our daily lives, though that shouldn’t stop us from being ready and willing to use them where appropriate.
Conversely, you could pursue the global good without explicitly thinking about cost-effectiveness even in domains like charity evaluation, but the prevailing view in EA is (rightfully) that this would be a bad idea.
What you seem to really be talking about is whether or not we should have final goals besides the global good. I disagree and think this topic should be treated with more rigor: parochial attachments are philosophically controversial, and a great deal of ink has already been spilled on the topic. Assuming robust moral realism, I think the best-supported moral doctrine is hedonistic utilitarianism, and moral uncertainty yields roughly similar results. Assuming anti-realism, I don’t have any reason to intrinsically care more about your family, friends, etc. (and certainly not about your local arts organization) than about anyone else in the world, so I cannot endorse your attitude. I do intrinsically care more about you, as you are part of the EA network, and more about some other people I know, but usually that’s not a large enough difference to justify substantially different behavior given the major differences in cost-effectiveness between local actions and global actions. So I don’t think in literal cost-effectiveness terms, but global benefits are still my general goal. It’s not okay to give money to local arts organizations, go to great lengths to be active in the community, etc.: there is a big difference between the activities that actually are a key component of a healthy personal life and the broader set of vaguely moralized projects and activities that happen to have become popular in middle/upper-class Western culture. We should be bolder in challenging these norms.
It’s important to remember that having parochial attitudes towards some things in your own life doesn’t necessarily justify attempts to spread analogous attitudes among other people.
(I broke the quoted text into more paragraphs so that I could parse it more easily. I’m thinking about a reply – the questions you’re posing here definitely deserve a serious response. I have some sense that people have already written the response somewhere – Minding Our Way by Nate Soares comes close, although I don’t think he addresses the “what if there actually exist moral obligations?” question, instead mostly assuming moral anti-realism.)
On a different note though:
I actually agree with this claim, but it’s a weirder claim.
People used to have real communities. And engaging with them was actually a part of being emotionally healthy.
Now we live in an atomized society where community mostly doesn’t exist, or is a pale shadow of its former self. So there are a lot of people who donate to the local arts club or whatever out of a vague sense of obligation rather than because it’s actually helping them be healthy.
And yes, that should be challenged. But not because those people should instead be donating to the global good (although maybe they should consider that). Rather, those people should figure out how to actually be healthy, actually have a community, and make sure to support those things so they can continue to exist.
Sometimes this does mean a local arts program, or dance community, or whatever. If that’s something you’re actually getting value from.
The rationalist community (and to a lesser extent the EA community) has succeeded in being, well, more of a “real community” than most things manage to be. So there are times when I want to support projects within them, not from the greater-good standpoint, but from the “I want to live in a world with nice things, and this is a nice thing” standpoint. (More thoughts here in my Thoughts on the REACH Patreon article.)
I feel that my folk dance community is a pretty solidly real one—people help each other move, etc. The duration is reassuring to me—the community has been in roughly its current form since the 1970s, so folk dancers my age are attending each other’s weddings and baby showers but we eventually expect to attend each other’s funerals. But I agree that a lot of community institutions aren’t that solid.
I recently chatted with someone who said they’ve been part of ~5 communities over their life, and that all but one of them felt more like a “real community” than the rationalists do. So maybe there’s plenty of good stuff out there and I’ve just somehow filtered it out of my life.
The “real communities” I’ve been part of are mostly longer-established, intergenerational ones. I think starting a community with almost entirely 20-somethings is a hard place to start from. Of course most communities started like that, but not all of them make it to being intergenerational.
I saw what seemed like potential communities over the years (soccer club, improv comedy club, local Toastmasters), but I was afraid… to be myself, of being judged, of making a fool of myself, worried about whether I’d be liked… so I passed. Here I am now in EA, giving it a shot. I may go to the improv comedy meetings soon. According to Hari’s “Lost Connections,” finding a community is very important; we’re social animals and don’t do well in loneliness.
The folk dance community sounds wonderful and fun :)
Meanwhile, my previously written thoughts on this topic, not quite addressing your claims but covering a lot of related issues, are here. Crossposting for ease of reference, with the warning that it includes some weird references that may not be relevant.
Context: Responding to Zvi Mowshowitz, who argues that you should be wary of organizations/movements/philosophies that encourage you to give them all your resources (even your favorite political cause; yes, yours; yes, even effective altruism).
The tldr I guess is:
Maybe it’s the case that being emotionally healthy is only valuable insofar as it translates into the global good (if you assume moral realism, which I don’t).
But even in that case, it often seems to be the case that being emotionally healthy requires, among other things, that you not treat your emotional health as a necessary evil that you indulge.
Whether it typically requires this to the degree advocated by the OP or Zvi is another matter: (a) probably not, on my basic perception, but (b) proper psychological research is needed before drawing firm conclusions.
This is a crux, because IMO the way that the people who frequently write and comment on this topic seem to talk about altruism represents a much more neurotic response to minor moral problems than what I consider to be typical or desirable for a human being. Of course the people who feel anxiety about morality will be the ones who talk about how to handle anxiety about morality, but that doesn’t mean their points are valid recommendations for the more general population. Deciding not to have a mocha doesn’t necessarily mean stressing out about it, and we shouldn’t set norms and expectations that lead people to perceive it as such. It creates an availability cascade of other people parroting conventional wisdom about too-much-sacrifice when they haven’t personally experienced confirmation of that point of view.
If I think I shouldn’t have the mocha, I just… don’t get the mocha. Sometimes I do get the mocha, but then I don’t feel anxiety about it, I know I just acted compulsively or whatever and I then think “oh gee I screwed up” and get on with my life.
The problem can be alleviated by having shared standards and doctrine for budgeting and other decisions. GWWC with its 10% pledge, or Singer’s “about a third” principle, is a first step in this direction.
Not sure what he says (haven’t got the interest to search through a whole series of posts for the relevant ones, sorry), but my point assuming antirealism (or subjectivism) seems to have been generally neglected by philosophy both inside and outside academia: just because the impartial good isn’t everything doesn’t mean that it is rational to generically promote other people’s pursuits of their own respective partial goods. The whole reason humans created impartial morality in the first place is that we realized it works better than each of us pursuing our own partial goals.
So, regardless of most moral points of view, the shared standards and norms around how-much-to-sacrifice must be justified on consequentialist grounds.
I should emphasize that antirealism != agent-relative morality, I just happen to think that there is a correlation in plausibility here.
Thanks for writing this.
I feel an ongoing sense of frustration that even though this has seemed like the common wisdom of most “longterm EA folk” for several years… new people arriving in the community often have to go through a learning process before they can really accept this.
This means that in any given EA space, where most people are new, there will be a substantial fraction of people who haven’t internalized this, are still stressing themselves out about it, and are in turn stressing out newer people, who are exposed more often to the “see everything through the utilitarian lens” framing than to posts like this.
This post and Julia’s essay “Cheerfully” are the two posts I most often recommend to other EAs.
“When you make a decision, be clear with yourself about which goals you’re pursuing. You don’t have to argue that your choice is the best way of improving the world if that isn’t actually the goal”.
That sums up, and also clarifies, some of my uncertainties and curiosities about the application of cost-effectiveness.
Thanks for sharing.
Thanks for this post. I think it provides a useful perspective, and I’ve sent it to a non-EA friend of mine who’s interested in EA, but concerned by the way that it (or utilitarianism, really) can seem like it’d be all-consuming.
I also found this post quite reminiscent of Purchase Fuzzies and Utilons Separately (which I also liked). And something that I think might be worth reading alongside this is Act utilitarianism: criterion of rightness vs. decision procedure.
This is a great post, Julia. It helped me. I do a lot of volunteer work in my community and have been thinking about whether I should give that up to devote more time to EA causes (even though I don’t want to), but I really should not do this. I don’t think I would be that effective with the extra time anyway, because something would be missing from my life. Much love.
Could you say a little more about how you decide what size each pot of money should be?
My advice on how to decide the pots of money is basically in this post: http://www.givinggladly.com/2012/03/tradeoffs.html
TL;DR: spend some time noticing how much other people are living on, and let that inform your budget, but don’t try to pay attention to that every day, because you probably can’t go around powered by guilt forever.
That advice was written at a time when I thought of donation as basically the only path to impact, at least for myself. I do think it’s worth seriously considering whether other paths are viable for you and not committing to a level of donation that will seriously reduce your ability to pursue other things. This probably won’t be surprising coming from the person running Giving What We Can, but I think something like 10% is a level that’s both significant and also compatible with, for example, working for a nonprofit.
I find the upside of deciding annually on my donation budget is that I can then make all the other decisions the way everyone else does. Vacation? Lunch with a friend? Donation to friend’s fundraiser? They’re all in the “stuff that will enrich my life” category, so I can trade them off against each other however I think will be best for me.
Thanks, Julia.
I think guilt is a powerful & fragile motivator that should basically be considered harmful, at least for people whose psychologies are shaped like mine.
This all reminds me of stuff that Raemon has been writing recently, as well as this part of the EA jobs are really hard to get thread.
This would be my practical question as well, for the following reasons.
I don’t see a way to ultimately resolve conflicts between an (infinite) optimizing (i.e., maximizing or minimizing) goal and other goals if they’re conceptualized as independent from the optimizing goal. Even if we consider the independent goals as something to only “satisfice” (i.e., take care of “well enough”) rather than optimize as much as possible, it will still be the case that our optimizing goal, by its infinite nature, wants to negotiate as many resources as possible for itself, and its reasons for earning its keep within me are independently convincing (that’s why it’s an infinite goal of mine in the first place).
So an infinite goal of preventing suffering wants to understand why my conflicting other goals require a certain amount of resources (time, attention, energy, money) for them to be satisficed, and in practice this feels to me like an irreconcilable conflict unless they can negotiate by speaking a common language, i.e., one which the infinite goal can understand.
In the case of my other goals wanting resources from an {infinite, universal, all-encompassing, impartial, uncompromising} compassion, my so-called other goals start to be conceptualized through the language of self-compassion, which the larger, universal compassion understands as a practical limitation worth spending resources on – not for the other goals’ independent sake, but because they play a necessary and valuable role in the context of self-compassion aligned with omnicompassion. In practice, it also feels most sustainable and long-term wise to usually if not always err on the side of self-compassion, and to only gradually attempt moving resources from self-compassionate sub-goals and mini-games towards the infinite goal. Eventually, omnicompassion may expect less and less attachment to the other goals as independent values, acknowledging only their relational value in terms of serving the infinite goal, but it is patient and understands human limitations and growing pains and the counterproductive nature of pushing its infinite agenda too much too quickly.
If others have found ways to reconcile infinite optimizing goals with satisficing goals without a common language to mediate negotiations between them, I’d be very interested in hearing about them, although this already works for me, and I’m working on becoming able to write more about this, because it has felt like an all-around unified “operating system”, replacing utilitarianism. :)
Hi Teo! I know your comment was from a few years ago, but I was so excited to see someone else in EA talk about self-compassion. Self-compassion is one of the main things that lets me be passionate about EA and have a maximalist moral mindset without spiraling into guilt, and I think it should be much more well-known in the community. I don’t know if you ever ended up writing more about this, but if you did, I hope you’d consider publishing it—I think that could help a lot of people!
Hi Ann, thanks for the reply! I agree that self-compassion can be an important piece of the puzzle for many people with an EA outlook.
I am definitely still working on reframing EA-related ideas and motivations so that the default language would not so easily lead to ‘EA guilt’ and some other problems. Lately I’ve been focusing on more general alternatives to ‘compassion’, because people often have different (and strong) preexisting notions of what compassion means, and so I’m not sure if compassion will serve as the kind of integrative ‘bridge concept’ that I’m looking for to help solve many (e.g. terminological) problems simultaneously.
So unfortunately I don’t have much (quickly publishable) stuff on compassion specifically, having been rotating abstract alternatives like ‘dissonance minimalism’ or ‘complex harmonization’. But who knows, maybe I’ll end up relating things via compassion again, at some point!
I’m not up-to-date on what the existing EA-memesphere writings on (self-)compassion are, but I love the Replacing Guilt series by Nate Soares (http://mindingourway.com/guilt), often mentioned on LW/EA. It has also been narrated as a podcast by Gianluca Truda. I believe it is a good recommendation for anyone who is feeling overwhelmed by the ambitions of EA.
Thanks so much for writing this! I feel like I end up trying to express this idea quite frequently and I’m really glad for the resource on it. I’d also love to see talking about our non-altruistic goals and motivations become more normalised within EA, so yes, thanks 🙂
Personally I identify with the approach you’re expressing very strongly – I find it hard to understand the thought that I might care for my friends only because it ultimately helps me help the world more; I think of them as belonging to different categories. But then I know others who find it very alien that I care a lot about helping the world as much as possible and am also happy making some decisions for completely non-altruistic reasons. Have others come up against this divide as a problem in EA discussions? I feel like at times it is a place where discussions have got stuck.
I’d be interested in knowing too, as others have asked, how do you (and others) tend to approach weighing things to spend your time on against each other when they are part of different goals? I have various strategies that I try, but they usually boil down to using the non-EA goals as constraints – if there is a choice between a morally effective thing and something else, I usually end up doing the EA thing when I get the answer “no” to questions like “will doing it make me sad” or “would I be failing in something I owe to someone else”. I don’t find that very satisfactory – how do others do it?
[I’m doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don’t have time to re-read them or say very nuanced things about them.]
[I work with Julia.]
I think this piece is maybe the best short summary of a strand in Julia’s writing that has helped EA to seem more attainable for people.
“When you make a decision, be clear with yourself about which goals you’re pursuing. You don’t have to argue that your choice is the best way of improving the world if that isn’t actually the goal.” This quote drives it home for me. What a way to end this introductory course on EA as a first-timer. Amazing.
“I have lots of goals. I have a goal of improving the world. I have a goal of enjoying time with my children. I have a goal of being a good spouse. I have a goal of feeling connected in my friendships and community. Those are all fine goals, but they’re not the same.” This post is one of my favorite articles in the EA program. It’s written in clear, easy-to-understand language and speaks straight to how I feel. I have confusing emotions about doing the most good.
Having clear and separate goals sounds helpful. Using some resources to go to a movie or meet a friend for coffee is OK. I am having a tougher time deciding whether to stop donating to areas that are close to me. Should I stop donating money in Haiti and instead donate more to high-impact areas? This is a tough choice, and I am still struggling with the decision.