doing more good vs. doing the most good possible
According to Wikipedia,
Effective altruism is a philosophical and social movement that advocates “using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis”.
According to logic, if it’s good to do good, then it’s better to do more good, and therefore you should try to do the most good possible. But I wonder whether the social movement of Effective Altruism would do the most good if it encouraged people to do incrementally more good, instead of holding up a gold standard of people dedicating their lives to the cause. Most people aren’t able to optimize their career for EA and/or donate most of their income, due to lack of motivation, executive function, resources, or other reasons, and even those who do are often trapped in guilty thoughts about whether or not they are doing enough. This is unhealthy.
Similarly, animal advocates such as the Good Food Institute and the author of How to Create a Vegan World believe that, to improve animal welfare, the movement should reduce meat consumption by focusing on changing institutions to make dairy and meat substitutes that match the originals in taste, appearance, and price. This would result in less meat being eaten in total, compared to the strategy of convincing people to become completely vegetarian or vegan through moral argument. Convincing the world to become vegan or vegetarian one person at a time is very difficult: while people generally believe in better treatment of animals, most find the barriers to vegetarianism too high to keep it up for longer than a year, with a third stopping before the three-month mark. A strategy of promoting incremental meat reduction in all individuals, rather than total meat abstinence in some individuals, makes the goal much more tractable.
Going a step further: I think that EA should be more actively welcoming, not only of people who forgo significant lifestyle change, but also of people who want to donate to or work on causes that aren’t identified as effective by the EA community. As Julia Galef said in a 2017 talk at EA Global (yes, that’s JGL sitting beside her), there are three buckets in which people spend their money: personal expenses, donations to causes that they personally care about, and donations to improve the world via causes identified as cost-effective by the Effective Altruism community.[1] Messaging that tries to guilt people into switching money between these three buckets isn’t effective. Julia talks about how EA messaging should focus on the third bucket (giving to improve the world), and I think there’s huge potential for the next evolution of this movement: promoting effectiveness in furthering personal charitable causes. People normally don’t think about effectiveness at all when they are donating, since they assume that charities have similar levels of effectiveness. In an ideal world, effectiveness would automatically cross someone’s mind whenever they are thinking of giving, no matter what they are giving their money or time to.
Does money spent more effectively on causes that aren’t popular in the EA community result in more money being available to spend on EA causes? It seems reasonable that it might, by raising general awareness of effectiveness in charitable giving. When GiveDirectly (a non-profit that gives cash to help with poverty) started operating in the United States, they found that donors who started off donating to Americans would later also give money to people in extreme poverty in developing countries.[2]
What could it look like if Effective Altruism were focused on making altruism more effective, instead of on practicing altruism in the most effective way possible? Maybe there would be a lot more global health and development material, since that’s the most common entry point into EA. People are often moved by a story from a specific country, or by people affected by a specific crisis, and could be linked to effective charities that operate specifically in those regions. Maybe effective altruists would get good SEO for more specific cause areas and situations, the way that Giving Green has ranked climate organizations for effectiveness, or like Vox’s article about effective giving for criminal justice reform.
More focus on effectiveness in other cause areas could also develop an area to the point that it becomes cost-effective enough to be widely endorsed by the EA community. I think there is a shortage of effective organizations to give to, compared to the amount of money available. Benjamin Todd estimates that there are 50 billion dollars committed to EA. That money could be spent on anti-malaria nets and deworming medication and save a lot of lives (and billionaires do spend hundreds of millions in these areas), but it is meant to be leveraged to create new projects with the potential to be even more effective, perhaps even addressing some of the root causes of global inequities. For example, we now know that it’s much more effective to save lives by increasing economic growth (by doing things like opening up trade with a country or giving jobs to people who wouldn’t normally have the opportunity) than by public health interventions such as anti-malaria nets.
Finally, more thinking about effectiveness in cause-specific contexts could also lead to more people dedicating their lives to EA. After all, doing as much good as possible is the logical conclusion. But since it’s less effective to ask people to overhaul their career or lifestyle, I think the best thing we can do is to earnestly help people make the change they want to see, effectively.
Notes
I think Founders Pledge already does a pretty good job of this, for rich people. I think EA should do a better job of doing this for regular people.
“What I was most surprised about is the level of support I received. The deployment team listened to what I was most passionate about—Mental Health and Sex Slavery of Women—went away, and then came back with a comprehensive research report that highlighted the best way I could support those charities. They identified the best charities in terms of outcomes, data-driven giving, and transparency.”
Quote from Forbes.
I emailed Founders Pledge about EA versus non-EA causes, and got this response:
Hi Ruth, Thanks for reaching out! Yes, you’re right—folks who sign with us are able to choose where they’d like to donate to, so some of that giving is going to be more EA aligned than the rest. Obviously, we are an EA aligned organisation—so our goal is that we’re increasing the proportion of the overall giving that flows through us that goes to our recommended charities over time. I will say, though, that our selection of recommended charities is pretty tight—just those which really represent maximum impact. So there may be giving which flows through FP which would be considered strongly EA aligned, but which isn’t represented in the figure of how much is donated to our recommended charities. You can check out our 2020 Impact Report for more updated figures on our giving, and keep an eye out for our 2021 Impact Report which should be out towards the end of January. Warmest, Carrie
Dan Wahl notes that this is reminiscent of Parfit’s Hitchhiker.
Further reading
Footnotes

1. Will MacAskill said in the same 2017 talk that the “causes I personally care about” bucket of spending can actually be broken down into even more buckets, such as reciprocity (e.g. giving to the college you attended), identity (e.g. giving to an LGBTQ+ group), and personal connection (e.g. donating to your brother, who’s running a marathon for charity).
2. This info is 31 minutes into the podcast: http://rationallyspeakingpodcast.org/263-is-cash-the-best-way-to-help-the-poor-michael-faye/
I think that “EA budget” is often interpreted too narrowly. So I strongly support the idea that we should be accepting and supportive of people doing more good, even if it’s not the most good they can do, in a few ways.
For example, you mention “money spent more effectively on causes that aren’t popular in the EA community [which can] result in more money being available to spend on EA causes.” That’s definitely a great investment: if supporting High Impact Athletes to get them to donate effectively is good, then supporting effectiveness in music education to encourage others to think about effectiveness can also be worthwhile. If talking to people about effectiveness in other areas gets a couple of people to look into EA, it will be very useful!
And even without moving money to EA areas, this can be effective. For example, suppose I spend time talking to people about effective interventions in domestic education and convince 100 people to donate $1,000 a month to a charity that improves lives by 1 QALY per $1,000 (which is still well below the bar for effective charities), instead of one which improves lives by 1 QALY per $10,000. I’ve then effectively improved their collective charity by over 1,000 QALYs per year, which is very plausibly more good than I would do by spending that time working to create QALYs directly.
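To make that concrete, here is a minimal sketch of the arithmetic, using only the hypothetical figures from the example above (100 donors, $1,000 a month, $1,000 per QALY vs. $10,000 per QALY); none of these numbers are real data:

```python
# Quick check of the arithmetic above, using the hypothetical figures from the
# comment (100 donors, $1,000/month, $1,000 per QALY vs. $10,000 per QALY).
# None of these numbers are real data.

donors = 100
monthly_donation = 1_000
annual_giving = donors * monthly_donation * 12      # $1,200,000 per year

qalys_better = annual_giving / 1_000     # charity that buys 1 QALY per $1,000
qalys_worse = annual_giving / 10_000     # charity that buys 1 QALY per $10,000

extra = qalys_better - qalys_worse
print(f"Extra impact from redirecting the giving: {extra:,.0f} QALYs per year")
# -> 1,080 QALYs per year, i.e. "over 1,000 QALYs per year"
```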
And from a decision-theory perspective, within our “EA budget,” I also think there is room for what you’re discussing when you mention that we can “create new projects with the potential to be even more effective, perhaps even addressing some of the root causes of global inequities.” This is due to value of information. If we have a half dozen charities that save lives for under $10,000, and another hundred which seem promising as new projects but where we don’t know the cost per life saved (or whichever metric we are using), it makes sense to do some level of exploration into those hundred charities.
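As a toy illustration of that value-of-information argument (every probability, grant size, and cost-effectiveness figure below is invented for the sketch, not an estimate about any real charity):

```python
# Toy sketch of the value-of-information argument above. All numbers here
# (probabilities, pilot cost, cost per life saved, budget) are invented for
# illustration; they are not estimates about any real charity.

BUDGET = 10_000_000            # hypothetical grantmaking budget, in dollars
KNOWN_COST_PER_LIFE = 10_000   # proven charities: $10,000 per life saved

# Assume each unproven project has a 10% chance of turning out twice as
# cost-effective ($5,000 per life) and a 90% chance of being useless, and
# that a $100,000 pilot grant is enough to find out which.
P_GOOD = 0.10
GOOD_COST_PER_LIFE = 5_000
PILOT_COST = 100_000
N_PILOTS = 20

# Strategy A: exploit only -- give the whole budget to the proven charities.
lives_exploit = BUDGET / KNOWN_COST_PER_LIFE

# Strategy B: spend part of the budget on pilots, then give the remainder to
# the best option found (falling back to the proven charities if none work).
remaining = BUDGET - N_PILOTS * PILOT_COST
p_found_better = 1 - (1 - P_GOOD) ** N_PILOTS   # chance at least one pilot succeeds
lives_explore = (p_found_better * (remaining / GOOD_COST_PER_LIFE)
                 + (1 - p_found_better) * (remaining / KNOWN_COST_PER_LIFE))

print(f"Exploit only:         {lives_exploit:,.0f} expected lives saved")
print(f"Explore then exploit: {lives_explore:,.0f} expected lives saved")
# With these made-up numbers, exploration wins (~1,500 vs 1,000 expected lives),
# even though most pilots fail.
```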
I also think that for people with sufficient income, there are non-EA things which should also be done with money. For example, it’s really good to support organizations you benefit from, such as donating to the local library and paying dues to NPR, not as altruistic expenses but as part of a budget where you’re not freeloading. (I’ve said the same about spending on offsetting my CO2 emissions.) And I also agree with Julia that it’s good to give money to things you care about, apart from EA money and personal spending, so I will donate to a friend’s campaign to raise money for ineffective charities, because I want to support my friend. And to be clear, none of this last set of things is in my EA budget, so I don’t count it towards that total, and I still think it’s important to do.
I see the “incremental” vs. “optimal” approach as a bit of a false dichotomy, in the sense that it seems like what you’re really arguing for (or at least what I’d argue for) is that EA needs more on-ramps. As you mentioned, plant-based burgers normalize veganism and give folks a clearer path to caring about animals. Donating to US-based GiveDirectly leads to donating abroad.
Given the multiple orders of magnitude of difference in effectiveness between US charities and developing-country charities (taking global wellbeing as an example cause area), it seems difficult to argue for “doing more good” on its own merits unless it leads to even more good from that person or group, i.e. unless there is momentum or a flywheel effect on someone’s altruism. But as a “Big Tent EA,” I would love to see more focus on EA on-ramps, because that compounding effect seems real and substantial to me.
But maybe I’m biased because I followed that path? Hopefully still!
I think I agree with this.
I’ve been struggling to reconcile similar beliefs to those described in the post since reading How to Create a Vegan World, which seems to describe a surprisingly different model from current meta-EA strategy, and I think your reframe makes sense. Tobias talks a lot about the journey that potential vegans take, which could be the equivalent of first doing good ‘incrementally’ and normalizing taking an EA approach to specific causes or areas of one’s life.
Anecdotally, I’m also someone who took that path but wants EA to be more ‘big tent’!
The interesting thing about the strategy described in How to Create a Vegan World is that it would encourage people who don’t think about morality at all to start eating more plant-based foods. I think if EA really executed on the content of my post, the same could happen with charitable giving. Imagine if we were able to get 10% of the population of developed countries to think of effectiveness when they thought about charities or charitable giving, within their chosen cause area. Maybe it would shift the funding landscape just enough that the more effective charities within a cause area would have better SEO and show up first in a Google search. That’s what I would love to see.
I clearly need to read “How to Create a Vegan World”! Adding it to my reading list.
I certainly want to live in a vegan world, i.e. one where the wellbeing of non-human animals is given equal consideration to that of people. But I’m not sure I want to live in an “EA world.” Maybe that’s a failure of my own imagination, but it’s hard for me to even think about what that would look like...
As a “Big Tent EA”, I’d like to see EA grow, not only to increase its impact in a strict sense of scale, but also to reform, refine, and expand its ideas. There are certain EA values that I’d like to see become universal, generally various expansions of the moral circle (e.g. cosmopolitanism, veganism, longtermism), but I’m not as sure about widespread adoption of the movement itself. Does a world where everyone’s moral circle is a bit larger have to be an “EA world”?
I’m not sure this is an accurate representation of the views of effective altruists. Effective altruists aren’t expected to donate most of their income (whether they optimise their career for EA or not), and very few do so. And the fact that there are psychological reasons against being too self-sacrificial has been acknowledged and extensively discussed within EA (e.g. here, here, here, and here, and in many other posts).
The notion that EA should focus more on making work on causes that aren’t identified as effective within EA more effective was discussed in this post. It’s a big question with lots of considerations. One argument against, however, is that the differences in cost-effectiveness between cause areas may be large, meaning that steering resources towards the most effective cause areas may be very important. Another is that a reduced focus on effectiveness may lead to lowered intellectual standards and a general dilution of the EA message. But there are many other considerations to take into account.
I wish that the effective altruism movement were instead called altruistic rationality. I can’t think of a better term than “effective altruism” for optimizing any kind of charitable giving or volunteering that most people in developed countries participate in, but it’s difficult to integrate that with the current effective altruism community, given that trying to get people to switch cause areas up front is ineffective and makes people think poorly of the movement. I support both types of activities, but the fact that altruistic rationalist activities are called effective altruism, and the fact that most causes are commonly called “non-EA causes” in this movement, prevent a broader effective altruism movement from forming.
To some extent SoGive will be implementing what you’re suggesting. As well as the overall top, EA-recommended charities, we are also looking to identify the best charities within other cause areas (e.g. poverty/homelessness in the UK, developed world health, tree-planting charities). Ideally we want to nudge people to switch to the overall top charities regardless of cause area, but we know that a lot of people are very committed to a particular cause, so it could be quite valuable to help them at least identify the top charities within that cause.
I agree. I’ve also found it a bit paralyzing to browse these boards. I find an organization I think is doing great work, then find a post laying out how the organization is not as good as it seems, and I don’t want to ‘waste’ my money on a sub-optimal charitable donation.
Agreed, it’s not helpful to discourage people who are doing good by only criticizing where they might fall short. It’s one of the challenges of the EA mindset, but in my experience it’s a challenge that most EAs have struggled with and tried to find solutions for. Generally, the solutions recognize that beating yourself up about this stuff isn’t really effective or altruistic at all.
My favorite writing on this topic comes from Julia Wise, a community liaison at CEA and author of the Giving Gladly blog. Here are a few posts I found helpful:
http://www.givinggladly.com/2013/06/cheerfully.html?m=1
http://www.givinggladly.com/2020/01/its-ok-to-feed-stray-cats.html?m=1
http://www.givinggladly.com/2019/02/you-have-more-than-one-goal-and-thats.html?m=1
Yes, it’s very common to fall into this pit of EA burnout and have to dig yourself out! I wish it were less common, because it can be a really draining experience. And I wonder if it’s possible to do things a little differently so that it becomes less common. Sasha Chapin describes this as a “toxic social norm”: https://sashachapin.substack.com/p/your-intelligent-conscientious-in
It’s true that people in EA talk about how you shouldn’t feel guilty and burn out, but burnout still happens, because the “toxic social norm” is that in EA we keep thinking about maximizing impact, and that’s just difficult to keep optimizing for without burning out.
Small flag (in case newcomers are reading this): “donate” is not the same thing as “do good”—which also includes using your career, time, influence and other kinds of capital!
(If you replace “donate” with “use resources” I think most of the discussion in the post and the comments is still pretty accurate!)
Thank you for posting this! Incremental improvement in collaboration with “non-EA” individuals, groups and organizations could be more effective than working with EA-only individuals, groups and organizations.
Right! And I think we need some clarification of terms. We can’t be calling people who are passionate about, say, effective solutions for homelessness in New York City, “non effective altruists”. That’s divisive and also kinda rude.
I think that, similar to “chocolate milk” and “milk chocolate”, we should have effective altruists and altruistic rationalists; the second word is the main thing. Anyone who is trying to do any kind of good effectively should be able to call themselves an effective altruist. People who are passionate about doing the most good possible, without any bias towards specific people or causes, can be called altruistic rationalists. And of course, anyone can do both types of activities, without feeling any shame or guilt.
Thanks for posting this, I have had similar thoughts/questions in the past and briefly talked with a few people about it, but I don’t think I’ve posted much about it on the forum.
I was especially interested in a point/thread you mentioned about people perceiving many charities as having similar effectiveness, and how this may be an impediment to people getting interested in effective altruism. I’m not familiar with the research/arguments there, but if that is true, it might be beneficial/effective to first “meet people where they are” (in terms of cause passions/focuses/neutrality) by showing the differences in effectiveness within that one cause area. I imagine that part of the reason some people may be resistant to believing in high differences in impact is some (perhaps unconscious) motivated reasoning, since they don’t want to acknowledge that their current passion is not very impactful comparatively.
In other words, if people are resistant to changing their minds in part because of a reinforcing loop of “I don’t want to admit I may be wrong about my passion/cause area,” “many charities have similar impacts,” and “What I am doing is impactful,” one may be able to more effectively change people’s thoughts about the second point by highlighting differences in charity effectiveness within a cause area. Specifically, this could help by sidestepping the “I don’t want to change my cause area” motivated reasoning/resistance. Of course, many people might still be resistant to changing their minds more broadly, and this all depends on the extent to which that claim about people’s perceptions of effectiveness is true, but it seems like it might be helpful for some people.
See here
I think that, generally, any kind of criticism of people trying to do good without first having built a relationship on common ground leads to a “soldier mindset,” where people become defensive about their actions. People who donate money or time by default expect to be thanked and to feel good about it, in proportion to the amount of money or time that they donated. I suspect it’s always more productive to build a relationship with someone, find out what motivates them to give, and share relevant organizations or articles in line with their motivation, as opposed to approaching with the foremost intention of convincing them to change. And EAs should definitely have a scout mindset about this: there are lots of reasons people might not think primarily of effectiveness when donating, and they’re not things we should change about people, but things we can build on. E.g. maybe some people donate to what’s convenient, or to what they read about from a specific publisher that they trust, or because an organization did a presentation at their church. That’s good to know.
I like that you included a further reading section.
What do you think of:
Effective altruism is a philosophical and social movement where people use reasoning and evidence to do the most good with some of their resources. The key is ‘with some of their resources.’ It is better than saying ‘with their spare resources,’ because if one approaches e.g. 30% of their networks to implement EA-related changes, then the definition would be inaccurate because these networks are not ‘spare’; it is also better than saying ‘with their philanthropically allocated resources’ or something else that includes ‘allocated,’ because that connotes a static extent and a distinction between philanthropically and non-philanthropically ‘utilized’ resources (which may make the definition inaccurate when someone uses some of their resources partly in a good-maximizing manner and partly in another way).
This definition can also be acceptable to people who can think about how much of their resources they can do good with, and how, as opposed to thinking in absolutes or competitive terms.
The issue can be that ‘some of their resources’ is vague, so the institutional understanding of an ok extent of resources (any extent which does not prevent people from engaging in what truly makes them happy, such as sharing great times with others and having an easy life?) needs to be maintained.
I am also writing ‘reasoning and evidence’ (first ‘reasoning,’ also as opposed to ‘reason,’ and then ‘evidence’) because that way readers can understand the process of coming up with solutions based on the information that they find and can trust, as opposed to being given evidence (which connotes incriminating evidence in court) and being empowered to assert it by ‘reason’ (implying a framework under which one is right and thus should not be argued against).
Hmmm, I think this is not quite what I am after. If I understand correctly, what you’re saying is that we should normalize people having a limited allotment for their “third bucket” budget for saving the world. What I am saying is that we should normalize anyone doing any kind of altruism who is mindful of effectiveness within the work that they are doing.
Ah, I see. Yes, normalizing altruism with an effectiveness mindset within the work people are doing may be a more robust solution than inviting a limited resource budget allocation for almost ‘external’ EA ventures.