EA as a Tree of Questions
Introduction
As a community builder, I sometimes get into conversations with EA skeptics that aren't going to sway the person I'm talking to. The Tree of Questions is a tool I use to make those conversations more effective by identifying the crux faster. Much of this is inspired by Scott Alexander's "tower of assumptions" and Benjamin Todd's ideas in The Core of EA.
The Tree
A trunk with core ideas almost all EAs accept, and without which you have to jump through some very specific hoops in order to agree with standard EA stances.
Branches for different cause areas or ideas within EA, such as longtermism. If you reject the trunk, there's no point debating branches.
All too often, I find people cutting off a branch of this tree and then believing they've cut down the entire thing. "I'm not an EA because I don't believe in x-risk" is an example. Deciding which assumptions you have to accept in order to be on those branches is a job for people more knowledgeable about the philosophy behind them. What I present here are the questions I ask to find out whether someone can even get up the trunk; if they can't, it's meaningless to help them reach for the branches.
This post is focused on the kinds of conversations where there is some cost to debating. It could be the social cost of yapping too much at a family dinner, the risk of seeming pushy to a friend who's skeptical, or simply that you're tabling and could be talking to someone else. That's why I've listed a few points that I think aren't worth the time or effort to argue against if someone raises them as objections to the trunk of the EA tree. I'll also list some bad counterarguments that you should practice countering. These are all real examples from my experience in EA Lund (Sweden), so I'm interested to hear from you in the comments if your mileage varies.
Altruism
The first part of the trunk is to ask "do you care about helping others?" These are actually the first words I say to people when tabling, and I think it's important to frame the question in this very normal, easy-to-grasp way. I've heard people talk about EA as maximizing or optimizing, but that framing is much less attractive and often carries negative connotations.
Concede:
Narrow moral circles. Some only want to help their family, city, or religious group. One person even answered "this is college, man, I'm just here to party!"
Self-reliance/non-interventionism. This could be based on the empirical claim that intervening makes things worse, or on the moral claim that it's valuable for people to help themselves. You can get away with one follow-up question here if you find their argument particularly unsound, but I haven't found people convinced even when I can show that it's a bad policy.
Debate briefly:
"Altruism is self-serving/virtue signalling." This is a non sequitur; I asked whether you want to help others, not why people want to help others.
"Giving to one means I have to give to everyone." This is a classic slippery slope, and I trust you to convince them that it's okay to only do what is feasible for you.
Effectiveness
The second question is "is helping a lot better than helping a little?" Saying no to this means effectiveness isn't interesting, and (most likely) neither is EA. I rarely ask this directly because it begs the question of what "a lot" means, but I do give examples of differences in cost-effectiveness and gauge their reaction.
Concede:
"I'm content as long as I do some good every now and then." I think this one is especially important to be respectful about, so you don't come across as pushy. I'm afraid many people are already put off by EA being demanding, and that fear makes me extra unwilling to argue against this objection.
Negative vs. positive obligations. Some consider it more important for their own CO2 emissions to be low than for global emissions to be. The focus is on them not doing harm, rather than on no harm being done, contrary to what most EAs believe.
Debate briefly:
Uncertainty about others' preferences. While true in one sense, you know for sure that no one wants their child to die from malaria, to be tortured, or to see our species go extinct. This is the level of problems EA operates on, so they might still be interested.
Worries about burnout from trying too hard. This is the flip side of EA as a community being demanding to some. You can make big wins here by saying clearly that we'd happily help them avoid trying too hard while still doing something. You can refer to research showing that do-gooders are happier, if they're amenable to that.
Comparability
"Can we quantitatively compare how much good different actions do?" This question is often snuck in with the Effectiveness question, because a comparison has already been made when we're comparing a lot to a little. However, I find it important to be attentive to when someone's turned off by the idea of quantifying altruism.
Concede:
"I don't want to use imperfect metrics." QALYs are imperfect, and so are many similar metrics we use to measure our impact. We miss second-order effects which might dominate (e.g., the Meat-Eater Problem), and there can be errors in how they're determined empirically. This is an important conversation to have within EA, but I don't think having it be your first EA conversation is conducive to you joining. I just say something like "Absolutely, they're imperfect, but they're the best tools available for now. You're welcome to join one of our meetings where we chat about this type of consideration."
Anti-prioritarianism. You could claim that it's wrong of me to give only one of my children a banana, even if that's the only child who's hungry. Some would say I should always split that banana in half, for egalitarian reasons. This is in stark contrast to EA and hard to rebut respectfully with rigor.
Institutional Trust
To embrace EA, you need to believe that at least some of its flagship organizations and leaders (80,000 Hours, Will MacAskill, Giving What We Can, etc.) are both well-intentioned and capable. Importantly, many skeptics leap straight to this "top of the trunk," accusing EA groups of corruption or undue influence (e.g., "Open Philanthropy takes dirty billionaire money").
While those concerns deserve a thoughtful debate, they should come after someone already agrees that (i) helping strangers matters, (ii) doing a lot of good is better than doing a little, and (iii) we can meaningfully compare different interventions. In other words, don't let institutional distrust be the very first deal-breaker; focus on the roots before you tackle the branches.
Further Discussions
There are more points central to the thought patterns in EA (expected value, longtermism, sentience considerations, population ethics), but they're not as integral to EA as the ones above. If someone rejects one of them and claims that it's why they reject EA, I'd say they've only sawed off a cluster of branches.
Institutional Trust
I don't quite follow the logic here. Your first paragraph seems to acknowledge that some degree of institutional trust is part of the trunk rather than merely the branches, but the end of the second paragraph characterizes it as a branch issue.
I'd agree that institutional trust is in a sense less foundational than "root" issues like altruism and effectiveness, but being less foundational does not make it less practically critical to reaching the end result. If A and B and C and D are all practically essential to reach any of E through H, it's reasonable for someone who is being invited in to start with whichever of A through D they think is weakest, out of respect for their time.
As an aside, if one goes so far as to say that EA as currently constituted doesn't have anything meaningful to offer to those who do not "believe that at least some of its flagship organizations and leaders (80,000 Hours, Will MacAskill, Giving What We Can, etc.) are both well-intentioned and capable," [1] then maybe that is a signal something is wrong.
This goes further than your statement that this belief is necessary to "embrace" EA, so I don't want to imply that it is your view.
That's a mistake, thanks for pointing it out! That final sentence wasn't meant to stay in. That is, I think institutional trust is part of the trunk and not the branches.
I agree with your side point that there are some ideas & tools within EA that many would find useful even while rejecting all of the EA institutions.
This is a good post if you view it as a list of frequently asked questions about effective altruism when interacting with people who are new to the concept, along with potential good answers to those questions, including that sometimes the answer is to just let it go. (If someone is at college just to party, just say "rock on".)
But there's a fine line between effective persuasion and manipulation. I'm uncomfortable with this:
If I were a passer-by who stopped at a table to talk to someone and they said this to me, I would internally think, "Oh, so you're trying to work me."
Back when I tabled for EA stuff, my approach to questions like this was to be completely honest. If my honest thought was, "Yeah, I don't know, maybe we're doing it all wrong," then I would say that.
I don't like viewing people as a tool to achieve my ends, as if I know better than them and my job in life is to tell them what to do.
And I think a lot of people are savvy enough to tell when you're working them, and they recoil at being treated like your tool.
If you want people to be vulnerable and put themselves on the line, you've got to be vulnerable and put yourself on the line as well. You've got to tell the truth. You've got to be willing to say, "I don't know."
Do you want to be treated like a tool? Was being treated like a tool what put you in this seat, talking to passers-by at this table? Why would you think anyone else would be any different? Why not appeal to what's in them that's the same as what's in you that drew you to effective altruism?
When I was an organizer at my university's EA group, I was once on a Skype call with someone whose job it was to provide resources and advice to student EA groups. I think he was at the Centre for Effective Altruism (CEA), but I don't remember for sure; this would have been in 2015 or 2016.
This was a truly chilling experience, because this person advocated what I saw then and still see now as unethical manipulation tactics. He advised us, the group organizers, to encourage other students to tie their sense of self-esteem or self-worth to how committed they were to effective altruism or how much they contributed to the cause.
This person from CEA, or whatever the organization was, also said something like, "if we're successful, effective altruism will solve all the world's problems in priority sequence". That and the manipulation advice made me think, "Oh, this guy's crazy."
I recently read about a psychology study on persuading people to eat animal organs during World War II. During the war, there was a shortage of meat, but animals' organs were being thrown away despite being edible. A psychologist (Kurt Lewin) wanted to try two different ways of convincing women to cook with animal organs and feed them to their families.
The first way was to devise a pitch to the women designed to be persuasive, designed to convince them. This is from the position of, "I figured out what's right, now let me figure out what to say to you to make you do what's right."
The second way was to pose the situation to the women as the study's designers themselves thought of it. This is from the position of, "I'm treating you as an equal collaborator on solving this problem, I'm respecting your intellect, and I'm respecting your autonomy."
Five times more women who were treated in the second way cooked with organs, 52% of the group vs. 10%.
Among women who had never cooked with organs before, none of them cooked with organs after being treated the first way. 29% of the women who had never cooked with organs before did so for the first time after being treated the second way.
You can read more about this study here. (There might be different ways to interpret which factors in this experiment were important, but Kurt Lewin himself advocated the view that if you want things to change, get people involved.)
This isn't just about what's most effective at persuasion, as if persuasion is the end goal and the only thing that matters. Treating people as intellectual equals also gives them the opportunity to teach you that you're wrong. And you might be wrong. Wouldn't you rather know?
I'm sad to hear that you'd feel manipulated by my reply to the QALY-doubting response, but I'm very happy and thankful to get the feedback! We do want to show that EA has some useful tools and conclusions, while also being honest and open about what's still being worked on. I'll take this to heart.
I feel the need to clarify that none of these responses are meant to be "sales-y" or to trick people into joining a movement that doesn't align with their values. My reply was based more on the idea that we need more skeptics. If they have epistemic (as opposed to ethical) objections, I think it's particularly important to signal that they're invited. My condolences for having gotten such awful advice from whatever organization it was, but that's not how we do things at EA Lund.
I like the tree metaphor! I've always thought of it as a ladder of premises. You climb your way up the ladder, starting with the basics. Every step up the ladder, you lose some people. And if you start really high up the ladder, some people might get confused because they don't understand the fundamental premises.
But I see how the branching metaphor can lead to the different viewpoints.
1. Life & quality of life matter.
2. They can be quantified.
3. More is better, e.g. 2 lives saved is 2x better than 1 life saved. (Effective)
4. You have the ability to do good by improving others' lives & the quality of their lives. (Altruism)
Maybe here it branches...
A. Geography doesn't matter; a life across the world matters the same amount as a life in your country → Global Health.
B. Humans aren't special. Animal lives matter too. → Animal Welfare.
C. Time doesn't matter; a life in the future matters the same amount as a life in the present. + There might be a lot of future humans + Reducing existential risk is the most effective way of addressing this → Longtermism
In an undergrad philosophy class, the way my prof described examples like this was as being about equality of regard or equality of concern. For example, if there are two nearby cities and one gets hit by a hurricane, the federal government is justified in sending aid just to the city that's been damaged by the hurricane, rather than to both cities in order to be "fair". It is fair: the government is responding equally to the needs of all people, and the people who got hit by the hurricane are more in need of help.
For a more realistic example, I talked to one person who said that they'd focus significantly on homelessness in their own city as well as homelessness in Rwanda, because it's unfair not to divide the resources. They're not doing the most good, because they find it more ethical to divide their resources.
So I think your professor's description is good, but I'm not sure it helps discuss egalitarianism/prioritarianism with laymen in their terms. When I say I'd give everything to Rwanda, I'm answering "what does the most good?" and not "what's the most fair/just?" Nonetheless, I'll consider raising this response next time the objection comes up.
I like the branching tree metaphor, and I like the attempt to give these intro conversations a framework. Good post.
Executive summary: This reflective, experience-based post introduces the "EA Tree of Questions" as a conversational tool to help community builders quickly identify whether someone shares the core beliefs necessary for meaningful engagement with Effective Altruism, enabling more efficient and respectful dialogue with skeptics.
Key points:
The "EA Tree" metaphor distinguishes between foundational beliefs (the trunk) and more complex cause-specific ideas (the branches); debating advanced topics is often fruitless if someone doesn't accept the core trunk principles.
Three trunk questions (Altruism, Effectiveness, and Comparability) form the basis for determining if a person is philosophically aligned enough to engage meaningfully with EA ideas.
Practical advice is offered for when to concede, engage, or disengage based on real conversations, aiming to avoid unproductive debates and reduce social costs in outreach settings.
Institutional trust is presented as a later-stage concern that shouldn't be a conversation starter; it matters only after agreement on more fundamental principles.
The post encourages tailoring conversations to a person's values and level of receptiveness, especially when EA can appear demanding or overly quantitative.
The author invites community input and treats the model as a work-in-progress, acknowledging variability in reactions and emphasizing the importance of respectful engagement.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.