A framework for discussing EA with people outside the community

Summary

In this post, I ask: Is there an existing framework or communication norm within EA for discussing EA ideas with people we know outside of the community? If not, is it worth investing time in developing one?

The crux of this idea is that I believe accurately communicating EA values to one other person could potentially double your impact (presuming the person you convince has the same potential for impact as you). And for most people in your life, you may be the best person they could hear about EA from, since you know them well enough to give a customised explanation, and they’re more inclined to listen to your advice.

I’ve read advice in this area along the lines of “different approaches work for different people”, which I totally agree with. I’ve also read that just providing information doesn’t necessarily move people to change—it’s the Aha! experience, the moment of insight, that counts. I’ve found this Communicating about EA guide useful, as well as this article on practical debiasing. But the difference between those guides and what I’m looking for is that I want a way to figure out which approach will work best for a particular person.

This framework might look something like:

  1. Recognise an opportunity to bring up EA

  2. Understand this person’s values and current beliefs

  3. Establish a space where this person feels comfortable/​safe changing their mind

    a. Help to dissociate a person’s reasoning from their identities.

    b. Ask them to talk through their beliefs or their understanding.

    c. Depoliticise topics and establish some “skin in the game” to help people be open to changing their mind.

  4. Try to understand what the Aha! moment might be for this person and help them reach it

Introduction

Books are generally more reliable than we are at communicating EA ideas. When the opportunity to discuss EA has come up in conversation, I’ve tended to prefer pointing people to articles, books, etc. that introduce EA, as they can introduce ideas and build on them in a logical way (whereas conversations can quickly derail and go down tangents, ideas can be taken out of context, and we’re wrestling with all sorts of cognitive biases). However, over time I’ve realised that getting people to read books and take them seriously is hard, so having conversations about EA is important.

So far, whenever the opportunity to discuss EA ideas has come up in conversation with friends or family, I’ve always sort of floundered. The conversation can easily derail as people bring up various objections, which makes it hard to introduce and build on the ideas the way a book can.

However, I feel like people might engage with these ideas if they’re presented in the context of what that particular person cares about. For example, someone might ask me during dinner: “Why did you go vegan?” Veganism has several arguments in its favour—animal autonomy, climate change, the knock-on impacts on human suffering, etc. This would be a good opportunity to first explore their current understanding of why people go vegan (in my experience, people have assumed veganism is for personal health reasons rather than reducing animal suffering or climate change). From there I could try to find out what they care about, then discuss the most powerful ideas (for them) first.

I want to propose some initial ideas for how this framework might look, but they need a lot of work; my hope is that they will eventually become a concrete, clear framework that can be used.

1. Recognise an opportunity to bring up EA

From personal experience, I’ve found a few situations where a conversation with a friend, family member, or colleague might lead towards discussing EA:

  • We’ve been discussing social justice generally (or reacting to news)

  • Someone collecting for charity has approached us/​we’ve seen a TV advert for a charity

  • I’ve been asked why I’m vegan during a meal

There may be many more examples you could name and recognise in the moment, so I probably don’t need to write an exhaustive list. Essentially, I’d just like to emphasise that I’m interested in a framework for discussing EA in the context of the friendly “pub chat”—this is not a framework for a recruitment drive; it’s about being best prepared to discuss EA should it come up organically.

2. Understand this person’s values and current beliefs

I believe that after recognising an opportunity to discuss EA in conversation, the next step should probably be establishing this person’s moral beliefs and what they want to see in the world (i.e. whether they would find meaning in contributing to global health, animal welfare, longtermism, etc.). What does the person value now, and how is this understanding tied to their sense of identity?

This raises the questions:

  • How do you map out what someone already believes?

  • How do you make it clear to them that you understand their views?

  • How do you convey that you think you can help them do more good in a way that aligns with their current values?

Of course (to quote The Good Place), the exact opposite may be true!

Should you instead start with a standard “elevator pitch” for EA, and then follow up on whichever parts seem to catch the listener’s interest? There are pros to this strategy as well — I’ll go into more detail in future sections.

3. Establish a space where this person feels comfortable/​safe changing their mind

In the chapter on Reason in Enlightenment Now, Steven Pinker discusses a few ideas that I like and think might help build this framework. I’ve tried to succinctly state each idea from the book, then present a takeaway that could be applied.

(A lot of the text here in italics is lifted directly from the book, with some words removed for brevity; there’s much more in the book that’s useful if people are interested.)

a. Identity-protective cognition, motivated reasoning and cognitive dissonance reduction

When people are first confronted with information that contradicts a staked-out position, they become even more committed to their original position. Feeling their identity threatened, belief holders double down and muster ammunition to fend off the challenge. As the counter-evidence builds up, the dissonance can mount until it becomes too much to bear and the opinion topples over (the affective tipping point). This tipping point depends on the balance between how badly the opinion holder’s reputation would be damaged by relinquishing the opinion and whether the counterevidence is so blatant and public as to be common knowledge.

My takeaway here is: Help to dissociate a person’s reasoning from their identities.

For example, an identity of “meat is manly” might be holding someone back from being open to discussions about veganism. It should be possible to create a safe discussion where you can deconstruct this idea with someone. Additionally, the fact that you’re discussing this with someone you have an existing relationship with will hopefully help create a safe space for them to explore and dismantle a harmful identity.

b. Talking things through fully

People understand concepts only when they are forced to think them through, to discuss them with others, and to use them to solve problems. People don’t spontaneously transfer what they learned from one concrete example to others in the same abstract category. Students in a critical thinking course who are taught to discuss the American Revolution from both the British and American perspectives will not make the leap to consider how the Germans viewed World War I. With these lessons about lessons under their belt, psychologists have recently devised debiasing programs that fortify logical and critical thinking curricula. They encourage students to spot, name, and correct fallacies across a wide range of contexts. Practices of successful forecasters have been compiled into a set of guidelines for good judgment (for example, start with the base rate; seek out evidence and don’t overreact or underreact to it; don’t try to explain away your own errors but instead use them as a source of calibration). These and other programs are provably effective: students’ newfound wisdom outlasts the training session and transfers to new subjects.

...the mere requirement to explicate an opinion can shake people out of their overconfidence—the Illusion of Explanatory Depth. When people with die-hard opinions on Obamacare or NAFTA are challenged to explain what those policies actually are, they soon realise that they don’t actually know what they are talking about and become more open to counter-arguments.

My takeaway here is: “If it isn’t said out loud, I don’t have to deal with it.” Ask the person to talk through their beliefs or their understanding. Ask open-ended questions and help them think their views through completely.

Using veganism again as an example, I believe most arguments against veganism would fall apart here if someone had to fully articulate and justify their view. The line “people don’t spontaneously transfer what they learned from one concrete example to others in the same abstract category” really sticks out for me. This is the “Make the Link” argument people make about veganism—transferring someone’s empathy for dogs, cats, and other animals to the animals being raised to be killed and eaten (this article on conflicted omnivores is very interesting and deals with similar issues). But the goal here is not to catch someone out; the goal is to give this person a non-judgemental space to fully explore their understanding—something they may not have previously had the opportunity to do.

This can be a bit of a tightrope walk—people know when their beliefs are being challenged, and when they’re being pushed toward a conclusion they don’t like. There’s also a difference between being asked to explain something factual (e.g. NAFTA) and a matter of personal philosophy or ethics: people aren’t under the same explanatory pressure when something is “just what I believe”. It’s important that these conversations happen in a context that isn’t just “safe”, but is comfortable and pleasantly engaging—two friends talking, rather than an authority figure or expert talking to a layperson.

Establishing early in a conversation that you’re both discussing these ideas with a scout mindset would help decouple the ideas from personal values. It’s important to make clear that you’re not trying to shoot them down and that you don’t think you’re better than them. The goal is to establish the truth: you have some potentially interesting ideas to share, and they might have an understanding you’re not aware of. We’re searching for that too!

c. Skin in the game

People are less biased when they have skin in the game and have to live with the consequences of their opinions. “Contrary to common bleak assessments of human reasoning abilities, people are quite capable of reasoning in an unbiased manner, at least when they are evaluating arguments rather than producing them, and when they are after the truth rather than trying to win a debate.” When issues are not politicized, people can be altogether rational. Experiments have shown that when people hear about a new policy, such as welfare reform, they will like it if it is proposed by their own party and hate it if it is proposed by the other—all the while convinced that they are reacting to it on its objective merits.

The factual state of affairs should be unbundled from remedies that are freighted with symbolic political meaning. People are less polarized in their opinion about the very existence of anthropogenic climate change when they are reminded of the possibility that it might be mitigated by geoengineering than when they are told that it calls for stringent controls on emissions.

My takeaway here is: Depoliticising topics and establishing some “skin in the game” can help people be open to changing their mind.

I’m especially drawn to the idea of establishing skin in the game and exploring how to get people to live with the consequences of their opinions. It makes sense that if people have to live with the consequences of a decision, they will spend more time critically reviewing and understanding it.

But I’m unsure how this might look in practice, or how you would establish it. Theoretically, you could make someone confront the consequences of meat-eating by getting them to watch videos of factory farms, but this doesn’t seem particularly compassionate or useful. “Skin in the game” might be a hard thing to establish in casual conversation, but thought experiments like The Drowning Child have helped challenge me in the past to realise what I owe to other people. So thought experiments might be a useful tool for getting people to ask themselves whether they have an obligation to act.

d. Techniques

So broadly, I think asking someone what they believe, their reasons for believing it, and what would cause them to change their mind would be a great place to start. From there, you can tailor the discussion to their values and how they make decisions. Here are some other good techniques suggested in Enlightenment Now:

  • Ask people to switch sides in a debate and argue the opposite position

  • Have people try to reach a consensus in a small discussion group, forcing them to defend their opinions to their group mates (with the truth usually winning)

  • Adversarial collaboration—work together to get to the bottom of an issue, setting up empirical tests that people agree beforehand will settle it.

4. The Aha! moment

I think the goal ultimately is to set the stage as much as possible for an Aha! moment. To almost be the mediator between someone and the ideas of EA. To create a safe space for this person to explore these ideas. I think most people change significantly when they have an Aha! moment rather than when they’re presented with lots of information. Like alcoholics resolve to get sober when they hit some sort of rock bottom or other significant realisation. Or people resolve to lose weight when there’s some sort of paradigm shift in their relationship to food or their body.

My paradigm shift for veganism (sorry to keep bringing veganism up, it’s just useful for examples!) was the book Sapiens by Yuval Noah Harari. In it, he discusses animals’ capacity for inner lives and the suffering we subject them to on factory farms. It was something I hadn’t considered before (and had never been asked to explain out loud, so I’d never been challenged on it). It was all explained in a very non-judgemental way, and it connected the dots for me in a way that being exposed to veganism in other contexts never had. Additionally, it kind of came out of left field: it was a history book… I wasn’t seeking out information to challenge my perception of veganism, so I’m very thankful that this idea was challenged for me, and in a very kind, thoughtful way. Other people may have gone vegan from watching a documentary, etc.—that was their Aha! moment.

(What I’m trying to say here is that I don’t want this framework to be seen as manipulative. I very much view it as a kindness that someone pointed out how my actions were affecting the world around me in a way I hadn’t realised, and gently guided me towards changing them. Many arguments made by vegans present these ideas as if they were obvious, which I can find very off-putting. I often ask myself, “If I read this when I wasn’t a vegan, would it lead me to change my mind?” The answer is often no. Conversations can too often become a kind of gotcha! in an attempt to win, rather than to find the truth.)

A key goal of this framework should be to try to find the Aha! moment with this person, with the distinct advantage that this is someone you know and have a relationship with, rather than the blanket approach of introducing someone through an article or book.

This is my first longform post, and it’s way longer than I was expecting! I’m excited to read any and all contributions! :)

Many thanks to Aaron Gertler for feedback on drafts of this post.