We all teach: here’s how to do it better

Epistemic status[1]: There’s consistent meta-analytic evidence for the interventions and models presented here. Still, that research is only one piece of your evidence-based decision-making. I’ve been directive for brevity’s sake. Use your judgement, adapt to the context, and let me know where you disagree in the comments.

Summary: we need to learn to teach better

Education is the #2 focus area for the Centre for Effective Altruism. Many of us educate, from field building to fundraisers, from coffees to conferences. As a result, I think we can do better by applying the meta-analyses on what works from educational psychology. We can use models and strategies that have been shown to work for hundreds of thousands of students, rather than feeling like community building is a whole new paradigm.

I appreciate and admire all those who do community building. If I identify any groups that could improve, it’s because I think they have the skills and track record to do an incredible amount of good. I want them to thrive, and I hope they see any feedback here as constructive.

How to teach effectively, in five steps

For those of you who don’t care about the causal model, or the evidence, the following summary will explain what good teaching looks like. I think we could frame more community building events using these steps. Doing so will build capacity in the community, and build members’ motivation and engagement. This obviously isn’t everything we should be doing to build a community—social events, 1 on 1s, conferences, and reading groups still have their place. But, if we’re trying to build a capable and motivated group, we should try the following more often.

  1. Generally, work backward from some new skills you’re going to help people learn. These are your learning objectives. Knowledge is obviously good, but is most valuable and motivating when connected to important skills.

    1. For example, don’t say “learn about AI” but instead “make well-calibrated forecasts about the future of society” or “make career plans that do as much good as possible”

    2. Make these learning objectives explicit to the learners, so they know what you’re trying to achieve together

  2. For longer programs, consider assessing people’s competence at the end, in a way that helps both you and the learner know how they’re going. This is your ‘summative assessment.’

    1. For example, don’t just run a discussion group with no assessment, and avoid ‘knowledge tests’ like exams. Instead, have them do the skill (like making and justifying some forecasts, or drafting a career plan) and provide some feedback on how they’re going.

    2. If you’re teaching an important skill, then practising it in this way should be motivating. If they’re not doing it, the skill might not be important enough, you might not have explained why it’s important, you might have set the bar too high, or you might have forgotten to craft formative activities.

  3. Craft ‘formative’ activities for them to practise the skill with you. This lets you give them feedback. You don’t learn to play tennis by watching tennis or talking about tennis. It’s hard to learn to volley for the first time by playing a real match. Show them how to volley, then have them hit lots of volleys, then make it progressively closer to a match.

    1. Match the formative assessments to the summative assessments (and therefore, the learning outcomes).

      1. Don’t just use discussions if you want people to finish your program by posting some research on the EA forum.

    2. Piece these activities together so they add up to your ‘summative assessment.’ Don’t require skills on the summative assessment that you haven’t practised together.

      1. If you want people to post research on the forum, break that skill down into parts. Practise each part together so you can provide feedback on their skills.

    3. Add variety to formative activities so they don’t get boring.

    4. Make them easier than you think they need to be. You have probably forgotten how hard it was to learn this skill. Make them progressively harder.

    5. Some formative activities are better than others. For example, at some point consider using concept maps, having peers work together, having them grade each other’s work, or simulating a professional role.

  4. Now think about what knowledge or demonstrations might help people first practise the skill. You can learn to better hit tennis balls by seeing someone show good technique (slowly), or by understanding why top-spin is good. Present new content in small bites that engage both the learner’s eyes and ears.

    1. Our brains are built for multimedia that uses sight and sound. So, in general, video is better than text + pictures, which is better than just text or sound.

    2. Make these simpler than you think they need to be. Use short sentences and simple language. Like with skills, you’ve probably forgotten how hard it was to understand this knowledge the first time. Start with a concrete or relatable example (like ‘humans run the world, not chimps’) before moving to something abstract (like ‘AI could be catastrophic’).

    3. Make it really obvious where learners should focus. Don’t give them more than one thing to focus on at once. Don’t distract them with (too many) memes, jokes, and asides unless they directly contribute to the learning points.

    4. Empathise with misconceptions, but then correct them (like “many people think the top charities are about twice as impactful as the average one, but it turns out the number is closer to 100 or 1000 times”). By focusing your plan on correcting misconceptions, you’re more likely to update beliefs rather than entrenching existing ones.

    5. Intersperse this knowledge with frequent opportunities to think, remember, and apply what you’re saying. Add little multiple choice questions, opportunities for discussion, or ask them to apply the idea immediately to a new example.

  5. Provide warm, encouraging, understanding role-models (including yourself).

    1. Take care with things that might induce guilt or fear. Show empathy for how people might be feeling (like “many people, including me, feel a pang of guilt with this thought experiment”). Check your values align (ask “what’s important to you?”). Provide them with options to better reach their goals (for example, “Things are less scary when we feel like we’re doing something to fix the problem. Here are some things I found helpful.”)

    2. Encourage people for the progress they’ve made rather than making them feel inadequate for not doing enough (“donating 1% to the best charities is amazing. You’re doing more good than most people.”)

    3. Have high expectations but take (at least some) ownership for supporting people to get there (“I know you can get a paper into NeurIPS. My job is to get you there.”; “I’m not sure I explained that very well, can you try to say it back to me?”)

    4. Paraphrase the learner’s position back to them before correcting any misconceptions (like “you’re saying that this is too sci-fi to be real. Is that right?”)

    5. Be careful how you spend your weirdness points: focus on the more relatable parts of your identity so you have enough social capital to push on the more important intuitions (like if your goal is longtermism for the general public, probably don’t also bring up donating 10% and veganism at the same time; instead, focus on a shared appreciation for promoting education, democracy, and protecting the climate).

    6. Tell stories of people, including yourself, who have embodied the values you’re aspiring to cultivate (like altruism, compassion, truth-seeking, collective growth, determination)

We need better models of community building so fewer people bounce off

Community building in EA is a precarious tightrope walk. We want to be truth-seeking but socially accepting. We want to challenge ideas without being adversarial, to push around ideas but not push around people. We want to be able to be avant garde while having many ideas adopted by mainstream institutions. We want to welcome weird thinking without looking so weird that promising people bounce off.

If you’re like me, you’ve seen many promising people bounce off. I’ve seen committed, vegan medical students who are earning to give bounce off their EA groups. People made Jeremy[3] feel he wasn’t doing enough to help others.

I feel the tension. I want Jeremy to see whether he’s a fit for direct work too. We want to help people feel comfortable challenging their career plans. But, we also want value-aligned people like Jeremy to feel like they belong. If we don’t, then we won’t be able to help them challenge those plans.

That EA group was doing all the right things. They were tabling, had a fellowship, were running intro events, and doing 1:1s. But, I don’t think they knew what it meant to do those things well. Some groups measure outcomes: ‘people go from table to email list, from email list to event, from event to fellowship’. But, if we only focus on outcomes, then it’s hard to understand how to get better at the process. Why is Nikola better at tabling than Trevor? A better theory of change would help people know what they should be trying to improve. How do we create highly engaged EAs?

If we don’t know why some things work, how can we measure if it’s working, and how do we choose what to prioritise, or what to change?

A ‘highly engaged EA’ is someone who has internalised the values and knowledge of the EA community, and is autonomously motivated to act on them

Clear, impartial, and prosocial decisions—like dedicating your time, your money, or your career to the causes EAs care about—can require sacrifice and emotional strength. To help people make them, I think we need:

  1. People who have internalised the values of EA (including the value to challenge ideas)

  2. People whose life goals align with the intrinsic values of EA (like wanting to have an impactful career, to build the community, do the most good through their donations)

  3. People who are motivated to act on those values for their own reasons, not out of guilt or pressure or to seek status, and

  4. People who feel the agency and confidence to take action on the basis of those values, rather than feeling paralysed by impending doom or intractable problems

    1. Ultimately, this likely leads to behaviours like choosing a career aligned with EA ideas, donating money to effective charities, and attending community events, but a ‘highly engaged EA’ doesn’t have to do all of these.

There are well-supported psychological theories that explain how people internalise values, how they set goals, and how they develop motivation, confidence and agency. In this post I’ll explain self-determination theory (or SDT) applied to these questions, and the data that supports SDT in application.

Self-determination theory is a very well supported model of how to develop motivation, agency, and confidence

SDT explains many motivational puzzles, and sheds light on several controversies in EA. What happens if we guilt people with the drowning child? Should we encourage or discourage people from identifying as ‘effective altruists’?

Importantly, the evidence for SDT is compelling. Here’s an abbreviated list of meta-analytic findings on SDT in practice. For reference, in psychology, a ‘large’ effect size is a correlation of 0.3 (r = .3; about 10% of the variance explained [r² ≈ .10]), or an intervention that increases an outcome by about 0.8 standard deviations (d = 0.8). These benchmarks are important for putting the findings below in perspective. Any jargon used here I will explain in the following sections.
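To make those benchmarks concrete, here is how the numbers relate (the conversion formulas are textbook statistics, but the worked arithmetic is my addition, not from the meta-analyses below):

$$r = .30 \;\Rightarrow\; r^2 = .09 \approx 10\%\ \text{of variance explained}$$

$$d = 0.80 \;\Rightarrow\; r = \frac{d}{\sqrt{d^2 + 4}} = \frac{0.80}{\sqrt{0.80^2 + 4}} \approx .37$$

That is, a ‘large’ correlation and a ‘large’ standardised mean difference describe effects of roughly the same size.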

  • The ‘autonomous’ or ‘self-determined’ types of motivation described in SDT robustly predict outcomes we care about. A meta-analysis of 223,209 students found self-determined forms of motivation have very large effects on engagement (r = .57), effort (r = .51), and enjoyment (r = .56). Effects on objectively measured academic performance are small but reliable (r = .1).

  • We can build those ‘autonomous’ types of motivation by supporting psychological needs. Another meta-analysis of 79,000 students found that the level of ‘psychological need support’ explains about 30–40% of the variance in autonomous motivation. This is 3 to 4 times the size of a ‘large’ effect in psychology.

  • Educators can learn to support these psychological needs. A 10-year-old meta-analysis of 20 studies showed teachers can learn to become more supportive of psychological needs (d = 0.63) in 1–3 hours. In a meta-analysis of all interventions to improve motivation, those informed by SDT had the second biggest effect sizes (d = .7 across 11 studies; the biggest, d = .74, came from only 4 studies on ‘transformative experiences’)

  • The same patterns show up in leadership, health, and physical activity. SDT interventions change how supported people feel. Feeling supported makes people more motivated, engaged, and happy.

    • For example, this graph from a meta-analysis of 32,780 employees shows SDT at work. Good leaders make their followers feel competent, connected, and in control. Those followers are then more motivated, happier, and perform better.

Understanding SDT helps us know what we’re trying to do in community building. We’re trying to support people’s psychological needs as they learn how to improve the world. If we target those needs, and measure how well we’re doing it, we can help people productively and happily do as much good as they can.

SDT describes how motivation varies not just in quantity, but in quality

When I was 18, I was on a date and a Hare Krishna gave me a book. Distracted by my teenage infatuation, I accepted the book and kept walking. In a classic ‘persuasion’ move (like from Cialdini’s Influence), they then asked for a donation. I obviously paid it to avoid looking stingy in front of my date. I then tossed the book, and saw the Hare Krishna pick it up and repeat the ruse on someone else. The ruse got the donation, but if we did the same for the Against Malaria Foundation, why would it make us cringe? Isn’t $10 from a distracted man still life-saving bed-nets for people in need?

Controlled types of motivation are weak, short-term, and fragile

Many donation-solicitation methods involve guilt and pressure to get a short-term outcome (see our meta-review here). We can pressure people into motivation (external regulation) or have them internalise beliefs that make them feel guilty (introjected motivation). Both of these do create short-term behaviour change.

But, these controlled forms of motivation are not what I think we—as a community—would hope for. They are why charity solicitation has developed a bad name. They’re ‘controlled’ because we feel motivated by a force that doesn’t align with our values. They generally lead to short-term motivation, or even covert resistance.

That’s not to say we should never do things that lead to guilt. Extrinsic forms of motivation can provide a strong impetus when they’re also value-aligned. People use Beeminder to provide extrinsic motivation that’s aligned with what’s important to them (e.g., to exercise). That’s very different from the time my mum paid me $10 every time I lost a kilo of weight. Her incentive turned into an excellent money-spinner: I would lose a few kilos, buy a new video game, put the weight back on again, and the cycle would repeat. The problem here was that the extrinsic motivation didn’t match what was important to me.

Autonomous motivations are the sustainable, value-aligned motivation of highly engaged EAs

So if controlled motivation is weak, what are the alternatives? Why do we play video games, make music, go bouldering, do yoga, or donate to charity when there are often few extrinsic rewards?

It’s because these are autonomously motivating. Intrinsic motivation is a special type of autonomous motivation: it’s where we’re driven purely by the joy of the activity itself. This is more relevant for sport, music, art, or gaming, where people often find the activity fun. But, donating to charity isn’t fun. Eating healthily and intense cardio aren’t fun. These are driven by different types of motivation. We often do these things because they’re aligned with our personal goals and values (known as identified motivation), or because they’re part of who we want to be (known as integrated motivation).

These better explain why people, for example, might sustainably avoid eating animal products. It’s not fun to have a less varied diet, and most people I know don’t avoid animal products out of guilt or pressure. People do it because they care about animals and the planet, and want to minimise their impact on both (which is identified motivation). Or, they identify with being vegetarian or vegan; that is, it’s part of their identity (which is integrated motivation). All of these ‘autonomous’ sources of motivation—identified, integrated, and intrinsic—are sustainable and distinct from extrinsic motivation because they mean the behaviour aligns with your values.

To promote highly engaged EAs, we want people driven to act by these autonomous sources of motivation. We want them to look for the best way to impartially do the most good (and take action on it) because that intellectual project aligns with their personal goals and values.[4] This framing helps answer some of the controversies in EA already.

  1. Do we want people to experience guilt in response to the drowning child metaphor? Maybe, but only if the person already sees how EA aligns with their values. If they don’t see that alignment, you might get short-term compliance or a hefty dose of resistance.

  2. Do we want people to identify as ‘an effective altruist’? Yes, as long as that identity also includes the values of truth-seeking and a scientific mindset. As outlined in Scout Mindset, it’s true that some identities do cloud our ability to see the truth, but most agree an EA identity explicitly involves truth-seeking as a value. If we discourage people from holding EA identities, we may be turning them away from a powerful source of sustainable motivation. Not using EA as an identity is like not using nuclear power. Yeah, sometimes it causes a meltdown, but most of the time it’s safe, low-carbon, base-load energy.

Right, so autonomous motivation good, controlled motivations (generally) bad. But how do we, as a community, promote autonomous motivation?

We say ‘educate, don’t persuade’, but what’s the difference? Support people’s psychological needs

I like the idea of ‘educate, don’t persuade.’ It rings true to me. But what does it mean? Education and persuasion both involve one person communicating ideas or beliefs to another person. When is communicating about, say, factory farming ‘education’ and when is it ‘persuasion’?

Persuasion has a bad reputation because it often fails to support people’s psychological needs, like their freedom of choice. ‘Educating’ feels more supportive of these needs.

The three most commonly accepted basic psychological needs are:

  • a need to feel like you can align your behaviour to what you value (the need for ‘autonomy’),

  • the need to feel like you have what you need to achieve your goals (the need for ‘competence’),

  • and the need to belong and feel connected to others (the need for ‘relatedness’).

When we satisfy these needs, we motivate people over the long term. When we don’t, we might get short-term compliance, but nothing sustainable.

If you’re having dinner with a friend and they ask you where you donate, you probably have a good feel for their values. If you talk about why you donate to the Long Term Future Fund, but, given what you know about them, acknowledge why they might prefer the Global Health and Development Fund or the Top Charities Fund, that feels more supportive. You’re connecting with them, acknowledging what they value, and giving them a path to effectively achieve those goals.

If you instead say “anything you do in the present is probably meaningless over the long term, so you should only really donate to the Long Term Future Fund or you’re neglecting to protect your great-grandchildren…” that feels more like bad ‘persuasion’. You’re ignoring what they value, making them feel hopeless toward achieving their goals, and guilting them into doing what you value.

I don’t think people in the community do the latter very often. However, consistently supporting psychological needs is a very difficult challenge with a mixed audience. That is, it’s hard to appeal to people’s values when their values might differ. It’s hard to give all people the information they need to achieve their goals when their goals diverge. A place where you belong and feel connected might be one where I feel alienated.

We’re more likely to be open to the views of others when we connect with them—when we feel like we belong in their group. Both the source of the message and the message itself matter to whether or not we believe it. For example, pro-environmental messages were only persuasive to conservatives when coming from a conservative source (even if the message was tailored to their values). I focused on autonomy first because I think value alignment is a better method of signalling in-group relationships than external cues (e.g., appearance). I don’t think EAs should wear MAGA hats to reach Republicans in the US. I think they’re better off aligning on the benefits of market solutions, or the desire to make government spending decisions cost-effective. We can make EA ideas helpful to many groups.

Still, EA won’t be a supportive place for everyone. Most people want to do good, and as Hilary Greaves says:

“All you really need is some component of your moral philosophy which acknowledges that making the world a better place is a worthwhile thing to do. And that’s really all you need to get the effective altruist project off the ground.”

We could focus on this shared common ground, helping people better live by their values. Instead, I’d hazard that we bounce people because many of us focus on the counterintuitive or demanding conclusions of EA (e.g., AI doom). Yes, we make normative arguments (e.g., ‘it’s better to do more good than less’). But, we don’t have to make normative arguments in a controlling way.

By being ‘need supportive’, we can help people feel understood, competent, and empowered. If they feel this way, they are more likely to listen to your ideas. It makes them more likely to take on new beliefs and values (e.g., a more impartial sense of altruism). Then, as they become more aligned, it’s easier to support their psychological needs. It creates a virtuous cycle.

So, this post aims to outline some gaps between what the research says about building long-term motivation, and what sometimes happens at EA events I’ve been to.

Guilt and pressure are bad long-term motivators.
Use existing values instead.

Like many EAs, I found Peter Singer’s Drowning Child thought experiment persuasive. It was certainly part of my initial drive toward donating to global poverty, and those donations were central to becoming an EA.

It also took me to a pretty dark place, where I felt I couldn’t justify my life decisions or minor purchases, once converted into lives I could have saved. Some EA forum posts even suggested we use this as a persuasive appeal (see the ‘currency’ described in this post; I found the currency too unpalatable to repeat here). It took a few good years of therapy, supportive colleagues, and some quality writing from the EA community[5] to get me out of that space into something more sustainable.

I also noticed the drowning child metaphor raised other people’s hackles. Yes, I needed to be more tactful with some of them, but others just found the conclusions too demanding to even entertain. I don’t think my experiences here are unique. I think they align with the research around motivation.

The problems with guilt or fear appeals are two-fold:

  1. They rely upon the recipient feeling you have a shared set of values. In cases where that isn’t explicit, or when change is hard, guilt- or fear-based appeals can lead to resistance, both to the messenger and the idea. For example:

    1. The Behavioural Insights Team’s report on changing diets summarised a range of studies showing guilt-based appeals can backfire, and pride and positivity appeals are more likely to work. For example, for most people, moralising around meat can backfire: people often eat more meat in the month afterwards.

    2. Interventions that target controlled motivations tend not to work for reducing criminal reoffending (see this meta-review for effects across a range of interventions; a classic example of this failure, well known to the EA community, is the harm caused by Scared Straight, where shaming juvenile offenders led to higher rates of recidivism)

    3. Our recent meta-analysis of 167 studies showed that—across cross-sectional, longitudinal, and experimental designs—controlled motivation did not increase prosocial behaviour, but increased antisocial behaviour. Autonomy-supportive environments were what increased prosocial behaviour (preprint here).

  2. Other meta-analyses have found that guilt and fear don’t harm engagement or performance (see meta-analyses in work and in education). But, even if people work hard, I worry those forms of motivation may lead to mental health problems and burnout in the community. For example:

    1. Obsessive passion is where someone is strongly committed to an activity but for controlled reasons (e.g., driven by guilt). A meta-analysis of 1,308 effect sizes showed obsessive passion was associated with higher burnout, negative affect, anxiety, rumination, and activity/life conflict.

    2. A meta-analysis of a range of health behaviours found guilt-based motives do lead to positive health behaviours, but also lead to negative mental health outcomes. Similarly, experiences of guilt and shame are strongly correlated with depressive symptoms (meta-analysis of 108 studies).

Whether we’re trying to educate about the importance of the long-term future, the plight of factory farmed animals, or the opportunities to improve global health, there are better alternatives to guilt and fear.[6] Many of the core claims of EA are intuitive to most people, for example:

  • all else being equal, we should do more good rather than less;

  • science and reason are good methods for identifying what works;

  • we’ll make better decisions if we see the world as it is, rather than as we want it to be;

  • future generations matter;

  • animals can suffer;

  • luck determines where and when you were born;

  • it’s good to help others;

  • it’s better to help them with what they need rather than what is nice for us to give.

EA provides some ideas that might be novel to many people, but aligning on the intuitive claims first establishes value-alignment. Some of these might sound like applause lights, but they serve to align us with the audience. As pointed out by others, EA can sound less weird, if we want it to. We should spend our weirdness points more wisely. We can frame EA in ways that explain it accurately, clearly, and convincingly (e.g., Gidon Kadosh’s piece).

Using values and avoiding guilt are just part of creating a ‘need supportive environment’

We had over 30 world experts in self-determination theory agree on what makes a motivating learning environment.[7] They agreed on a range of strategies that would produce moderate-to-strong improvements in motivation. Expert opinion is relatively weak evidence, but the goal here is to make more concrete what a ‘need supportive’ environment looks like. I’ll focus below on the less obvious strategies.

For more detail, I made a short video series talking about how to satisfy these psychological needs. I’ll explain the key ideas in the rest of this post, but if you want more, you might find this playlist of 5 short videos (5 minutes each) useful, or this one-page summary.

Educators that support autonomy tend to…

  • Provide rationales that connect to the learners’ values

  • Explain the real problem in the world you’re hoping to help the person solve, how they’re going to learn to solve it, and how they’ll know they’ve learned it

    • The technical term here is ‘constructive alignment’: basically, work backwards from the real world problem, to the learning objective(s), to the assessment(s), to the learning activity. For example:
      • “As you’ll learn in this program, AI is likely to become a powerful force in the world, and we don’t yet know how to build it safely.” (real world problem)

      • “In this program I want to help you identify whether you think you might fit a career in AI safety.” (specific learning objective)

      • “At the end of the program, you’ll have an opportunity to do a 4-week project to test your fit for this kind of work. I will help you craft something that fits your interests.” (assessment task)

      • “Today, to help prepare you for that project, we are going to…” (explain rationale for the learning activity)

    • In EA education and outreach, these four components often do not align well. For example:

      • We have fellowships that are almost exclusively discussions, but projects (‘assessments’) that are almost all writing. If I need to learn to write for the project, the learning activities should help me practise my writing, with opportunities for feedback.

      • In an AI and Economics course I started, the content pivoted to the history of Claude Shannon and Information Theory. For me to stay motivated, I need to understand why learning about Information Theory (the learning activity) helps me solve economic problems around AI (the real world problem).

  • Use the language of choices, empathy, and values instead of ‘shoulds’ and ‘musts’

    • It’s okay to make normative and scientific claims, but the language can be more or less autonomy supportive. There’s a big difference between a doctor showing empathic concern about a client’s drug use and one lecturing them about what they ‘must’ do.

  • People really don’t like feeling judged and pressured. So, instead of telling them what they ‘should’ or ‘must’ do, try…

    • Talking about what others do: “Many people I know have made small changes that make a big difference for animals.”

    • Using if-then rationales: “If you want to have an impact with your career, 80k has some great resources.”

    • Preemptively empathising: “Some people find Singer’s argument pretty demanding. What did you think?”

To build a sense of belonging and relatedness, educators can… (see the one-page summary linked above for this list).

And to make people feel competent, we can use the hordes of meta-analyses on what improves learning. There are dozens of these that are relevant to community builders. I’ll turn to those now.

If ‘educate’ is the goal, use evidence-based education strategies

There are many repositories of evidence-based teaching strategies, most of them targeting school children (e.g., Visible Learning, the Educational Endowment Foundation, and Evidence for Learning). Adult learning can look and feel different from school classrooms. As a result, we’ve created a toolkit summarising the meta-analytic evidence for what works in adult learning (usually universities) here.[8] See these links if you want to know why meta-analyses are important for teaching, how we tried to make them more accessible, and how to use them in your teaching.

These strategies are not all common sense: many educators use strategies that have been shown to reduce learning. For example, I still hear people espousing so-called ‘facts’ about learning styles (e.g., ‘I’m tailoring this to visual learners’). Learning styles don’t work—instead, wherever possible, present words as speech with images alongside them. Make the language as simple as you can, because you’re probably plagued by the curse of knowledge. That means—as is often recommended on the forum—that we should avoid using jargon (see Michael Aird, Akash Wasil, Rob Wiblin) unless the goal of the session is to learn that jargon (e.g., ‘instrumental convergence’ in the AI Safety Fundamentals course) or we know everyone has already covered it (e.g., later weeks of that course).

More controversially, meta-analyses show that too many jokes and memes distract an audience: people become more likely to remember the joke than what you’re trying to teach. This is controversial because, you know, funny stuff is funny. For example, I love Last Week Tonight and Robert Miles’ AI safety videos. The jokes make things more engaging—but I do often notice myself attending to a meme and forgetting what the presenter was saying. I obviously don’t think we should eschew humour completely—I still add terrible jokes to my videos—but we need to be careful to make the jokes relevant to the learning objectives, so the learning still comes through.

Other—possibly obvious but often neglected—evidence-based strategies include:

Learn how to use multimedia so it doesn’t overload your audience

Even if you’re not making curricula or designing virtual programs, you’re probably trying to communicate your ideas publicly. You might be writing on the forum, in which case learning to write in a clear and compelling way is going to increase your impact (HT Kat Woods). But writing is an inefficient mechanism for communication. Our brains are designed to hear words[9] and see pictures (which is why videos work better than many other forms of teaching and communication). There are a handful of strategies shown to improve learning from multimedia from hundreds of experiments (see our meta-review from last year). I’m going to practise what I preach here and direct you to one of three multimedia methods for learning these principles. The best is this video playlist (five videos, roughly five minutes each) that walks through how to make multimedia that works. If you’re strapped for time, this twitter thread covers all the evidence-based strategies. If you don’t want to leave this post, this one-page summary covers most of the key ideas.

If possible, try to use a narrative arc that makes the problem concrete, hits an emotion, includes something unexpected, and ends with a concrete resolution. My adaptation of Chip and Dan Heath’s excellent book (‘Made to Stick’; my video summary here) involves the following five steps:

  1. Problem with emotion: who’s the hero? What problem do they face? How did they feel?

  2. Stuck point with misconception: where did they get stuck? What unhelpful belief kept them stuck?

  3. Empathy then refutation of the stuck point or misconception, made by someone credible: show you understand why people hold the misconception. Then, explain why it’s wrong.

  4. Principle for next steps, using an analogy: what should the hero believe instead? Ideally, provide an analogy to the solution.

  5. Solution to the original problem, with a new and positive emotion: how did the hero win using the new belief? How did they feel afterwards?

Zan Saeri used some of my old video scripts to turn this into a GPT-3 preset. Have fun.
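If you’d rather tinker with the idea yourself, here’s a minimal sketch of templating those five steps as a prompt, using the current OpenAI Python client. To be clear, this is not Zan’s actual preset: the prompt wording, model name, and example topic are my own illustrative assumptions.

```python
# A minimal sketch of templating the five-step narrative arc as a
# language-model prompt. Illustrative only: the prompt wording, model
# name, and topic are assumptions, not Zan's actual preset.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

NARRATIVE_ARC = """Write a short, concrete story to teach this idea: {topic}

Follow these five steps:
1. Problem with emotion: who is the hero, what problem do they face, and how do they feel?
2. Stuck point: where do they get stuck, and what unhelpful belief keeps them there?
3. Empathy then refutation, from someone credible: show why the misconception is understandable, then explain why it's wrong.
4. Principle for next steps, using an analogy.
5. Solution to the original problem, ending on a new, positive emotion."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; any capable chat model works
    messages=[{
        "role": "user",
        "content": NARRATIVE_ARC.format(
            topic="the best charities can be 100x more impactful than average"
        ),
    }],
)
print(response.choices[0].message.content)
```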

In summary, we’re all teachers.
Learn to do it well.

So much of community and field building involves what great teachers do: we share important ideas, make a motivating environment to explore the ideas, and create activities that help us engage with them. It’s not the only thing, of course, but by trying some of the ideas in this post, you can make this part of your community building more effective and evidence based. In doing so, we can better grow the community of people who are committed to EA.

If you want to study how to do this better, reach out.

I’m an academic in the School of Psychology at the University of Queensland. I think building the EA, rationalist, x-risk, and longtermist communities is a very important pathway for doing a huge amount of good. If you’re interested in this and want to do research or run projects in this space, don’t be afraid to reach out via noetel at gmail dot com

Thanks to the following people who provided constructive feedback on this post.

Alexander Saeri, Peter Slattery, Emily Grundy, Yi-Yang Chua, Jamie Bernardi, and Sebastian Schmidt were all autonomy supportive teachers that helped me improve this post. Mistakes are my own.

  1. ^

    I’m acutely aware of the irony that a forum post including prohibitions against jargon contains jargon as the first two words.

  2. ^

    Per Ivar Friborg has also published a great summary of SDT for EA here

  3. ^

    Gender and name randomised for anonymity

  4. ^

    This aligns well with Luke Muehlhouser’s conceptualisation of Effective Altruism as a project that some people are excited about, but others aren’t, and I think that difference is the degree of alignment between the project and their own values

  5. ^

    There were too many forum and blog posts to name each one that helped me become more sustainably motivated, but if I had to choose one, it’d be Julia Wise

  6. ^

    I’m obviously not the first person to suggest we should replace guilt (e.g., Nate Soares)

  7. ^

    The experts also agreed on mistakes some exhausted, defeated teachers might make, like yelling at students, but I didn’t include them here because I can’t imagine an EA community builder making the same mistakes.

  8. ^

    We haven’t finished porting INSPIRE from our beta version to the proper website, so I actually prefer the user experience on this version on Notion. Still, I’ve linked to the new website so this post is more likely to be evergreen.

  9. ^

    Another hat tip to Kat and Nonlinear for the Nonlinear Library. Not only are podcast versions of the forum more practical for many people, we’re likely to learn just as well, if not better (assuming the posts don’t have many images that we’d miss). The Astral Codex Podcast does an amazing job of this where Solenoid Entity explains the graphs in exquisite detail, and has them show up on your podcast player (🧑‍🍳😙🤌)