A Semester-Long Course In EA

Hi, my name is Nick Whitaker. Emma Abele and I run the EA chapter at Brown University. We are building a semester-long accredited EA course based on Stephen Casper’s (Harvard EA) Arete Fellowship curriculum. I’d love to hear people’s thoughts on the course. In particular, we have been trying to strengthen the week on suffering, so any advice on that would be welcome. When we taught the Arete course, many students asked questions like, “What is consciousness/sentience/suffering?”, so we want to preempt those.

Calendar:

  1. Th Jan 23: Welcome

  2. Tu Jan 28: Guest Speaker #1

  3. Th Jan 30: Principles of EA

  4. Tu Feb 4: Tools of EA

  5. Th Feb 6: Guest Speaker #2

  6. Tu Feb 11: Moral Psychology and EA

  7. Th Feb 13: Guest Speaker #3

  8. Tu Feb 18: no class [long-weekend]

  9. Th Feb 20: Hard Questions in EA

  10. Tu Feb 25: Careers

  11. Th Feb 27: Guest Speaker #4

  12. Tu Mar 3: Suffering

  13. Th Mar 5: Guest Speaker #5

  14. Tu Mar 10: Addressing Human Suffering

  15. Th Mar 12: Guest Speaker #6

  16. Tu Mar 17: Addressing Animal Suffering

  17. Th Mar 19: Guest Speaker #7 & Midterm project (Career Analyses) due

  18. Tu Mar 24: no class [spring break]

  19. Th Mar 26: no class [spring break]

  20. Tu Mar 31: Preventing Suffering Through Progress

  21. Th Apr 2: Guest Speaker #8

  22. Tu Apr 7: Allowing Flourishing to Continue: X-Risk

  23. Th Apr 9: Guest Speaker #9

  24. Tu Apr 14: The Wide World of EA

  25. Th Apr 16: Guest Speaker #10

  26. Tu Apr 21: Topics Submitted by Students

  27. Th Apr 23: EA in Action & Blog posts due

  28. Tu Apr 28: Workshopping Final Projects [Reading period: optional class unless there is snow-day that causes schedule shift]

  29. Th Apr 30: Workshopping Final Projects [Reading period: optional class unless there is snow-day that causes schedule shift]

  30. Tu May 5: Final project due

If there is an issue scheduling a guest speaker, we will substitute the following assignment: choose one episode of the 80,000 Hours, Rationally Speaking, or Future Perfect podcasts, listen to it, and write a two-page, double-spaced reflection.

Course:

Welcome [Th Jan 23]

In this first meeting we will explain the structure of this course and what it’s all about. This should help shoppers decide if this is a course they want to take. We will also introduce students to Effective Altruism (EA).

  • Please read The Greatest Good (5000 words), an Atlantic article on EA, before the first day of class. The article isn’t too long; Derek Thompson, the author of the piece, tells the story of his journey into EA. Lucky for him, this involved living next to one of the leaders of the movement, Will MacAskill.

    • Take a moment to reflect on the reasons Derek felt compelled to become involved in EA. Do these reasons resonate with you?

Principles of EA [Th Jan 30]

In this meeting we will talk about the main driving principles behind Effective Altruism.

  • Begin by reading Efficient Charity: Do Unto Others (1900 words). This explanation of why we should be rational about our charitable donations introduces a core EA principle and starts us off.

  • Now, for a fuller picture of what EA is all about, read the Center for Effective Altruism’s Introduction to Effective Altruism (3200 words). This works as a general introduction to the motivations behind EA. Many people feel compelled to do good, but sometimes we are not as thoughtful as we could be about putting that altruistic impulse to good use. Some people attempting to do good have gotten very lucky and made huge impacts. Many other attempts to do good have underperformed. EA seeks to help more people do good better.

    • You’ve just read an introduction to the Effective Altruism mission, community, and goals.

      • Do you like the idea of contributing to high-impact causes as a central goal in your life?

      • What are your initial thoughts about the main cause areas? Are you surprised at what they are/aren’t?

  • We can also think of EA as emerging from a long philosophical tradition. Indeed, some of the movement’s pioneers include the famous utilitarian philosopher Peter Singer, as well as the more recently famous moral philosopher Will MacAskill. Take a look at this piece, where MacAskill introduces EA from a philosophical perspective (9 pages). The reading is from the Norton Anthology of Ethics. EA has traditionally been most closely associated with utilitarianism (take a look at this quick CrashCourse video (10 min) if you don’t feel familiar with the philosophy of utilitarianism), but in his piece, MacAskill makes a pluralistic case for EA that spans different philosophical schools. Utilitarianism, and the broader idea of consequentialism (morality based on the consequences of actions), will be recurring ideas in this course, but one need not be committed to either of them to think that EA is a worthwhile pursuit. Indeed, doing good for others may simply be a meaningful pursuit to you regardless of any philosophical or ethical commitments.

    • Consider these questions: whether you want to do good, how much good you aim to do in your life, and what motivates you to do good.

  • Now, with your grounding in both the commonsense and philosophical cases for EA, take a look at Will MacAskill discussing What are the Most Important Moral Problems of our Time? (12 min). Here, MacAskill surveys EA’s issue areas from a big-picture perspective.

    • A founder of EA, Will MacAskill, takes a step back to look at our progress, our problems, and our potential as a civilization. Effective Altruism thinks big.

      • In what ways has human moral progress maybe not kept up with economic or scientific progress?

      • What should some of our key priorities this century be?

Tools of EA [Tu Feb 4]

Last week, we focused on reasons we might be motivated to do good, from both commonsense and philosophical perspectives. In this meeting, we will talk about important tools from a variety of disciplines that we use to figure out how to do good better.

  • Begin by taking a look at Owen Cotton-Barratt’s piece, Prospecting for Gold (7800 words). In it, Owen takes a deeper look into what we mean by “effective.” Effectiveness is a key concept in EA, so Owen’s framework is something we will return to throughout the course.

    • The three key components of effectiveness are scale, tractability, and uncrowdedness.

      • Do you understand why Owen focuses on these three components?

      • This framework is typically what determines whether something is an EA issue area or not. Can you think of attempts to do good that don’t pass the framework?

      • Why is “counterfactual” reasoning important?

  • Expected value theory is another crucial aspect of rational decision making. Take a look at Devin Soni’s What is Expected Value (800 words).

    • You might think of expected value as the quantitative side of the effectiveness framework. If a problem is large in scale, tractable, and uncrowded, you might expect positive interventions in it to have a high expected value. (A toy calculation after this reading list makes expected value, risk neutrality, and discounting concrete.)

    • Sometimes, in our “default mode,” we may want to do good only if it is certain that our action will bring about good. Expected value shows us that even if an action has a small likelihood of yielding a good result, it may still be very worthwhile if the good done is sufficiently large.

    • A key part of expected value theory is risk preference. EA organizations tend to be risk-neutral. Read this short concept page on Risk Aversion (400 words).

      • Can you think of an example where an expected value calculation might tell you to take an action that at first glance seems wrong?

    • Some people use “social discount rates” (SDRs) in expected value calculations. In essence, a social discount rate makes something in the future worth less than something now. There are some legitimate reasons to do this, like uncertainty about the future and the fact that people in the future will probably be richer (we will return to this idea later). But SDRs can also be motivated by time preference, the idea that having something now is innately more valuable to us than having it later. Typically, EAs think it is illegitimate (and possibly immoral) to discount for these sorts of reasons. If we did use them (as Tyler Cowen will point out in Stubborn Attachments later this semester), an SDR could justify someone dying today because Cleopatra wanted an extra helping of dessert.

  • Sometimes we talk about EA in terms of “saving lives.” This can be fine as shorthand, but if we are speaking more precisely, we should acknowledge that doing good encompasses more than saving lives. We must also factor in the quality of those lives. These questions come up often. Say you had $3000, which statistically guaranteed that you could save one life. But you also had the chance to donate that money toward procedures that cure blindness. How many blindness cures would be worth saving one life? Maybe 10? 20? 50? These questions are extremely difficult, but whatever we decide, we can’t avoid making an implicit judgment. We will probably never be able to answer these questions perfectly, but we have tools to at least try to answer them better. To get a sense of that, read this short concept page (300 words) about HALYs, QALYs, DALYs, and WELBYs. (A rough QALY comparison after this reading list works through the life-versus-blindness question.)

  • Another important concept to keep in mind is Bayes’ theorem. Watch this video (4 min) and then this video (11 min) by Julia Galef to learn what Bayes’ theorem is and why it’s an important part of rational thinking. (A worked example follows this reading list.)

    • What is the basic idea behind Bayes’ theorem?

    • What situation came up in your life recently where thinking about Bayes’ theorem would have changed the way you thought?
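
A few toy calculations can make this week’s tools concrete. Below is a minimal sketch, with entirely made-up numbers, of an expected-value comparison between a certain and a risky intervention, plus a demonstration of how quickly a social discount rate compounds:

```python
# Toy expected-value and discounting calculations.
# All numbers are illustrative assumptions, not real estimates.

def expected_value(outcomes):
    """Expected value of a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# A certain intervention: 100 units of good, guaranteed.
certain = expected_value([(1.0, 100)])

# A risky intervention: 1% chance of 50,000 units of good, else nothing.
risky = expected_value([(0.01, 50_000), (0.99, 0)])

print(certain, risky)  # 100.0 500.0
# A risk-neutral altruist prefers the risky intervention,
# even though it accomplishes nothing 99% of the time.

# Social discount rate: at rate r, a good delivered t years
# from now is weighted by 1 / (1 + r) ** t.
def discounted(value, r, t):
    return value / (1 + r) ** t

for t in (10, 100, 1000):
    print(t, discounted(1.0, 0.03, t))
# 10 -> ~0.74, 100 -> ~0.05, 1000 -> ~1e-13
# Even a "modest" 3% rate makes a life a millennium from now worth
# almost nothing today; run the same math backwards 2,000 years and
# Cleopatra's dessert can outweigh present-day lives.
```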
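
The life-versus-blindness question can also be made roughly quantitative with QALYs. Here is a minimal sketch, where every figure is an illustrative assumption rather than an actual GiveWell or WHO estimate:

```python
# Rough QALY comparison: saving a young child's life vs. curing blindness.
# Every figure below is an illustrative assumption.

remaining_life_years = 60     # years a saved child might go on to live
disability_weight = 0.2       # fraction of a healthy year "lost" to blindness
years_otherwise_blind = 40    # years a cured person would otherwise live blind

qalys_per_life_saved = remaining_life_years * 1.0                      # 60 QALYs
qalys_per_blindness_cure = years_otherwise_blind * disability_weight   # 8 QALYs

print(qalys_per_life_saved / qalys_per_blindness_cure)  # 7.5
# Under these assumptions, roughly 7-8 blindness cures equal one life
# saved. Different assumptions move the ratio, which is exactly why the
# implicit judgment deserves to be made explicit.
```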
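
Finally, a worked instance of Bayes’ theorem, using the classic rare-disease test (numbers chosen for illustration):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).

prior = 0.001               # 1 in 1,000 people have the disease
p_pos_given_sick = 0.99     # test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

# Total probability of a positive test (law of total probability):
p_pos = p_pos_given_sick * prior + p_pos_given_healthy * (1 - prior)

posterior = p_pos_given_sick * prior / p_pos
print(posterior)  # ~0.019
# Despite a "99% accurate" test, a positive result implies only a ~2%
# chance of disease, because the prior is so low. Ignoring the prior is
# the base-rate neglect Galef's videos warn about.
```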

Moral Psychology and EA [Tu Feb 11]

This week we return to some of the ideas we discussed in the “Principles of EA” week. However, this time we will focus on our moral reasoning: Why are we ethical at all? Why do our ethical intuitions sometimes fail us?

  • EA can be an interesting blend of intuitive and counterintuitive ethical reasoning. Begin with Four Ideas You Already Agree With (2000 words), a case for EA that appeals to one’s intuitive ethical reasoning.

    • Are these ideas with which you already agree? Does this piece ignore any implicit tradeoffs?

  • At the same time, EAs tend to think that our moral intuitions can often fail us. Take a look at Joshua Greene’s Beyond Point-and-Shoot Morality (11 min).

    • Psychology professor Joshua Greene discusses how to approach moral thinking and when to trust our intuitions in a world very different from the one our sense of morality evolved in.

    • Automatic mode can handle many situations well, but it seems to be susceptible to what we might call “moral illusions.” These may include distance in time and place, intention, and the difference between causing harm and failing to prevent it.

      • When and why is our automatic mode still useful?

      • Are these moral illusions, in fact, illusions, or are they morally relevant?

      • In point-and-shoot mode: is it more wrong to kill a child or to let one die because you didn’t donate to a cause that would have saved them? Now switch to manual mode. Does the answer change?

  • One other way our moral intuitions deceive us is that they do not tend to be “scope-sensitive.” Read Nate Soares’s On Caring (2800 words).

    • We don’t understand big numbers. This isn’t just true when we talk about ethics. Intuitively, many people treat a billion as only a bit bigger than a million. But that is clearly incorrect: a million minutes is almost two years, while a billion minutes is almost 2,000 years.

    • One thing that Soares’s post doesn’t necessarily point out is that our scope insensitivity is probably necessary. If we could feel the entire weight of suffering in the world, it would be psychologically compromising. Even feeling the suffering in one city might be! We shouldn’t expect ourselves to ever feel in a scope-sensitive way. Rather, we might practice being more scope-sensitive in our decision making, especially when we are thinking about how to do good.

  • At the same time, we might have reason to believe that our moral intuitions are improving. Read Holden Karnofsky’s Radical Empathy (1800 words).

    • Karnofsky aims to put our notion of empathy into philosophical and historical perspective, picking up Peter Singer’s notion of the expanding moral circle. Historically, we used to care only about our families or tribes. Now, many care about everyone in their nation-state. Many care about everyone in the world. Some even care about non-human lives. It is a great victory for humanity that we have been able to expand our moral circles, and one of the tasks of EA might be to expand them further.

      • What does your moral circle encompass right now? Does it differ in any ways from the “standard” moral circle people have? Should the standard moral circle encompass more beings? Should yours?

      • Is radical empathy truly radical? In what ways is it radical, and in what ways is it not?

  • Finally, most of our judgments are a combination of moral beliefs and empirical claims about the world. Yet empirical claims can often be mistaken. Being an EA is hard, but we should embrace the challenge. Watch Michael Page discuss Embracing the Intellectual Challenge of Effective Altruism (19 min).

    • This video from the Center for Effective Altruism walks through some of the biggest challenges of the EA movement, laying out why it’s hard but fulfilling.

      • Page contrasts the “liability framing” and “opportunity framing” of effective altruism. How might you weigh these two perspectives?

      • Effective Altruism is diverse, and this semester, we will see disagreement in our discussions. How do you feel about “agreeing to disagree” with others?

      • How set in stone are your moral beliefs?

Hard Questions in EA [Th Feb 20]

This week we will be discussing complicated topics in EA theory and strategy. We will begin with a few issues regarding what it means to do good.

  • We begin with one of the most famous problems in utilitarian thought, the repugnant conclusion. Begin with Julia Galef’s video on the topic (6 minutes); Julia is a prominent member of the EA community and host of the Rationally Speaking podcast. If you are interested, her YouTube channel has lots of other great videos.

    • The repugnant conclusion was introduced in Derek Parfit’s Reasons and Persons, a landmark work in utilitarian philosophy, in which he looks for reasons to reject it. Ultimately, there is no obvious way to reject the conclusion, though many possible solutions have been proposed.

      • Is the repugnant conclusion repugnant to you? How much should we worry about it?

      • EA arguments occasionally lead to questions related to the repugnant conclusion, so it is an important concept to keep in mind. For example, should EAs want the earth to have many more people on it? Even if you are not a utilitarian, you may still wonder what constitutes doing good: creating a smaller, happier world, or a bigger, less happy one?

      • What does the repugnant conclusion imply for animal welfare? Could we consistently reject the repugnant conclusion for humans and accept it for animals?

  • Another potential critique of EA comes from Susan Wolf’s paper Moral Saints. Take a look at the summary here (1400 words). That summary is written by the website PhilosophyBro, which has funny but very accurate summaries of philosophy articles in “bro” jargon. The whole paper is here if you are interested (21 pages).

    • Wolf argues that living a life guided solely by moral principles and rules, regardless of which set of principles and rules, would be a bad thing. Being a moral saint would mean missing out on many other essential parts of the human experience.

      • How does the paper read as a critique of EA?

      • On what margin of doing good does Wolf’s argument become relevant?

      • If we view EA as an opportunity to do good, rather than as a responsibility to do good, does Wolf’s argument still hold?

  • A final philosophical issue to look at is Pascal’s Mugging (3 pages).

    • Pascal’s Mugging is a problem in expected value calculation, a topic we discussed earlier. It inverts Pascal’s Wager, in which Pascal argued that one should believe in God because the cost of doing so is low (the mental act of believing, perhaps going to church) and the cost of not doing so is extremely high (eternal pain and punishment). In the mugging, a tiny probability is attached to an astronomically large harm. (A toy calculation after this list shows the problem’s force.) This problem will become relevant when we discuss existential risk and long-termism later in the semester.

      • If someone told you they would destroy the universe if you didn’t give them $1, how would you think about justifying not doing so?

      • Does giving the mugger a dollar constitute “doing good”?
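
To see the mugging’s force numerically, here is a minimal sketch with deliberately absurd, made-up numbers:

```python
# Naive expected value of paying a Pascalian mugger.
# All numbers are deliberately absurd illustrative assumptions,
# and dollars and lives are put on one scale purely for illustration.

p_threat_is_real = 1e-30   # astronomically small credence in the threat
value_at_stake = 1e50      # the mugger names an astronomically large harm
cost_of_paying = 1         # one dollar

ev_of_paying = p_threat_is_real * value_at_stake - cost_of_paying
print(ev_of_paying)  # ~1e20: overwhelmingly "worth it"

# Whatever tiny probability you assign, the mugger can always name a
# payoff big enough to dominate the calculation. Either something is
# wrong with naive expected value here, or you should hand over the
# dollar. That tension is the puzzle.
```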

Now, we will discuss a few issues that EAs debate when it comes to how to do good.

  • EA as a community has tended to avoid politics, but the topic has been an ongoing debate.

    • The basic case against EA in politics comes in a few parts:

      • First, politics is one of the hardest areas in which to be rational and apply the tools and principles of EA. Read Politics is the Mind-Killer on LessWrong (500 words).

      • Second, recall the three components of effectiveness: scale, tractability, and uncrowdedness. While political interventions are often large in scale, they often struggle on tractability and uncrowdedness. In terms of tractability, political interventions are often met with opposition from other interested parties. In terms of uncrowdedness, if we thought there were Pareto-efficient opportunities to do good via politics (changes that make everyone better off without making anyone worse off), it seems likely that an entrepreneurial politician would already have taken advantage of them. And if an intervention isn’t Pareto-efficient, it might lack tractability, since there will be opposition. In the debate over EA in politics, we should always consider whether a specific political position would pass the effectiveness framework. Perhaps there are innovative policies, not yet considered, that would do a large amount of good without incurring political opposition.

      • Third, we must always remember that our epistemic judgments about the best ways to do good are deeply uncertain. Democratic politics is generally bad at reevaluating policies once they have been enacted, so if a policy fails to produce a worthwhile amount of good for its cost (or even does harm), it may be very hard to alter or repeal.

      • Fourth, if EA as a movement became very political, it could alienate people with reasonable political disagreements. Extremely few people are against lowering infant mortality; many more disagree about the optimal immigration framework.

    • At the same time, there has been recent discussion in the EA community about being more politically involved. Read or watch Hiski Haukkala’s Policy Makers Love Their Children Too (2700 words).

      • Haukkala’s foremost concern seems to be changing government incentives to be focused on the long term rather than the short term. How does this goal, which seems to be institutional, compare to the other sorts of political goals one may want to pursue in the name of EA?

      • Are some ways of doing good always more effective when pursued by NGOs and individuals, while others are better pursued through governments?

    • This debate often conflates two questions: whether EA (the movement) should be political and whether EAs (as people) should be political. There is a perfectly reasonable position that EA as a movement should not be political, but that EAs as people should feel welcome to be politically involved. That just might mean acting outside of their identities as EAs.

  • An adjacent issue is Systemic Change vs Marginal Change (3700 words). Begin by reading that post. If you don’t understand what those arguing for systemic change are arguing for, try reading some of the posts Scott Alexander links to at the beginning.

    • We can think of most traditional EA interventions as “marginal changes.” For example, there are already some insecticide-treated bed nets in a region with a high population of malaria-carrying mosquitoes, and we think each additional net would do some additional amount of good. Yet some argue that the current set of global institutions is responsible for creating environments where suffering exists and persists. To make lasting change, they argue, systemic change is needed. This may come in the form of changing economic systems, forms of government, or large-scale policies, like removing all immigration restrictions. And yet, where there is much upside opportunity, there is often much downside risk. This episode of Julia Galef’s Rationally Speaking podcast goes deeper into a similar debate, for those interested.

      • How worried should we be about the risks that come with systemic change? How much greater is the variance in possible results when systemic changes are attempted?

      • How empirically sound do you find the claims of those calling for systemic change? Are current institutions truly fundamentally flawed?

Careers [Tu Feb 25]

This week we will begin discussing how the principles of EA recommend we live. Some of the biggest choices we make concern our careers. We will begin by looking at advice EAs have developed on building a career that does good effectively.

  • 80,000 Hours is an EA organization dedicated to helping people find careers that fit them well and help them be more effective. With five million readers, they are probably the best-known EA organization. Start by reading their Key Ideas page (8600 words). Be sure to expand and read the “five career categories for generating options” section.

      • How do your past/current career plans look in light of this perspective?

      • What could you see yourself doing?

      • What five career categories do they mention?

      • What priority paths are you interested in?

  • Stay on that page and look closely at the “Our Priority Paths” section. Choose at least one to read more about and visit the links under Further Reading. (~5000 words)

      • Why is this high impact?

      • What are some key steps you could take for a career in this?

  • Now browse the 80,000 Hours website a bit more. They have lots of great resources, including a podcast, a job board, problem profiles, career reviews, and an archive. These are definitely worth checking out as you think about your career. We also recommend joining their newsletter. (~30 min)

  • Another career you might consider is starting your own charity. Here (2000 words), GiveWell discusses charities they would like to see started.

      • Might you be interested in charity entrepreneurship? If so, check out this incubator program for people who want to start high-impact charities.

  • An important trade-off to think about with regard to your career is direct work vs. earning to give (ETG). Read this short concept page on Earning to Give (200 words). The debate over whether it’s better to go into direct work or to earn to give has been prominent since the early days of EA. When EA started, ETG was seen as probably the most effective thing to do. Over time, people in the EA community came to think that many important problems were more talent-constrained than funding-constrained, so the movement shifted away from ETG. Read at least the first section (Approach 1) of this page (1600 words) to get familiar with the cases for and against ETG.

      • What are the main cases for and against earning to give?

    • [Optional] If you are interested, also check out this recent EA Forum post making a case for ETG: The Future of Earning to Give by Peter McCluskey.

Suffering [Tu Mar 3]

The next two weeks will discuss how we can prevent suffering, first in humans and then in animals. Before we get into those questions, we need to clarify what we are talking about when we talk about suffering.

  • The first thing to clarify is: what sorts of things can suffer? For our purposes, this is the same as asking what sorts of things are conscious or sentient (though there is certainly no agreement on that, or on any definition of consciousness, for that matter). Watch this Kurzgesagt video: The Origin of Consciousness: How Unaware Things Became Aware (10 min).

      • What criteria would something need to meet for you to say it can suffer?

      • What important evolutionary steps seem to have led to consciousness?

    • If you want to get more confused about consciousness, check out this Vsauce video: What is Consciousness? (optional, 7 min)

    • For many more readings on the philosophy of consciousness, see this (optional).

  • Now let’s think about whether we should prioritize reducing suffering over increasing happiness. Read this piece: The Principle of Sympathy for Intense Suffering (6600 words).

      • What do you think about the roller-coaster example? Should the children go on the roller coaster even if one child will have a negative experience? What if, instead of knowing that 1 out of 10 children will have a negative experience, each child has a 1-in-10 chance of having one?

      • How does our bias against thinking about extreme suffering affect decision making?

  • Taking these questions further we will ask: Are Happiness and Suffering Symmetric? (5000 words). Think about your intuition first, then read this and think again.

      • Do you think equality is an inherent good (and not just maximizing happiness and reducing suffering)?

Addressing Human Suffering [Tu Mar 10]

One of the best ways to address human suffering is through public health initiatives and economic development. Much research had been done in this area even before EA existed as a movement, so we have good ideas about what is effective.

  • Let’s start by debunking some myths about aid. Read Giving What We Can’s Myths about Aid and Charity (2100 words)

    • GWWC is another EA organization focused on education for high impact donations that help development. Here, they go over common misconceptions that keep many people from giving.

      • Have you heard any of these before?

      • Does anything surprise you?

  • Now, read Jess Whittlestone’s piece, Global Health and Development (3900 words), where she introduces the cause area.

    • Many EA efforts focus on developing countries rather than developed countries. This goes against the “think globally, act locally” mindset popular in certain strains of altruistic thinking. The reason is that we can be much more effective in developing countries. Recall the effectiveness framework: charitable efforts in developed countries are often relatively crowded. And because of purchasing power parity, money given to people in developing countries goes much further. EAs want to do good regardless of the distance between us and the people affected. That doesn’t mean EAs shouldn’t also care about their own communities, but it does imply that the bulk of our efforts should go where they can do the most good.

    • Whittlestone points out that, though there is good evidence with regard to global health and development, we shouldn’t fall victim to the “streetlight fallacy.” That is to say, we can’t assume it is the best area to do good in just because it is the best researched area.

  • GiveWell is commonly thought to be the best evaluator of charities, particularly global health and development charities. Their process is highly transparent, so there is much to look through on their website regarding how they evaluate charities. Begin by taking a look at their Top Charities (800 words).

    • Take a closer look at any of the charities’ websites that interest you.

      • What do most/all of these charities have in common? Type of intervention? Location? Age of people targeted?

      • Why do they have these in common?

  • Recall our discussion of expected value. Two of GiveWell’s top charities are deworming charities. These both have a large caveat: Deworming Might Have a Huge Impact, But Might Have Close to Zero Impact (1400 words)

    • GiveWell is highly uncertain about the impact of deworming, but has nonetheless made it a top priority. Keep in mind that this post is from 2017 and is somewhat outdated.

      • GiveWell embraces a risk-neutral approach, in contrast with a risk-averse one. Do you agree with that choice?

      • How much moral value might we place on new evidence about deworming?

    • Keep in mind that GiveDirectly is subject to some of the same issues. Though giving people money clearly does something, there are outstanding questions about the scale of these effects because there have not yet been long-term studies. Even malaria bed net distribution has its uncertainties, though it has been studied for much longer. Uncertainty is worrying, but we should do our best to think in risk-neutral, expected-value terms.

  • Let’s end the readings with some good news. Because of the efforts of many people, governments, and NGOs (plus some recent help from EAs), global health and development have improved drastically over the past 200 years. Take a look at Our World in Data (~30 min).

    • This is a project from Oxford University to help make data on key global issues clear and accessible. Explore a bit and read one of the pages that it links to in the Research by Topic menu. We recommend looking into one under the Health or Growth & Inequality sections.

      • What do you notice? Are things getting better or worse? Do most people know this?

      • Why? What type of work still needs to be done?

Addressing Animal Suffering [Tu Mar 17]

It probably comes as no surprise that some forms of animal suffering are incredibly large in scale, very neglected, and tractable. Notably: factory farming.

  • This will not be the easiest thing to watch, but it is important to see what sort of suffering we are discussing. Watch this video (3 min) to learn about the use of gestation crates in the pork industry. Unfortunately, most sows (female pigs) in the US still spend their lives in these crates.

      • Why do you think this is happening?

  • But since this is EA, we should think about scale. How many animals, and how much suffering? Read through this description of animals on factory farms from the ASPCA (400 words). Think about where the worst cases of animal suffering are occurring.

      • Which animal has the greatest scale of suffering?

  • Now let’s take a step back and consider to what extent we ought to prioritize animals in our moral decision making. Read Brian Tomasik’s Suffering in Animals Versus Humans (1900 words).

      • Other than animals’ moral status, why else might we want to better protect the environment and reduce animal farming?

      • Is human suffering fundamentally different from nonhuman suffering?

      • Is moral status a yes/no thing, or is it on a continuum?

      • What does something need to have to possess moral status? Put some thought into this one. We will have an extensive discussion on it this week.

  • Read Jess Whittlestone’s cause area profile on Animal Welfare (3100 words).

      • How does animal welfare fare on the scale/tractability/neglectedness framework?

      • How is it related to other cause areas?

  • Animal Charity Evaluators (ACE) is an EA-network organization that is the leader in animal charity evaluation; they are similar to GiveWell. Check out their Recommended Charities (900 words). You may notice that there are fewer recommended charities than at GiveWell, and not the same level of rigor in evaluating effectiveness. Highly effective animal welfare charities seem to be even more neglected than global development charities. Maybe you can start such a charity, though.

      • What do the recommended charities seem to have in common?

      • Why might these commonalities be so effective?

  • Let’s return to our old friend 80,000 Hours to read their problem profile on Factory Farming (1200 words)

      • Could you see yourself working to reduce suffering from factory farming? Why?

  • The director of the Good Food Institute, Bruce Friedrich, makes an excellent case for food innovation in plant-based and clean meat. Watch or read Friedrich’s talk: From Agitator to Innovator (5000 words)

      • Why is clean meat such a priority?

      • What are some reasons to be optimistic about the clean meat revolution?

  • [OPTIONAL] If you are interested, there are some great 80k podcast episodes on reducing animal suffering. This one is long, but it is an excellent overview of nearly every approach being taken to end factory farming, which ones work best, and how you can make the biggest impact in the field: Ending Factory Farming, by 80k and Lewis Bollard (3 hrs; you can adjust playback speed).

      • Which approach do you think is most promising? Most pressing/time-sensitive?

      • Could you see yourself working to end factory farming? Why?

Preventing Suffering Through Progress [Tu Mar 31]

We’ve talked a lot about moral illusions, like distance, in this course so far. For the next two weeks, we will ask what doing good means when we dispel another moral illusion: time. This week, we ask how we can make the future better for those who will inhabit it.

  • Read Tyler Cowen’s Stubborn Attachments (160 pages). This is one of the longest readings of the course, but it is an essential reflection on many of the topics we have discussed so far.

    • Cowen’s essential point is that societies should pursue what he calls Growth+: economic growth plus respect for human rights and environmental stability. Think about the reasons why Cowen thinks growth is a good thing.

    • Cowen’s take is idiosyncratic in many ways. Try to think about how his arguments relate to the others we have studied in the course so far.

      • For Cowen, what role does Growth+ play in creating a better future?

      • It seems that Cowen thinks people fulfill their moral duties by working towards this growth. Do we have additional moral duties? Or is EA an opportunity to do good beyond that?

    • If you are interested, take a look at Appendix B to see how Cowen addresses Parfit’s Repugnant Conclusion and animal welfare. Are his solutions satisfactory?

  • As EAs, perhaps we should think more about how to promote Growth+. Cowen and Patrick Collison write in The Atlantic to call for a new discipline: Progress Studies (2100 words).

    • If we think Growth+ is important, we might want to have a better idea of how it comes about. This might mean both figuring out how to set up institutions which foster economic growth, and also figuring out which policies are needed to make that growth sustainable, both environmentally and otherwise.

      • What questions here seem most pressing to you? The causes of the industrial revolution? Environmental impacts? Something else?

  • Robert Wiblin writes about an example of doing good through entrepreneurship in Doing Good Through For-Profits: Wave and Financial Tech (1100 words). He focuses on the financial tech startup Wave, which not only helps promote economic growth but also does good for its customers.

    • Wave allows immigrants to send money to relatives with lower fees than if they used services like MoneyGram or Western Union. Using this frame, Wiblin explores how to do good through for-profit work.

      • What are the advantages of working at a start-up? What are the advantages of for-profit work in general?

      • Can you think of any other for-profit companies worth starting to make the world better?

Allowing Flourishing to Continue: X-risk [Tu Apr 7]

Last week, we talked about making the future better through progress. But for the future to be better, it must also exist. This week we will think about making it more likely that the future continues, by eliminating existential risks (X-risks).

  • First, take a quick look at “information hazards” (150 words). The term will arise as we discuss existential risk.

    • How do information hazards factor into our discussion?

  • Dylan Matthews is a journalist at Vox and a prominent member of the EA community. As late as 2015, he was very skeptical of AI risk as an EA cause area. Since then, he has come around to taking it seriously. AI risk is easy to be skeptical of from the outside, but it is something many AI researchers, as well as other technologists, take very seriously. Read Dylan’s thoughts on the subject here (1500 words).

    • One reason AI risk is a more prominent EA cause than other potential X-risks is that it is extremely uncrowded. Ten years ago, only a handful of people in the entire world were working on it. Even if you think the risk is very implausible, you still might think it is better to have a handful of people on it than no one.

    • If you are looking for more, Kelsey Piper (another EA and journalist at Vox) has a great article here: AI could be a disaster for humanity. A top computer scientist thinks he has the solution (2400 words)

    • The classic work on the subject is Nick Bostrom’s Superintelligence. We aren’t reading it for this course, but it is still probably the best place to start if you want to look more deeply into the issue.

  • Whether climate change is an X-risk has been a contentious issue in EA. Before we get into this, note that a risk not being existential does not mean it is not very harmful. Begin with Ozymandias’ take on the subject in the EA Forum: Why Climate Change is Probably Not an X-Risk (1500 words). Then read the other side of the debate from Roman Duda of 80k: Climate Change (Extreme Risks) (1700 words).

    • There are a few issues to untangle here. If climate change is a short-term X-risk, it should almost certainly be a top priority for EA (because of its scale). If it is not a short-term X-risk, we must ask whether particular interventions to prevent climate change pass the effectiveness framework. Unfortunately, many seem not to: the area is often relatively crowded, and tractability is low because of opposing interests. If you think a certain climate change intervention would pass the effectiveness framework, that would make for a great blog post or final project.

  • Two more traditional areas where researchers have worried about existential risk are disease pandemics and nuclear security. Take a quick look at both.

    • 80,000 hours on Pandemics (1600 words)

    • Peter McIntyre on Nuclear Security (2100 words)

      • How do you weigh the issue of nuclear security against other X-risks? Do the risk scenarios seem more likely to happen? Are they harder to address? How crowded are they?

      • Do you think either seem like promising career fields? Is there an impact to be made outside the role of policymakers?

  • If you have time, listen to Robert Wiblin interviewing Tyler Cowen (podcast and transcript) about Stubborn Attachments (2.5 hours! The first half is especially important).

    • Last week, we discussed Cowen’s case for Growth+. In this podcast, Robert Wiblin of 80,000 Hours interviews Cowen about the book. Specifically, Wiblin presses Cowen on the relationship between Growth+, long-term stability, and existential risk. Growth may make democracies more stable, but Wiblin questions whether it could also raise the chance of existential catastrophe via geopolitical conflict or runaway technology. This is a very important area for future research.

      • Big questions include: Which is more tractable for most people, increasing growth or reducing existential risk? Does increasing growth also increase existential risks? Does decreasing growth increase existential risks?

The Wide World of EA [Tu Apr 14]

We’ve discussed the classic cause areas in EA: Global development, animal welfare, and existential risk. We’ve also discussed a possible new area: Progress. Yet there are many other experimental ideas within EA. We’ll use this week to discuss a few of them. While you are looking at these proposals, keep the effectiveness framework in mind. Does the effectiveness of any seem close to the classic cause areas?

EA in Action [Th Apr 23]

Now that we are close to the end of our course, we will begin to prepare for final projects. The goal of your final project is to make a research contribution to the EA community. To start preparing, explore the following twenty-one organizations and read about AT LEAST THREE that are of particular interest to you. Think about the questions each organization’s projects or research raise: What could you look into? Are any organizations failing to explore an area or project that might be worthwhile for them? This week’s discussion will depend on what you found interesting, so please come prepared, with notes, to share what you found.

  • 80,000 Hours: Annual Review—December 2018 (8000 words)

    • 80,000 Hours is an EA organization dedicated to helping people find careers that fit them well and help them be more effective.

      • What are the services that 80k aims to provide?

      • What are their metrics of evaluation and criticisms of themselves?

  • Against Malaria Foundation: Website (explore the whole website)

    • There’s no single overview page, but please peruse the whole website. AMF is one of the most promising developmental organizations in the world and focuses on delivering insecticide-treated bed nets to people in malaria-prone regions.

      • Why malaria and why nets?

      • What factors make AMF so effective?

  • Animal Charity Evaluators 2017 Year in Review (3800 words)

    • ACE is comparable to GiveWell but focuses on evaluating animal welfare organizations.

      • Spend some time looking at their Charity Comparison Chart. What do you find that is interesting?

  • Center for Applied Rationality: Mission (900 words)

    • CFAR is a Bay Area organization that hosts workshops on applied rationality techniques.

      • Why might this be key for altruistic movement building?

  • Center for Security and Emerging Technology: Our Mission and Research (500 words)

    • If you’re interested in government and policy, you might be interested in CSET. It’s a relatively new organization centered at Georgetown and focused on tech policy.

      • What key areas of research do they focus on?

  • Centre for Effective Altruism: CEA’s Current Thinking (4100 words)

    • CEA is ground zero for EA.

      • What are the ways it supports the community?

  • Centre for the Study of Existential Risk: Our Research (2100 words)

    • Read all 4 linked sections. CSER researches strategies for reducing X-risk.

      • What are the key risks and potential solutions in each of the 4 sections?

  • DeepMind: Research (variable, but ~2500 words)

    • DeepMind works on cutting-edge technologies, applications, and safety for AI. Scan the research page and explore one of the things featured on it.

      • What type of innovations is DeepMind working on?

      • What might be the risks posed by the technology involved in the link you clicked?

  • Evidence Action: Our Approach and History (1700 words + 3 minute video)

    • EvAc is a charity focusing on research-backed interventions to support global poverty alleviation.

      • What makes it such a unique organization?

      • What are its two key interventions?

  • Future of Humanity Institute: Research Areas (2400 words)

    • Read all four tabs on the research areas. FHI is a key research organization working on X-risk and shaping the far future.

      • What papers listed seem interesting to you?

  • Future of Life Institute: 2018 Annual Report (3000 words)

    • FLI is a research and advocacy organization focused on X-risk and the far future.

      • What are LAWS, and why does FLI consider banning them a priority?

      • What are their other priorities?

  • GiveDirectly: FAQ (3100 words)

    • GiveDirectly is a unique microfinance organization that transfers cash directly to people in some of the poorest communities in the world.

      • How does the cash transfer process work?

      • How do their transfers affect recipients?

  • Global Priorities Institute: About Us and Research Agenda (1000 words)

    • Read the About Us page and peruse the Research Agenda’s table of contents. GPI is a priorities research institute focused on key causes as well as strategic and philosophical questions.

      • Why does GPI take a long-termist approach?

      • What are some of the crucial considerations for altruism that it is working on answering?

  • Machine Intelligence Research Institute: Research (1300 words)

    • MIRI focuses on the philosophy, risks, and benefits of AI. This page is mainly for researchers rather than the general public, so pay close attention when reading it.

      • What papers seem interesting?

      • Click two or three and read the abstracts.

  • Malaria Consortium About Us & Featured Projects (800 words + 2 minute video)

    • Please browse around the site a bit more as well. MC is one of the key anti-malaria organizations in the world.

      • What are their key approaches to fighting malaria?

  • OpenAI: Progress (variable, but ~2500 words)

    • OpenAI is dedicated to AI development, safety, and transparency. Scan the page and click one of the boxes to learn more.

      • What type of innovations is OpenAI working on?

      • What might be the risks posed by the technology involved in the link you clicked?

  • Open Philanthropy Project: Progress to Date (2000 words)

    • OpenPhil makes key research-based philanthropic recommendations and grants and is not dedicated to work in a particular cause area.

      • Why is OpenPhil unique?

      • What are its key focus areas?

  • Qualia Research Institute: QRI is building a new Science of Consciousness & Introduction (1200 words + 5 minute video)

    • QRI is dedicated to researching consciousness with a systematic scientific and philosophical approach.

    • Why are questions in qualia research crucial considerations?

    • What are some of QRI’s goals?

  • Schistosomiasis Control Initiative: Website (explore the whole website)

    • There’s no single overview page, but please explore the whole website. SCI is focused on fighting schistosomiasis, a parasitic worm infection.

      • What approaches does SCI take?

      • What benefits result from its deworming work?

  • Strong Minds: Website (explore the whole website)

    • There’s no single overview page, but please peruse the whole website. Strong Minds focuses on scalable mental health improvement in Uganda.

      • What programs does Strong Minds work on?

      • What are the effects?

      • How do they measure impact?

  • The Life You Can Save: 2018 Annual Report (4800 words)

    • TLYCS is a metacharity organization working to increase how much people donate to high impact charity.

      • How do they promote effective giving?

  • Some other questions you might be interested in:

    • Hanson’s EA Global argument: How does the cost of doing good change over time? Will doing good be more expensive in the future? Does doing good get expensive more quickly than returns on capital compound? Should EAs put all their money in a mutual fund and donate it in the future? (A toy give-now-versus-give-later comparison follows this list.)

    • The Hinge of History Hypothesis: Are we living in the most influential time ever? This is perhaps one of the most provocative questions in EA. Here is William MacAskill on the topic: Are we living at the most influential time in history?
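
Here is a minimal sketch of the give-now-versus-give-later tradeoff behind Hanson’s argument, with made-up rates:

```python
# Toy "give now vs. invest and give later" comparison.
# Both rates are illustrative assumptions.

donation = 1_000
market_return = 0.07   # annual return if the money is invested
cost_growth = 0.04     # annual growth in the cost of one unit of good
years = 30

units_of_good_now = donation / 1                       # a unit costs 1 today
invested = donation * (1 + market_return) ** years     # ~7,612
future_cost_per_unit = (1 + cost_growth) ** years      # ~3.24
units_of_good_later = invested / future_cost_per_unit  # ~2,347

print(units_of_good_now, round(units_of_good_later))
# If capital compounds faster than the cost of doing good rises,
# patient philanthropy buys more good; if doing good gets expensive
# faster than returns compound, give now.
```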


Thanks for reading!