Some core assumptions of effective altruism, according to me

Zvi recently posted a (critical) list of core assumptions of effective altruism.

The list is interesting, but I think much of it is somewhere between “a bit off” and “clearly inaccurate”.

In this post I redraft the list—keeping the order and breakdown that Zvi used, but applying suggested edits to each point.

Compared to Zvi’s list, mine is somewhat aspirational, but I also think it’s a more accurate description of the current reality of effective altruism (as a body of ideas, and as a community).


Important: these are just my takes! I’m not speaking on behalf of current or past employers, key figures in the movement, or anything like that.

This list is not intended to be comprehensive.

I’d love to read your thoughts—including your own suggested edits and additions—in the comments. If you like, make a copy of my Google Doc!

I spent 3-4 hours writing this post. In the future I might share a list I write from scratch, but for now I found it much easier to edit the list Zvi made.


Some core assumptions of effective altruism, according to me

  1. Two-thirds utilitarianism. Utilitarianism is a useful and underrated way to think about what matters in some circumstances. Other theories of value and normative frameworks should be given serious consideration and weight, partly due to moral uncertainty. Taking utilitarianism seriously does not imply that people should go around thinking in utilitarian terms most of the time. The mindsets suggested by moral perfectionism, deontology, virtue ethics and common-sense ethics are often more helpful in daily life.[1]

  2. Importance of suffering. All else equal, suffering is bad, and happiness/pleasure is good. Morally, it may be more important to reduce suffering than to increase happiness. Empirically, it may be easier to reduce suffering than to increase happiness (though this is not obvious). This assumption does not oblige us to only care about pleasure and suffering, and certainly not to “focus on the floor of the human condition, rather than the ceiling”.[2]

  3. Model-based interventions. Making explicit models, as opposed to compelling stories, is important. In some areas (e.g. global health), we can learn a lot by investing in careful empirical measurement and testing. In others (e.g. anthropogenic existential risk) we are obliged to rely on speculative (but still useful) models, often informed by the projection of historical trends, evolutionary theory, and/or first-principles thinking. We should not be afraid to bet heavily on these models.[3]

  4. Diverse funding models. If you want >$1m funding, you probably need to apply to one of a few large funders. But if you want small project or seed funding, there are many funding sources available to you, including >50 individuals who can say “yes” with very little constraint from other parties.[4]

  5. Scope sensitivity. Preventing 100 people going blind is 100x better than preventing one person going blind. We should have run COVID vaccine challenge trials in January 2020. Shut up and multiply.

  6. Duty of privilege. If you are fortunate to have freedom, security, good health, and so on, you should dedicate some part of your resources (e.g. time and money) to trying to help others as much as possible. You should decide how much, but we encourage at least 10%.[5]

  7. Effectiveness. Do what works. Seek feedback. Keep learning. Cultivate intellectual virtues, such as quickly updating when you’re wrong, threading the needle between overconfidence and underconfidence, and so on.

  8. Impartial altruism. One of the best ways to do good yourself is to take up an impartial perspective when you’re thinking about how to spend your altruistic resources. This yields surprisingly large opportunities at the moment, because relatively few people do this. This may be especially true if you endorse a zero discount rate for welfare, which probably implies that the interests of far-future generations are gravely neglected (see the short sketch just after this list).

  9. We can see altruism as opportunity or obligation. Some people are motivated by the joy of helping others; others see it as a moral obligation.

  10. Coordination. Working together is sometimes more effective than cultivating competition, especially when your values are shared. But incentives, feedback loops and public choice theory suggest that many things that can be run as for-profits probably should be.

  11. Impartiality. From the perspective of the universe, your welfare is no more important than that of other similar moral patients, no matter where they exist in space and time. This perspective is compatible with the idea that, in practice, you should value yourself, your family, and those around you more than others (see (1) above and also Appendix 3).

  12. Self-recommending. Belief in the movement and methods themselves.[6]

  13. Evangelism. Belief that it is good to grow the movement, in terms of human capital, social capital, financial capital, and general influence. Views differ on how “big” effective altruism should eventually become, which audiences we should focus on, and what growth rate is desirable. Several central features of the current movement probably don’t scale gracefully to all audiences.

  14. Reputation. The reputation of EA is a crucial factor for its overall success. Careful communication, community health[7], and cooperativeness with other groups are important on these (and other) grounds. There are strong instrumental arguments in favour of “common sense” virtues like integrity.

  15. Mixed feelings about mainstream institutions and expertise; belief that you may be able to do better. As a first cut: trust experts and institutions with good reputations. But beware: many “experts” have terrible track records of prediction, and many institutions are extremely dysfunctional, or at least harbour islands of dysfunction. Governments often drop important balls, non-fiction books are rarely fact-checked, lots of research doesn’t replicate, incentives in academia are often awful, some “experts” in medical ethics would fail an Ethics 101 class, and so on. You may be able to find huge opportunities in areas that seem to be “covered” by existing groups.

  16. Existential risk. There could be an immense amount of value in the future, but there could also be very little, or even immense amounts of disvalue. Most people who’ve looked into this think that the probability of existential catastrophe before 2100 is disturbingly high (>1%), largely due to new risks from emerging technologies such as artificial intelligence and biotechnology. Few people are trying to understand and reduce these risks, so this is one of the most promising areas to focus on.

  17. Value of sacrifice. Sometimes personal sacrifice can help set an inspiring example, or communicate moral seriousness. All else equal, sacrifice for its own sake is not valuable, or at least not particularly valuable: what matters most are the future consequences of your actions.

  18. Encouragement. We should praise and reward people who act upon (or criticise and improve upon) these assumptions. It is usually unhelpful to blame or condemn those who act differently—our patterns of blame should, to a large extent, reflect those of common-sense morality.

  19. Veganism. If you are not vegan, many EAs treat you as non-serious (or even evil).[8]

  20. Grace. In practice, people can’t live up to this list fully and that’s acceptable.

  21. Non-totalising. People who are unfamiliar with the effective altruism community sometimes perceive it as a “totalising” community (or set of ideas),[9] which asks people to commit most or all of their lives to the movement. This is not the case.[10] Different people make different levels of commitment, and individual commitment fluctuates as people’s situations change. People take breaks. People often prioritise their own wellbeing, their family commitments, and so on—independently of their commitment to effective altruism. That said, many people do make big changes to their lives—e.g. change their career plans, or move to a new city—either because they are inspired to do so, or because, on reflection, they feel a sense of duty to try to make things as good as they can, with whatever resources they’ve decided to dedicate to altruistic ends.[11]
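
To make the discount-rate point in (8) concrete, here is a minimal sketch (my own illustration, not something from Zvi’s post): given a stream of welfare $w_0, w_1, w_2, \dots$ and a pure discount rate $r$, standard exponential discounting values the stream at

$$V(r) = \sum_{t=0}^{\infty} \frac{w_t}{(1+r)^t}, \qquad \text{so a zero rate gives} \qquad V(0) = \sum_{t=0}^{\infty} w_t.$$

Under $V(0)$, welfare counts equally whenever it occurs. Under even a modest positive rate like $r = 3\%$, welfare 500 years from now is shrunk by a factor of roughly $1.03^{500} \approx 2.6$ million, which is why a zero rate makes the interests of far-future generations loom so large.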

Appendix 1. Zvi’s list of assumptions, for comparison

Copy-pasted from here.

  1. Utilitarianism. Alternatives are considered at best to be mistakes.

  2. Importance of Suffering. Suffering is The Bad. Happiness/pleasure is The Good.

  3. Quantification. Emphasis on that which can be seen and measured.

  4. Bureaucracy. Distribution of funds via organizational grants and applications.

  5. Scope Sensitivity. Shut up and multiply, two are twice as good as one.

  6. Intentionality. You should plan your life around the impact it will have.

  7. Effectiveness. Do what works. The goal is to cut the enemy.

  8. Altruism. The best way to do good yourself is to act selflessly to do good.

  9. Obligation. We owe the future quite a lot, arguably everything.

  10. Coordination. Working together is more effective than cultivating competition.

  11. Selflessness. You shouldn’t value yourself, locals or family more than others.

  12. Self-Recommending. Belief in the movement and methods themselves.

  13. Evangelicalism. Belief that it is good to convert others and add resources to EA.

  14. Reputation. EA should optimize largely for EA’s reputation.

  15. Modesty. Non-neglected topics can be safely ignored, often consensus trusted.

  16. Existential Risk. Wiping out all value in the universe is really, really bad.

  17. Sacrifice. Important to set a good example, and to not waste resources.

  18. Judgement. Not living up to this list is morally bad. Also sort of like murder.

  19. Veganism. If you are not vegan many EAs treat you as non-serious (or even evil).

  20. Grace. In practice people can’t live up to this list fully and that’s acceptable.

  21. Totalization. Things outside the framework are considered to have no value.

Appendix 2. Twitter version of this post

https://twitter.com/peterhartree/status/1552950728137871361

Appendix 3. Theory of value does not determine normative ethics

Added 2022-08-16, because a couple of people asked me about this.

Theory of value does not determine what actions or ways of thinking are best for individuals. One has to combine a theory of value with a normative theory, plus a bunch of empirical facts.

The question “what is ultimately valuable?” is quite different from questions like “how should humans behave?” and “what patterns of praise and blame would have good consequences?”

(Philosophers sometimes distinguish “axiology (theory of value)” from “normative theory”, “practical ethics” or “decision procedure”.)

People vary on how quickly they move from impartial axiology to normative ethics, and how revisionary they want to be of traditional ethics, including partiality.

Both Peter Singer and Tyler Cowen start with an impartial theory of value (Tyler is complicated, but he at least uses this framework sometimes). But Tyler tends to take the constraints of human nature more seriously, and thinks that, for humans, the normative software that leads to the best consequences is not a million miles away from what we have now.

Utilitarians usually make moves like Tyler’s, at least in particular cases. Singer does this to some degree.

One could, of course, believe that impartial theory of value makes no sense, and instead embrace a partial theory of value, scoped (for example) to human values. Bernard Williams famously defends this perspective.


  1. ↩︎
  2. ↩︎
  3. ↩︎
  4. ↩︎

    This number went up a lot in 2022 due to the Future Fund regranting program.

  5. ↩︎
  6. ↩︎

    I guess most movements involve this? Perhaps Zvi is suggesting excessive belief in some particular methods, but I think the commitment to effectiveness (“whatever works”, item 7 above) is more fundamental.

  7. ↩︎

    CEA has a community health team.

  8. ↩︎

    I think Zvi is wildly off on this one. This doesn’t match my experience of the UK / London / Oxford community at all. I’ve not spent much time in the various US and Bay Area communities, so I can’t personally speak to that, but I asked a couple of people who are more familiar and they also didn’t recognise Zvi’s description.

  9. ↩︎

    It’s worth noting that some people retain this perception even as they become quite involved in things. I see this mainly as a communication problem that EA groups should work to fix, rather than a fundamental issue.

  10. ↩︎

    Various factors drive this misperception. I won’t try to summarise them all right now. One big one: poorly crafted memes and discussion around the ideal of “maximisation”. This stuff is hard, but my hunch is that “do the most good” was a mistake, all things considered. MacAskill (2019) has a good, careful definition: “the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources”. But this qualification hasn’t yet been made salient enough in the intro materials, key talking points, and so on.

  11. ↩︎