The Capability Approach to Human Welfare

This post outlines the capability approach to thinking about human welfare. I think that this approach, while very popular in international development, is neglected in EA. While the capability approach has problems, I think it offers a better way to think about improving human welfare than approaches based on measuring happiness or subjective wellbeing (SWB), or approaches based on preference satisfaction. Finally, even if you disagree that the capability approach is best, I think this post will be useful to you because it may clarify why many people and organizations in the international development or global health space take the positions that they do. I will be drawing heavily on the work of Amartya Sen, but I will often not be citing specific texts because I’m an academic and getting to write without careful citations is thrilling.

This post has four sections. First, I will describe the capability approach. Second, I will give some simple examples that illustrate why I think that aiming to maximize capabilities[1] is the best way to do good for people. I’ll frame these examples in opposition to other common approaches, but my goal here is mostly constructive: to argue for the capability approach rather than against maximizing, say, SWB. Third, I will describe what I see as the largest downsides to the capability approach, as well as possible responses to them. Fourth and finally, I will explain my weakly held theory that much of how global health and international development organizations, including GiveWell, behave owes to the deep (but often unrecognized) influence of the capability approach on their thinking.

The capability approach

The fundamental unit of value in the capability approach is a functioning, which is anything that you can be or do. Eating is a functioning. Being an EA is a functioning. Other functionings include: being a doctor, running, practicing Judaism, sleeping, and being a parent. Capabilities are options to be or do a functioning.[2] The goal of the capability approach is not to maximize the number of capabilities available to people; it is to maximize the number of sets of capabilities. The notion here is that if you simply maximized the number of capabilities, you might enable someone to be a parent or employed outside the home. But someone might want to do both. If you focus on maximizing the number of sets of capabilities, you end up with four options: parent, employed, both parent and employed, and neither. The simple beauty of this setup is that it aims to maximize the options that people have available to them, from which they then select the group of functionings that they want most. This is why one great book about this approach is entitled “Development as Freedom.” The argument is that development is the process of expanding capabilities, or individual freedom to live the kind of life that you want.
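To make the combinatorics concrete, here is a minimal sketch of my own (an illustration, not anything from Sen): n individual capabilities generate 2^n possible capability sets, which is why the two capabilities above yield four options.

```python
from itertools import combinations

def capability_sets(capabilities):
    """Enumerate every set of functionings a person could jointly choose.

    With n individual capabilities there are 2**n capability sets,
    so the options multiply much faster than the capabilities do.
    """
    sets = []
    for r in range(len(capabilities) + 1):
        sets.extend(combinations(capabilities, r))
    return sets

print(capability_sets(["parent", "employed"]))
# [(), ('parent',), ('employed',), ('parent', 'employed')]
```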

I will come to criticisms later on, but one thing people may note is that this approach generates an enormous number of capability sets, and we will need some way to rank them or condense the list. In theory, we would want to do this based on how much people value each capability set. I will discuss this issue in more detail in the third section.

Examples of why I love the capability approach

Here I’ll lay out a few examples that show why I think the capability approach is the best way to think about improving human welfare.

First, in opposition to preference-satisfaction approaches, the capability approach values options not taken. I think this accords with most of our intuitions, and that it takes real work for economics to train it out of people. Here are two examples:

  1. Imagine two children. The first has domineering parents who inform her that she has to grow up to be a doctor. They closely control her school and extracurriculars in order to make this happen, but she doesn’t mind. As it happens she wants to be a doctor, and she will grow up to be a doctor. The second child has parents who tell her she can do what she wants, and they broadly support her. She picks the same school and extracurricular options as the first child, and she also grows up to be a doctor. The two children had the same outcomes, and both were able to select their top options for school, extracurriculars, and career. On most preference-satisfaction approaches they are equally well off. Under the capability approach, however, the second child was much better off, as she had so many more options open to her.

  2. Imagine two cities. In one, it is safe for women to walk around at night and in the second it is not. I think the former city is better even if women don’t want to walk around at night, because I think that option is valuable to people even if they do not take it. Preference-satisfaction approaches miss this.

Second, I think that the capability approach gives us a better sense of who is facing deprivation, and of how to prioritize allocating resources/aid, than other approaches. This is partly because capabilities are genuine options to do functionings, and so they are objective in a way that SWB or happiness is not. For example, you either do or do not have the option of being well nourished. This objectivity lets the capability approach avoid problems that arise when people have odd functions mapping aid to utility or happiness: such people either become utility monsters or get judged unworthy of aid because the adversity they face doesn’t translate into SWB or utility.

As a toy example, we can imagine someone who has a low level of capabilities because of discrimination in their society. Such discrimination usually comes along with stories about why it is good or normal—or it’s simply taken for granted—and so it isn’t far-fetched to suggest that someone facing discrimination could have utility or happiness as high as people from groups that do not face such discrimination. One could conclude from this that there is nothing to be done, as discrimination doesn’t affect happiness in this case. I find this repugnant, as the group facing discrimination has fewer capabilities. While this is a toy example, it’s not hard to find real cases. For example, in the United States women were happier than men in the 1970s.[3] Does this imply that we should have been focusing more on helping men than women in the 1970s? I doubt it, and my reasoning is that 50 years ago women lacked many important capabilities that men had. Using subjective measures to allocate aid means that targeting will depend in part on people’s ability to imagine a better future (and thus feel dissatisfaction with the present). I don’t think conditioning aid on one’s imagination is justified, and so I would prefer measures based on objective criteria, such as whether or not one has the ability to do things (eat, vote, send kids to school).

Third, maximizing is (usually) perilous. I think the perilousness of maximizing strongly applies to maximizing money or happiness or SWB, but I actually think maximizing capabilities is a lot safer because they are options. I’ll give two examples, one personal and one that is (hopefully) more fanciful. Personally, I’m nearly always a 7/10 or 8/10 on the common happiness-type questions. I guess this means that I could “improve” this, but honestly I’m not trying to do this at all. Worse, if you told me that I was 10/10 happy over a long period of time, I would be worried about my mental state. I don’t want to be that happy. Like everyone, I’m a confused mess of priorities. I want some happiness, but I also want to work hard on things even if it ends up making me sad. I want to build things for other people, but also just so that they exist. I had kids even though I knew that in expectation they would make me less happy, and I don’t regret this choice even a little. I want good art to exist literally for its own sake. Sometimes I feel good but then seek out art that makes me so sad that I cry. (Model that!) I want humans to better understand the laws of the universe, regardless of happiness. When I reflect on what I care about, I care about humans becoming wildly capable. Further, I expect that as we increase our options we will all care about different things and maximize different things. This fits the capability approach really well because it gives us options. It does not fit any approach that seeks to maximize “the one best functioning.”

More fancifully, imagine that we manage to successfully train a god-like AI to actually maximize something.[4] I think if we make that something “happiness” then we’re in big trouble. We’ll all end up on some IV drug drip or the equivalent, and to me that’s a nightmare. However, if we maximize (value-weighted) capabilities then I think we’re in a much better position because, again, these are just options available to people.[5] My point here is not that this solves alignment or whatever; it’s that if you agree that maximizing capabilities is not as fraught as maximizing other things, then that’s a really big deal and strongly suggests that this approach is pointing us in a good direction.

The capability approach is less fraught than others because it is mostly agnostic about what people choose to do with their capabilities. This really matters because (1) we are so diverse, and (2) most optimizers (including us EAs) know less about what each person wants than that person does. As a final example of this problem, consider the billions of very religious people alive right now, many of whom live in low-income countries. Do they want to maximize their personal happiness or SWB? The religious people I know do not seem to want that. They care, among other things, about their religious community, about giving proper respect and glory to God, and about the afterlife. As EAs, we should try to give these (and all) people more options to do the things that they care about. We should not impose on them our favourite functioning, like having high SWB.

Downsides to the capability approach

While strong in theory, the capability approach has a number of serious downsides that limit how fully it can be implemented in practice. I will describe some of them here, and why I think that despite these downsides the capability approach offers the best conceptual machinery for thinking about how to most improve human welfare.

The first (potential) downside is that the capability approach is highly liberal and anti-paternalistic. It treats individuals as the unit of analysis (groups of people cannot have capabilities) and it assumes that people know best what they want. The goal of policy makers, EAs, aid workers, or some godlike AI is to give people valuable options. People then get to pick for themselves what they actually want to do. If you are not liberal or if you are paternalistic, then you may not like the capability approach.

A second downside is that the number of sets of capabilities is incredibly large, and the value that we would assign to each capability set likely varies quite a bit, making it difficult to cleanly measure what we might optimize for in an EA context. When faced with this problem, people do a few things. If you’re Martha Nussbaum, you end up making a master list of capabilities and then try to get people to focus on those. This is unattractive to me. If you’re Amartya Sen, you embrace the chaos and try to be pragmatic. Yes, it’s true that people would rank capability sets differently and that the sets are very high-dimensional, but that’s because life is actually like this. We should not see this and run away to the safety of clean (and surely wrong) simple indices. Instead, we should try to find ways of dealing with this chaos that are approximately right. Here are three examples that start with the theory of the capability approach but then make pragmatic concessions in order to be approximately right. My goal in giving these examples is not to say that they are ideal, but to illustrate how people try to start from the capability approach and then move into the realm of practice.

The Human Development Index grew out of a desire to have a country-level index inspired by the capability approach. This was always going to look odd, as countries cannot have capabilities. What emerged was a sort of average of country-level scores on education, health, and productivity. There are all kinds of issues with this, but I think it has value relative to an alternative that says “the HDI is confusing or assigns hard-to-defend weights to some dimension, so I will give 100% weight to dimension x (income, happiness) and 0% to everything else.” I’d rather we be approximately right than precisely wrong.
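For the curious, the modern HDI is specifically a geometric mean of three normalized dimension indices (health, education, income). Here is a rough sketch of that calculation; the goalposts and example numbers are illustrative stand-ins rather than the official UNDP values:

```python
import math

def dimension_index(value, lo, hi):
    """Normalize a raw value onto [0, 1] between fixed goalposts."""
    return (value - lo) / (hi - lo)

def hdi(life_expectancy, education_index, gni_per_capita):
    """Geometric mean of three dimension indices (the post-2010 HDI form).

    The goalposts below are illustrative; the UNDP technical notes give
    the official ones. Income enters in logs, reflecting diminishing
    returns to money.
    """
    health = dimension_index(life_expectancy, 20, 85)
    income = dimension_index(math.log(gni_per_capita),
                             math.log(100), math.log(75_000))
    return (health * education_index * income) ** (1 / 3)

print(round(hdi(72.0, 0.65, 12_000), 3))  # ≈ 0.722
```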

The second way of operationalizing the capability approach is to push things down to the level of individuals and then do a roughly similar kind of exercise. This yields, for example, the Multidimensional Poverty Index.
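Concretely, the MPI rests on the Alkire-Foster counting method: score each person by their weighted count of deprivations, call them poor if that score crosses a cutoff, then multiply the share of people who are poor by the average intensity of their poverty. Here is a stylized sketch with made-up indicators and weights (the real global MPI uses ten indicators across health, education, and living standards):

```python
def mpi(people, weights, k=1/3):
    """Alkire-Foster counting method behind the Multidimensional Poverty Index.

    people:  list of dicts mapping indicator -> True if the person is deprived
    weights: dict mapping indicator -> weight (weights sum to 1)
    k:       poverty cutoff; a person is multidimensionally poor if their
             weighted deprivation score is at least k

    Returns MPI = H * A: the headcount ratio times the average
    deprivation score among the poor.
    """
    scores = [
        sum(w for ind, w in weights.items() if person.get(ind, False))
        for person in people
    ]
    poor = [s for s in scores if s >= k]
    if not poor:
        return 0.0
    H = len(poor) / len(people)   # headcount ratio
    A = sum(poor) / len(poor)     # intensity among the poor
    return H * A

weights = {"nutrition": 1/3, "schooling": 1/3, "sanitation": 1/3}
people = [
    {"nutrition": True, "schooling": True},  # score 2/3 -> poor
    {"sanitation": True},                    # score 1/3 -> poor (at the cutoff)
    {},                                      # score 0   -> not poor
]
print(round(mpi(people, weights), 3))  # H = 2/3, A = 1/2, MPI ≈ 0.333
```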

The third approach, which I personally prefer, is to not even try to make an index, but instead to track various clearly important dimensions separately and to try to be open and pragmatic and get lots of feedback from the people “being helped.” If you take this approach and think about it a bit, you will realize that there are two things that are very important to a very large number of capability sets. Those things are (1) being alive, and (2) having resources, which often means money. The argument for this should be familiar to EAs, as it’s very similar to why some of us think that AI agents might try to avoid being turned off and to gather resources: these things are generally very useful.

The influence of the capability approach

This leads me to my last point, which is that the capability approach has been so influential in international development thought that many people and organizations do things that make sense under the capability approach even though they may not realize it. The Keynes quote about practical men being “the slaves of some defunct economist” applies here, though, incredibly, Amartya Sen is still alive.

The example most relevant to EA is GiveWell and OpenPhil both prioritizing income gains and lives saved as privileged metrics. This has sometimes been criticized, but under the capability approach it makes a lot of sense: if you don’t know precisely what to maximize for people, then picking staying alive and having resources is a very good start. I don’t know if people at these organizations actively read Sen or other related writers, but I think the capability approach offers a powerful defense of this choice. Money and staying alive are not necessarily where you want to end up, but they are very good starting points.

Thanks to anyone who read this far. If you want more, Inequality Re-examined by Amartya Sen is quite good. I’d also recommend the capability approach page of the Stanford Encyclopedia of Philosophy, which goes into details on key issues that I glossed over.

Minor edit: it was (correctly) pointed out to me on Twitter that the capability approach claims “that capabilities are the appropriate space for well-being comparisons and says nothing about whether capabilities should be maximized.” He’s right. My post mixes the capability approach with an implicit EA mindset, but for Sen those would be quite distinct.

  1. ^

    As will become clearer later, the capability approach aims to maximize the number of groups (sets) of capabilities that people can select. Talk of “maximizing capabilities” is lazy shorthand.

  2. ^

    They are options that you really could do. So, for example, given my age and fitness I do not have the capability of being a pro athlete. There is no rule stopping me, but let’s be real, it’s not going to happen.

  3. ^

    The male-female happiness gap in the US then shrank until the 2000s, when men and women in the US were about equally happy. Should you actually believe this result? Maybe not, but please apply any skepticism you feel about this result to all of the other happiness research.

  4. ^

    I’m not an AI person. Please let me use this as an example without people responding with comments about how some inner-optimizer is doing… whatever. Take the point as one about goals, not about training or anything else.

  5. ^

    The approach here would be something like: “maximize value-weighted sets of capabilities, where you can figure out the value of each set based on how people act. Be Bayesian and partially pool information across people who are socially close.” But again, let’s not get hung up on details that aren’t relevant to the broader post. And yes, we’d need to do something about animals, though recognize that “seeing animal x” is a functioning that many people value.
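If you do want one concrete version of the partial pooling in that last footnote, here is a minimal empirical-Bayes-style sketch. Everything in it (the function, the prior strength, the use of choice frequencies as value signals) is my own illustrative assumption, not a worked-out method from the literature:

```python
def pooled_value(person_rate, group_rate, n_obs, prior_strength=5.0):
    """Shrink one person's revealed valuation of a capability set toward
    the average among socially close people.

    person_rate: fraction of this person's observed choices picking the set
    group_rate:  the same fraction among socially close people
    n_obs:       number of choices observed for this person

    With few observations we mostly trust the group; with many, the person.
    The prior_strength of 5.0 is an arbitrary illustrative choice.
    """
    w = n_obs / (n_obs + prior_strength)
    return w * person_rate + (1 - w) * group_rate

# Someone observed only twice is pulled toward their community...
print(pooled_value(1.0, 0.3, n_obs=2))   # ≈ 0.5
# ...while someone observed many times speaks for themselves.
print(pooled_value(1.0, 0.3, n_obs=50))  # ≈ 0.94
```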