Thanks for posting this! I do think lots of people in EA take a more measuring-happiness/preference-satisfaction approach, and it’s really useful to offer alternatives that are popular elsewhere.
My notes and questions on the post:
Here’s how I understand the main framework of the “capability approach,” based mostly on this post, the linked Tweet, and some related resources (including SEP and ChatGPT):[1]
“Freedom to achieve [well-being]” is the main thing that matters from a moral perspective.
(This post then implies that we should focus on increasing people’s freedom to achieve well-being / we should maximize (value-weighted) capabilities.)
“Well-being” breaks down into functionings (stuff you can be or do, like jogging or being a parent) and capabilities (the ability to realize a functioning, i.e. to take certain options or make certain choices)
Examples of capabilities: having the option of becoming a parent, having the option of not having children, having the option of jogging, having the option of not jogging, etc. Note: if you live in a country where you’re allowed to jog, but there are no safe places to jog, you do not actually have the capability to jog.
Not all functionings/capabilities are equal: we shouldn’t naively list options and count them. (So e.g. the ability to spin and clap 756 times is not the same as the option to have children, jog, or practice a religion.) My understanding is that the capability approach doesn’t dictate a specific approach to comparing different capabilities, and the post argues that this is a complexity that is just a fact of life that we should accept and pragmatically move forward with:
“Yes, it’s true that people would rank capability sets differently and that they’re very high dimensional, but that’s because life is actually like this. We should not see this and run away to the safety of clean (and surely wrong) simple indices. Instead, we should try to find ways of dealing with this chaos that are approximately right.”
In particular, even if it turns out that someone is content not jogging, them having the ability to jog is still better than them not having this ability.
My understanding of the core arguments of the post, with some questions or concerns I have (corrections or clarifications very much appreciated!):
What the “capability approach” is — see above.
Why this approach is good
It generally aligns with our intuitions about what is good.
I view this as both a genuine positive, and also as slightly iffy as an argument — I think it’s good to ground an approach in intuitions like “it’s good for a woman to choose whether to walk at night even if she might not want to”, but when we get into things like comparing potential areas of work, I worry about us picking approaches that satisfy intuitions that might be wrong. See e.g. Don’t Balk at Animal-friendly Results, if I remember that argument correctly, or just consider various philanthropic efforts that focus on helping people locally even if they’re harder to help and in better conditions than people who are farther away — I think this is generally justified with things like “it’s important to help people locally,” which to me seems like over-fitting on intuitions.
At the same time, the point about women being happier than men in the 1970s in the US seems compelling. Similarly, I agree that I don’t personally maximize anything like my own well-being — I’m also “a confused mess of priorities.”
It’s safer to maximize capabilities than it is to maximize well-being (directly), which both means that it’s safer to use the capabilities approach and is a signal that the capabilities approach is “pointing us in the right direction.”
A potentially related point that I didn’t see explicitly: this approach also seems safer given our uncertainty about what people value/what matters. This is also related to 2d.
This approach is less dependent on things like people’s ability to imagine a better situation for themselves.
This approach is more agnostic about what people choose to do with their capabilities, which matters because we’re diverse and don’t really know that much about the people we’re trying to help.
This seems right, but I’m worried that once you add the value-weighting for the capabilities, you’re imposing your biases and your views on what matters in a similar way to other approaches to trying to compare different states of the world.
So it seems possible that this approach is either not very useful (it says “we need to maximize value-weighted capabilities, but we can’t choose the value-weightings”; see this comment, which makes sense to me), or it transforms back into a generic approach like the ones more commonly used in EA: deciding that there are good states and trying to get beings into those states (healthy, happy, etc.). [See 3bi for a counterpoint, though.]
Some downsides of the approach (as listed by the post)
It uses individuals as the unit of analysis and assumes that people know best what they want, and if you dislike that, you won’t like the approach. [SEE COMMENT THREAD...]
I just don’t really see this as a downside.
“A second downside is that the number of sets of capabilities is incredibly large, and the value that we would assign to each capability set likely varies quite a bit, making it difficult to cleanly measure what we might optimize for in an EA context.”
The post argues that we can accept this complexity and move forward pragmatically, and that this is better than going with clean-but-wrong indices. It lists three examples (two indices and one approach that tracks individual dimensions) that “start with the theory of the capability approach but then make pragmatic concessions in order to try to be approximately right.” These mostly seem to track things that are common prerequisites for many other capabilities, like health/being alive, resources, education, etc.
The influence of the capability approach
Three follow-up confusions/uncertainties/questions (beyond the ones embedded in the summary above):
Did I miss important points, or get something wrong above?
If we just claim that people value having freedoms (or freedoms that will help them achieve well-being), is this structurally similar to preference satisfaction?
The motivation for the approach makes intuitive sense to me, but I’m confused about how this works with various things I’ve heard about how choices are sometimes bad. (Wiki page I found after a quick search, which seemed relevant after a skim.) (I would buy that a lot of what I’ve heard is stuff that’s failed in replications, though.)
Sometimes I actually really want to be told, “we’re going jogging tonight,” instead of being asked, “So, what do you want to do?”
My guess is that these choices are different, and there’s something like a meta-freedom to choose when my choice gets taken away? But it’s all pretty muddled.
Thank you for this excellent summary! I can try to add a little extra information around some of the questions. I might miss some questions or comments, so do feel free to respond if I missed something or wrote something that was confusing.
--
On alignment with intuitions as being “slightly iffy as an argument”: I basically agree, but all of these theories necessarily bottom out somewhere, and I think they all basically bottom out in the same way (e.g. no one is a “pain maximizer”, because of our intuitions around pain being bad). I think we want to be careful about extrapolation, which may have been your point in the comment, because I think that is where we can either be overly conservative or overly “crazy” (in the spirit of the “crazy train”). Best I can tell, where one stops is mostly a matter of taste, even if we don’t like to admit that or state it bluntly. I wish it were not so.
--
“...I’m worried that once you add the value-weighting for the capabilities, you’re imposing your biases and your views on what matters in a similar way to other approaches to trying to compare different states of the world.”
I understand what you’re saying. As was noted in a comment, but not in my post, Sen in particular would advocate for a process where relatively small communities worked out for themselves which capabilities they cared most about and the ordering of the sets. This would not aggregate up into a global ordered list, but it would allow for prioritization within practical situations. If you want to depart from Sen but still respect the approach when doing this kind of weighting, you can draw on survey evidence (which is doable and is done in practice).
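To make this concrete, here is a minimal toy sketch (my own illustration, not from the post, with entirely made-up dimensions, profiles, and weights) of how survey-elicited value weights could be used to rank capability profiles, and of how the ranking depends on whose weights you use:

```python
# Toy sketch: value-weighted comparison of capability profiles.
# Everything here (dimensions, profiles, weights) is hypothetical.

CAPABILITY_DIMS = ["health", "mobility", "education", "political_voice"]

def weighted_score(profile, weights):
    """Score a capability profile (0-1 per dimension) with a set of value weights."""
    total = sum(weights[d] for d in CAPABILITY_DIMS)
    return sum(profile[d] * weights[d] for d in CAPABILITY_DIMS) / total

# Two hypothetical capability profiles an intervention might produce.
profile_a = {"health": 0.9, "mobility": 0.6, "education": 0.5, "political_voice": 0.3}
profile_b = {"health": 0.7, "mobility": 0.7, "education": 0.6, "political_voice": 0.7}

# Weights as they might be elicited from surveys of two different communities.
weights_1 = {"health": 5, "mobility": 2, "education": 2, "political_voice": 1}
weights_2 = {"health": 2, "mobility": 2, "education": 2, "political_voice": 4}

for name, w in [("community 1", weights_1), ("community 2", weights_2)]:
    a, b = weighted_score(profile_a, w), weighted_score(profile_b, w)
    print(f"{name}: A={a:.2f}, B={b:.2f} -> prefers {'A' if a > b else 'B'}")
```

The only point of the toy example is that the ranking of A versus B flips with the weights, so whether the weights come from the evaluator or are elicited from the affected people matters a great deal.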
--
I don’t think I have too much to add to 3bi or the questions around “does this collapse into preference satisfaction?”. I agree that in many places this approach will recommend things that look like normal welfarism. However, I think it’s very useful to remember that the reason we’re doing these things is not because we’re trying to maximize happiness or utility or whatnot. For example, if you think maximizing happiness is the actual goal, then it would make sense to benchmark lots of interventions on how effectively they do this per dollar (and this is done). To me, this is a mistake born of confusing the map with the territory. Someone inspired by the capability approach would likely track some uncontentiously important capabilities (life, health, happiness, at least basic education, poverty), see how various interventions impact them, and try to draw on evidence from the people affected about what they prioritize (this sort of thing is done).
Something I didn’t mention in the post that will also be different from normal welfarism is that the capability approach naturally builds in the idea that one’s endowments (wealth, but also social position, gender, physical fitness, etc.) interact with the commodities one can access to produce capabilities. So if we care about basic mobility (e.g. the capability to get to a store or market to buy food), then someone who is paraplegic, poor, and remote will need a larger transfer than someone who is able-bodied but poor and remote in order to get the same capability. This idea that we care about comparisons across people “in the capability space” rather than “in the money space” or “in the happiness space” can be important (e.g. it can inform how we draw poverty lines or compare interventions), and it is another place where the capability approach differs from others.
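A minimal sketch of that point (the conversion function and every number below are invented purely for illustration, not taken from the post or any real data): with different endowments, the same cash transfer buys very different amounts of the same capability, so equalizing “in the capability space” implies unequal transfers “in the money space”.

```python
# Toy sketch: money space vs. capability space. All numbers are hypothetical.

def mobility_capability(transfer, conversion_rate):
    """Capability (0-1) to reach a market, as a made-up function of a cash transfer.

    conversion_rate stands in for endowments (disability, remoteness, etc.)
    that determine how much capability each unit of money actually buys.
    """
    return min(1.0, transfer * conversion_rate)

def transfer_needed(target_capability, conversion_rate):
    """Smallest transfer that reaches the target capability level."""
    return target_capability / conversion_rate

able_bodied_rate = 0.02   # capability gained per unit of money (hypothetical)
paraplegic_rate = 0.005   # adapted transport is costlier, so money converts less well

print("Equal transfer of 40, compared in capability space:")
print("  able-bodied:", mobility_capability(40, able_bodied_rate))  # 0.8
print("  paraplegic: ", mobility_capability(40, paraplegic_rate))   # 0.2

print("Transfers needed for both to reach a capability of 0.8:")
print("  able-bodied:", transfer_needed(0.8, able_bodied_rate))  # 40.0
print("  paraplegic: ", transfer_needed(0.8, paraplegic_rate))   # 160.0
```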
All that said, I agree that in practice the stuff capability-inspired people do will often not look very different from what normal welfarism would recommend.
--
Related: you asked “If we just claim that people value having freedoms (or freedoms that will help them achieve well-being), is this structurally similar to preference satisfaction?”
I think this idea is similar to this comment and I think it will break for similar meta-level reasons. Also, it feels a bit odd to me to put myself in a preference satisfaction mindset and then assert someone’s preferences. To me, a huge part of the value of preference satisfaction approaches is that they respect individual preferences.
--
Re: paradox of choice: If more choices are bad for happiness, then this would be another place where the capability approach differs from a “max happiness” approach, at least in theory. In practice, one might think that the results of limiting choices are usually likely to be bad (who gets to set the limits? how? etc.), and so this won’t matter. I personally would bet against most of those empirical results mattering. I have large doubts that they would replicate in their original consumer-choice context, and even if they do replicate, I doubt they would apply to the “big” things in life that the capability approach would usually focus on. But all that said, I’m very comfortable with the idea that this approach may not max happiness (or any other single functioning).
On the particular example of: “Sometimes I actually really want to be told, ‘we’re going jogging tonight,’ instead of being asked, ‘So, what do you want to do?’”
Yeah, I’m with you on being told to exercise. I’m guessing you like this because you’re being told to do it, but you know that you have the option to refuse. I think that there are lots of cases where we like this sort of thing, and they often seem to exist around base appetites or body-related drives (e.g. sex, food, exercise). To me, this really speaks to the power of capabilities. My hunch is you like being told “you have to go jogging” when you know that you can refuse but you’d hate it if you were genuinely forced to go jogging (if you genuinely lacked the option to say no).
--
Again, thank you for such a careful read and for offering such a nice summary. In a lot of places you expressed these ideas better than I did. It was fun to read.
Great, thank you! I appreciate this response, it made sense and cleared some things up for me.
Re:
Yeah, I’m with you on being told to exercise. I’m guessing you like this because you’re being told to do it, but you know that you have the option to refuse.
I think you might be right, and this is just something like the power of defaults (rather than choices being taken away). Having good defaults is good.
(Also, I’m curating the post; I think more people should see it. Thanks again for sharing!)
[1] I don’t have a philosophy background, or much knowledge of philosophy!