Thanks very much for writing this. It’s helpful to have a clear and succinct summary of the capabilities approach on the Forum and I thought the post was constructive and well-written. It provides a nice counterpoint to HLI’s post, To WELLBY or not to WELLBY?
However, the capabilities approach (as you describe it here) strikes me as deeply paternalistic. How do we decide which capabilities to prioritise without asking people how much they value them? We can’t just defer to Nussbaum.
In the post you say:
The third approach, which I personally prefer, is to not even try to make an index but instead to track various clearly important dimensions separately
and also,
If you don’t know precisely what to maximize for people, then picking staying alive and having resources is a very good start.
To me, it looks like you’ve decided what the priorities should be based on what you think is “clearly important”. But as this post shows, humans are terrible at ‘affective forecasting’, i.e. we underestimate the importance of things that are resistant to hedonic adaptation and difficult to mentally simulate.
The thing is, we don’t have to guess. We can look at subjective wellbeing data from longitudinal studies to identify which capabilities have the most impact on people’s lives. The Origins of Happiness is the best example I’ve seen of this and is packed full of surprising insights. If adversity or discrimination had no effect on your subjective wellbeing, then those terms would be meaningless.
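For concreteness, here is a minimal sketch of the kind of analysis I have in mind, loosely in the spirit of The Origins of Happiness. It is a toy example with synthetic data and illustrative variable names (not the book’s or HLI’s actual model): a person-fixed-effects regression of 0–10 life satisfaction on candidate functionings, so each coefficient reflects within-person changes over time rather than stable differences between people.

```python
# Toy sketch (synthetic data, illustrative variable names; not HLI's or the
# book's actual model) of the kind of panel regression used in work like
# The Origins of Happiness: regress 0-10 life satisfaction on candidate
# functionings with person fixed effects, so each coefficient is identified
# from within-person changes over time rather than differences between people.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_people, n_waves = 500, 5
n = n_people * n_waves

df = pd.DataFrame({
    "person": np.repeat(np.arange(n_people), n_waves),
    "log_income": rng.normal(10, 1, n),
    "employed": rng.integers(0, 2, n),
    "partnered": rng.integers(0, 2, n),
    "poor_mental_health": rng.integers(0, 2, n),
})

# Stable person-level differences (e.g. temperament) that fixed effects absorb.
person_effect = np.repeat(rng.normal(0, 1, n_people), n_waves)

# Synthetic outcome: in this made-up example, mental health and employment
# matter more than income does.
df["life_satisfaction"] = (
    2.0
    + 0.3 * df["log_income"]
    + 0.6 * df["employed"]
    + 0.4 * df["partnered"]
    - 1.5 * df["poor_mental_health"]
    + person_effect
    + rng.normal(0, 1, n)
).clip(0, 10)

# C(person) adds person dummies, i.e. person fixed effects.
model = smf.ols(
    "life_satisfaction ~ log_income + employed + partnered"
    " + poor_mental_health + C(person)",
    data=df,
).fit()
print(model.params[["log_income", "employed", "partnered", "poor_mental_health"]])
```

The fixed effects are the key design choice here: they absorb stable traits such as temperament, so the comparison is within the same person over time rather than between happier and unhappier people.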
I think the crux of our different views is that I don’t see subjective wellbeing as one of many functionings. Instead, I place high credence on the view that wellbeing is the intrinsic good. Everyone cares about lots of things (positive emotions, achievements, having kids, art, knowledge, freedom, religious belief, etc.), but you need to make trade-offs between them, and that requires a common unit.
I’m nearly always a 7/10 or 8/10 on the common happiness-type questions. I guess that this means that I could “improve” this, but honestly I’m not trying to do this at all.
Here, I think you’re confusing emotional states with evaluations of life satisfaction. Most people don’t want to feel happy at a funeral. Instead, we want to be satisfied with our current experience, free from desires for a different state of affairs. When you chose to have kids, I expect you were trading off positive emotions for greater life satisfaction and that’s a totally reasonable thing to do. There’s a great clip of Daniel Kahneman discussing this here.
Using subjective measures to allocate aid means that targeting will depend in part on people’s ability to imagine a better future (and thus feel dissatisfaction with the present).
For me, dissatisfaction and suffering are synonymous, so I would prioritise an unhappy billionaire over a happy rural farmer, even though this may seem counterintuitive to many. In practice, however, there are a lot of unhappy rural farmers and it’s much cheaper to help them. The reason I work at the Happier Lives Institute is that I want to understand what will really help them the most, rather than deferring to the common assumption that it must be income gains and lives saved.
(commenting in a personal capacity etc.)
Thanks for these questions.
I think that there are two main points where we disagree: first on paternalism and second on prioritizing mental states. I don’t expect I will convince you, or vice versa, but I hope that a reply is useful for the sake of other readers.
On paternalism, what makes the capability approach anti-paternalistic is that the aim is to give people options, from which they can then do whatever they want. Somewhat loosely (see fn1 and the discussion in the text), for an EA the capability approach means trying to maximize people’s choices. If instead one decides to maximize any specific functioning, like happiness, then one is being paternalistic, because one has decided for them what matters. Now, you correctly noted that I said that in practice I think increasing income is useful. Importantly, this is not because “being rich” is a key functioning. It is because, for poor people, income is a factor limiting their ability to do very many things, so increasing their income increases their capabilities quite a bit. The same thing clearly applies to not dying. Perhaps of interest to HLI, I can believe that not being depressed or not having another serious mental health condition is also a very important functioning that unlocks many capabilities.
You wrote that “We can look at subjective wellbeing data from longitudinal studies to identify which capabilities have the most impact on people’s lives.” Putting aside gigantic causal inference questions, which matter, you still cannot identify “which capabilities have the most impact on people’s lives”. At best, you will identify which functionings cause increases in your measured dependent variable, which would be something like a happiness scale. To me, this is an impoverished view of what matters in people’s lives. I will note that you did not respond to my point about getting an AI to maximize happiness, or to the point that many people, such as many religious people, will tell you outright that they aren’t trying to maximize their own happiness. I think these arguments make the point that happiness is important, but it is not the one thing that we all care about.
On purely prioritizing mental states, I think it is a mistake to prioritize “an unhappy billionaire over a happy rural farmer.” I think happiness as the one master metric breaks in all sorts of real-life cases, such as the one that I gave of women in the 1970s. Rather than give more cases, which might at this point be tedious, I think we can productively relate this point back to paternalism. I think if we polled American women and asked if they would want to go back to the social world of the 1970s, when they were on average happier, they would overwhelmingly say no. I think this is because they value the freedoms they gained from the 1970s onward. If I am right that they would not want to go back to the 1970s, then saying that they are mistaken, and that life for American women was better in the 1970s, strikes me, again, as paternalistic.
Finally, I should also say thank you for engaging on this. I think the topic is important and I appreciate the questions and criticisms.
Thanks very much for your reply. I agree this topic is important and should be discussed more.
Re: paternalism
I guess all altruistic acts have some element of paternalism (despite our best intentions). I think we both agree that we should give people options to improve their wellbeing, rather than forcing them into something. However, we have to decide which option(s) to provide (increasing income, extending lives, treating mental illness, etc.), and this is where we differ. You seem to be prioritising the options based on intuition, whereas I prefer to use evidence from self-reports.
Re: mental states
In the case of polling women, the results would be subject to all of the affective forecasting biases I mentioned before. To avoid paternalism, we should let the data speak for itself. If women in the 1970s said they were 8/10 and women in the 2020s say they are 7/10 (I’m using made-up numbers here), then we should try to identify the cause(s) of that decline rather than dismiss the data on the assumption that life for women is clearly better than it used to be. Some things have clearly improved, but those improvements might be cancelled out by other factors which are not immediately obvious.
Re: AI maximising happiness
You said, “We’ll all end up on some IV drug drip or the equivalent, and to me that’s a nightmare”, but that’s a very speculative claim. Personally, I would be very surprised if a happiness-maximising AI put you in a situation that you perceived as a nightmare.
Re: religion
Here, I will defer to a blog post written by my colleague, Samuel Dupret, who thinks very deeply about this question.
Just to explain why I downvoted this comment: I think it is pretty defensive and does not really engage with the key points of the response, which gave no indication that would justify a conclusion like “You seem to be prioritising the options based on intuition, whereas I prefer to use evidence from self-reports.”
There is nothing in the capability approach as explained that would keep you from using survey data to decide which options to provide. On the contrary, I would argue it is more open and flexible in this respect, because it is less limited in the types of questions one can ask in such surveys. The capability approach simply highlights that life satisfaction or wellbeing are not necessarily the only measures that can be used. For instance, you could also ask which functionings provide meaning to people’s lives, which may be correlated with life satisfaction but is not necessarily the same thing (see the examples that were given).
On paternalism, just a note to point out that, unlike Nussbaum, Sen and others have resisted offering a specific list of capabilities, the idea being that these should not be handed down by economists but democratically derived. (I’m not sure how workable this is in practice or to what extent it’s been tried; I’d be interested if anyone knows more!)
That’s good to know, thanks for clarifying. A democratic process is definitely better than a top-down approach, but everyone who participates in that process will be subject to affective forecasting biases too. That’s why I favour using subjective wellbeing data, but I’m keen to hear about alternative options too.