I think that there are two main points where we disagree: first on paternalism and second on prioritizing mental states. I don’t expect I will convince you, or vice versa, but I hope that a reply is useful for the sake of other readers.
On paternalism, what makes the capability approach anti-paternalistic is that the aim is to give people options, which they can then use however they want. Somewhat loosely (see fn1 and the discussion in the text), for an EA the capability approach means trying to maximize people’s choices. If instead you decide to maximize any specific functioning, like happiness, then you are being paternalistic, because you have decided for them what matters. Now, you correctly noted that I said that in practice I think increasing income is useful. Importantly, this is not because “being rich” is a key functioning. It is because, for poor people, income is a factor limiting their ability to do a great many things, so increasing their income increases their capabilities quite a bit. The same clearly applies to not dying. Perhaps of interest to HLI, I can believe that not being depressed, or not having other serious mental conditions, is also a very important functioning that unlocks many capabilities.
You wrote that “We can look at subjective wellbeing data from longitudinal studies to identify which capabilities have the most impact on people’s lives.” Putting aside gigantic causal inference questions, which matter, you still cannot identify “which capabilities have the most impact on people’s lives”. At best, you will identify which functionings cause increases in your measured dependent variable (DV), which would be something like a happiness scale. To me, this is an impoverished view of what matters to people’s lives. I will note that you did not respond to my point about getting an AI to maximize happiness, or to the point that many people, such as many religious people, will tell you outright that they aren’t trying to maximize their own happiness. I think these arguments make the point that happiness is important, but it is not the one thing that we all care about.
On purely prioritizing mental states, I think it is a mistake to prioritize “an unhappy billionaire over a happy rural farmer.” I think happiness as the one master metric breaks down in all sorts of real-life cases, such as the one I gave of women in the 1970s. Rather than give more cases, which might at this point be tedious, I think we can productively relate this point back to paternalism. I think if we polled American women and asked whether they would want to go back to the social world of the 1970s, when they were on average happier, they would overwhelmingly say no. I think this is because they value the freedoms they have gained from the 1970s forward. If I am right that they would not want to go back to the 1970s, then to say that they are mistaken, and that life for American women was better in the 1970s, is, to me, again paternalistic.
Finally, I should also say thank you for engaging on this. I think the topic is important and I appreciate the questions and criticisms.
Thanks very much for your reply. I agree this topic is important and should be discussed more.
Re: paternalism
I guess all altruistic acts have some element of paternalism (despite our best intentions). I think we both agree that we should give people options to improve their wellbeing, rather than forcing them into something. However, we have to decide which option(s) to provide (increasing income, extending lives, treating mental illness, etc.), and this is where we differ. You seem to be prioritising the options based on intuition, whereas I prefer to use evidence from self-reports.
Re: mental states
In the case of polling women, the results would be subject to all of the affective forecasting biases I mentioned before. To avoid paternalism, we should let the data speak for itself. If women in the 1970s said they were 8/10 and women in the 2020s say they are 7/10 (I’m using made-up numbers here), then we should try to identify the cause(s) of that decline rather than dismiss the data on the assumption that life for women is clearly better than it used to be. Some things have clearly improved, but those improvements might be cancelled out by other factors which are not immediately obvious.
Re: AI maximising happiness
You said: “We’ll all end up on some IV drug drip or the equivalent, and to me that’s a nightmare.” But that’s a very speculative claim. Personally, I would be very surprised if a happiness-maximising AI put you in a situation that you perceived as a nightmare.
Re: religion
Here, I will defer to a blog post written by my colleague, Samuel Dupret, who thinks very deeply about this question.
Just to explain why I downvoted this comment: I think it is pretty defensive and does not really engage with the key points of the response, which gave no indication that would justify a conclusion like: “You seem to be prioritising the options based on intuition, whereas I prefer to use evidence from self-reports.”
There is nothing in the capability approach as explained that would keep you from using survey data to decide which options to provide. On the contrary, I would argue that it is more open and flexible for such an approach, because it is less limited in the types of questions that can be asked in such surveys. The capability approach simply highlights that life satisfaction or wellbeing are not necessarily the only measures that can be used. For instance, you could also ask which functionings provide meaning to people’s lives, which may be correlated with life satisfaction but is not necessarily the same thing (see, e.g., the examples that were given).