While the capability approach has real strengths, such as measuring wellbeing in terms of people’s positive freedom (people are only “free” if they have meaningful opportunities available to them, not merely if others refrain from infringing on them), one downside is that it inherits the same problems as other utilitarian metrics when the goal is to maximise wellbeing. For example, even in cases of discrimination, if those doing the discriminating gained more capabilities than those being discriminated against lost, the discrimination would be justified on net. One would still need a harm cap: a rule stating that once any person or group loses enough capabilities, no such action is justified, even if capabilities increase in aggregate.
Also, I think the problem with traditional measures of wellbeing (e.g., happiness, subjective wellbeing), namely that they often fail to align with people’s actual priorities, could be solved if the metric measured were personal meaning: even if having children, practising a religion, or viewing art that evokes sadness does not maximise happiness, each can foster a sense of subjective meaning. That said, this still isn’t perfect, as the AI thought experiment could simply be rerun with a different drug, one that targets neurotransmitters like serotonin or oxytocin rather than dopamine.