Good questions.
I tried to address the first one in the second part of the Downsides section. It is indeed the case that while the list of capability sets available to you is objective, your personal ranking of them is subjective and the weights can vary quite a bit. I don’t think this problem is worse than the problems other theories face (it turns out adding up utility is hard), but it is a problem. I don’t want to repeat myself too much, but you can respond to this by trying to make a minimal list of capabilities that we all value highly (Nussbaum), or you can try to be very contextual (within a society or subgroup of a society, the weights may not differ that much), or you can try to find minimal things that unlock lots of capabilities (like income or staying alive). There may be other things one can do too. I’d say more research here could be very useful; this approach is very young.
Re: actually satisfying preferences, if my examples about the kid growing up to be a doctor or the option to walk around at night don’t speak to you, then perhaps we just have different intuitions. One thing I will say on this is that you might think your preferences are satisfied when the set of options is small (you’ll always have a top choice, and you might even feel quite good about it), but if the set grows you might realize that the old thing you were satisfied with is no longer what you want. You’ll only realize this if we keep expanding the capability sets you can pick from, so it does seem useful to me to try to maximize the number of (value-weighted) capability sets available to people.
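To make the “objective sets, subjective weights” point a bit more concrete, here is a toy sketch of my own (the capabilities, weights, and numbers are made up for illustration, not drawn from the capability literature): each person scores the same list of capability sets with their own weights, so their rankings can differ, but adding a new set to the list can only leave each person’s best option the same or make it better.

```python
# Toy illustration: the list of capability sets is "objective", but each
# person's weights over individual capabilities are subjective, so rankings
# differ. Expanding the available sets never makes anyone's best option worse.

# Hypothetical capabilities and capability sets, for illustration only.
capability_sets = {
    "A": {"be_nourished", "walk_at_night"},
    "B": {"be_nourished", "become_doctor"},
}

# Two people with different (subjective) weights over capabilities.
weights = {
    "person_1": {"be_nourished": 1.0, "walk_at_night": 0.8, "become_doctor": 0.2},
    "person_2": {"be_nourished": 1.0, "walk_at_night": 0.1, "become_doctor": 0.9},
}

def value(cap_set, w):
    """Value of one capability set under a person's weights."""
    return sum(w.get(c, 0.0) for c in cap_set)

def best_option(sets, w):
    """The set a person would pick from what's available, and its value."""
    name = max(sets, key=lambda s: value(sets[s], w))
    return name, value(sets[name], w)

for person, w in weights.items():
    print(person, "picks", best_option(capability_sets, w))

# Now "expand" the available options by adding a new capability set C.
capability_sets["C"] = {"be_nourished", "walk_at_night", "become_doctor"}

for person, w in weights.items():
    # Each person's best achievable value can only stay the same or go up.
    print(person, "now picks", best_option(capability_sets, w))
```

In this sketch the two people rank A and B in opposite orders, which is the subjectivity problem; but once C is added, both are at least as well off by their own lights, which is the sense in which expanding the (value-weighted) capability sets available to people seems useful even before we settle on a single weighting.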