Fwiw, I think that both moral uncertainty and non-moral epistemic uncertainty (if you’ll allow the distinction) suggest we should assign some weight to what people say is valuable.
Moral uncertainty may suggest we should assign some weight to views other than hedonistic utilitarianism. This includes other moral views, not just people’s preferences, and we can discern what moral views people endorse through surveys (as I mention here). So we should ask about what people value and/or think is morally right and good, not merely what they prefer.
In addition, some moral views assign value to things which can be determined through surveys, including preferences (which you mention), but potentially also things like respecting people’s values, autonomy/self-determination, democratic will, and not overriding people’s wishes or coercing them.
But, separately, even if we only value maximizing wellbeing, then given uncertainty about what promotes it and error in our measures of it (and perhaps also conceptual uncertainty about what wellbeing consists in, though this may collapse into moral uncertainty), I think it’s plausible we should assign some weight to what people say they prefer when judging what is likely to promote their wellbeing. For example, if we observe that having children seems to lead to lower wellbeing, but people report that they value and prefer having children, that preference seems like it should be assigned some weight.
Only ~7% of all people who ever lived are currently alive. What’s the justification for focusing on humans living in 2022? Is it just that figuring out the values of past generations is less tractable?
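The ~7% figure is a quick back-of-the-envelope result; a minimal sketch, assuming a PRB-style estimate of roughly 117 billion humans ever born and a 2022 world population of roughly 8 billion (both figures are assumptions, not from this thread):

```python
# Rough sanity check of "~7% of all humans who ever lived are alive today".
# Both figures below are assumed round numbers, not precise data.
EVER_BORN = 117e9   # assumed cumulative human births in history (~PRB estimate)
ALIVE_2022 = 8e9    # assumed world population in 2022

share_alive = ALIVE_2022 / EVER_BORN
print(f"{share_alive:.1%}")  # prints 6.8%, i.e. roughly 7%
```

On these assumptions the share comes out just under 7%, so the claim is in the right ballpark even if the underlying estimates shift by several billion.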
It seems plausible that we should assign weight to what past generations valued (though one would likely not use survey methodology to do this), as well as what future generations will value, insofar as that is knowable.
I agree that moral uncertainty implies it’s a good idea to know what people’s moral views are.
Related to your last point:
given uncertainty about what promotes this / measurement error in our measures of it (and perhaps also conceptual uncertainty about what it consists in, though this may collapse into moral uncertainty), I think it’s plausible we should assign some weight to what people say they prefer in judging what is likely to promote their wellbeing.
Many EAs want to maximize wellbeing, and many pursue that aim using evidence. Given that, I’d be curious to know how views differ between experts, EAs, and the public on what wellbeing is and how we can measure it. I wrote a very rough example of the type of questions I could imagine asking in this document.