I agree that moral uncertainty implies it’s a good idea to know what people’s moral views are.
Related to your last point:
Given uncertainty about what promotes wellbeing and measurement error in our measures of it (and perhaps also conceptual uncertainty about what wellbeing consists in, though this may collapse into moral uncertainty), I think it's plausible we should assign some weight to what people say they prefer when judging what is likely to promote their wellbeing.
Many EAs want to maximize wellbeing, and many pursue that aim using evidence. Given that, I'd be curious to know how views differ between experts, EAs, and the public on what wellbeing is and how we can measure it. I wrote a very rough example of the type of questions I could imagine asking in this document.