Public surveys would be crucial for developing better QALYs / DALYs / WELBYs / etc (see these posts).
Public surveys are also needed to make trade-offs between health and things not captured by QALYs / DALYs (such as increased income or justice), to trade off between years of life and quality of life (especially for some population ethics views), and so on.
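To make the years-of-life vs quality-of-life trade-off concrete, here is a minimal sketch of how a survey-elicited time trade-off (TTO) answer can be converted into a QALY quality weight. All numbers and function names are hypothetical illustrations, not a description of any existing instrument:

```python
# Hypothetical sketch: converting a time trade-off (TTO) survey answer
# into a QALY quality weight, then using it to compare interventions.

def tto_quality_weight(years_full_health: float, years_in_state: float) -> float:
    """If a respondent judges `years_in_state` years in a health state
    equivalent to `years_full_health` years in full health, the implied
    quality weight is the ratio of the two."""
    return years_full_health / years_in_state

def qalys(years: float, quality_weight: float) -> float:
    """QALYs are years of life weighted by quality of life."""
    return years * quality_weight

# Hypothetical respondent: 10 years with severe depression is judged
# equivalent to 6 years in full health, implying a quality weight of 0.6.
w = tto_quality_weight(6, 10)

# Trade-off: extending life by 5 years at this quality vs. fully
# relieving the condition for 8 years (a gain of 8 * (1 - w) QALYs).
extend_life_gain = qalys(5, w)
treatment_gain = 8 * (1 - w)
print(extend_life_gain, treatment_gain)
```

Under these made-up numbers the treatment option narrowly wins (about 3.2 vs 3.0 QALYs), illustrating how survey-elicited weights let quality and length of life be traded off on one scale.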
Surveys in developing countries would be particularly useful.
SoGive conducts research like this as part of its moral weights process.
Our first such study was last year and focused primarily on how much people value:
Saving lives
Doubling consumption
Averting severe depression
Averting animal suffering
We invested less effort in, but also explored:
Saving species from extinction
Comparing a life in the far future with a life today
Education
We were interested in comparing them against each other, and in a quantitative comparison (i.e. how much more one is valued than another).
We were motivated to conduct this research because our work is intended to serve a broad audience, and we wanted to incorporate the perspective of the population as a whole.
Because the questions we wanted to answer were quantitative, we posed quantitative questions in the survey. When consistency checks were included in the survey, we found that several respondents gave answers which did not seem consistent. This is a key challenge for this type of work, and suggests that a slightly more indirect approach to answering the questions may be more effective.
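One simple form such a consistency check can take is testing whether pairwise value ratios chain multiplicatively. The sketch below is purely illustrative (the function, data, and tolerance are all hypothetical, not SoGive's actual method): if a respondent values A at 5x B and B at 2x C, a direct A-vs-C answer far from 10x gets flagged.

```python
# Hypothetical sketch of a multiplicative consistency check on
# pairwise moral-weight ratios elicited from one survey respondent.
from itertools import permutations
from math import log

def consistency_flags(ratios: dict[tuple[str, str], float],
                      tolerance: float = 0.5) -> list[tuple[str, str, str]]:
    """Flag triples (a, b, c) where the direct ratio a:c deviates from
    the chained ratio (a:b) * (b:c) by more than `tolerance` in log-space."""
    flags = []
    items = sorted({x for pair in ratios for x in pair})
    for a, b, c in permutations(items, 3):
        if (a, b) in ratios and (b, c) in ratios and (a, c) in ratios:
            chained = ratios[(a, b)] * ratios[(b, c)]
            if abs(log(ratios[(a, c)]) - log(chained)) > tolerance:
                flags.append((a, b, c))
    return flags

# One respondent's (made-up) answers: the direct life:consumption ratio
# of 40 clashes with the chained value of 5 * 2 = 10.
answers = {
    ("life", "depression"): 5.0,
    ("depression", "consumption"): 2.0,
    ("life", "consumption"): 40.0,
}
print(consistency_flags(answers))  # -> [('life', 'depression', 'consumption')]
```

Flagged triples could then prompt a follow-up question rather than a direct discard, which is one way of making the elicitation slightly more indirect.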
I think that J-PAL has done some work on this, but I can’t find the papers after a quick Google search.
I am a bit confused about why this is actually useful. Is it mostly for optimising for preferences? If so, I can see why this would be useful, but if you don’t strongly prioritise preferences then I don’t see how this would help you create better metrics.
I think my question is: ‘For people who don’t prioritise preferences (e.g. hedonistic utilitarians), do you still think that moral weight surveying is useful?’
E.g. maybe an intervention will be more widely adopted by a community if it appeals to their preferences, so efficacy is increased even from a non-preference perspective.
Fwiw, I think that both moral uncertainty and non-moral epistemic uncertainty (if you’ll allow the distinction) suggest we should assign some weight to what people say is valuable.
Moral uncertainty may suggest we should assign some weight to views other than hedonistic utilitarianism. This includes other moral views, not just people’s preferences, and we can discern what moral views people endorse through surveys (as I mention here). So we should ask about what people value and/or think is morally right and good, not merely what they prefer.
In addition, some moral views assign value to things which can be determined through surveys, including preferences (which you mention), but potentially including things like respecting people’s values, autonomy/self-determination, democratic will, and not traducing people’s wishes or coercing them.
But, separately, even if we only value maximizing wellbeing, given uncertainty about what promotes this / measurement error in our measures of it (and perhaps also conceptual uncertainty about what it consists in, though this may collapse into moral uncertainty), I think it’s plausible we should assign some weight to what people say they prefer in judging what is likely to promote their wellbeing. For example, if we observe that having children seems to lead to lower wellbeing, but that people report that they value and prefer having children, that seems like it should be assigned some weight.
I think that both moral uncertainty and non-moral epistemic uncertainty (if you’ll allow the distinction) suggest we should assign some weight to what people say is valuable.
Only ~7% of all people who ever lived are currently alive. What’s the justification for focusing on humans living in 2022? Is it just that figuring out the values of past generations is less tractable?
It seems plausible that we should assign weight to what past generations valued (though one would likely not use survey methodology to do this), as well as what future generations will value, insofar as that is knowable.
I agree that moral uncertainty implies it’s a good idea to know what people’s moral views are.
Related to your last point:
given uncertainty about what promotes this / measurement error in our measures of it (and perhaps also conceptual uncertainty about what it consists in, though this may collapse into moral uncertainty), I think it’s plausible we should assign some weight to what people say they prefer in judging what is likely to promote their wellbeing.
Many EAs want to maximize wellbeing, and many pursue that aim using evidence. Given that, I’d be curious to know how views differ between experts, EAs, and the public on what wellbeing is and how we can measure it. I wrote a very rough example of the type of questions I could imagine asking in this document.
I would be interested to know the results of such a survey on these topics.
Similarly, if experimental philosophy hasn’t already answered these questions, I’d like to know whether the public has any coherent views on what wellbeing is and how bad death is. I haven’t found anything in the literature that I could use, but I’m not very familiar with the research in this space. I think David has mentioned there being some extant literature surveying views on the badness of death, but I was not able to find it.
I know, David, that you said:
I mostly intend to rule out things like surveying effective altruists or elite policy-makers.
But if we survey the public’s moral views, I’d like to know how much they differ from EAs’, at the very least for communications purposes.
Any human-focused moral weights work!!
How much do members of the public care about:
Subjective wellbeing
Increases in income
Increases in happiness
Reductions in pain
Mental health
Education
Being alive