What, if anything, might be a better question and/or set of answers?
I’ve been thinking it’d be best to have the question ask what philosophy, if any, people lean towards, and to drop the parenthetical clause from the answer option “Consequentialist/utilitarian (or think that this covers the most important considerations)”. It might also be good to encourage people more strongly to select something like ‘don’t know’ or ‘not familiar with the philosophies’ where appropriate.
I don’t think the questions can be improved much such that people not already familiar with the theories could meaningfully answer them, so I would recommend keeping them limited to the explicit theories and viewing them as targeting only people who are familiar with the theories and can identify as leaning towards one or another, while encouraging people who aren’t familiar with the theories to just select the “not sure” or “not familiar” option. (You could even make the question “Which of these moral theories do you lean towards?” conditional on answering yes to “Are you familiar with any/all of these moral theories?”)
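For concreteness, here is a minimal sketch of that conditional structure in Python. The question wording is taken from above, but the answer options and the ask() helper are hypothetical stand-ins; a real survey platform would express this with its own skip logic.

```python
# Sketch of the suggested branching: the theory question is only shown
# to respondents who first report familiarity with the theories.
# The option list and the ask() helper are hypothetical stand-ins.

FAMILIARITY_Q = "Are you familiar with any/all of these moral theories?"
THEORY_Q = "Which of these moral theories do you lean towards?"
THEORY_OPTIONS = [
    "Consequentialist/utilitarian",  # hypothetical option list
    "Deontological",
    "Virtue ethics",
    "Not sure",
]

def ask(question, options):
    """Show a single-choice question and return the selected option."""
    print(question)
    for i, opt in enumerate(options, 1):
        print(f"  {i}. {opt}")
    return options[int(input("> ")) - 1]

def run_survey():
    # Everyone answers the familiarity question; only "Yes" respondents
    # see the theory question, so unfamiliar respondents are never
    # pushed into picking a theory.
    if ask(FAMILIARITY_Q, ["Yes", "No"]) == "Yes":
        return ask(THEORY_Q, THEORY_OPTIONS)
    return "Not familiar with the theories"
```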
I’m sceptical of the possibility of framing questions that can be meaningfully answered by people who don’t already have explicit commitments vis-a-vis the moral theories. I’ve found it very difficult to propose questions about people’s moral commitments that they will understand and actually have a view on. I’ve been involved in, I think, five surveys aimed at testing people’s moral commitments so far, and have found that however seemingly plain you make the language, people almost invariably understand it in deviant ways (because the theoretical questions are so divorced from how most people think about moral decision-making, unless they have already been inducted into this way of dividing up moral thought theoretically). Even questions like “two people disagree about whether such and such is right or wrong; must one of them be mistaken?” (the question Goodwin and Darley use) elicit completely different understandings from the one intended (e.g. people answer as though you asked whether the people were justified in making a mistake, not whether they were mistaken or not).

Actual mistakes aside, I think that asking people whether “consequences” or “rules” guide their decision-making (or whatever) will likely be largely meaningless to people who haven’t already thought through these theories explicitly. Most people recognise both rules and consequences as considerations in certain circumstances, but won’t have any stance on whether either of these is a criterion of rightness, which is what’s at stake in these questions. When I asked people about their moral views in in-depth interviews, respondents invariably shifted happily between seemingly incompatible positions sentence by sentence, so I think a simple tick-box survey question will fail to capture this kind of variability, inconsistency and indeterminacy in people’s actual moral positions (except in those who are already explicitly committed to some theory anyway).
I recognise this isn’t a comprehensive case thus far, so I’m happy to elaborate further if people have objections.
One approach would be: ‘1. Do you have a moral philosophy?’ followed by ‘2. How do you describe your moral philosophy? _’
An alternative would be to use plain language, e.g. “What guides your moral decisions?” (the consequences of my actions / the rules I’m following / the person I want to be), with the ability to check multiple boxes.
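As a sketch, that check-multiple-boxes format could look like this in Python; the question and option wording are from the suggestion above, and the ask_multi() helper is a hypothetical stand-in for a survey tool’s multi-select widget.

```python
# Sketch of the plain-language, check-all-that-apply alternative.
# The ask_multi() helper is a hypothetical stand-in for a survey
# tool's multi-select widget.

GUIDES_Q = "What guides your moral decisions? (check all that apply)"
GUIDES_OPTIONS = [
    "The consequences of my actions",
    "The rules I'm following",
    "The person I want to be",
]

def ask_multi(question, options):
    """Show a multi-select question and return the ticked options."""
    print(question)
    for i, opt in enumerate(options, 1):
        print(f"  [{i}] {opt}")
    raw = input("Numbers, comma-separated> ")
    picked = sorted({int(tok) for tok in raw.split(",") if tok.strip()})
    return [options[i - 1] for i in picked]
```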
The results would still be open to some doubt, because people would be more likely to say that consequences are important to their decisions when they know they’re being asked about effective altruism, but that part is less avoidable.
“What guides your moral decisions? (the consequences of my actions/the rules I’m following)” wouldn’t distinguish between people with consequentialist and non-consequentialist intuitions if they weren’t familiar with philosophy.
If people said their moral decisions came from wanting to make as many people as possible happier, then that would reveal a pretty consequentialist intuition.
The complication is that the distinctive aspect of consequentialism is that it makes this the only motive or consideration, and it’s hard to discover what the general public think about this, as they’re not used to breaking morality down into all its component factors to find an exhaustive list of their motives or considerations.