In my previous job, we used the technique described below to prioritize feature requests and estimate their relative value. Feel free to skip this comment if you’re not interested in slightly related survey techniques.
Show a random sample of five items to a survey participant
Participant selects the most important and least important (leaving three items “somewhere in-between”)
Repeat
Each iteration creates six links between items (A > B, A > C, A > D, B > E, C > E, D > E) plus, transitively, A > E. After enough iterations, a preference order can be established using something like the Schulze Method.
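As a sketch of how rounds like this could be aggregated, here is a minimal Schulze-style implementation (the function names and the simple wins-based final ranking are my own simplifications, not the exact method used at that job):

```python
def links_from_round(best, worst, middle):
    """One survey round: the best item beats everything shown,
    and everything shown beats the worst item."""
    links = [(best, m) for m in middle]    # A > B, A > C, A > D
    links += [(m, worst) for m in middle]  # B > E, C > E, D > E
    links.append((best, worst))            # transitively, A > E
    return links

def schulze_order(candidates, links):
    """Rank candidates from pairwise links using the Schulze
    widest-path rule (Floyd-Warshall over link strengths)."""
    d = {(a, b): 0 for a in candidates for b in candidates}
    for winner, loser in links:
        d[(winner, loser)] += 1
    # p[(a, b)]: strength of the strongest path from a to b
    p = {(a, b): d[(a, b)] if d[(a, b)] > d[(b, a)] else 0
         for a in candidates for b in candidates if a != b}
    for i in candidates:
        for j in candidates:
            if i == j:
                continue
            for k in candidates:
                if k in (i, j):
                    continue
                p[(j, k)] = max(p[(j, k)], min(p[(j, i)], p[(i, k)]))
    # score each candidate by how many others it beats via strongest paths
    wins = {a: sum(1 for b in candidates
                   if a != b and p[(a, b)] > p[(b, a)])
            for a in candidates}
    return sorted(candidates, key=wins.get, reverse=True)
```

With only the single round from the example, A comes out first and E last, while B, C, and D remain tied in the middle; repeated rounds over overlapping samples gradually resolve those ties.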
I’ve forgotten the name of this survey method, but I found it quite neat. It is both easy for participants to use and yields rich information. I remember participants saying that it was “hard to cheat” in this type of survey, so it might result in fewer inconsistencies than using the utility function extractor.
Thank you for telling us about this! In economics, discrete choice models are used to estimate a scale-free utility function in a similar way. They are used in health research for estimating QALYs, among other things; see e.g. this review paper.
But discrete choice / the Schulze method should probably not be used on their own, as they can only give us information about ordering, not scale. A possibility I find promising is to combine the methods. Say I have ten items I0…I9 that I want you to rate. Then I can ask “Do you prefer Ii to Ij?” for some pairs and “How many times better is Ii than Ij?” for other pairs, hopefully choosing the pairs optimally. That would lessen the cognitive load on study participants and make it easier to scale this kind of thing up.
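To sketch the ratio half of this combination: each “how many times better” answer gives a linear equation in log-utilities, which can be fit by least squares, and the ordinal answers can then serve as a consistency check. The data below is invented for illustration:

```python
import numpy as np

# Hypothetical ratio answers: (i, j, r) means "item i is r times better than item j"
ratio_answers = [(0, 1, 2.0), (1, 2, 1.5), (0, 2, 3.0)]
n_items = 3

# Each ratio answer gives one linear equation: log u_i - log u_j = log r
A = np.zeros((len(ratio_answers), n_items))
b = np.zeros(len(ratio_answers))
for row, (i, j, r) in enumerate(ratio_answers):
    A[row, i], A[row, j] = 1.0, -1.0
    b[row] = np.log(r)

# Pin log u_0 = 0 to remove the scale-free degree of freedom
A = np.vstack([A, np.eye(1, n_items)])
b = np.append(b, 0.0)

log_u, *_ = np.linalg.lstsq(A, b, rcond=None)
u = np.exp(log_u - log_u.min())  # utilities, smallest normalized to 1

# Ordinal answers ("Do you prefer Ii to Ij?") as a cheap consistency check:
ordinal_answers = [(0, 1), (1, 2)]  # (preferred, not preferred)
consistent = all(u[i] > u[j] for i, j in ordinal_answers)
```

Since the invented answers here are mutually consistent, the least-squares fit recovers them exactly (u ≈ [3, 1.5, 1]); with noisy real responses it would instead average out the contradictions.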
(The cognitive load of using distributions is the main reason why I’m skeptical about having participants use them in place of point estimates when doing pairwise comparisons.)