Yeah, I definitely agree that asking multiple questions per object of interest to assess reliability would be good. But I also agree that this would lengthen a survey that people already thought was too long (which would likely reduce response quality in itself). So I think this would only be possible if people wanted us to prioritise gathering more data about a smaller number of questions.
Fwiw, for the value of hires questions, we have at least seen these questions posed in multiple different ways over the years (e.g. here), and they have consistently produced very high valuations. My guess is that, if those high valuations are misleading, this is driven more by factors like social desirability than by difficulty or conceptual confusion. There are some other questions which have been asked in different ways across years (we made a few changes to the wording this year to improve clarity, but aimed to keep the wording the same where possible), but I've not formally assessed how those results differ.