I don’t think we have data on selection bias (and I can’t think of a good way to measure this).
Yes, the 2019 survey’s matched-panel data is certainly comparable, but some other responses may not be comparable (in contrast to our 2022 survey, where we asked the old questions to a mostly-new set of humans).
One thing you can do is collect some demographic variables on non-respondents and check whether respondents and non-respondents differ on them. You could then see whether the variables that show self-selection also correlate with particular answers. Baobao Zhang and Noemi Dreksler did some of this work for the 2019 survey (found in D1/page 32 here: https://arxiv.org/pdf/2206.04132.pdf).
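To make the two-step check concrete, here is a minimal sketch in Python. It assumes you have a demographic variable (e.g., subfield, perhaps scraped from public researcher profiles) for everyone invited, plus substantive answers for respondents; all column names and the toy data are hypothetical, not from the actual surveys.

```python
# Sketch of the non-respondent self-selection check described above.
# Assumes hypothetical columns: "subfield" (demographic, known for all
# invitees) and "risk_estimate" (a substantive answer, respondents only).
import pandas as pd
from scipy.stats import chi2_contingency

# Step 1: does the demographic variable predict responding at all?
invited = pd.DataFrame({
    "subfield":  ["ML", "ML", "NLP", "vision", "NLP", "ML", "vision", "NLP"],
    "responded": [True, False, True, True, False, False, True, False],
})
table = pd.crosstab(invited["subfield"], invited["responded"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"subfield vs. response: chi2={chi2:.2f}, p={p:.3f}")

# Step 2: among respondents, does that same variable correlate with a
# substantive answer? If it both predicts responding and predicts answers,
# the headline results are plausibly skewed by self-selection.
respondents = pd.DataFrame({
    "subfield":      ["ML", "NLP", "vision", "vision"],
    "risk_estimate": [0.10, 0.05, 0.02, 0.04],
})
print(respondents.groupby("subfield")["risk_estimate"].mean())
```

With real data you would want proper sample sizes and multiple demographic variables, but the structure is the same: a response-rate test per variable, then an answer-correlation test among respondents.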
Ah, yes, sorry I was unclear; I claim there’s no good way to determine bias from the MIRI logo in particular (or the Oxford logo, or various word choices in the survey email, etc.).
Sounds right!