“I reused the diet questions in my plan from MFA 2013 study on leafleting”
In my view, this study asked way too much. When you ask for too much detail, people drop out. Additionally, it asks about things like diet change, but to pick up on changes we should be comparing the experimental and control groups, not comparing one group with its (reported) earlier self.
What I’d like to see is just “do you eat meat” along with a few distractor questions:
Are you religious?
Is English your native language?
Do you eat meat?
Do you own a car?
Yes, we’d like to know way more detail than this, and in practice people are weird about how they use “meat”, but the main issue here is getting enough responses to be able to see any difference at all between the two groups.
“I reused the diet questions in my plan from MFA 2013 study on leafleting”
Ah, sorry, again I was not quite clear: what I meant was that the question about diet is one I had copied from the MFA study, not that I would reuse all of their questions. The questions I list in 2.1 are the only ones I would ask (probably with a bonus distractor question, and maybe some extra options as suggested by jimrandomh).
Additionally, it asks about things like diet change, but to pick up on changes we should be comparing the experimental and control groups, not comparing one group with its (reported) earlier self.
Asking about ‘change in diet’ rather than diet in general is basically required to get sufficient statistical power: the base rate of people saying yes to “have you become vegetarian in the last two weeks” is much, much lower than for “are you vegetarian”, but the effect size we are looking for is the same in each case. One can then compare the control and experimental groups on this metric.
To illustrate the size of this effect, in the post I calculate that with a sample of 5000, asking about change in the last two weeks would give you a 90% chance to find an effect of 1⁄124 leaflets creating one vegetarian, but if you just asked “are you vegetarian?” you would only be able to find a 1⁄24 effect at the same power (assuming a 20% base rate of vegetarianism, and using this calculator).
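To see where numbers in this ballpark come from, here is a rough sketch of the power calculation, using the standard normal-approximation two-proportion test (not necessarily the exact calculator used in the post, and the 0.5% base rate for “became vegetarian in the last two weeks” is my own illustrative assumption):

```python
import math

def power_two_prop(p1, p2, n, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    with n respondents per group (normal approximation)."""
    z_alpha = 1.959964  # critical z for two-sided alpha = 0.05
    se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)
    z = abs(p1 - p2) / se - z_alpha
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

def min_detectable_increase(p_base, n, target_power=0.90):
    """Smallest increase over the base rate detectable at the
    target power, found by bisection (power rises with the increase)."""
    lo, hi = 0.0, 1.0 - p_base
    for _ in range(60):
        mid = (lo + hi) / 2
        if power_two_prop(p_base, p_base + mid, n) < target_power:
            lo = mid
        else:
            hi = mid
    return hi

# 5000 respondents split evenly between leafleted and control groups
n_per_group = 2500

# "Are you vegetarian?" -- 20% base rate, as in the post
status_mde = min_detectable_increase(0.20, n_per_group)

# "Have you become vegetarian in the last two weeks?" -- assumed
# 0.5% base rate (illustrative; the post may use a different figure)
change_mde = min_detectable_increase(0.005, n_per_group)
```

With these assumptions, the detectable increase for the low-base-rate “change” question is several times smaller than for the “status” question, which is the whole point: the same number of new vegetarians is a much bigger signal against a near-zero base rate than against 20%.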
When you try to ask too much detail people drop out.
I agree about using as few questions as possible, and that the MFA study asked far too much (although I think it was administered by volunteers rather than online, which would hopefully counteract the drop-out effect in their case).