In experiment 1, conditional on donating, participants actually donated significantly less in the Moral Demandingness condition (but this didn’t replicate in E2).
Can you DM me about the model? I am happy to run that analysis. We ran mean equivalence tests to provide evidence on the bounds of the null result, but I believe what you are suggesting is quite different.
‘Conditional on positive’ results are less reliable because of the potential for differential selection, but that is still a bit interesting. (It could be, e.g., that a bigger push to get people to donate attracts less interested people on average, so they respond with smaller amounts.)
The equivalence testing is close to what I meant (do you want to expand on/link to those tests?), but no, not quite the same.
Quickly, what I had in mind is a ‘Bayesian regression’. You specify a model and priors over all parameters (perhaps ‘weakly informative’ priors centered at a zero effect), and you can then compute the posterior belief for these parameters. R’s brms package is good for this. You can then report ‘what share of the posterior falls into each of the categories I mentioned below’.
I’ll try to follow up on this more specifically, and perhaps share some code.
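To give a rough idea of the kind of thing I mean, here is a minimal sketch in R with brms (the names `donation`, `condition`, and `df`, the prior scale, and the ±1 ‘negligible’ threshold are placeholders for illustration, not the study’s actual variables or cutoffs):

```r
# A rough sketch only (not the study's actual model).
# Assumes a data frame `df` with a donation amount `donation` and a
# binary treatment indicator `condition` (0 = control, 1 = Moral Demandingness).
library(brms)

# Weakly informative prior centered at a zero treatment effect
priors <- set_prior("normal(0, 5)", class = "b")

fit <- brm(
  donation ~ condition,
  data = df,
  family = gaussian(),
  prior = priors,
  chains = 4, iter = 2000, seed = 123
)

# Posterior draws for the treatment coefficient
draws <- as_draws_df(fit)
effect <- draws$b_condition

# Share of the posterior in each (illustrative) category
mean(effect > 0)        # probability the effect is positive
mean(effect < 0)        # probability the effect is negative
mean(abs(effect) < 1)   # probability the effect is 'practically negligible' (within ±1 unit)
```

The point is that, instead of a binary significant/non-significant verdict, you can directly report the posterior probability that the effect falls in each range of interest.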
Thanks David, that would be great! I’ll check to see if there is a way to run it in Stata, but if not I can just run it in R.