Very much agree with the key points, which are related to what I wrote here.
My unsatisfying conclusion was that there are three approaches when facing an “appropriately underpowered” question:
1. Don’t try to answer these questions empirically; use other approaches. If the data cannot resolve the problem to the customary “standard” of p<0.05, use qualitative or theory-driven methods instead.
2. Estimate the effect and show that it is statistically non-significant. This will presumably be interpreted as the effect being small or practically negligible, even though that is not how p-values work.
3. Do a Bayesian analysis that compares different prior beliefs to show how the posterior changes. This will not alter the fact that there is too little data to convincingly show an answer, and it is difficult to explain. Properly uncertain priors will show that the answer remains uncertain after accounting for the new data, though the estimated posterior may shift slightly to the right and the distribution may narrow.
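As a minimal sketch of the third approach, the following compares posteriors under several priors for a hypothetical small dataset (7 successes in 10 trials), using a conjugate Beta-Binomial model for simplicity. The data, the prior parameters, and the binomial framing are all illustrative assumptions, not taken from any real study:

```python
import numpy as np
from scipy import stats

# Hypothetical small dataset: 7 successes in 10 trials.
successes, trials = 7, 10

# Compare Beta priors (conjugate to the binomial likelihood):
# an uninformative prior, a skeptical one centered at 0.5,
# and an optimistic one centered at 0.8. All illustrative.
priors = {
    "uninformative Beta(1, 1)": (1, 1),
    "skeptical Beta(5, 5)": (5, 5),
    "optimistic Beta(8, 2)": (8, 2),
}

for name, (a, b) in priors.items():
    # Conjugate update: posterior is Beta(a + successes, b + failures).
    post = stats.beta(a + successes, b + trials - successes)
    lo, hi = post.interval(0.95)
    print(f"{name}: posterior mean = {post.mean():.2f}, "
          f"95% interval = ({lo:.2f}, {hi:.2f})")
```

With so little data, the 95% intervals stay wide under every prior, and the posterior means remain visibly prior-dependent, which is exactly the point: the analysis makes the residual uncertainty explicit rather than resolving it.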
I also strongly agree with Will’s comment that this doesn’t (always) imply that we shouldn’t do such work; it just means that we’re doing qualitative work, which, as he suggests, can be valuable in different ways.