A good rule of thumb is to have at least 10 subjects per group for confirmatory analysis, and you have fewer than that even for exploratory analysis. Because your sample is so small, I would be surprised if many (if any) of your rankings survived correction for multiple-hypothesis testing.
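For concreteness, a minimal sketch of such a correction in Python (the p-values are hypothetical placeholders, not your actual results):

```python
# Minimal sketch: Benjamini-Hochberg FDR correction with statsmodels.
# The p-values are hypothetical placeholders, one per pairwise prompt comparison.
from statsmodels.stats.multitest import multipletests

pvals = [0.008, 0.01, 0.03, 0.04, 0.20, 0.45]
reject, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, p_adj, r in zip(pvals, pvals_adj, reject):
    print(f"raw p = {p:.3f}, adjusted p = {p_adj:.3f}, reject H0: {r}")
```

Even the smallest raw p-values get inflated once a couple dozen comparisons are corrected together, which is why small per-group samples rarely survive.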
I would suggest grouping some of the individual conditions together in separate analyses (for instance, testing all “opportunity” groups against all “obligation” groups), although this comparison may be confounded, since the “opportunity” groups varied systematically from the “obligation” groups in other ways.
On a related note, you don’t have to simply assert that
the overall ranking of the prompts… has the power of our full sample size of 167 behind it, so that we’re somewhat confident that conclusions drawn about prompts close to its extreme points are valuable.
Instead, you can use something like bootstrap resampling to get an idea of the variance of the ranking. I would be interested to see how variable the rankings are under bootstrap resampling, especially since 167 is actually not that large a sample for this many groups.
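A minimal sketch of what I mean, assuming (hypothetically) a long-format table with one row per respondent, a "prompt" label, and a "score" column:

```python
# Minimal sketch: bootstrap the prompt ranking to see how stable it is.
# Column names ("prompt", "score") are assumptions about the data layout.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def prompt_ranking(df):
    # Rank prompts by mean score; the highest mean gets rank 1.
    return df.groupby("prompt")["score"].mean().rank(ascending=False)

def bootstrap_rankings(df, n_boot=2000):
    # Resample respondents with replacement, recompute the ranking each time.
    # Prompts that drop out of a resample entirely show up as NaN ranks.
    ranks = [prompt_ranking(df.sample(n=len(df), replace=True, random_state=rng))
             for _ in range(n_boot)]
    return pd.DataFrame(ranks)  # one row per resample, one column per prompt
```

Per-column quantiles of the result (e.g. `bootstrap_rankings(df).quantile([0.1, 0.9])`) would then show how far each prompt's rank plausibly moves.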
Grouping: The gallery I linked is an almost unfiltered assortment of all the graphs I generated, but I eventually ignored the ones where some cohorts were very small. Even in the case of motivation vs. education, where I had already grouped the original five levels into two, the result (that people who hadn’t attended a university were more easily motivated for or curious about EA) was not “significant” (or whatever the proper term is; the Bayes factor was 0.5).
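(For what it’s worth, one simple way such a Bayes factor can be computed for a two-group comparison of proportions, not necessarily how mine was computed, is via Beta-Binomial marginal likelihoods:)

```python
# Minimal sketch: Bayes factor BF10 for "the two groups have different rates" (H1)
# vs. "they share one rate" (H0), using Beta(1, 1) priors on the rate(s).
# The counts are made-up placeholders, not the survey's actual numbers.
import numpy as np
from scipy.special import betaln

def bf10_two_proportions(k1, n1, k2, n2):
    # Log marginal likelihoods; Beta(1, 1) is uniform, so no prior normalizer.
    log_m1 = betaln(1 + k1, 1 + n1 - k1) + betaln(1 + k2, 1 + n2 - k2)
    log_m0 = betaln(1 + k1 + k2, 1 + (n1 - k1) + (n2 - k2))
    return np.exp(log_m1 - log_m0)

# e.g. 12 of 30 motivated in one education group vs. 20 of 40 in the other:
print(bf10_two_proportions(12, 30, 20, 40))  # BF10 < 1 favours the shared rate
```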
That was a grouping of demographic levels, though. Is what you’re suggesting closer to this or this one?
Bootstrapping: My university course and my textbook only touch on that among the things they wish they had had the time to cover… Do you mean that I could use bootstrapping to determine the variance of the individual measures or of the rank of the items? The first seems doable to me; the latter seems trickier.
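For the former, a minimal sketch with scipy (the scores array is a made-up placeholder for one prompt group's responses):

```python
# Minimal sketch: bootstrap standard error and CI for one group's mean score.
# The scores are made-up placeholders for one prompt group's responses.
import numpy as np
from scipy.stats import bootstrap

scores = np.array([3, 4, 2, 5, 4, 3, 4, 5, 2, 3], dtype=float)
res = bootstrap((scores,), np.mean, n_resamples=9999, confidence_level=0.95)
print(res.confidence_interval, res.standard_error)
```

For the latter, resampling respondents and recomputing the whole ranking each time (as sketched above) would give a distribution over each item's rank, so it mostly reduces to the same loop.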