This is a risk, but we’ll still have the pre-test rankings and can probably do something clever here.
Fwiw, I’d imagine you are all less susceptible to weighting other evaluators’ negative points (different interests at play compared to journal reviewers) - but there may still be a bias here.
Piqued my curiosity - what sort of clever thing?
Babbling:
Allocating some of the funding using the pre-test rankings;
or the other way around, using the difference between pre and post as a measure of how fragile the pre-test ranking was;
or working out whether each evaluator leans under- or over-confident and using this to correct their post-test ranking.
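The last idea could be sketched roughly like this (purely illustrative: the evaluator names, scores, and the choice of the panel mean as the consensus baseline are all my assumptions, not anything from the actual process):

```python
# Hypothetical sketch: estimate each evaluator's lean (under- vs
# over-confident) as their mean signed gap from the panel consensus,
# then subtract that lean from their scores. All data is made up.
from statistics import mean

# scores[evaluator] = one score per proposal (hypothetical numbers)
scores = {
    "eval_a": [80, 70, 60],
    "eval_b": [50, 40, 30],
}
n_items = 3

# Consensus score per proposal: mean across evaluators.
consensus = [mean(scores[e][i] for e in scores) for i in range(n_items)]

# Each evaluator's lean: mean signed gap from consensus
# (positive = tends to score high, negative = tends to score low).
lean = {e: mean(s - c for s, c in zip(ss, consensus))
        for e, ss in scores.items()}

# Corrected scores: subtract the estimated lean.
corrected = {e: [s - lean[e] for s in ss] for e, ss in scores.items()}
```

Here eval_a leans +15 and eval_b leans −15, so after correction their scores coincide; with real data you'd only expect the systematic component of the disagreement to shrink, not vanish.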
Thanks Gavin!