Forecast procedure competitions

This is an idle thought: maybe there’s value in competitions that incentivise people to submit good instructions for making forecasts, rather than forecasts themselves.

Consider a forecasting competition whose questions are grouped into clusters, where only some common feature of each cluster is made public (e.g. a cluster of three questions of the type “who will win {election of some kind} in {unknown country}?”). Instead of submitting forecasts, participants submit methods for making forecasts on each question. The full question details are then revealed, and each method is carried out by a number of randomly selected teams of implementers. Implementers are scored by how well they agree with other implementers of the same method, while forecasters are ranked by the success of their methods.
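To make the scoring idea a bit more concrete, here is a minimal sketch of how such a competition might be tallied, under some assumptions that aren’t part of the proposal itself: binary questions, probabilistic forecasts, Brier scores for method success, and average pairwise disagreement between implementers of the same method as the implementer score. The method names, team names, and numbers are all illustrative.

```python
import random
from statistics import mean

# Illustrative sketch only: binary questions, Brier scoring for methods,
# and mean absolute disagreement for implementers. All names and numbers
# below are made up for the example.

random.seed(0)

methods = ["method_A", "method_B"]            # one per forecasting participant
implementers = ["team_1", "team_2", "team_3"]
questions = {"q1": 1, "q2": 0, "q3": 1}       # question id -> eventual outcome

# Each method is implemented by several teams; for simplicity every team
# implements every method here. An implementation yields one probability per
# question (simulated as noise around a method-level tendency).
tendency = {"method_A": 0.7, "method_B": 0.5}
forecasts = {
    (m, t): {q: min(1.0, max(0.0, tendency[m] + random.gauss(0, 0.1)))
             for q in questions}
    for m in methods for t in implementers
}

def brier(p, outcome):
    return (p - outcome) ** 2

# Forecasters are ranked by the success of their method, averaged over the
# teams that implemented it (lower Brier score is better).
method_scores = {
    m: mean(brier(forecasts[(m, t)][q], o)
            for t in implementers for q, o in questions.items())
    for m in methods
}

# Implementers are scored by how closely they agree with the other
# implementers of the same method (lower disagreement is better).
implementer_scores = {
    t: mean(abs(forecasts[(m, t)][q] - forecasts[(m, u)][q])
            for m in methods for u in implementers if u != t
            for q in questions)
    for t in implementers
}

print("method Brier scores (lower is better):", method_scores)
print("implementer disagreement (lower is better):", implementer_scores)
```

The point of scoring implementers on agreement is to reward methods that are unambiguous enough to be carried out the same way by different people, which is exactly the property that makes a procedure reusable later.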

Why might this be interesting? Currently, we can use forecasting competitions to answer particular questions, but the questions actually asked may turn out to be less relevant to decision making than hoped because they were too specific, and it may not be practical to anticipate which questions will need answers far enough in advance to add them to the competition. In these situations, robust forecast procedures could be more helpful than robust forecasts.