Almost all of the question names are rather ad-hoc.
To check I understand, is the main thing you have in mind as a problem here that a similar topic might be asked about in a very different way by two different questions, such that it's hard for an aggregator/search tool to pick up both questions together? Or just that it can be hard to actually tell what a question is about from the name itself, without reading the details in the description? Or something else?
And could you say a bit about what sort of norms or guidelines you'd prefer to see followed by the writers of forecasting questions?
There are a few public dashboards, but these are rather few compared to all of the existing forecasting questions, and they often aren't particularly well done.
I'm not sure I know what you mean by "public dashboards" in this context. Do you mean other aggregators and search tools, similar but inferior to Metaforecast?
On the first part:
The main problem I'm worried about is not that the terminology is different (most of these questions use fairly basic terminology so far), but rather that there is no order to all the questions. This means that readers have very little idea of what kinds of things are forecasted.
Wikidata does a good job of having a semantic structure: if you want any type of fact, you know where to look. Compare its page on Barack Obama to a long, unordered list of facts, some about Obama, some about Obama and one or two other people, all somewhat randomly written and ordered. See the semantic web, or discussions of web ontologies, for more on this subject.
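To make the contrast concrete, here is an illustrative sketch (not how Wikidata is actually implemented, and the facts shown are just examples): structured facts can be stored as subject-predicate-object triples, which means every fact of a given type can be looked up the same way, whereas an unordered prose list of facts supports no such query.

```python
# Illustrative sketch: facts stored as subject-predicate-object triples,
# in the style of Wikidata / the semantic web (the data is invented for
# the example, not pulled from Wikidata).
triples = [
    ("Barack Obama", "position held", "President of the United States"),
    ("Barack Obama", "date of birth", "1961-08-04"),
    ("Barack Obama", "spouse", "Michelle Obama"),
]

def facts_of_type(triples, predicate):
    """Return all (subject, object) pairs for one fact type.

    Because every fact shares the same structure, you always know
    where to look: just filter by the predicate.
    """
    return [(s, o) for s, p, o in triples if p == predicate]

# An ad-hoc prose list ("Obama was born in 1961 and married Michelle
# in 1992...") cannot be filtered this way without parsing free text.
spouses = facts_of_type(triples, "spouse")
```

The same idea applied to forecasting questions would let a reader ask "show me every question of type X about entity Y" instead of scanning an unstructured list of titles.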
I expect that questions will eventually follow a much more semantic structure, and correspondingly, that there will be far more questions at some point in the future.
On the second part:
By "public dashboards", I mean a rather static webpage that shows one set of questions, but includes the most recent data about them. There have been a few of these so far. They are typically optimized for readers, not forecasters.
See:
https://goodjudgment.io/superforecasts/#1464
https://pandemic.metaculus.com/dashboard#/global-epidemiology
These are very different from Metaforecast because they have different features. Metaforecast has thousands of different questions and allows one to search through them, but it doesn't show historical data and it doesn't have curated lists. The dashboards, in comparison, have these features, but are typically limited to a very specific set of questions.
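The feature trade-off can be sketched in terms of what data each kind of tool keeps per question. This is a hypothetical model for illustration only, not the actual schema of Metaforecast or any dashboard:

```python
from dataclasses import dataclass, field

# Hypothetical data models illustrating the trade-off described above
# (neither tool's real schema; all field names are invented).

@dataclass
class SearchIndexEntry:
    """What a broad search tool needs: many questions, but only
    the latest probability for each."""
    title: str
    platform: str
    latest_probability: float

@dataclass
class DashboardEntry:
    """What a curated dashboard needs: a small, fixed set of
    questions, each with its full history so readers can see trends."""
    title: str
    history: list = field(default_factory=list)  # (date, probability) pairs

# A search tool indexes thousands of SearchIndexEntry records and
# supports text search across them; a dashboard renders a handful of
# DashboardEntry records with their time series.
```

The point of the sketch is that storing and displaying history per question is cheap for a handful of curated questions but a much larger undertaking across thousands of questions from many platforms.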