Citizens are incentivized to predict what experts will say? This seems a little bit weak, because experts can be arbitrarily removed from reality. You might think that, no, our experts have a great grasp of reality, but I’d intuitively be skeptical. As in, I don’t really know that many people who have a good grasp of what the most pressing problems of the world are.
Yes, there are not many experts with this kind of grasp, but a DELPHI done by a diversified group of experts from various fields currently seems to be the best method for identifying megatrends (though methods such as text analysis, technological forecasting, or serious games can help). Only the expertise represented in the group will be known in advance, not the identity of the experts.
So in effect, if that’s the case, then the key feedback loops of your system are the ones between experts using the Delphi system <> reality, and the loop between experts <> forecasters seems secondary.
“What are the top national/world priorities” is usually so complex a question that it will remain a mostly subjective judgment. Then how else would you resolve it than by looking for some kind of future consensus?
But I agree that even if the individual experts are not known, their biases could be predictable, especially if the pool of relevant local experts is small or there is a lot of academic inbreeding. This could be mitigated by lowering the bar for expertise (e.g. involving junior experts such as Ph.D. students/postdocs in the same fields) so that different experts participate in the resolution-DELPHI each year.
If the high cost and length of a resolution-DELPHI turn out to be a problem (I suspect they will), those junior experts could instead participate in a quick forecasting tournament on “what would senior experts say if we ran a DELPHI next month?” One out of four of these tournaments would be randomly followed by an actual DELPHI, and the rewards in those would be 4x higher so that expected payouts stay the same. But this adds a lot of complexity.
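The randomized-resolution scheme above can be sketched in a few lines; this is a toy simulation (the probability, multiplier, and reward figures are illustrative assumptions, not part of any real implementation):

```python
import random

RESOLUTION_PROB = 0.25   # 1 out of 4 tournaments gets a real DELPHI
REWARD_MULTIPLIER = 4    # scaled up so the expected payout stays constant

def run_tournament(base_reward: float, rng: random.Random) -> float:
    """Return the reward actually paid out for one tournament."""
    if rng.random() < RESOLUTION_PROB:
        # This tournament was randomly selected for a full DELPHI resolution.
        return base_reward * REWARD_MULTIPLIER
    return 0.0  # unresolved tournaments pay nothing

# Sanity check: the average payout over many tournaments should be close
# to the base reward, even though only a quarter of them are resolved.
rng = random.Random(0)
payouts = [run_tournament(100.0, rng) for _ in range(100_000)]
print(sum(payouts) / len(payouts))
```

The point of the 4x multiplier is exactly this invariance: forecasters face the same expected reward per tournament, while the organizers only pay for a quarter of the expensive DELPHI resolutions.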
Perhaps CSET has something to say here. In particular, they have a neat method of taking big picture questions and decomposing them into scenarios and then into smaller, more forecastable questions.
Thanks! We are in touch with CSET and I think their approach is super useful. Hopefully, we'll be able to specify some more research questions together before we start the trials.
This may have the problem that once the public identifies a “leader”, either a very good forecaster or a persuasive pundit, they can just copy their forecasts.
Yeah, that’s a great point. If the leader is consistently a good forecaster and a lot of people copy them (though probably not more than a few percent of participants, even with widespread adoption), there are fewer independent inputs of information, but there are other benefits (a lot of people now feel ownership of the right causes, those causes gain traction, etc.). There will also be influential “activists” who get copied a lot (it’s probably unrealistic to prevent everyone from revealing their real-life identity if they want to), but since there is cash at stake and no direct social incentive (unlike with e.g. retweeting), I think most people will be more careful about whose priorities they copy.
This depends on how much of the budget is chosen this way. In the worst case scenario, this gives a veneer of respectability to a process which only lets citizens decide over a very small portion of the budget.
A small portion of the budget (e.g. 1%) would still be an improvement—most citizens would not think about how little of the budget they allocate, but rather that they are allocating a non-negligible $200, and they would feel like they actually participated in the whole political process, not only in 1% of it.
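The rough arithmetic behind a per-citizen figure like $200 is simple; the budget and population numbers below are made-up placeholders chosen to produce that figure, not real data:

```python
# Illustrative only: hypothetical national budget and participant count.
total_budget = 214_000_000_000   # dollars
citizen_share = 0.01             # 1% of the budget allocated by citizens
population = 10_700_000          # participating citizens

per_citizen = total_budget * citizen_share / population
print(f"${per_citizen:.0f}")  # → $200
```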
“What are the top national/world priorities” is usually so complex a question that it will remain a mostly subjective judgment. Then how else would you resolve it than by looking for some kind of future consensus?
You could decompose that complex question into smaller questions which are more forecastable, and forecast those questions instead, in a similar way to what CSET is doing for geopolitical scenarios. For example:
- Will a new category of government spending take up more than X% of a country’s GDP? If so, which category?
- Will the Czech Republic see war in the next X years?
- Will we see transformative technological change? In particular, will we see robust technological discontinuities in any of these X domains, or some other signposts of transformative technological change?
- ...
This might require infrastructure to create and answer a large number of forecasting questions efficiently, and it will require a good ontology of “priorities/mega-trends” (so that most possible new priorities are included and forecasted), as well as a way to update that ontology.
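One minimal way to represent such an ontology is a mapping from each candidate priority to its weighted, forecastable sub-questions. This is only a sketch; all names, questions, weights, and probabilities below are illustrative assumptions:

```python
# ontology: priority -> list of (sub_question, weight)
ontology = {
    "great-power conflict": [
        ("Will country X see war in the next 10 years?", 0.7),
        ("Will defense spending exceed Y% of GDP?", 0.3),
    ],
    "transformative technology": [
        ("Will domain Z show a robust technological discontinuity?", 1.0),
    ],
}

# forecasts: sub_question -> aggregated probability from the tournament
forecasts = {
    "Will country X see war in the next 10 years?": 0.05,
    "Will defense spending exceed Y% of GDP?": 0.20,
    "Will domain Z show a robust technological discontinuity?": 0.30,
}

def priority_score(priority: str) -> float:
    """Weighted average of the forecasts for a priority's sub-questions."""
    subs = ontology[priority]
    return sum(forecasts[q] * w for q, w in subs) / sum(w for _, w in subs)

def add_sub_question(priority: str, question: str, weight: float) -> None:
    """Updating the ontology: attach a new forecastable sub-question."""
    ontology.setdefault(priority, []).append((question, weight))

for p in ontology:
    print(p, round(priority_score(p), 3))
```

The `add_sub_question` helper is the "way to update that ontology": as new candidate priorities emerge, they get their own entries and sub-questions without disturbing existing forecasts.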
Have you considered that you’re trying to do too many things at the same time?
Possibly. Yes, it could be split into separate mechanisms: 1) a public budgeting tool using quadratic voting for what I want governments to fund now, and 2) a forecasting tournament/prediction market for what the data/consensus about national priorities will be three years later (without knowing forecasters’ prior performance, a multiple-choice “surprisingly popular” approach could also be very relevant here). I see benefits in trying to merge these and wanted to put the combined idea out here, but yes, I’m totally in favor of experimenting with these ideas separately—that’s what we hope to do in our Megatrends project :)
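For readers unfamiliar with the two mechanisms named above, here are minimal sketches of each; all inputs are toy data, and the budget-splitting rule is just one simple choice among many:

```python
from collections import Counter

# 1) Quadratic voting: casting v votes on an item costs v**2 credits,
# so expressing stronger preferences gets progressively more expensive.
def quadratic_cost(votes: int) -> int:
    return votes ** 2

def allocate_budget(vote_totals: dict, budget: float) -> dict:
    """Split a budget across items in proportion to their vote totals."""
    total = sum(vote_totals.values())
    return {item: budget * v / total for item, v in vote_totals.items()}

# 2) "Surprisingly popular": each participant gives their own answer plus
# a prediction of how popular each answer will be; the winner is the answer
# whose actual frequency most exceeds its average predicted frequency.
def surprisingly_popular(answers: list, predictions: list) -> str:
    n = len(answers)
    actual = {a: c / n for a, c in Counter(answers).items()}
    surprise = {}
    for a, freq in actual.items():
        predicted = sum(p.get(a, 0.0) for p in predictions) / len(predictions)
        surprise[a] = freq - predicted
    return max(surprise, key=surprise.get)

# Toy usage: "B" wins because it is more common than participants predicted,
# even though "A" got more raw votes.
print(allocate_budget({"health": 30, "education": 10}, 1000.0))
answers = ["A", "A", "B"]
predictions = [{"A": 0.9, "B": 0.1}, {"A": 0.8, "B": 0.2}, {"A": 0.7, "B": 0.3}]
print(surprisingly_popular(answers, predictions))
```

The appeal of the surprisingly-popular rule here is exactly the property mentioned above: it needs no track record of the forecasters, only their answers and their beliefs about others' answers.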