I’m an independent researcher, hobbyist forecaster, programmer, and aspiring effective altruist.
In the past, I’ve studied Maths and Philosophy, dropped out in exasperation at the inefficiency; picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018 and 2019, and SPARC during 2020; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.
I like to spend my time acquiring deeper models of the world, and a good fraction of my research is available on nunosempere.github.io.
With regard to forecasting, I am LokiOdinevich on GoodJudgementOpen and Loki on CSET-Foretell, and I have been running a Forecasting Newsletter since April 2020. I also enjoy winning bets against people who are too confident in their beliefs.
I was a Future of Humanity Institute 2020 Summer Research Fellow, and I’m working on a grant from the Long Term Future Fund to do “independent research on forecasting and optimal paths to improve the long-term.” You can share feedback anonymously with me here.
Substantive points
Wait, so citizens are incentivized to predict what experts will say? This seems a little bit weak, because experts can be arbitrarily removed from reality. You might think that, no, our experts have a great grasp of reality, but I’d intuitively be skeptical. As in, I don’t really know that many people who have a good grasp of what the most pressing problems of the world are.
So in effect, if that’s the case, then the key feedback loops of your system are the ones between experts using the Delphi system <> reality, and the loop between experts <> forecasters seems secondary. For example, if I’m asked what Eliezer Yudkowsky will say the world’s top priority is in three years, I pretty much know that he’s going to say “artificial intelligence”, and if you ask me to predict what Greta Thunberg will say, I pretty much know that she’s going to go with “climate change”.
I think that eventually you’ll need a cleverer system which has more contact with reality. I don’t know what that system would look like, though. Perhaps CSET has something to say here. In particular, they have a neat method of taking big-picture questions and decomposing them into scenarios and then into smaller, more forecastable questions.
Anyways, despite this, the first round seems like an interesting governance/forecasting experiment.
Also, 150-250 people seems like too few to get great forecasters. If you were optimizing for forecasting accuracy, you might be better off hiring a bunch of superforecasters.
Re: Predict-O-Matic problems, see some more here.
Nitpicks
This may have the problem that, once the public identifies a “leader” (either a very good forecaster or a persuasive pundit), people can just copy that leader’s forecasts. As a result, this part:
seems like an overestimate; you wouldn’t be harnessing that many inputs after all.
This depends on how much of the budget is chosen this way. In the worst case, this gives a veneer of respectability to a process which only lets citizens decide on a very small portion of the budget.