In principle, I like the research question, and the comparison above is probably the most one can make of what is published. That said, it is the year 2022; capabilities and methodology have advanced enormously, at least among those PM firms operating successfully in commercial markets. So it’s the proverbial apples-and-oranges comparison on several dimensions to talk about how “prediction markets” (sic) perform at anything. Different platform implementations have very different capabilities suited to very different tasks. Moreover, like any advanced tool, practical application of the more advanced PM platforms needs a high degree of methodological know-how about how to use their specific capabilities—based on real experience of what works and what doesn’t.
As a semi-active user of prediction markets who has read a fair number of studies about them, I don’t see many innovations, or at least nothing that crucially changes the picture. I would be excited to be proven wrong, and I am curious what you would characterize as advances in capability and methodology.
I am partly basing my impression on Mellers & Tetlock (2019), who write: “We gradually got better at improving prediction polls with various behavioral and statistical interventions, but it proved stubbornly hard to improve prediction markets.” And my impression is that they experimented with prediction markets quite a bit.