There’s a ton of ways to manipulate a prediction market about yourself if the market isn’t expecting you to (e.g. say publicly you won’t do X and then bet on and do X; make a small bet against X then a large bet for X after the market shifts; wait until right before market closing to do X), and I don’t think this one is particularly bad.
I’m not quite sure what it would mean to “solve” this. Ultimately, I expect markets will stay calibrated as investors account for these possibilities, though the prices will be less informative. For markets I create about myself, I try to combat this by explicitly promising in the market description not to trade on the market, or not to manipulate it.
You can easily manipulate markets without getting caught, which makes a person’s promise not to manipulate a market impossible to verify and therefore impossible to trust.
And it’s not just markets about yourself that are easy to manipulate, it’s markets about everything you can change. So if there is a market about if the bin in my street will be tipped over, that market isn’t about me, but it’s trivially easy to manipulate. As humanity becomes more powerful the things we can change become larger and larger and the things we can use prediction markets for become smaller and smaller.
The calibration solution wouldn’t work because the premises of perfect information and logical omniscience only hold in economists’ imaginations. See my other comment for a concrete example.
You don’t need perfect trust, everything is probabilistic. If I can trust someone with probability greater than 90% (and there are many people for which this is the case), then my worrying about manipulation won’t affect my trading much. Similarly, I’m pretty sure that there are enough people who trust me to not manipulate markets that this isn’t an obstacle in getting good predictions.
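To put a rough number on this (a toy calculation, with figures I’m making up for illustration): if you trust the market creator 90%, then even in the worst case, where manipulation would flip the outcome from impossible to certain, pricing in that risk moves your fair price by at most ten percentage points.

```python
def price_shift(trust: float, p_honest: float, p_forced: float) -> float:
    """How far pricing in manipulation risk moves the fair price.

    trust: probability the creator keeps their promise.
    p_honest: probability of the event absent manipulation.
    p_forced: probability of the event if they manipulate.
    """
    fair = trust * p_honest + (1.0 - trust) * p_forced
    return abs(fair - p_honest)

# Worst case at 90% trust: manipulation would flip the outcome entirely.
worst = max(price_shift(0.9, p, q)
            for p in (0.0, 0.25, 0.5, 0.75, 1.0)
            for q in (0.0, 1.0))
print(f"max shift: {worst:.2f}")  # 0.10, i.e. ten percentage points
```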
I agree that if prediction markets become huge, manipulation becomes much more of a problem. Still, the stock market doesn’t seem to be creating too much incentive to assassinate CEOs, so I doubt that this will prevent prediction markets from becoming very useful (pretty sure Robin Hanson makes this point in more detail somewhere, but I can’t seem to find where).
I’m confused by your last paragraph. You can be calibrated without perfect information or logical omniscience. Calibration just means that markets at 60% resolve YES 60% of the time, and manipulation won’t change this. If prediction markets are consistently miscalibrated, then anyone can make consistent money by correcting percentages (if markets at 60% resolve YES 80% of the time, then you can make money by betting up all the 60% markets to 80%, without having to worry about any detail of how the markets resolve).
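As a sketch of that arbitrage (all numbers hypothetical): buy YES at 60¢ in a large batch of markets that actually resolve YES 80% of the time, and the average profit per share comes out near the 20¢ gap, with no knowledge of any individual market required.

```python
import random

random.seed(0)  # reproducibility

PRICE = 0.60       # what the market charges for a YES share
TRUE_FREQ = 0.80   # how often such markets actually resolve YES
N = 100_000        # number of (hypothetical) miscalibrated markets

# Buy one YES share per market; a share pays $1 if the market resolves YES.
profit = sum((1.0 if random.random() < TRUE_FREQ else 0.0) - PRICE
             for _ in range(N))
avg = profit / N
print(f"average profit per share: ${avg:.2f}")  # close to $0.20
```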
It will affect the trading, and worse, it will affect the trading inconsistently, so we can’t even use mathematics to subtract it out. When the president promises not to manipulate the market about event X, some people will trust him 90% and bet accordingly, while others will trust him 10% and bet according to that. But that’s not the same thing as people having a 10% credence that event X will happen: you might think the president is trustworthy but the event itself is unlikely.

On top of that, the president might react to the market, e.g. if there is a lot of money to be gained he’d risk it, but otherwise he wouldn’t. Some people might think the president will manipulate the market at 51% when there is some money to be gained, while others think he will only act at 90% when there is a lot of money to be gained (or any other percentage), so at seemingly random intervals people will suddenly buy or sell shares depending on what the price itself is doing. And the president might react to that, and so on: it becomes a recursive mind game instead of being about event X. The resulting percentage cannot be used as a measure of trustworthiness nor as a measure of the underlying probability of event X.
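A toy calculation (the model and all numbers are purely illustrative): suppose that if the president stays honest, X happens with probability p, and that manipulation forces X to happen. Then the price blends trust and event probability, and very different combinations of the two produce the exact same number.

```python
def market_price(trust: float, p_if_honest: float) -> float:
    """Blended price when manipulation, if it happens, forces event X.

    trust: probability the president keeps his promise.
    p_if_honest: probability of X if he does.
    """
    return trust * p_if_honest + (1.0 - trust) * 1.0

# A trusted president and a coin-flip event...
a = market_price(trust=0.9, p_if_honest=0.5)
# ...prices the same as a distrusted president and an unlikely event.
b = market_price(trust=0.5, p_if_honest=0.1)
print(f"{a:.2f} vs {b:.2f}")  # 0.55 vs 0.55
```

Reading 55% off the board tells you neither how trustworthy the president is nor how likely X is on its own.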
assassinate CEOs
This is always the go-to example defenders of prediction markets use, but it’s not representative, since it’s the most extreme possible version of manipulation. No, I don’t think most people will start murdering folk to influence prediction markets, not just because it could land you in jail, but also because it’s difficult to execute and most people have an aversion to murder. But you can manipulate prediction markets in far easier, more mundane, and legal ways. If I say I will do something and can subsequently win a bunch of money by not doing it, without anyone noticing, that’s not even illegal, let alone difficult.
I’m confused by your last paragraph
I’ll use the example I used in my other comment:
If people can make it rain and there is, e.g., a market for every hour of the day, will people really check that every single user who has bet on time T has also bet on time T+1, and otherwise find it suspicious? Because that would mean that T+1 in turn becomes suspicious if they didn’t bet on T+2, etc.
Not only that, it also affects related markets like “will it hail at time T”, and if not betting on that one is in turn suspicious, then markets related to that one, like “will it be cold at time T”, also become suspicious, etc.
So every time a new market is opened and people see that not every user who has bet on related markets has bet on this new market, everyone will update the related markets, which will in turn update their related markets, etc. In this model, the only way for prediction markets to become accurate is if every user bets on every market, and every user is aware of the bets of every other user and is updating on all those bets 24/7.
Suffice it to say, if you combine 1) the fact that humans can’t instantly know all the new information, with 2) the fact that we can’t know whether the market has updated because of information we already know, new information, or people updating on the assumption that there is new information, with 3) recursive mind games, and 4) these constant ‘ripples’ of shifting uncertainties, you get so much asynchronous noise that the prediction market becomes an unreliable source. Again, this only applies to things we can change; prediction markets will work for things we can’t change, like predicting supernovas (until the day we can cause them).
It will affect the trading and worse it will affect the trading inconsistently so we can’t even use mathematics to subtract it
Nothing ever affects the trading consistently! It’s never the case that in an important market you can just use math to decide what to bet.
The resulting percentage cannot be used as a measure of trustworthiness nor as a measure of the underlying probability of event X.
Sure it can. If you ever see a prediction market which you don’t think is measuring the underlying probability of its event, you can make money from it (note that this is about manipulating whether the event will happen; obviously, if the market might be misresolved, all bets are off). It’s provable that, no matter what manipulation or other meta-dependencies exist, there’s always some calibrated probability the market can settle at. If a manipulator has complete control over whether an event happens and will manipulate it to maximize their profit, the market will settle at, or fluctuate slightly around, 50%, and the event does in fact have a 50% chance of happening. If you give me any other manipulation scenario, I can similarly show how the market will settle at a calibrated probability. Manipulation is bad because it creates bad incentives outside the market, and because it usually increases the entropy of events (bringing probabilities closer to 50%; this will always be the case if manipulation in either direction is equally cheap), but I don’t think it can threaten the calibration of a market.
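One way to sketch the 50% claim (an illustration, not a proof): a trader who can force the outcome nets 1 - p per YES share or p per NO share, whichever is larger, so the only price at which forcing either side is equally unattractive is 50 cents.

```python
def forcer_profit(p: float) -> float:
    """Best per-share profit for a trader who can force the outcome.

    Buy YES at p and force YES: pays 1, netting 1 - p.
    Buy NO at 1 - p and force NO: pays 1, netting p.
    """
    return max(1.0 - p, p)

# The forcer's guaranteed edge is smallest exactly at 50 cents.
prices = [i / 100 for i in range(1, 100)]
best = min(prices, key=forcer_profit)
print(best)  # 0.5
```

At any other price, the forcer’s guaranteed edge is larger, so traders anticipating the forcer push the price back toward 50%.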
But you can manipulate prediction markets in much more easy, mundane and legal ways.
I think my point generalizes. There are a bunch of ways to manipulate stock prices. I assume they cause some problems, but we use laws and norms to prevent the worst behavior, and it ends up working pretty well. Prediction markets may face more of a problem, since I’d expect them to be easier to manipulate, but I don’t think there’s a qualitative difference.
Suffice to say, if you combine the fact that 1) humans can’t instantly know all the new information with 2) the fact we can’t know whether the market has updated because of information that we already know, new information, or people updating on the assumption that there is new information, with 3) recursive mindgames and 4) these constant ‘ripples’ of shifting uncertainties; you’ll get so much asynchronous noise that the prediction market becomes an unreliable source.
Sometimes reality is deeply unpredictable, and in those cases prediction markets won’t help. But if you think that a prediction market will be unreliable in cases where any other method is reliable, you can use that to get rich.
I think the core of what I’m trying to get across is that (modulo transaction costs), a prediction market is as reliable as any other method, and if it’s not you can correct it and/or get rich. Manipulation is bad because it changes the probability that the event happens, not because it makes prediction markets unreliable. Manipulation can make all methods of prediction work less well, it cannot make prediction markets work less well than another method.