Manifold Markets isn’t very good
Disclaimer
I currently have an around 400-day streak on Manifold Markets (though lately I only spend a minute or two a day on it) and have no particular vendetta against it. I also use Metaculus. I’m reasonably well-ranked on both but have not been paid by either platform, ignoring a few Manifold donations. I have not attended any Manifest. I think Manifold has value as a weird form of social media, but I think it’s important to be clear that this is what it is, and not a manifestation of collective EA or rationalist consciousness, or an effective attempt to improve the world in its current form.
Overview of Manifold
Manifold is a prediction market website where people can put virtual money (called “mana”) into bets on outcomes. There are several key features of this: 1. You’re rewarded with virtual money both for participating and for predicting well, though you can also pay to get more. 2. You can spend this mana to ask questions, which you will generally vet and resolve yourself (allowing many more questions than on comparable sites). Moderators can reverse unjustified decisions but it’s usually self-governed. Until recently, you could also donate mana to real charities, but this has stopped; now only a few “prize” questions provide a more exclusive currency that can be donated, and most questions produce unredeemable mana.
How might it claim to improve the world?
There are two ways in which Manifold could be improving the world. It could either make good predictions (which would be intrinsically valuable for improving policy or making wealth) or it could donate money to charities. Until recently, the latter looked quite reasonable: the company appeared to be rewarding predictive power with the ability to donate money to charities. The counterfactuality of these donations is questionable, however, since the money for it came from EA-aligned grants, and most of it goes to very mainstream EA charities. It has a revenue stream from people buying mana, but this is less than $10k/month, some of which isn’t really revenue (since it will ultimately be converted to donations), and presumably this doesn’t cover the staff costs. The founders appear to believe that eventually they will get paid enough money to run markets for other organisations, in which case the donations would be counterfactual. But this relies on the markets producing good predictions.
Sadly, Manifold does not produce particularly good predictions. In last year’s ACX contest, it performed worse than simply averaging predictions from the same number of people who took part in each market. Its calibration, while good by human standards, has a clear systematic bias towards predicting things will happen when they don’t (a Yes bias). By contrast, rival firm Metaculus has no easily-corrected bias and seems to perform better at making predictions on the same questions (including in the ACX contest). Metaculus’ self-measured Brier score is 0.111, compared to Manifold’s 0.168 (lower is better, and this is quite a lot lower, though they are not answering all the same questions). Metaculus doesn’t publish the number of monthly active users like Manifold does, but the number of site visits they receive is comparable (slightly higher for Metaculus by one measure, lower by another), so it doesn’t seem like the prediction difference can be explained by user numbers alone.
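For concreteness, the two metrics being compared here are simple to compute. A minimal sketch in Python, using made-up numbers rather than real platform data:

```python
# Brier score: mean squared error between probabilistic forecasts and 0/1 outcomes.
# Lower is better; always guessing 50% scores 0.25.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A systematic Yes bias shows up as the average forecast exceeding the base rate
# at which questions actually resolve Yes.
def yes_bias(forecasts, outcomes):
    return sum(forecasts) / len(forecasts) - sum(outcomes) / len(outcomes)

# Toy numbers for illustration only, not real Manifold or Metaculus data.
forecasts = [0.9, 0.7, 0.8, 0.3, 0.6]
outcomes = [1, 0, 1, 0, 0]
print(round(brier_score(forecasts, outcomes), 3))  # 0.198
print(round(yes_bias(forecasts, outcomes), 2))     # 0.26: forecasts run high
```

Note that the two measures are independent: a platform can be well-calibrated (zero bias) while still having a mediocre Brier score if its forecasts hug 50%.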
Can the predictive power be improved?
Some of the problems with Manifold, like the systematic Yes bias, can be algorithmically fixed by potential users. Others are more intrinsic to the medium. Many questions resolve based on extensive discussions about exactly how to categorise reality, meaning that subtle clarifications by the author can result in huge swings in probability. The market mechanism of Manifold produces an information premium that incentivises people to act quickly on information, meaning that questions also swing wildly based on rumours.
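As an illustration of what such an algorithmic fix might look like, a user could map market prices through a simple log-odds correction fitted on resolved markets. The parameter values below are hypothetical, chosen only to demonstrate the shape of the correction:

```python
def recalibrate(p, a=1.0, b=0.9):
    # Log-odds recalibration: convert price to odds, scale, convert back.
    # a > 1 stretches probabilities toward the extremes; a < 1 shrinks them.
    # b < 1 shifts odds downward, correcting a systematic Yes bias.
    # With a = 1 and b = 1 this is the identity ("just trust the market").
    odds = p / (1 - p)
    corrected = b * odds ** a
    return corrected / (1 + corrected)

print(round(recalibrate(0.5), 3))      # 0.474: a 50% market price is nudged down
print(recalibrate(0.5, a=1.0, b=1.0))  # 0.5: identity when no correction applied
```

In practice one would fit a and b against historical resolutions, and the fit can always degenerate to the identity if the market turns out to be calibrated already.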
The idea behind giving people mana for predictions is that wealth should accumulate with predictive prowess, resulting in people who have historically predicted better being able to weight their opinions more strongly. However, in practice Manifold has many avenues for people incapable of making correct predictions to get mana. It:
1. Wants bad predictors to pay money to keep playing
2. Hands out mana for asking engaging questions (which are not at all the same as useful questions)
3. Has many weird meta markets where people with lots of money can typically make more mana without predicting anything other than the spending of the superwealthy (“whalebait”)
4. Allows personal markets, where people can earn mana by doing tasks they set for themselves. This is an officially endorsed form of insider trading (insider trading is generally accepted on the site).
5. Lets people gamble, go into negative equity, then burn their accounts and make new ones if bets go badly. The use of “puppet accounts” to manipulate markets in this or more complex ways is actively fought but still happens, and several of the all-time highest earners have turned out to have used them to generate wealth.
6. Tends to leave markets open until after the answer to a question is publicly known and the answer is posted in the comments, so as well as large mana rewards for reading the news fast, simply reacting to posts on the site itself can produce a reliable profit
The first of these is entirely structural to their business model. While in principle the others could be “fixed” (if you consider them bugs) and some have lately become smaller (e.g. you can no longer bet things outside the 1-99% range, reducing profit from publicly known events), several have proven rather robust. This is quite apart from insider trading (which arguably makes the markets more accurate, even if it rewards people unfairly) and pump-and-dump schemes, which are hard to fix in any market system.
Framed as a fun pastime, having tricks to make mana without making predictions is fine, and this could drive up engagement. But these are all signs that the accuracy of the predictions (as opposed to news-reading) is not a highly ranked goal of the site. And it’s important to highlight the tradeoff between attracting idiots who will pay to play, as on gambling sites, and attracting companies who will pay to get good results. The assumption that idiots are distributed across opinions and so their biases will cancel each other out is ill-founded, particularly if a site has poor diversity.
By comparison, Metaculus simply applies an algorithm to weight people’s opinions based on past performance. There are obviously systematic differences between who uses which website, but a study indicated that under ideal conditions, a non-market approach to aggregating opinions was more successful than a prediction market. Prediction markets may be better than just doing a poll (for short-term predictions – Manifold can argue for its calibration over longer times via its loan system, which is plausible but hasn’t been demonstrated), but worse than measures that can actually look at who is saying what. This would indicate that even without the quirky mana sources and time-dynamics, we wouldn’t expect Manifold to produce the best predictions. You can also see market failures explicitly in a handful of long-term markets (often about AI extinction risk, which cannot possibly pay out to humans) where a small number of very rich people have pegged the value of the market at the levels they want, in spite of this being essentially guaranteed to lose mana. Arguably this is the opposite of the free-market criticisms—humans being willing to lose currency to do what they perceive as the right thing and highlight AI extinction risk—but it still represents an oligarchic market failure resulting in biased predictions. Unless prediction markets are really very large indeed and have no form of oligarchy, the neoliberal assumption that we can ignore who is putting up the currency is invalid.
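The track-record weighting described above is straightforward to sketch: aggregate forecasters’ probabilities weighted by past accuracy rather than by how much currency they can move. This is an illustrative stand-in, not Metaculus’s actual algorithm, and the track records below are invented:

```python
def weighted_aggregate(predictions, past_brier):
    # predictions: user -> current probability; past_brier: user -> historical
    # Brier score. Weight each forecaster inversely by past Brier score
    # (a lower score means a better forecaster, so a larger weight).
    weights = {u: 1.0 / max(past_brier[u], 1e-6) for u in predictions}
    total = sum(weights.values())
    return sum(weights[u] * p for u, p in predictions.items()) / total

preds = {"alice": 0.8, "bob": 0.5, "carol": 0.2}       # hypothetical forecasts
history = {"alice": 0.10, "bob": 0.25, "carol": 0.40}  # invented track records

# The simple average is 0.5; weighting pulls toward alice, the best forecaster.
print(round(weighted_aggregate(preds, history), 3))  # 0.636
```

The key design difference from a market is that the aggregate depends on who is speaking and how well they have predicted before, not on how much currency they happen to hold.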
This creates interesting possibilities to improve predictions. One, Manifold could have taxes to reduce opinion inequality. Two, Manifold could report the market values for predictions but (for a price) give you the better probabilities calculated by an appropriate algorithm, which is basically guaranteed to be better (it can always degenerate to “just use the market value” if that’s genuinely best). Why hasn’t it done this already? Well, there are probably several reasons, but one that I’m highlighting here is the link to neoliberal ideology. This is not to say that users of the site are all neoliberal – though they probably are more so than average. I mean that the “contrarian” ideologues present at the controversial Manifest24, whom the co-founders of Manifold clearly find most interesting, are consistently from a right-wing, market-trusting perspective, and this creates material blind spots. In spite of this event hosting dozens of talks on how to make predictions, people don’t seem to point out that one of the host organisations is making very basic mistakes in how it goes about its main job.
The harm of platforming bigots is manifold: firstly, that the people they hate are directly harmed; secondly that they will most likely stay away, reducing your diversity and thus the variety of perspectives and potential insight present; thirdly, that you will be perceived as bigoted, harming you socially and further reducing the diversity of the event. These problems have been extensively discussed elsewhere, and I see no value in discussing them again here. The issue that hasn’t been discussed is how the people who feel excluded are those who are most used to critiquing power inequality, and if they still engage with your platform at all, will focus on discussing your bigotry, rather than other structural issues.
This post feels structurally misleading to me. You spend most of it diving into reasonably common but useful technical critiques before, in the final paragraphs, shifting abruptly to what appears to be the substance of your dispute: that you would prefer to exclude some people from events connected to it. In contrast to your data-driven, numbers-heavy analysis of its predictive power, you assert in brief and without evidence that “neoliberal ideology” and the participation of people you consider bigots has meaningfully reduced its accuracy as a market.
I think both topics in isolation are worth discussing, and perhaps there would be a productive way to combine the technical and cultural critique, but a response to your first fifteen paragraphs looks dramatically different to a response to your last two paragraphs, such that combining the two clouds more than it elucidates.
Well said; this was my impression as well.
Yeah I would have upvoted the post aside from that—even though I agree with some of the OPs sentiment in the last 2 paragraphs, I really dislike conflating issues.
The problems I outline are all caused by the fact that Manifold requires that all value be denominated in a fungible and impersonal currency* that relates probabilities to rewards, and assumes that market forces will resolve irregularities in the distribution of this currency. This assumption is what I am criticising and is a reasonable definition of neoliberal. I neither assert nor believe that bigots participating in the market make it worse (as long as they are diverse bigots who aren’t publicly abusive), I am criticising the lack of thought diversity in the design of the market.
*Yes, I know it has two currencies now, which can only be converted between in one direction, but they’re not used in systematically different ways within the site. Some of these criticisms could be alleviated if, say, personal markets produced a currency that can’t be spent on political markets.
Hi, I work on Manifold! Thanks for the critical take — we’re always happy to hear these issues so we can continue improving.
IMO this is the crux:
While Metaculus does perform better (seriously impressive work!), we have >130,000 user-created markets across all manner of topics at various levels of seriousness. Personal questions tend to have higher Brier scores than more objective science questions, for example.
When compared to other prediction market sites (Kalshi, Polymarket), we generally perform better. See for example https://calibration.city
If you pick a topic that you think is important, I think you will find that Manifold is contributing a lot of relevant forecasts.
For example, we have >1,000 questions just in the topic “AI Safety”. I think getting a good forecast on all those questions is a valuable service to the world.
You mentioned that Manifold did worse this year in the ACX forecasting contest. That worries me as well, though I will note that the year before we did “extraordinarily well”, placing in the 99.5th percentile: https://www.astralcodexten.com/p/who-predicted-2022
Markets are a different forecasting mechanism than polling and I think they have a lot to contribute to the forecasting space. In particular, I think that scaled-up markets have the potential to be the most accurate forecasting method, because there can be large incentives to find the most accurate price. Any bias you see in markets can also be rectified with trading strategies to correct it, which gives feedback in terms of profitability.
We have recently changed the Manifold economy significantly to reduce bonuses and prevent the kind of exploits where accounts could go negative, so a lot of your points here have been addressed.
Regarding Manifest and controversial attendees, we kept the same ethos as our site, where anyone can create markets. On balance, we think it creates more value for the world to not shut down ideas we might personally disagree with. Much more has been said on this topic elsewhere.
How does this fit in with Manifold as a business, though? I wouldn’t expect super-high correlation between any specific business owner’s personal ideology and good business sense for their business. It can happen, but claims usually make my radar for suspiciously convenient convergence start a-chirping.
I remain confused as to Manifold’s primary intended customers and (by extension) its core product, but the quote above doesn’t jibe well with most of the candidate answers I come up with. It’s very likely that staying “controversial” will lead most would-be users who are not “controversial” to go elsewhere. And being marked as “controversial” is going to disqualify Manifold from funding at a lot of foundations, without any clear countervailing advantage. It’s also toxic for corporations who might be interested in paying for information.
I suppose there’s a scenario in which whales become the main customers and pseudo-gambling is the main product, although you’re limiting your supply of candidate whales there. Maybe hunting for whales in the right-wing sea could be profitable, but I think it’s going to be hard to make a socially useful product with right-wing whale as your main food.
I think most broad platforms that get started do not ban “controversial” users from it.
Allowing “controversial” subreddits did not cause Reddit to fail. Allowing “controversial” videos did not cause Youtube to fail. Allowing “controversial” people on Facebook did not cause Facebook to fail. Allowing “controversial” tweets did not cause Twitter to fail. Indeed, I think banning people instead of being an open or neutral platform is very heavily correlated with failing as a piece of online infrastructure.
I think your counterexamples have a few shared characteristics—their revenue models are advertising-based, and they had the financial means to operate in the red for quite a while. I don’t think they needed to go in the black as quickly as I think Manifold will, so they had time to grow into large enough forces that advertiser-customers felt they needed to work with them. Although I don’t understand the system for purchasing advertising well, my understanding is that there are tons of intermediaries and agents in a way that makes maintaining brand safety very challenging. Moreover, the advertising market is so massive that you can lose a large fraction of potential advertiser-customers and still be OK. Finally, the users of those sites aren’t giving them any significant amount of money.
I don’t see Manifold as operating in such a forgiving niche, especially for some of the potential candidate business models. There aren’t many foundations interested in prediction markets, and even fewer where neither the optics nor personal distaste for association with “controversial” content will be a barrier. The market for corporations purchasing public intelligence from prediction markets is murky to me, but alienating a bunch of your customers seems like a dubious move. Not only does this market not have the size of the advertising market, it also has a different inventory model. There’s a set amount of advertising users will tolerate, so the likely consequence of alienating a bunch of advertisers is that you have to sell that inventory for a lower price. The market for corporations who want to pay for Manifold-based intelligence is probably not limited in the same way, so each alienated potential client translates into a fully lost potential sale.
I know little about whale psychology, but I sense that most whales are emotionally invested to a significant degree in the platform on which they whale. Catering to a class of whales seems a somewhat more viable business model . . . but it would create a bad set of incentives for Manifold. Running the platform in a way that emphasizes attractiveness to whales seems at odds with running a socially useful service.
As far as user experience, those sites had mechanisms for ensuring that “controversial” content was only served to those who wanted it. Subreddit mods wield great censorship power, a Facebook user decided who to friend (the wall algorithms being less aggressive at the time IIRC), and YouTube isn’t really a social platform. You can mute individual users on Manifold, but at least when I was there last there wasn’t a good way to avoid “controversial” content if you didn’t want to see it. Finally, I speculate that users are more tolerant of issues with platforms on which they are not forking over significant amounts of money. Once money is involved, they may start to experience cognitive dissonance at supporting a business that is doing things that don’t align with their values.
I’m a bit confused by this discussion, since I haven’t in any way suggested banning people from using the site. That’s a completely separate issue from managing the balance of ideologies behind the site design. As it happens, Manifold liberally bans people, but mostly because they manipulate markets via bots/puppets, troll, or are abusive: this is required for balanced markets and good community spirit, and seems a reasonable balance.
James brought up site moderation philosophy in a comment (“Regarding Manifest and controversial attendees, we kept the same ethos as our site, where anyone can create markets.”). I responded by asking how that jibed with plausible business models for the company. So it’s a discussion about an issue first raised in the comments. I do think it’s of some relevance to a broader question hinted at in your post: whether the founders’ prior ideological commitments are causing them to make suboptimal business decisions.
I think these are fine hypotheses about the tradeoffs here, though I disagree with most of the analysis. I have thought and read a lot about it, since like, my primary job is indeed to handle these exact tradeoffs and to build successful platforms here, but this current thread doesn’t seem like the right context to dig into them.
As one point, I think Manifold’s basic business model is “take a cut of the trading profits/volume/revenue”. The best alternative business model is “have people pay for finding out information via subsidies for markets”.
I don’t think Manifolds business model relies on advertisers or foundations. I think it scales pretty well with accuracy and usefulness of markets.
In the “take a rake of trading volume” model without any significant exogenous money coming in, there have to be enough losses to (1) fund Manifold, and (2) make the platform sufficiently positive in EV to attract good forecasters and motivate them to deploy time and resources. Otherwise, either the business model won’t work, or the claimed social good is seriously compromised. In other words, there need to be enough people who are fairly bad at forecasting, yet pump enough money into the ecosystem for their losses to fund (1) and (2). Loosely: whales.
If that’s right, the business rises or falls predominately by the amount of unskilled-forecaster money pumped into the system. Good forecasters shouldn’t be the limiting factor in the profit equation; if the unskilled users are subsidizing the ecosystem enough, the skilled users should come. The model should actually work without good forecasters at all; it’s just that the aroma of positive EV will attract them.
This would make whales the primary customers, and would motivate Manifold to design the system to attract as much unskilled-forecaster money as possible, which doesn’t seem to jibe well with its prosocial objectives. Cf. the conflict in “free-to-play” video game design between design that extracts maximum funds from whales and creating a quality game and experience generally.
I disagree with this. I think the obvious source of money for a prediction platform like Manifold is from people who want to get accurate information about a question, who then fund subsidies which Manifold gets a cut off. That’s ultimately where the value proposition of the platform comes from, and so where it makes sense to extract the money.
I read your comment as “have people pay for finding out information via subsidies for markets” being your “alternative” model, rather than being the “take a cut of the trading profits/volume/revenue” model. Anyway, I mentioned earlier why I don’t think being “controversial” (~ too toxic for the reputational needs of many businesses with serious money and information needs) fits in well with that business model. Few would want to be named in this sentence in the Guardian in 2028: “The always-controversial Manifest conference was put on by Manifold, a prediction market with similarly loose moderation norms whose major customers include . . . .”
I think it’s a minor issue that is unlikely to drive away anyone who actually has a “hair-on-fire” problem of the type that a prediction market might solve. I am confident anyone with experience building internet platforms like this would consider this a very irrelevant thing to worry about at the business stage Manifold is at.
I don’t think this will be an issue for Manifold. For example Polymarket allows controversial users and content, and indeed barely moderates comments sections at all, and currently has much higher volume than Manifold. And Polymarket does provide some socially useful predictions—it’s somewhat frequently cited by the mainstream media for presidential election odds.
It’s the other way around. Prediction markets that anyone can create are good, but it’s such a crazy idea that one has to be pretty libertarian to be able to even think of it in the first place.
I don’t see the idea of “prediction markets that anyone can create” as particularly crazy or even novel. Many people were frustrated with regulatory barriers preventing prediction markets in the US before Manifold came along; my impression is that Manifold’s innovation is “if we use play money rather than real money we avoid a lot of the regulation and some of the incentive for abuse, but hopefully still motivate people to participate”. Plus, perhaps, innovation in market structure that attempts to simplify the market interface and/or compensate for the lack of liquidity. (And a bunch of product work actually getting the details right.)
I realise you actually work for Manifold so maybe you have access to better information than me, but this is what it seems like to me from the outside.
Ah, yes, then we have succeeded in normalizing the idea that you should be able to create a market, trade in it, and then resolve it according to your own judgment.
When we were getting started literally everyone we told this to, including YC partners, thought it was crazy.
Only Scott Alexander saw the value in users creating and resolving their own markets. He awarded us the ACX grant, shared the link to our prototype with his readers, and the rest is history!
Yeah, the idea that self-resolution and insider trading don’t require central regulation to manage does seem more like a novelty, that’s fair.
you’re right, and there were anyone-created prediction markets before Manifold, like Augur. I misspoke. The real new-unintuitive thing was markets anyone could create and resolve themselves rather than deferring to a central committee or court system. I think this level of self-sovereignty is genuinely hard to think of. It’s not enough to be a crypto fan who likes cypherpunk vibes; one has to be the kind of person who thinks about free banking or who gets the antifragile advantages that street merchants on rugs have over shopping malls.
although it’s quite possible that Manifold got popular more because the UX was better than other prediction markets’, or because a lot of rationalists joined at the same time, which let the social aspect take off
Allowing users to create any question they want with no moderation gate (unlike Metaculus or any other prediction market site) is a big part of Manifold’s success (even business success!). We further empower users to judge and resolve their markets. While not everyone is perfect or is always acting in good faith, this system largely works.
This openness has been key, but is it the same thing as allowing anyone to go to our conference? Not exactly, but they are related. Another example is that the scheduling software allowed anyone to book any time slot for any room at Manifest, and we basically didn’t have any problems there.
My take is that:
1. the “controversy” is way overblown based on an unfair connecting of dots from journalists at The Guardian who didn’t even attend. The actual Manifest was amazing and people who attended were nearly unanimous on that point, except:
2. There may have been one person at Manifest who was both edgy and impolite, bordering on aggressive. If we could have kicked that person out, then the anonymous EA forum poster wouldn’t have ever felt threatened, and we wouldn’t be having this conversation.
So, I do agree that some moderation can be helpful with business goals, just like we have had to ban some users from our site. But you need less moderation than you would think!
I strongly encourage people to discuss Manifest elsewhere—as stated above, I didn’t go and only comment on it to illustrate the lack of thought-diversity in the site design.
If you run events with “controversial” speakers and attendees, and allow “controversial” stuff on your platform, then having critical pieces run against your business is part of the territory whether any specific article is fair or unfair.
Likewise, moderation and vetting are necessarily imprecise and error-prone; attempting to draw the line at X means that ~15% of the time you will actually draw the line at [X + 1 sd] and you’ll slide to [X + 2 sd] from time to time. I don’t know who the “one person” you are describing was, but given a near-miss on letting M.V. attend if he had bought a ticket, I’m hard pressed to see the “one person” as an extraordinarily uncommon [X + 3-4 sd] type error. It’s unlikely that there would be multiple extreme outlier misses associated with the same event. Also, I do not believe the poster characterized the problem as primarily linked to a single person who was preeminent in their problematic behavior. All that is to say that letting people like the “one person” slip through is probably not going to be rare given where you seem to have X set at the moment.
Thanks for engaging positively! You’re correct about the crux—if the resulting prediction market worked really well, the technical complaints wouldn’t matter. But the number of predictions is much less important to me than their trustworthiness and the precision of specifying exactly what is being predicted. Being well-calibrated is good, but does not necessarily indicate good precision (i.e. a good Brier score), and I find calibration.city quite misleading in presenting the orders of magnitude more questions on Manifold as a larger dot, rather than using dot size to indicate uncertainty bounds in the calibration.
It’s not true that markets at any scale produce the most accurate forecasts. There’s extensive literature showing that long-term prediction markets need to worry about the time-value of money and risk aversion influencing the market valuation. Manifold’s old loan system helped alleviate the time-value problem but gave you a negative equity problem. I don’t see this time value effect in your calibration data, but I suspect that’s dominated by short-term markets. Because market participation is strongly affected by liquidity, smaller markets don’t have incentives for people to get involved in them unless they’re very wrong. Thus getting markets to scale up when they’re not intrinsically controversial and therefore interesting is a substantial problem. The incentives to make accurate predictions can just be prizes for accurate individual predictions which can be aggregated into a site prediction by any other mechanism. The key feature of a market mechanism for prediction aggregation is that the reward must be tied to the probability of the event, and must be blind to who is providing the money. There’s no reason to believe either of these are useful constraints, and I don’t believe they’re optimal.
I note that many accounts are still in negative equity, and that a few such accounts that primarily generated their wealth by betting on weird metamarkets substantially influence the price of AI extinction risk markets. The number and variety of markets is therefore potentially punitive to the accuracy of predictions, particularly given the power-law rewards to market participation. While I refer to negative equity, the fact that we can still create puppets and transfer their $200 to another user (directly or via bad bets) means the problem persists to a smaller extent without anyone’s account going negative.
I’m a frequent Manifold user.
While I like criticism of Manifold, I think these criticisms miss the mark, and are mostly false or outdated.
The best criticism here is the link to the ACX article. I’d still defend Manifold: the ACX predictions occurred very early in the site’s life, and most of the current top traders either didn’t use the site at all or didn’t have enough capital to move markets, so one would expect it to be less accurate. But we did perform badly.
You compare Metaculus’s “Metaculus prediction” calibration graph to Manifold’s calibration. Metaculus has two probabilities for each question—a “Community Prediction”, a simple aggregation of forecasts, and a more complicated Metaculus Prediction that weights by track record. The Metaculus Prediction is not public while markets are open, and I can’t see it for any of the Metaculus markets I’m interested in. So, when comparing the utility of the two sites the Community Prediction is the right comparison, and it’s as bad as or worse than Manifold’s.
Manifold’s hosted calibration graph covers markets through all of Manifold’s history. I’d expect Manifold to do better recently. Using https://calibration.city/, using the ‘market midpoint’ and starting from Jul 2023, Manifold’s calibration looks great (later dates seem noisy but unbiased; the other mode is buggy). I also recalculated time-weighted calibration myself over the past months of Manifold, and didn’t see deviation from y=x other than noise. (It does replicate Manifold’s old bias on all past data.) Manifold’s site shows the old bias, but we’ve improved!
Manifold’s overall Brier score is not comparable to Metaculus’s. “They are not answering all the same questions” is a real issue: the distribution of user-generated questions differs from that of highly curated questions. Manifold also has a much longer tail of lightly traded questions that you’d expect to score worse even if the question distributions were the same.
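To make the incomparability concrete: two perfectly calibrated forecasters can have very different Brier scores purely because of their question mix, since the expected Brier score of a calibrated forecast of p is p(1−p). A toy illustration (the question mixes here are invented for the example):

```python
def expected_brier(p):
    # For a perfectly calibrated forecast of p, the expected Brier score is
    # E[(p - outcome)^2] = p*(1-p)^2 + (1-p)*p^2 = p*(1-p).
    return p * (1 - p)

curated = [0.05, 0.1, 0.9, 0.95]       # near-certain, "easy" questions
user_generated = [0.3, 0.5, 0.5, 0.7]  # genuinely uncertain questions

def avg(xs):
    return sum(xs) / len(xs)

print(avg([expected_brier(p) for p in curated]))         # ≈ 0.069
print(avg([expected_brier(p) for p in user_generated]))  # ≈ 0.23
```

Same skill, very different scores, so a raw cross-site Brier comparison mostly measures the question mix.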
For the analysis of Brier scores on comparable questions: it’s from early in Manifold’s life, and, from the post, “Metaculus, on average had a much higher number of forecasters”, and from a comment, “Some of the markets in this dataset probably only ever got single-digit number of trades and were obviously mispriced”. This shouldn’t generalize to comparisons between Metaculus and Manifold today, with comparable numbers of users.
“Many questions resolve based on extensive discussions about exactly how to categorise reality, meaning that subtle clarifications by the author can result in huge swings in probability”. This does happen; one of the tradeoffs of user-created markets is that the criteria are less detailed. But it’s not common enough to significantly affect profits for the top traders. In my rough estimation, around 1 in 20 big Manifold questions have criteria issues, and many of those resolve N/A. As a comparison, 3 out of the 50 Bridgewater Metaculus competition questions had to be annulled due to bad criteria.
The study is interesting, but the experiment has significant differences from Manifold, and I don’t take from the paper that prediction polls are overall superior to markets.
You name “many avenues for people incapable of making correct predictions to get Mana”. After the pivot, in Manifold’s current state, all but ~1.5 of these are no longer relevant.
“2. Hands out mana for asking engaging questions (which are not at all the same as useful questions)”—In the past, Manifold awarded generous creator bonuses and market liquidity subsidies for traders, inflating the mana supply. Post-pivot, creators fund liquidity pools themselves and only earn mana through roughly 1.5–3% fees on their markets, so market creation is now a mana sink for creators (and, net, for traders too, since Manifold also takes fees). (There remains a partner program to pay selected creators, but that pays out in USD, is smaller, and partners are generally not converting it to mana.)
“3. Has many weird meta markets where people with lots of money can typically make more mana without predicting anything other than the spending of the superwealthy (“whalebait”)”—Also true in the past, with the Whales vs Minnows incident https://news.manifold.markets/p/isaac-kings-whales-vs-minnows-and as the apex: it produced half the profit of the top trader at the time. (Whalebait skill is correlated with forecasting skill; that trader remains #3 now that whalebait profit isn’t counted.) But after that, Manifold started hiding whalebait, and today large whalebait markets are effectively dead.
“Allows personal markets, where people can earn mana by doing tasks they set for themselves. This is an officially endorsed form of insider trading, insider trading being generally accepted.”—Post pivot, creators fund the liquidity pools for personal markets and trading on markets no longer prints mana via bonuses (it’s negative sum due to fees), so you can’t earn mana like that.
“Lets people gamble, go into negative equity, then burn their accounts and make new ones if bets go badly”—With the pivot, Manifold has stopped giving out loans, which was what allowed people to go into negative equity.
“6. Tends to leave markets open until after the answer to a question is publicly known and the answer is posted in the comments, so as well as large mana rewards for reading the news fast, simply reacting to posts on the site itself can produce a reliable profit”—I traded on this in the past, but it’s 10x less effective now that market creators fund their own liquidity pools, so they now usually close markets before the answer is known to recover some of their money (prize markets do that too). Trading on news still happens though, and is probably inevitable with prediction markets.
“1. Wants bad predictors to pay money to keep playing” is a problem, and is an issue with any prediction market involving real money. Real money has benefits, though—as prediction markets scale, it incentivizes professional prediction, and bad traders incentivize good traders to price markets well.
> But these are all signs that the accuracy of the predictions (as opposed to news-reading) is not a highly ranked goal of the site
As I see it, the fact that we did make all of these changes is a sign that prediction accuracy is a top goal!
The counterfactuality of the charity program was an issue, but post-pivot that’s not relevant anymore as you can just withdraw prizepoints as USD and then donate.
I do have concerns about the utility of prediction markets (and forecasting generally). Compressing information into probabilities might remove most of the value that a richer written exchange would have, and prediction markets incentivize smart people to keep information other than their bets private. But this post missed the mark, I think.
I probably got one, maybe a few, things wrong here, but the general point stands
Thanks for your considered comments! I agree that Metaculus should make its best prediction more available. I also attach low importance to the self-reported Brier scores, though Manifold already excludes a tail of low-traded questions when reporting, so that’s not really a good explanation for the discrepancy.
To be clear, the paper specifies that *algorithmic adjustments* of polls outperform markets, not that the means of polls are better than the means of markets (in line with the differences between the two Metaculus predictions). If you don’t adjust, they’re worse, as expected and as seen in the Metaculus calibration data. This conclusion is clearly stated in the abstract, and they didn’t try very complicated algorithms to combine estimates.
I agree (and mentioned) that recent changes alleviate some of these points. I don’t think they cure them as thoroughly as you indicate, though. Firstly, the pivot didn’t retroactively apply these changes, so people who successfully asked engaging questions or caught whalebait still have huge mana supplies. If they’re not limited by engagement time, people with any positive predictive power can exponentially grow that cash injection, and the profit will naturally be laundered into conventional markets. In practice, I don’t think top whales are exponentially growing their income most of the time—growth usually seems pretty linear, probably due to the difficulty of finding appropriate markets. But if you wanted to prove that good whalebait hunters are good predictors, you would need to demonstrate that they get a good rate of return on their investment, not merely that they have also derived M from other sources.
People can no longer go into negative equity, though you can still create accounts and transfer the M600 or make risky bets, reducing but not fixing the issue.
I just went on the site and found free mana for day-old news within the top 10 links. Ironically, the pivot and its transaction taxes mean that there’s less incentive for people with limited M to pick up these pennies, so they’re left out for longer and mainly benefit whales. There are mechanisms to stop news-based trading (e.g. you could retroactively reverse post-news transactions), but they would create negative-equity problems again.
I am generally skeptical that some of the changes made during the pivot will remain in the long term, as the number of users seems to have trended downwards since it happened, and the changes have broken some other things. Most notably, there is now no force mitigating time-value-of-money effects, so we should not expect the price of long-term markets to equal the expectation of those markets even under ideal circumstances. Also, the transaction taxes are large, which creates market inefficiencies, lowering the precision of the market (because it’s not worth correcting a market error unless it’s wrong by a large enough margin). These are problems that neoliberal economists ought to be aware of, though, so I imagine there are plans to mitigate them.
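To illustrate the time-value point with made-up numbers (the discount rate is an assumption for the example, not a measured figure): if traders can earn a return of r per year elsewhere, a long-term binary market has a whole band of stable prices around the true probability, because inside the band neither side is worth buying.

```python
def yes_breakeven(p, r, years):
    """Highest price at which buying YES beats just holding mana at rate r."""
    return p / (1 + r) ** years

def no_breakeven(p, r, years):
    """Lowest price at which buying NO beats just holding mana at rate r."""
    return 1 - (1 - p) / (1 + r) ** years

# True probability 90%, resolution in 2 years, 10%/year discount rate:
p, r, t = 0.90, 0.10, 2
print(round(yes_breakeven(p, r, t), 3), round(no_breakeven(p, r, t), 3))
# Any price in roughly [0.744, 0.917] is stable, so even with rational
# traders the market price need not equal the 0.9 expectation.
```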
The idea that real money improves performance is another of these neoliberal assumptions with limited evidence. There is a range of papers on this issue that come to different answers as to what conditions, if any, must hold for it to be true.
https://www.tandfonline.com/doi/abs/10.1080/1019678042000245254
https://www.electronicmarkets.org/fileadmin/user_upload/doc/Issues/Volume_16/Issue_01/V16I1_Statistical_Tests_of_Real-Money_versus_Play-Money_Prediction_Markets.pdf
https://ubplj.org/index.php/jpm/article/view/441
https://www.ubplj.org/index.php/jpm/article/view/479
It is almost certainly not true for extinction risk factors, which are a substantial EA interest for prediction-making. It could be true that there is some threshold beyond which money becomes strongly influential, but, for instance, Metaculus informally finds that running competitions for $1000s harms engagement with questions.
I think you misunderstood the counterfactuality point. The issue with the charity program is that the EA orgs could simply have given the money to the charities they normally fund, without putting it into Manifold bank accounts in the meantime and waiting for people to choose which ones to give it to. Allowing people to take the money out as dollars is irrelevant, and just delays things more.
(What follows are a lot of disagreements that are very minor in the grand scheme of things, I think the points I made in the first comment are still valid)
I think that, going forward, they have been cured “as thoroughly as [I] indicate”, and the impact from past printed mana is small, although I do wish Manifold had never caused those issues.
Sure. Bad traders who caught whalebait have, mostly, lost their winnings since whalebait was restricted though. Catnee, for instance, was the biggest winner on WvM, but has since lost it all to bad predictions. Marcus was the second biggest winner on WvM, and he hasn’t lost his winnings since he’s the 3rd best trader on ‘real’ questions, and he’s donated away more than his WvM winnings anyway. And the devaluation helps reduce the impact of ill-gotten mana that remains. But you used those points to support this claim in the original post:
Which was very misleading.
I don’t think this is significantly distorting prices. Of the active people on the creator leaderboard, those who are also top traders, like Jack and Joshua, have roughly 10x the net worth of the highest-ranked creators who aren’t top traders (the current top 3: BTE, Isaac, and strutheo). And creators need to use that mana to pay for more questions.
I wouldn’t claim that in general (Marcus specifically was good at both). The current top trader leaderboard doesn’t count whalebait profits at all.
I don’t think I understand the claim about exponential growth.
Yeah, this is a big problem for prediction markets. Although I don’t think loans helped in the past so much as masked the problem by allowing people to overleverage.
The fees for a trade at 50% are around 3.5%, which you’d expect to distort the probability by at most 1.8% in either direction.
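Sketching that arithmetic under one simple fee model (a fraction of the stake taken per trade; not Manifold’s exact fee formula): a YES buy at price q is only profitable if the true probability exceeds q/(1−fee), so the market can sit up to q·fee/(1−fee) away from the truth before correcting it is worthwhile.

```python
def no_trade_halfwidth(q, fee):
    # Buying YES: spend 1 mana, (1 - fee) of it goes into shares at price q,
    # so expected value is p * (1 - fee) / q; this is profitable only when
    # p > q / (1 - fee). The tolerated mispricing is therefore:
    return q * fee / (1 - fee)

print(round(no_trade_halfwidth(0.5, 0.035), 4))  # 0.0181, i.e. ~1.8%
```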
Manifold bans people if we notice them doing this, and I don’t think this is different in kind from creating sockpuppet accounts on Metaculus.
I strongly agree with that concern about Manifold’s old charity program! But in the future, if you win on Manifold, withdraw your winnings, and donate them, that money comes from other traders’ mana purchases, not from EA donations to Manifold, so it’s not an issue anymore.
Sure, news trading was the .5 in my 1.5 of 6 - still exists but much lower rewards. The amount of such free mana available now is generally on the order of ones or tens of cents, though, sometimes a dollar or two, so it isn’t really worth it for whales to pick it up.
Yeah, I wouldn’t assume that; I noted benefits and drawbacks. I am not sure which of Manifold’s or Metaculus’s models is better for prediction, or indeed whether either is socially valuable enough to be worth using or watching. I think the same point about limited evidence and differing assumptions applies to the paper you cite comparing adjusted prediction polls to markets.
Yeah, those incentives won’t work for extinction risk markets on any platform because you’d be dead.
That particular argument is that once the money involved is large enough to make a career or run a business off, it’ll draw smart people in. There are a few people who professionally trade on Polymarket and make a good income.
By my count, you have implicitly agreed that all of 1-6 used to be issues, but that 2-4 are currently not issues and 5 now needs the phrase “negative equity” deleted. I’m still making mana by reading the news, so I don’t see that you’ve halved that claim. You’re right that whalebait is less profitable, and that I now need to actually search to find the free-mana markets. The fact that I can still do this and then throw all my savings into it means that we should expect exponential growth of mana at some risk-free rate (depending on the saturation of these markets), which is then the comparison point for determining investment skill. In practice there are most likely better things to do with it, and also I can’t be bothered.
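The risk-free-rate point can be made concrete with hypothetical numbers: if picking up stale-news pennies reliably returns even a small edge and winnings are reinvested, the baseline compounds, and it is that compounded path, not zero, that a trader’s returns should beat before we infer predictive skill.

```python
def compound(balance, rate_per_period, periods):
    """Grow a balance at a fixed per-period rate with full reinvestment."""
    for _ in range(periods):
        balance *= 1 + rate_per_period
    return balance

# Hypothetical: M1000 earning a 2%/week "risk-free" edge for a year
# nearly triples without any real forecasting.
print(round(compound(1000, 0.02, 52)))  # ≈ 2800
```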
I recognise inflation’s benefit in countering historic wealth inequality, and will remark that it’s effectively a wealth tax. It unfortunately coincided with the other changes, which made it harder and less rewarding to donate and worsened the time-value problem, triggering my general disengagement from the site. I agree that loans never fixed this problem, but they mitigated it partially.
The difference between this and Metaculus sock puppets is that there’s no reward for making them there. The virtual currencies can’t be translated into real-world gain, and only one “reward” depends on other people, so making bad predictions with your sock puppets doesn’t make you look much better if people look at multiple metrics. Similarly, by requiring currency to express a belief, Manifold structurally limits engagement on questions with no positive resolution possibility—it’s cost-free to predict extinction on Metaculus, but on Manifold, even with perfect foresight (or the guarantee that the market will be N/A’d later), you still sacrifice the time value of your mana to warn people of a risk. This problem is unique to prediction markets. They make it costly (but potentially remunerated) to express your beliefs.
The other problem unique to adversarial prediction grading is that collaboration is disincentivised. Currently, because mana isn’t that valuable, the comments section is full of people exchanging information for social kudos. But when a market becomes financially lucrative, people stop doing this—the comments on Polymarket are basically pure spam. This is one of the reasons I find the idea that Manifold should become more financialised very unwise. It’s not clear that the collaborative factor is smaller than the professionalisation factor for net predictive power (as suggested by the fact that Polymarket doesn’t have that good a calibration). To make money on these things, you don’t need to beat a superforecasting team (the thing that actually beats all of these statistical aggregation methods, lest we forget); you only need to beat the individual whose salary the prize can support.
I don’t believe the original donation has been redistributed and donations are now curtailed by the pivot, so I imagine it will last a while longer. I know the founders believe donations will eventually come from mana purchases (or more venture capital), I’m just skeptical.
I am absolutely on the anti-Manifold side of the Manifest racism controversies and maybe the technical objections here are all correct, but I find the mixing of them together a bit shady.
Its markets are too illiquid and too expensive to trade on to produce good predictions. Its mechanics make it seem more like a social media site for amusement than a serious market.
I think these prediction markets need more short-term questions to stay relevant, rather than the current situation, where there are many markets but not enough liquidity and high transaction costs.
It is very difficult to create an efficient market in any sense; many futures exchanges have contracts with zero volume. The actual way of doing things on this site is even worse than the neoliberals’: at least the neoliberals know they should use the government to enforce and protect the market, rather than assuming the free market will work on its own. Maybe they need to study some finance papers to learn how to set up effective exchanges.
Executive summary: Manifold Markets, a prediction market website, has several flaws that hinder its ability to produce accurate predictions and effectively improve the world, despite its potential for charitable donations.
Key points:
Manifold’s predictive power is worse than simply averaging predictions and has a systematic bias towards predicting events that don’t happen.
The platform’s design allows users to gain virtual currency (mana) through means other than making accurate predictions, undermining the intended accumulation of wealth based on predictive prowess.
The site’s focus on engagement and attracting paying users may conflict with the goal of providing accurate predictions for potential clients.
Manifold’s adherence to neoliberal ideology and market-trusting perspectives may create blind spots in addressing the platform’s shortcomings.
The controversial Manifest24 event and the platforming of bigots can harm diversity, reduce insight, and deter critiques of the platform’s structural issues.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.