Hi, I work on Manifold! Thanks for the critical take — we’re always happy to hear these issues so we can continue improving.
IMO this is the crux:
Metaculus’ self-measured Brier score is 0.111, compared to Manifold’s 0.168
While Metaculus does perform better (seriously impressive work!), we have >130,000 user-created markets across all manner of topics at various levels of seriousness. Personal questions tend to have higher Brier scores than more objective science questions, for example.
When compared to other prediction market sites (Kalshi, Polymarket), we generally perform better. See for example https://calibration.city
If you pick a topic that you think is important, I think you will find that Manifold is contributing a lot of relevant forecasts.
For example, we have >1,000 questions just in the topic “AI Safety”. I think getting a good forecast on all those questions is a valuable service to the world.
You mentioned that Manifold did worse this year in the ACX forecasting contest. That worries me as well, though I will note that the year before we did “extraordinarily well”, placing in the 99.5th percentile: https://www.astralcodexten.com/p/who-predicted-2022
Markets are a different forecasting mechanism than polling, and I think they have a lot to contribute to the forecasting space. In particular, I think that scaled-up markets have the potential to be the most accurate forecasting method, because there can be large incentives to find the most accurate price. Any bias you see in markets can also be corrected with trading strategies, which give feedback in the form of profitability.
We have recently changed the Manifold economy significantly to reduce bonuses and prevent the kind of exploits where accounts could go negative, so a lot of your points here have been addressed.
Regarding Manifest and controversial attendees, we kept the same ethos as our site, where anyone can create markets. On balance, we think it creates more value for the world not to shut down ideas we might personally disagree with. Much more has been said on this topic elsewhere.
How does this fit in with Manifold as a business, though? I wouldn’t expect super-high correlation between any specific business owner’s personal ideology and good business sense for their business. It can happen, but such claims usually make my radar for suspiciously convenient convergence start a-chirping.
I remain confused as to Manifold’s primary intended customers and (by extension) its core product, but the quote above doesn’t jibe well with most of the candidate answers I come up with. It’s very likely that staying “controversial” will lead most would-be users who are not “controversial” to go elsewhere. And being marked as “controversial” is going to disqualify Manifold from funding at a lot of foundations, without any clear countervailing advantage. It’s also toxic for corporations who might be interested in paying for information.
I suppose there’s a scenario in which whales become the main customers and pseudo-gambling is the main product, although you’re limiting your supply of candidate whales there. Maybe hunting for whales in the right-wing sea could be profitable, but I think it’s going to be hard to make a socially useful product with right-wing whales as your main food.
I think most broad platforms that get started do not ban “controversial” users from them.
Allowing “controversial” subreddits did not cause Reddit to fail. Allowing “controversial” videos did not cause YouTube to fail. Allowing “controversial” people on Facebook did not cause Facebook to fail. Allowing “controversial” tweets did not cause Twitter to fail. Indeed, I think banning people, instead of being an open or neutral platform, is very heavily correlated with failing as a piece of online infrastructure.
I think your counterexamples have a few shared characteristics: their revenue models are advertising-based, and they had the financial means to operate in the red for quite a while. I don’t think they needed to go into the black as quickly as I think Manifold will, so they had time to grow into large enough forces that advertiser-customers felt they needed to work with them. Although I don’t know the advertising-purchase system well, my understanding is that there are tons of intermediaries and agents, in a way that makes maintaining brand safety very challenging. Moreover, the advertising market is so massive that you can lose a large fraction of potential advertiser-customers and still be OK. Finally, the users of those sites aren’t giving them any significant amount of money.
I don’t see Manifold as operating in such a forgiving niche, especially for some of the potential candidate business models. There aren’t many foundations interested in prediction markets, and even fewer where neither the optics nor personal distaste for association with “controversial” content would be a barrier. The market for corporations purchasing public intelligence from prediction markets is murky to me, but alienating a bunch of your customers seems like a dubious move. Not only does this market not have the size of the advertising market, it also has a different inventory model. There’s a set amount of advertising users will tolerate, so the likely consequence of alienating a bunch of advertisers is that you have to sell that inventory for a lower price. The market for corporations who want to pay for Manifold-based intelligence is probably not limited in the same way, so each alienated potential client translates into a fully lost potential sale.
I know little about whale psychology, but I sense that most whales are emotionally invested to a significant degree in the platform on which they whale. Catering to a class of whales seems a somewhat more viable business model . . . but it would create a bad set of incentives for Manifold. Running the platform in a way that emphasizes attractiveness to whales seems at odds with running a socially useful service.
As far as user experience, those sites had mechanisms for ensuring that “controversial” content was only served to those who wanted it. Subreddit mods wield great censorship power, a Facebook user decided who to friend (the wall algorithms being less aggressive at the time IIRC), and YouTube isn’t really a social platform. You can mute individual users on Manifold, but at least when I was there last there wasn’t a good way to avoid “controversial” content if you didn’t want to see it. Finally, I speculate that users are more tolerant of issues with platforms on which they are not forking over significant amounts of money. Once money is involved, they may start to experience cognitive dissonance at supporting a business that is doing things that don’t align with their values.
I’m a bit confused by this discussion, since I haven’t in any way suggested banning people from using the site. That’s a completely separate issue from managing the balance of ideologies behind the site design. As it happens, Manifold liberally bans people, but mostly because they manipulate markets via bots/puppets, troll, or are abusive: this is required for balanced markets and good community spirit, and seems a reasonable policy.
James brought up site moderation philosophy in a comment (“Regarding Manifest and controversial attendees, we kept the same ethos as our site, where anyone can create markets.”). I responded by asking how that jibed with plausible business models for the company. So it’s a discussion about an issue first raised in the comments. I do think it’s of some relevance to a broader question hinted at in your post: whether the founders’ prior ideological commitments are causing them to make suboptimal business decisions.
I think these are fine hypotheses about the tradeoffs here, though I disagree with most of the analysis. I have thought and read a lot about this, since my primary job is to handle these exact tradeoffs and to build successful platforms, but this thread doesn’t seem like the right context to dig into them.
As one point, I think Manifold’s basic business model is “take a cut of the trading profits/volume/revenue”. The best alternative business model is “have people pay for finding out information via subsidies for markets”.
I don’t think Manifold’s business model relies on advertisers or foundations. I think it scales pretty well with the accuracy and usefulness of markets.
In the “take a rake of trading volume” model without any significant exogenous money coming in, there have to be enough losses to (1) fund Manifold, and (2) make the platform sufficiently positive in EV to attract good forecasters and motivate them to deploy time and resources. Otherwise, either the business model won’t work, or the claimed social good is seriously compromised. In other words, there need to be enough people who are fairly bad at forecasting yet pump enough money into the ecosystem for their losses to fund (1) and (2). Loosely: whales.
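To make that arithmetic concrete, here is a minimal toy sketch of the flows; all numbers and names are hypothetical, not Manifold’s actual fees or volumes:

```python
# Toy model of a "rake on trading volume" economy. All numbers are
# hypothetical; they are not Manifold's actual fees or volumes.
# Trading is zero-sum between users minus the platform's rake, so
# unskilled losses must fund both the platform and skilled-trader profit.

unskilled_losses = 100_000   # amount unskilled forecasters lose per year
trading_volume = 1_000_000   # total volume those losses are spread across
rake_rate = 0.05             # platform's cut of volume

platform_revenue = rake_rate * trading_volume          # funds (1), Manifold
skilled_profit = unskilled_losses - platform_revenue   # funds (2), skilled EV

print(f"Platform revenue:          {platform_revenue:>8,.0f}")
print(f"EV left for skilled users: {skilled_profit:>8,.0f}")
# If skilled_profit <= 0, good forecasters have no financial reason to show
# up: the whole system is funded by whatever the unskilled side pumps in.
```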
If that’s right, the business rises or falls predominantly with the amount of unskilled-forecaster money pumped into the system. Good forecasters shouldn’t be the limiting factor in the profit equation; if the unskilled users are subsidizing the ecosystem enough, the skilled users should come. The model should actually work without good forecasters at all; it’s just that the aroma of positive EV will attract them.
This would make whales the primary customers, and would motivate Manifold to design the system to attract as much unskilled-forecaster money as possible, which doesn’t seem to jibe well with its prosocial objectives. Cf. the conflict in “free-to-play” video game design between design that extracts maximum funds from whales and creating a quality game and experience generally.
I disagree with this. I think the obvious source of money for a prediction platform like Manifold is people who want accurate information about a question, who then fund subsidies that Manifold gets a cut of. That’s ultimately where the value proposition of the platform comes from, and so where it makes sense to extract the money.
I read your comment as “have people pay for finding out information via subsidies for markets” being your “alternative” model, rather than being the “take a cut of the trading profits/volume/revenue” model. Anyway, I mentioned earlier why I don’t think being “controversial” (~ too toxic for the reputational needs of many businesses with serious money and information needs) fits in well with that business model. Few would want to be named in this sentence in the Guardian in 2028: “The always-controversial Manifest conference was put on by Manifold, a prediction market with similarly loose moderation norms whose major customers include . . . .”
I think it’s a minor issue that is unlikely to drive away anyone who actually has a “hair-on-fire” problem of the type a prediction market might solve. I am confident anyone with experience building internet platforms like this would consider it a very irrelevant thing to worry about at the business stage Manifold is at.
I don’t think this will be an issue for Manifold. For example Polymarket allows controversial users and content, and indeed barely moderates comments sections at all, and currently has much higher volume than Manifold. And Polymarket does provide some socially useful predictions—it’s somewhat frequently cited by the mainstream media for presidential election odds.
I wouldn’t expect super-high correlation between any specific business owner’s personal ideology and good business sense for their business.
It’s the other way around. Prediction markets that anyone can create are good, but it’s such a crazy idea that one has to be pretty libertarian to even think of it in the first place.
I don’t see the idea of “prediction markets that anyone can create” as particularly crazy or even novel. Many people were frustrated with regulatory barriers preventing prediction markets in the US before Manifold came along; my impression is that Manifold’s innovation is “if we use play money rather than real money we avoid a lot of the regulation and some of the incentive for abuse, but hopefully still motivate people to participate”. Plus, perhaps, innovation in market structure that attempts to simplify the market interface and/or compensate for the lack of liquidity. (And a bunch of product work actually getting the details right.)
I realise you actually work for Manifold so maybe you have access to better information than me, but this is what it seems like to me from the outside.
Ah, yes, then we have succeeded in normalizing the idea that you should be able to create a market, trade in it, and then resolve it according to your own judgment.
When we were getting started literally everyone we told this to, including YC partners, thought it was crazy.
Only Scott Alexander saw the value in users creating and resolving their own markets. He awarded us the ACX grant, shared the link to our prototype with his readers, and the rest is history!
Yeah, the idea that self-resolution and insider trading don’t require central regulation to manage does seem more like a novelty, that’s fair.

You’re right, and there were anyone-created prediction markets before Manifold, like Augur. I misspoke. The real new, unintuitive thing was markets anyone could create and resolve themselves, rather than deferring to a central committee or court system. I think this level of self-sovereignty is genuinely hard to think of. It’s not enough to be a crypto fan who likes cypherpunk vibes; one has to be the kind of person who thinks about free banking, or who gets the antifragile advantages that street merchants on rugs have over shopping malls.
Although it’s quite possible that Manifold got popular more because the UX was better than other prediction markets, or because a lot of rationalists joined at the same time, which let the social aspect take off.
Allowing users to create any question they want with no moderation gate (unlike Metaculus or any other prediction market site) is a big part of Manifold’s success (even business success!). We further empower users to judge and resolve their markets. While not everyone is perfect or is always acting in good faith, this system largely works.
This openness has been key, but is it the same thing as allowing anyone to go to our conference? Not exactly, but they are related. Another example is that the scheduling software allowed anyone to book any time slot for any room at Manifest, and we basically didn’t have any problems there.
My take is that:
1. The “controversy” is way overblown, based on an unfair connecting of dots by journalists at The Guardian who didn’t even attend. The actual Manifest was amazing, and people who attended were nearly unanimous on that point, except:
2. There may have been one person at Manifest who was both edgy and impolite, bordering on aggressive. If we could have kicked that person out, then the anonymous EA Forum poster wouldn’t ever have felt threatened, and we wouldn’t be having this conversation.
So, I do agree that some moderation can be helpful with business goals, just like we have had to ban some users from our site. But you need less moderation than you would think!
I strongly encourage people to discuss Manifest elsewhere; as stated above, I didn’t go and only comment on it to illustrate the lack of thought-diversity in the site design.
If you run events with “controversial” speakers and attendees, and allow “controversial” stuff on your platform, then having critical pieces run against your business is part of the territory whether any specific article is fair or unfair.
Likewise, moderation and vetting are necessarily imprecise and error-prone; attempting to draw the line at X means that ~15% of the time you will actually draw the line at [X + 1 sd] and you’ll slide to [X + 2 sd] from time to time. I don’t know who the “one person” you are describing was, but given a near-miss on letting M.V. attend if he had bought a ticket, I’m hard pressed to see the “one person” as an extraordinarily uncommon [X + 3-4 sd] type error. It’s unlikely that there would be multiple extreme outlier misses associated with the same event. Also, I do not believe the poster characterized the problem as primarily linked to a single person who was preeminent in their problematic behavior. All that is to say that letting people like the “one person” slip through is probably not going to be rare given where you seem to have X set at the moment.
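For what it’s worth, the ~15% figure matches the standard normal tail. A quick check, assuming (as the sd framing implicitly does) that enforcement error around the intended line X is roughly Gaussian:

```python
import math

# P(Z > k) for a standard normal: how often random enforcement error pushes
# the effective line k standard deviations past the intended threshold X.
def upper_tail(k: float) -> float:
    return 0.5 * math.erfc(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"P(effective line beyond X + {k} sd) = {upper_tail(k):.1%}")
# Prints 15.9%, 2.3%, 0.1%: a +1 sd miss is routine, a +2 sd miss is
# expected now and then, and only a +3-4 sd miss is genuinely rare.
```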
Thanks for engaging positively! You’re correct about the crux: if the resulting prediction market worked really well, the technical complaints wouldn’t matter. But the number of predictions is much less important to me than their trustworthiness and the precision of specifying exactly what is being predicted. Being well-calibrated is good, but does not necessarily indicate good precision (i.e. a good Brier score), and calibration.city is quite misleading in presenting the orders-of-magnitude-more questions on Manifold as a larger dot, rather than using dot size to indicate uncertainty bounds on the calibration.
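To make the calibration-versus-precision point concrete, here is a toy example on entirely synthetic data: two forecasters who are both perfectly calibrated but differ hugely in Brier score, because calibration ignores sharpness.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# "Sharp" forecaster: predicts 0.9 or 0.1, and events really occur at those rates.
sharp_p = rng.choice([0.9, 0.1], size=n)
outcomes = (rng.random(n) < sharp_p).astype(float)

# "Blurry" forecaster: always predicts the overall base rate (0.5 here).
blurry_p = np.full(n, 0.5)

def brier(p, y):
    # Mean squared error between forecast probability and the 0/1 outcome.
    return np.mean((p - y) ** 2)

print(f"Sharp forecaster:  Brier = {brier(sharp_p, outcomes):.3f}")   # ~0.09
print(f"Blurry forecaster: Brier = {brier(blurry_p, outcomes):.3f}")  # ~0.25
# Both are perfectly calibrated (events forecast at probability p occur a
# fraction p of the time), so a calibration plot cannot separate them,
# but the Brier score can.
```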
It’s not true that markets at any scale produce the most accurate forecasts. There’s extensive literature showing that long-term prediction markets have to worry about the time-value of money and risk aversion influencing the market valuation. Manifold’s old loan system helped alleviate the time-value problem but gave you a negative-equity problem. I don’t see this time-value effect in your calibration data, but I suspect that data is dominated by short-term markets. Because market participation is strongly affected by liquidity, smaller markets don’t give people incentives to get involved unless the prices are very wrong. Thus getting markets to scale up when they’re not intrinsically controversial, and therefore interesting, is a substantial problem.

The incentives to make accurate predictions could just be prizes for accurate individual predictions, which can be aggregated into a site prediction by any other mechanism. The key features of a market mechanism for prediction aggregation are that the reward must be tied to the probability of the event and must be blind to who is providing the money. There’s no reason to believe either of these is a useful constraint, and I don’t believe they’re optimal.
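As a rough, one-sided sketch of the time-value effect described above (illustrative numbers only; this ignores risk aversion and the symmetric discount facing the NO side):

```python
# Time-value drag on long-dated binary markets, illustrative only.
# A risk-neutral trader locking up capital until resolution pays at most
# p = q / (1 + r) ** T for a contract with true probability q.

def max_rational_bid(q: float, r: float, years: float) -> float:
    return q / (1 + r) ** years

true_prob = 0.80
for years in (0.25, 1, 5, 10):
    p = max_rational_bid(true_prob, r=0.05, years=years)
    print(f"{years:>5}-year market: price ceiling {p:.2f} vs true probability {true_prob}")
# The longer the horizon, the further the price sits below the true
# probability from time-value alone, before risk aversion is considered.
```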
I note that many accounts are still in negative equity, and that a few such accounts, which primarily generated their wealth by betting on weird metamarkets, substantially influence the price of AI extinction risk markets. The number and variety of markets are therefore potentially detrimental to the accuracy of predictions, particularly given the power-law rewards to market participation. While I refer to negative equity, the fact that we can still create puppets and transfer their $200 to another user (directly or via bad bets) means the problem persists to a smaller extent without anyone’s account going negative.
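To spell out the bad-bets transfer I mean, here is a hypothetical sketch (illustrative numbers; it ignores Manifold’s actual AMM and fee mechanics): a fresh puppet dumps its starting balance on one side of a market at an absurd price while the main account takes the other side.

```python
# Hypothetical sketch of puppet-to-main wealth transfer via a deliberately
# bad bet. Numbers are illustrative and ignore real AMM/fee mechanics.

puppet_balance = 200    # a fresh account's starting balance
bad_yes_price = 0.99    # puppet buys YES at 99% on a market that's really ~50%
true_prob = 0.50

yes_shares = puppet_balance / bad_yes_price        # shares the puppet acquires
puppet_ev = true_prob * yes_shares - puppet_balance
main_ev = -puppet_ev                               # main account holds the NO side

print(f"Puppet EV: {puppet_ev:+.0f}")  # about -99
print(f"Main EV:   {main_ev:+.0f}")    # about +99
# In expectation, each puppet launders roughly half its balance to the main
# account; if the main account also controls resolution, it can capture
# nearly all of it, and no account ever goes negative in the process.
```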