I think most broad platforms do not ban “controversial” users when they are getting started.
Allowing “controversial” subreddits did not cause Reddit to fail. Allowing “controversial” videos did not cause YouTube to fail. Allowing “controversial” people on Facebook did not cause Facebook to fail. Allowing “controversial” tweets did not cause Twitter to fail. Indeed, I think banning people instead of being an open or neutral platform is very heavily correlated with failing as a piece of online infrastructure.
I think your counterexamples share a few characteristics—their revenue models are advertising-based, and they had the financial means to operate in the red for quite a while. I don’t think they needed to get into the black as quickly as I think Manifold will, so they had time to grow into large enough forces that advertiser-customers felt they needed to work with them. Although I don’t know the advertising-purchasing system well, my understanding is that there are tons of intermediaries and agents, which makes maintaining brand safety very challenging. Moreover, the advertising market is so massive that you can lose a large fraction of potential advertiser-customers and still be OK. Finally, the users of those sites aren’t giving them any significant amount of money.
I don’t see Manifold as operating in such a forgiving niche, especially for some of the potential candidate business models. There aren’t many foundations interested in prediction markets, and even fewer for whom neither the optics of nor personal distaste for association with “controversial” content will be a barrier. The market for corporations purchasing public intelligence from prediction markets is murky to me, but alienating a bunch of your customers seems like a dubious move. Not only is this market not the size of the advertising market, it also has a different inventory model. There’s a set amount of advertising users will tolerate, so the likely consequence of alienating a bunch of advertisers is that you have to sell that inventory for a lower price. The market for corporations who want to pay for Manifold-based intelligence is probably not limited in the same way, so each alienated potential client translates into a fully lost potential sale.
I know little about whale psychology, but I sense that most whales are emotionally invested to a significant degree in the platform on which they whale. Catering to a class of whales seems a somewhat more viable business model . . . but it would create a bad set of incentives for Manifold. Running the platform in a way that emphasizes attractiveness to whales seems at odds with running a socially useful service.
As far as user experience goes, those sites had mechanisms for ensuring that “controversial” content was served only to those who wanted it. Subreddit mods wield great censorship power, a Facebook user decided whom to friend (the wall algorithms being less aggressive at the time, IIRC), and YouTube isn’t really a social platform. You can mute individual users on Manifold, but at least when I was there last, there wasn’t a good way to avoid “controversial” content if you didn’t want to see it. Finally, I speculate that users are more tolerant of issues with platforms to which they are not forking over significant amounts of money. Once money is involved, they may start to experience cognitive dissonance at supporting a business that is doing things that don’t align with their values.
I’m a bit confused by this discussion, since I haven’t in any way suggested banning people from using the site. That’s a completely separate issue from managing the balance of ideologies behind the site design. As it happens, Manifold liberally bans people, but mostly because they manipulate markets via bots/puppets, troll, or are abusive: this is required for balanced markets and good community spirit, and seems a reasonable policy.
James brought up site moderation philosophy in a comment (“Regarding Manifest and controversial attendees, we kept the same ethos as a our site, where anyone can create markets.”). I responded by asking how that jibed with plausible business models for the company. So it’s a discussion about an issue first raised in the comments. I do think it’s of some relevance to a broader question hinted at in your post: whether the founders’ prior ideological commitments are causing them to make suboptimal business decisions.
I think these are fine hypotheses about the tradeoffs here, though I disagree with most of the analysis. I have thought and read a lot about this, since my primary job is to handle these exact tradeoffs and to build successful platforms, but this thread doesn’t seem like the right context to dig into them.
As one point, I think Manifold’s basic business model is “take a cut of the trading profits/volume/revenue”. The best alternative business model is “have people pay for finding out information via subsidies for markets”.
I don’t think Manifold’s business model relies on advertisers or foundations. I think it scales pretty well with the accuracy and usefulness of markets.
In the “take a rake of trading volume” model without any significant exogenous money coming in, there have to be enough losses to (1) fund Manifold, and (2) make the platform sufficiently positive in EV to attract good forecasters and motivate them to deploy time and resources. Otherwise, either the business model won’t work, or the claimed social good is seriously compromised. In other words, there need to be enough people who are fairly bad at forecasting, yet pump enough money into the ecosystem for their losses to fund (1) and (2). Loosely: whales.
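The arithmetic behind (1) and (2) can be sketched as a toy model. All numbers here are hypothetical, chosen only to illustrate the constraint: since trading is zero-sum before fees, whatever skilled forecasters can win is whale losses minus the platform’s rake.

```python
# Toy model of the rake-based economics described above.
# All figures are hypothetical; the point is the accounting identity,
# not the specific numbers.

whale_losses = 100_000      # net amount unskilled forecasters lose per period
rake_rate = 0.05            # hypothetical platform cut of trading volume
trading_volume = 1_000_000  # total trading volume per period

# (1) The rake funds Manifold.
platform_revenue = rake_rate * trading_volume

# (2) Whatever whale losses remain after the rake is the aggregate
# positive EV available to skilled forecasters.
skilled_forecaster_ev = whale_losses - platform_revenue

# The model only works if skilled_forecaster_ev > 0, i.e. whale losses
# must exceed the platform's total rake.
```

With these illustrative numbers, the rake consumes half the whale losses, leaving the other half as the EV that is supposed to attract skilled forecasters; shrink `whale_losses` below the rake and the skilled-forecaster incentive disappears entirely.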
If that’s right, the business rises or falls predominantly by the amount of unskilled-forecaster money pumped into the system. Good forecasters shouldn’t be the limiting factor in the profit equation; if the unskilled users are subsidizing the ecosystem enough, the skilled users should come. The model should actually work without good forecasters at all; it’s just that the aroma of positive EV will attract them.
This would make whales the primary customers, and would motivate Manifold to design the system to attract as much unskilled-forecaster money as possible, which doesn’t seem to jibe well with its prosocial objectives. Cf. the conflict in “free-to-play” video game design between design that extracts maximum funds from whales and creating a quality game and experience generally.
I disagree with this. I think the obvious source of money for a prediction platform like Manifold is people who want to get accurate information about a question, who then fund subsidies which Manifold gets a cut of. That’s ultimately where the value proposition of the platform comes from, and so where it makes sense to extract the money.
I read your comment as “have people pay for finding out information via subsidies for markets” being your “alternative” model, rather than the “take a cut of the trading profits/volume/revenue” model. Anyway, I mentioned earlier why I don’t think being “controversial” (~ too toxic for the reputational needs of many businesses with serious money and information needs) fits well with that business model. Few would want to be named in this sentence in the Guardian in 2028: “The always-controversial Manifest conference was put on by Manifold, a prediction market with similarly loose moderation norms whose major customers include . . . .”
I think it’s a minor issue that is unlikely to drive away anyone who actually has a “hair-on-fire” problem of the type that a prediction market might solve. I am confident anyone with experience building internet platforms like this would consider it a very irrelevant thing to worry about at the business stage Manifold is at.
I don’t think this will be an issue for Manifold. For example Polymarket allows controversial users and content, and indeed barely moderates comments sections at all, and currently has much higher volume than Manifold. And Polymarket does provide some socially useful predictions—it’s somewhat frequently cited by the mainstream media for presidential election odds.