Hi, the general model for the platform would be something akin to a web-based news site (e.g. WIRED or Vox) combined with a subreddit. There’s the human-run, in-depth coverage part, where the work should be done to increase impartiality, but there’s also the linklist part, which lets community members “float” content they find interesting without getting bogged down in writing it up. The links shared will certainly be opinionated, but that should be mitigated by the human coverage, while the limitations of human coverage (slow updates, long reading time) can hopefully be compensated for by the linklist/subreddit portion of the site.
Light_of_Illuvatar
It is true that this is not likely to solve the disinformation crisis. It is also true that the successful implementation of such a platform would be quite difficult. However, there are reasons why I outlined the platform as I did:
Small online newsrooms like 404 Media have recently emerged with subscriber-based models that allow them to produce high-quality content while catering to specialised audiences. If sufficient resources are there to attract high-quality reporters (who, as I note in the post, perform a function that cannot easily be replaced by algorithms), then the platform has a good chance of producing technical, scientific, or cause-based news that is worth reading on its own.
Subreddits have been widely noted as efficient ways of finding answers to complex domain-specific questions, largely because they concentrate a regular, technically expert userbase around a domain and feature ruthless downvoting of posts that spread misinformation. Similarly, in Facebook’s system of emoji reacts, certain reactions (the “angry” react in particular) have been found to correlate strongly with the spread of inflammatory news. Of course, both of these platforms have monetisation incentives that prevent them from acting properly on these signals. A subscription-based model would hopefully reduce these perverse incentives and allow for better algorithms than exist today.
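To make that concrete, here is a minimal sketch of how a platform freed of those incentives might fold vote and react signals into a quality score. The field names (treating “angry” reacts as the inflammatory signal) and the penalty weight are my own illustrative assumptions, not any existing platform’s API:

```python
# Illustrative sketch only: field names and weights are assumptions,
# not a real platform's API.

def quality_signal(upvotes: int, downvotes: int, angry_reacts: int) -> float:
    """Collapse community reactions into a score in [0, 1].

    Heavy downvoting lowers the approval ratio, and a reaction mix
    skewed toward "angry" applies an extra penalty, so a ranking
    algorithm can demote links that look inflammatory.
    """
    total = upvotes + downvotes + angry_reacts
    if total == 0:
        return 0.5  # no signal yet: neutral prior
    approval = upvotes / total
    anger_share = angry_reacts / total
    # The 0.5 penalty weight is arbitrary; a real system would tune it.
    return max(0.0, approval - 0.5 * anger_share)
```

The point is simply that a subscription platform could actually act on a score like this, since demoting engagement bait does not cost it ad revenue.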
“High Quality” as an indicator here is not about the quality of the reporting, evidence, etc. in a given link, but about “relative quality”, in the manner of content-agnostic ranking algorithms like PageRank. Since the model approximates a news ticker, with new links coming in over time rather than websites linking to each other spatially, a version of Reddit’s content-ranking algorithms (which were open-sourced) can be used.
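For concreteness, the “hot” function from Reddit’s open-sourced ranking code looks roughly like the following self-contained Python sketch (the constants are as in the published code; the wrapper and example are mine):

```python
from datetime import datetime, timezone
from math import log10

def hot(ups: int, downs: int, date: datetime) -> float:
    """Reddit-style "hot" rank: the net vote score enters
    logarithmically (the first 10 votes count as much as the
    next 90), while submission time adds a linear bonus, so
    fresh links can outrank older, higher-scored ones."""
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else (-1 if score < 0 else 0)
    # 1134028003 is the epoch offset from Reddit's original code;
    # 45000 seconds (~12.5 h) of age offsets one unit of log-score.
    seconds = date.timestamp() - 1134028003
    return round(sign * order + seconds / 45000, 7)

# A link posted ~12.5 hours later ties an older one with 10x the votes:
old = hot(100, 0, datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc))
new = hot(10, 0, datetime(2024, 1, 1, 12, 30, tzinfo=timezone.utc))
```

The design choice worth copying is that votes count logarithmically while time counts linearly, which keeps the ticker moving instead of letting early winners sit at the top indefinitely.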
Finally, I understand being dismissive of certain expert groups and of some forms of crowd-based information sourcing. However, if you reject both of them at once, then we’re really left with quite limited options for information gathering.
Again, this is not a solution in the sense of a silver bullet. But it is also not as fanciful as it perhaps appears at first glance. Much of the technology is already here, and with proper investment and application it can have a positive impact.