I think overall this post plays into a few common negative stereotypes of EA: enthusiastic, well-meaning people (sometimes with a grandiose LotR-reference username) proposing grand plans to solve an enormously complex problem without really acknowledging or understanding the nuance.
Suggesting that we simply develop an algorithm to identify “high quality content”, and that a combination of crowds and experts will reliably distinguish factual from non-factual information, seems to miss the point of the problem: both of these things are extremely difficult, and that is precisely why we have a disinformation crisis.
I’m responding because I think this discourages a new user from engaging and testing their ideas against a larger audience, maybe some of whom have relevant expertise and some of whom will engage; that seems like a decent way to try to learn. Of course, good intentions to solve a ‘disinformation crisis’ like this aren’t sufficient; ideally we would perform serious analysis of the problem (scale, neglectedness, tractability, and all that fun stuff), and in this case tractability seems most relevant. I think your second paragraph is useful in noting that this is extremely difficult to implement, but it mostly gestures at the problem’s existence as evidence.
I share that impression, though: disinformation is a hard problem, and I also had a kind of knee-jerk reaction to “high quality content”. But I feel like engaging with the piece with more of a yes-and attitude, to encourage entrepreneurial young minds, and/or bringing in more relevant facts about the domain could be a better contribution.
But I’m doing the same thing and just being meta here, which is easy, so I’ll try it myself in another comment.
It is true that this is not likely to solve the disinformation crisis. It is also true that the successful implementation of such a platform would be quite difficult. However, there are reasons why I outlined the platform as I did:
Small online newsrooms like 404 Media have recently come into existence with subscriber-based models that allow them to produce high-quality content while catering to specialised audiences. If sufficient resources are there to attract high-quality reporters (who, as I note in the post, perform a function that cannot easily be replaced by algorithms), then the platform has a good chance of producing technical, scientific, or cause-based news that is worth reading in its own right.
Subreddits have been widely noted as efficient ways of finding answers to complex domain-specific questions, largely because they concentrate a regular, domain-specific technical userbase and feature ruthless downvoting of posts that spread misinformation. Similarly, in Facebook’s system of emoji reacts, certain reactions have been found to correlate strongly with the spread of inflammatory news. Of course, both of these platforms have monetisation incentives that mean they cannot act properly on these signals. A subscription-based model would hopefully reduce these perverse incentives and allow for better algorithms than exist today.
“High quality” as an indicator here is not about the quality of the reporting, evidence, etc. in a given link, but about “relative quality”, in a manner similar to content-agnostic ranking algorithms like PageRank. Since the model approximates news tickers, with new links coming in over time rather than websites linking to each other spatially, a version of Reddit’s content-ranking algorithms (which are open-sourced) can be used.
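To make that concrete, here is a minimal sketch in Python of the “hot” formula from Reddit’s open-sourced ranking code, which scores a link by its net vote balance and its age. The function name and the example figures are mine, and the constants follow the published version as I understand it.

```python
from datetime import datetime, timezone
from math import log10

def hot(ups: int, downs: int, posted: datetime) -> float:
    """Rank a link by net vote balance and recency, following Reddit's
    open-sourced "hot" formula."""
    s = ups - downs
    order = log10(max(abs(s), 1))              # magnitude of the vote balance
    sign = 1 if s > 0 else -1 if s < 0 else 0  # penalise net-negative links
    # 1134028003 is the epoch constant from the published code; dividing by
    # 45000 means ~12.5 hours of age costs one order of magnitude of net votes.
    seconds = posted.timestamp() - 1134028003
    return round(sign * order + seconds / 45000, 7)

# Hypothetical example: a fresh link with 100 net votes outranks a year-old
# link with 1000, because recency dominates past one factor of ten in votes.
fresh = hot(120, 20, datetime.now(timezone.utc))
stale = hot(1200, 200, datetime(2024, 1, 1, tzinfo=timezone.utc))
print(fresh > stale)  # True
```

The design point is that the time term keeps the ticker fresh without letting a viral post dominate indefinitely, and none of it requires judging the content itself.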
Finally, I understand being dismissive of certain expert groups and some forms of crowd-based information sourcing. However, if you reject both of them at once, then we’re left with quite limited options for information gathering.
Again, this is not a solution in the silver-bullet sense. But it is also not as fanciful as it perhaps appears at first glance. A lot of the technology is already here, and with proper investment and application it can be used to have a positive impact.