I like to think that the open exchange of ideas, if conducted properly, converges on the correct answer. Of course, the forum in which this exchange occurs is crucial, especially its systems and software. Compare the amount of truth you obtain from the BBC, Wikipedia, Stack Overflow, Kialo, Facebook, Twitter, Reddit, and the EA Forum. All of these have different methods of verifying truth. The beauty of each of them is that, with the exception of the BBC, you can post whatever you want.
But an inconvenient truth will be penalized in different ways on each. On Wikipedia, it might get edited out in favor of something tamer, though often it isn't. On Stack Overflow, it will be downvoted but remain available, and likely read. On Kialo, it will be met with refutations, though if it really is the truth, it will be promoted. On Facebook and Twitter, many might even reshare it, though into their own echo chambers. On Reddit, it'll get downvoted and then reposted to r/unpopularopinion.
The important thing is to design a system where it takes more work to (a) post a lie or (b) refute the truth, and where there is an incentive to (a) post the truth, (b) refute a lie, and, importantly, (c) read and spread the truth. Whether this is best achieved through citations or a reputation-weighted voting system is beyond me, but it's something I've been mulling over for quite some time.
Prediction markets about the judgements of readers are another thing I keep thinking about: systems where people can make themselves accountable to Courts of Opinion by betting on those courts' prospective judgements. The courts occasionally grab a comment, investigate it more deeply than usual, and enact punishment or reward depending on their findings.
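To make the idea concrete, here is a minimal sketch of what such a bet-then-audit loop might look like. Everything here is hypothetical (the names, the payout rule, the `audit_prob` parameter, the `court_verdict` callable); it's one way the mechanism could be wired up, not a proposal for the exact design:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str
    # Stakes placed on the court's eventual verdict:
    # (bettor, predicts_upheld, stake), where predicts_upheld=True means
    # "a court that investigated this comment would uphold it".
    bets: list = field(default_factory=list)

def place_bet(comment, bettor, predicts_upheld, stake):
    """A bettor makes themselves accountable by staking on the court's verdict."""
    comment.bets.append((bettor, predicts_upheld, stake))

def audit(comments, court_verdict, audit_prob=0.01, rng=random):
    """The court occasionally grabs a comment, investigates it deeply,
    and settles all bets against its verdict. Winners split the losers'
    pot in proportion to their own stakes."""
    payouts = {}
    for comment in comments:
        if rng.random() > audit_prob:
            continue  # most comments are never audited; the *chance* of audit does the work
        upheld = court_verdict(comment)  # the expensive, careful investigation
        winners = [(b, s) for b, p, s in comment.bets if p == upheld]
        losers_pot = sum(s for b, p, s in comment.bets if p != upheld)
        total_winning_stake = sum(s for _, s in winners) or 1
        for bettor, stake in winners:
            payouts[bettor] = payouts.get(bettor, 0) + stake * losers_pot / total_winning_stake
        for bettor, predicts, stake in comment.bets:
            if predicts != upheld:
                payouts[bettor] = payouts.get(bettor, 0) - stake
    return payouts
```

The key design choice is that only a small random fraction of comments ever gets the expensive investigation, yet every bet is exposed to it, so the incentive applies everywhere at a fraction of the cost.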
I've raised these sorts of concepts with Lightcone as a way of improving vote sorting (where we'd sort according to a prediction market's expectation of the eventual ratio between positive and negative reports from readers). They say they've thought about it.
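The sorting rule itself would be simple once the market exists. A sketch, assuming `market_price_positive` stands in for whatever market supplies the probability that a random future reader report on a comment is positive:

```python
def expected_report_ratio(comment, market_price_positive):
    """Market-implied expected ratio of positive to negative reader reports."""
    p = market_price_positive(comment)
    return p / max(1 - p, 1e-9)  # guard against division by zero as p -> 1

def sort_by_market(comments, market_price_positive):
    """Order comments by the market's expected positive:negative report ratio,
    instead of by the raw up/down votes accumulated so far."""
    return sorted(
        comments,
        key=lambda c: expected_report_ratio(c, market_price_positive),
        reverse=True,
    )
```

The point of sorting on the market's expectation rather than current votes is that early, noisy voting stops determining visibility; traders with a track record of predicting eventual reader judgement do.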