What makes EA, EA, what makes EA antifragile, is its ruthless transparency
Although I cheer for this, and although I really want to move to a world where radical transparency wins, I currently don't believe that we're in a world like that right now (I wish I could explain why I think that without immediately being punished for excess transparency, but for obvious reasons that seems impossible).
How do we get to that world? Or, if you see this world in a better light than I do, if you believe the world already mostly manages to avoid punishing important true ideas, what are the dynamics that preserve and promote that?
I like to think that an open exchange of ideas, if conducted properly, converges on the correct answer. Of course, the forum in which this exchange occurs is crucial, especially its systems and software. Compare the amount of truth you get from the BBC, Wikipedia, Stack Overflow, Kialo, Facebook, Twitter, Reddit, and the EA Forum. All of these have different methods of verifying truth. The beauty of each of them, with the exception of the BBC, is that you can post whatever you want.
But an inconvenient truth will be penalized in different ways. On Wikipedia, it might get edited out in favor of something tamer, though often not. On Stack Overflow, it will be downvoted but still available, and likely read. On Kialo it will get refuted, although if it really is the truth it will be promoted. On Facebook and Twitter, many might even reshare it, though into their own echo chambers. On Reddit, it'll get downvoted and then reposted to r/unpopularopinion.
The important thing is to design a system where it takes more work to (a) post a lie or (b) refute the truth, and where there is an incentive to (a) post the truth, (b) refute a lie, and, importantly, (c) read and spread the truth. Whether that is done through citations or a reputation-based voting system is beyond me, but it's something I've been mulling over for quite some time.
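To make one possible reading of "reputation-based voting" concrete, here is a minimal sketch; every name and number in it (the classes, the 0.1 adjustment rate, the toy verdict) is a hypothetical illustration, not a proposal from the discussion above. Votes are weighted by the voter's reputation, and reputation later shifts toward voters whose calls matched the eventual verdict, so promoting a lie or refuting a truth costs more accumulated standing than the reverse.

```python
from dataclasses import dataclass, field

@dataclass
class Voter:
    name: str
    reputation: float = 1.0   # hypothetical starting weight

@dataclass
class Post:
    text: str
    votes: dict = field(default_factory=dict)   # voter name -> +1 or -1

def score(post, voters):
    """Reputation-weighted vote total: burying a truth or boosting a lie
    requires spending the reputation of whoever casts the votes."""
    return sum(v * voters[name].reputation for name, v in post.votes.items())

def settle(post, voters, verdict, rate=0.1):
    """Once a post is eventually judged true (+1) or false (-1),
    shift reputation toward the voters who called it correctly."""
    for name, v in post.votes.items():
        voters[name].reputation *= (1 + rate) if v == verdict else (1 - rate)

# Toy usage with made-up voters and a made-up verdict.
voters = {n: Voter(n) for n in ("a", "b", "c")}
post = Post("an inconvenient truth", votes={"a": +1, "b": -1, "c": +1})
print(score(post, voters))          # 1.0
settle(post, voters, verdict=+1)    # the post turns out to be true
print(voters["b"].reputation)       # 0.9 -- lost standing for refuting it
```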
I guess prediction markets will help.
Prediction markets about the judgements of readers are another thing I keep thinking about: systems where people can make themselves accountable to Courts of Opinion by betting on the courts' prospective judgements. A court occasionally grabs a comment, investigates it more deeply than usual, and enacts punishment or reward depending on its findings.
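As a rough illustration of that betting mechanic (the names, stakes, audit rate, and lookup-table "judge" below are all invented for the example, not part of the proposal): authors stake on what a court would rule if it audited their claim, most claims are never audited, and the sampled ones pay out or penalize.

```python
import random
from dataclasses import dataclass

@dataclass
class Claim:
    author: str
    text: str
    stake: float                  # what the author bets the court would uphold
    predicted_verdict: bool = True

def court_session(claims, audit_rate, judge):
    """Occasionally grab a claim, investigate it more deeply than usual,
    and reward or punish the author depending on the finding."""
    payoffs = {}
    for claim in claims:
        if random.random() > audit_rate:
            continue                       # most claims are never audited
        verdict = judge(claim)             # the deep investigation, stubbed out here
        delta = claim.stake if verdict == claim.predicted_verdict else -claim.stake
        payoffs[claim.author] = payoffs.get(claim.author, 0.0) + delta
    return payoffs

# Toy usage: the "court" just looks claims up in a known ground-truth table.
ground_truth = {"the sky is blue": True, "2 + 2 = 5": False}
claims = [Claim("alice", "the sky is blue", stake=10.0),
          Claim("bob", "2 + 2 = 5", stake=10.0)]
print(court_session(claims, audit_rate=1.0, judge=lambda c: ground_truth[c.text]))
# {'alice': 10.0, 'bob': -10.0}
```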
I've raised these sorts of concepts with Lightcone as a way of improving vote sorting (we'd sort comments according to a prediction market's expectation of the eventual ratio of positive to negative reports from readers). They say they've thought about it.
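A minimal sketch of what that sorting might look like, assuming each comment has an attached market whose price is read as the expected fraction of positive reports; the share-ratio pricing and all the numbers are stand-ins for the example, not how Lightcone or a real market maker (e.g. an LMSR) would price it.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    yes_shares: float   # bets that eventual reader reports will be positive
    no_shares: float    # bets that they will be negative

def market_expectation(c):
    """Market-implied expected fraction of positive reports; the raw share
    ratio stands in for a proper market maker's price."""
    total = c.yes_shares + c.no_shares
    return 0.5 if total == 0 else c.yes_shares / total

def sort_by_market(comments):
    """Sort by the market's expectation of the eventual positive:negative
    report ratio rather than by the current karma total."""
    return sorted(comments, key=market_expectation, reverse=True)

# Toy usage with made-up comments and share counts.
comments = [Comment("unpopular but true", 80, 20),
            Comment("popular but shallow", 55, 45),
            Comment("noise", 10, 90)]
for c in sort_by_market(comments):
    print(round(market_expectation(c), 2), c.text)
# 0.8 unpopular but true
# 0.55 popular but shallow
# 0.1 noise
```

The point of sorting on the market's expectation rather than on current votes is that an unpopular-but-true comment can rise immediately if bettors expect readers' considered reports to eventually favor it.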