I’ve always found the “karma” system utterly repulsive and deeply disturbing (across online forums in general). It’s a tool that can so easily catalyze bias and censorship, to the point that it becomes far more dangerous than it is useful. And giving older members higher-weighted votes is especially dangerous: it discourages new members from questioning the ideas of the dominant majority, hence leading to dogmatism.
The assumption that this will be prevented by the already existing variety of views is not nearly a good enough guarantee: on the one hand, all the current members may share a certain (unreflected) bias; on the other hand, some members may become less active for periods of time, which can break the plurality of views that is supposed to keep everyone’s biases in check.
What’s the alternative? Perhaps value-based votes, which let you see what like-minded people (your interest-neighbors, so to speak) like. Think of last.fm and the way it ranks the music recommended to you based on what your neighbors are listening to, while still letting you browse the newest or most popular items even if they fall outside your immediate set of preferences. If that’s hard to implement, it doesn’t follow that taking a ticket toward a potentially dogmatic and undemocratic community is the way to go (where by “undemocratic” I mean directly impeding democratic principles, such as a community’s ability to sustain the challenge of minority opinions and to preserve channels through which those opinions can be heard and openly argued with).
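To make the interest-neighbor idea a bit more concrete, here is a minimal sketch of what such value-based ranking could look like. All names and data are hypothetical; this assumes similarity is measured by overlap in upvote history (Jaccard), which is just one of many possible choices:

```python
# Hypothetical sketch of "value-based" ranking: posts are scored by the
# votes of a reader's interest-neighbors (users with similar upvote
# history) instead of a single global karma score.

def jaccard(a, b):
    """Overlap between two users' sets of upvoted posts."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def neighbor_ranking(me, upvotes, k=2):
    """Rank posts by how many of my k nearest interest-neighbors upvoted them."""
    others = [u for u in upvotes if u != me]
    neighbors = sorted(others,
                       key=lambda u: jaccard(upvotes[me], upvotes[u]),
                       reverse=True)[:k]
    scores = {}
    for u in neighbors:
        for post in upvotes[u]:
            scores[post] = scores.get(post, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

upvotes = {
    "me":    ["p1", "p2"],
    "ann":   ["p1", "p2", "p3"],  # close neighbor
    "bob":   ["p2", "p4"],
    "carol": ["p9"],              # distant taste
}
print(neighbor_ranking("me", upvotes))
```

The last.fm-style escape hatch would then be an extra, separate view (e.g. "newest" or "globally most upvoted") alongside this personalized ranking, so minority views remain reachable.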
By the way, I don’t think the idea of karma has anything to do with “elitism”. It has to do with in-group bias and the dangers that emerge from it, such as the censorship of minority views. So if I weigh the danger of in-group bias against a somewhat tedious search through many posts, I’d always prefer the latter.
Thank you for your very interesting and thoughtful comment!
I just want to extend your thinking a little further into possible solutions. The blockchain space in particular has produced some interesting new ideas about trust and how to organize communities around it. For example, the Stellar Consensus Protocol works with “quorum slices”, determined by the people you trust, that give you a “personal” view of the overall state. Similarly, you could nominate a “Member Slice” in which some members’ votes are excluded, weighted down, or weighted up when calculating your post weights. This would let you tailor what you see to your needs as your thinking evolves. So if a tyranny ensues, you have the possibility of “navigating around” it. And depending on how you implement it, other people could subscribe to your view of the forum and thus propagate this new algorithm for weighting posts. I hope this isn’t too complicated… (for those interested in more details, here is a link to a graphic novel explaining the Stellar Consensus Protocol: https://www.stellar.org/stories/adventures-in-galactic-consensus-chapter-1)
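The “Member Slice” idea above can be sketched very simply: each reader keeps their own per-member weight multipliers, so the same set of votes yields a different effective score per reader. This is only loosely inspired by Stellar’s quorum slices, not an implementation of the actual protocol, and all names and numbers are illustrative:

```python
# Each reader's "Member Slice" is a dict of per-member multipliers.
# Members not in the slice count with the default weight, so the
# empty slice reproduces the ordinary one-person-one-vote score.

DEFAULT_WEIGHT = 1.0

def post_score(votes, slice_weights):
    """votes: {member: +1 or -1}; slice_weights: {member: multiplier}."""
    return sum(v * slice_weights.get(m, DEFAULT_WEIGHT)
               for m, v in votes.items())

votes = {"alice": +1, "bob": +1, "carol": -1}

# Default view: everyone counts equally.
print(post_score(votes, {}))  # 1.0

# My slice: I trust alice more and down-weight carol.
my_slice = {"alice": 2.0, "carol": 0.25}
print(post_score(votes, my_slice))
```

“Subscribing to someone’s view of the forum” would then just mean adopting (or merging in) their `slice_weights` dict, which is what makes routing around a local tyranny cheap.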
My main point was just to agree with you that a very hierarchical voting system may profit from some “countermeasures” that can be used in times of misuse or tyranny.
Thanks for the input, alexherwix! This proposal sounds very interesting. In general, I find this question really challenging: which model of quality control best mitigates the dangers of in-group bias? On the one hand, the model you suggest (which seems quite close to what I had in mind above) is really appealing. On the other hand, it would be interesting to see actual studies on the comparative impact of different solutions: e.g. the trust-based mechanism vs. top-down (“institutional”) injection of opposing views. For example, the “controversial” tab on Reddit seems to do a nice job of keeping polarizing views around.