That will give me 3-6 times the strong voting power of a forum beginner, which seems like way too much.
Personally, I’d want the difference to be bigger, since I find what the best-informed users think much more informative.
Ideally the karma system would also be more sensitive to the average quality of users’ comments/posts. Right now sheer quantity is rewarded more than is ideal, in my view. But I realise it’s non-trivial to improve on the current system.
We could give weight to the average vote per comment/post, e.g. a factor calculated by adding all weighted votes on someone’s comments and posts and then dividing by the number of votes on those comments and posts (not by the number of comments/posts, to avoid penalizing comments in threads that aren’t really read).
We could also use a prior on this factor, so that users with a small number of highly upvoted things don’t get too much power.
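A minimal sketch of what that factor could look like, using a simple pseudo-count prior; the constants and names here are illustrative assumptions of mine, not the Forum’s actual karma code:

```python
# Toy sketch of the "average vote per comment/post, with a prior" factor.
# PRIOR_MEAN and PRIOR_WEIGHT are illustrative assumptions.

PRIOR_MEAN = 1.0    # assumed typical vote value
PRIOR_WEIGHT = 20   # pseudo-votes; higher = stronger shrinkage toward the prior

def quality_factor(total_weighted_karma: float, num_votes: int) -> float:
    """Average weighted vote per vote received, shrunk toward a prior so
    users with only a few highly upvoted items don't get outsized power."""
    return (PRIOR_MEAN * PRIOR_WEIGHT + total_weighted_karma) / (PRIOR_WEIGHT + num_votes)

# 300 karma from 100 votes: (20 + 300) / 120 ~ 2.7
# 30 karma from 3 votes:    (20 + 30) / 23   ~ 2.2  (10.0 without the prior)
```

Note that dividing by votes received, rather than by comments written, keeps the “unread thread” case from being penalized, as described above.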
I don’t fully understand what you’re saying, but my guess is that you’re suggesting we should take a user’s total karma and divide that by votes. I don’t understand what it means to “give weight to this”—does the resulting calculation become their strong vote power? I am not being arch; I literally just don’t understand, like I’m dumb.
I know someone who has some data and has studied the forum voting realizations and weak/strong upvotes. They are totally not a nerd, I swear!
Thoughts:
A proximate issue with the idea I think you are proposing is that currently, voting patterns and the concentration of votes or strong upvotes differ systematically by the “class of post/comment”:
There is a class of “workaday” comments/posts that no one finds problematic and that get just regular upvotes.
Then there is a class of “I’m fighting the War in Heaven for the Lisan al Gaib” comments/posts that get a large number of strong upvotes. In my opinion, the karma gains here aren’t driven by merit or content.
I had an idea to filter for this (which is sort of exactly the opposite of yours): downweight the karma of these comments based on their environment, to get a “true measure” of the content. Also, the War in Heaven posts have a sort of frenzy to them. It’s not impossible that giving everyone a 2x–10x multiplier on their karma contributes to this frenzy, so moderating it algorithmically seems good (a toy sketch of one such downweighting follows below).
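For illustration, one way such an environment-based downweighting could work; the normalization scheme here is purely an assumption:

```python
# Toy sketch of downweighting a comment's karma by the vote intensity
# ("frenzy") of its thread. The scheme is an illustrative assumption.

def environment_adjusted_karma(comment_karma: float,
                               thread_total_votes: int,
                               typical_thread_votes: int = 30) -> float:
    """Shrink karma earned in unusually vote-heavy threads toward what the
    same content might earn in a typical thread."""
    intensity = max(thread_total_votes / typical_thread_votes, 1.0)
    return comment_karma / intensity ** 0.5

# +40 in a 300-vote pile-on: 40 / sqrt(10) ~ 12.6
# +40 in a 30-vote thread:   40 / 1.0      = 40.0
```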
A deeper issue is that people will be much more risk-averse in a system like this, which rewards their average karma rather than their total.
In my opinion, people are already too risk averse, in a way that prevents confrontation, but at least that leads to a lot of generally positive comments/posts, which is a good thing.
Now, this sort of system creates a new incentive: to aim for zingers or home runs. This seems pretty bad. I actually don’t think it is that dangerous/powerful in terms of actual karma gain, because, as you mentioned, this can be moderated algorithmically. I think the bigger problem is that it can lead to changes in perception or culture shifts.
Hmm ya, that seems fair. It might generally encourage preaching to a minority of strong supporters who strong upvote, with the rest indifferent and abstaining from voting.
This is a good point itself.
This raises a new, pretty sanguine idea:
We could have a meta voting system that awards karma or adjusts upvoting power depending on getting upvotes from different groups of people.
Examples motivating this vision:
If the two of us had a 15-comment-long exchange and upvoted each other each time, we would gain a lot of karma. I don’t think our karma gains should be worth hugely more than, say, 4 “outside” people upvoting both of us once for that exchange.
If you receive a strong upvote for the first time, from someone from another “faction”, or from a person who normally doesn’t upvote you, that should be noted, rewarded and encouraged (but still anonymized[1]).
On the other hand, upvotes from the same group of people who have upvoted you 50x in the past, and do it 5x a week, should be attenuated somewhat (a toy sketch of one attenuation schedule follows below).
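A toy sketch of that attenuation; the decay schedule and the base strong-upvote weight are illustrative assumptions only:

```python
import math

# Toy sketch: each additional vote from the same voter on the same author
# counts a bit less. The decay schedule is an illustrative assumption.

def attenuated_vote_weight(base_weight: float, past_votes_from_voter: int) -> float:
    """First vote from a given voter counts in full; repeats decay."""
    return base_weight / math.sqrt(1 + past_votes_from_voter)

# First-ever strong upvote (assumed base weight 9): 9 / sqrt(1)  = 9.0
# 50th vote from a habitual upvoter:                9 / sqrt(50) ~ 1.3
```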
This is a little tricky:
Some care is needed here, so we don’t get the opposite problem of encouraging arbitrary upvotes from random people. We probably want to filter for substantial opinions. (At the risk of being ideological: I think “diversity”, which is the idea here, is valuable but needs to be thoughtfully designed, not done blindly.)
Some of the ideas, like solving for “factions”, involve “the vote graph” (a literal mathematical object, analogous to the friend graph on FB but for votes). This requires actual graph theory; I would probably consult a computer scientist or applied mathematician. (A rough sketch of building such a graph appears below.)
I could also see the view graph being useful.
This isn’t exactly the same as your idea, but a lot of your idea can be folded in (while less direct, there are several ways we could bake in something like “people who get strong upvotes consistently are rewarded more”).
Maybe there is a tension between notifying people of upvoting diversity and giving away voter identity, which might be one reason why some actual graph theory probably needs to be used.
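For concreteness, a rough sketch of building the vote graph and clustering it. It assumes access to (voter, author, weight) vote records, which are not public, and uses networkx’s modularity communities as a stand-in for whatever clustering a designer would actually pick:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical (voter, author, vote weight) records -- not publicly available.
votes = [
    ("alice", "bob", 2), ("bob", "alice", 2),
    ("alice", "bob", 1), ("carol", "dave", 1), ("dave", "carol", 2),
]

G = nx.DiGraph()
for voter, author, w in votes:
    if G.has_edge(voter, author):
        G[voter][author]["weight"] += w
    else:
        G.add_edge(voter, author, weight=w)

# Cluster the undirected projection; each cluster is a candidate "faction".
clusters = greedy_modularity_communities(G.to_undirected(), weight="weight")
print([sorted(c) for c in clusters])  # e.g. [['alice', 'bob'], ['carol', 'dave']]
```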
I think this is an interesting idea. I would probably recommend against trying to “group” users, because it would be messy and hard to understand, and I am just generally worried about grouping users and treating them differently based on their groups. Just weakening your upvotes on users you often upvote seems practical and easy to understand.
Would minority views/interests become less visible this way, though?
I agree, there are issues. It seems complex and not transparent at first.[1]
I think the goal of using the graphs and building “factions” (“faction” is not the word I would use if this were going into a grant proposal) is that it makes them visible and legible.
This might be more general and useful than it seems and can be used prosocially.
For example, once legible, you could identify minority views and give them credence (or just make this a “slider” for people to examine).
Like you said, this is hard to execute. I think it’s hard in the sense that the designer needs to find the right patterns for both socialization and practical use. Once found, the patterns can ultimately be relatively simple and transparent.
Misc comments:
To be clear, I think the ideas I’m discussing in this post, and forum reform generally, are at least a major project, up to a major set of useful interventions, maybe comparable to all of “prediction markets” (in the sense of the diversity of projects it could support, the investment, and the potential impact that would justify it).
This isn’t something I am actively pursuing (but these forum discussions are interesting and hard to resist).
This sounds somewhat like a search-engine eigenvector thing, e.g. PageRank.
Yes, I think that’s right; I am guessing it’s because both involve graph theory (and I guess in PageRank, using the edges turns it into a linear algebra problem too?).
Note that I hardly know any more graph theory than that.
PageRank involves graph theory mostly in the observation that there’s a directed graph of pages linking to each other. It then immediately turns to linear algebra: the idea is that you want a page’s weight to correspond to the sum of the weights of the pages linking to it, and this exactly describes finding an eigenvector of the graph matrix.
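To make the eigenvector point concrete, here is a minimal power-iteration PageRank (a karma analogue would replace links with votes); the damping constant is the standard 0.85:

```python
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0]}   # page -> pages it links to
n, d = 3, 0.85                        # number of pages, damping factor

# Column-stochastic matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1 / len(outs)

r = np.full(n, 1 / n)
for _ in range(100):                  # power iteration: converges to the
    r = d * M @ r + (1 - d) / n       # dominant eigenvector of the damped matrix

print(r)                              # each page's weight is fed by its in-links
```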
On second thought, I guess your idea for karma is more complicated; maybe I’ll look at some simple examples and see what comes up if I happen to have the time.
That’s interesting to know about PageRank. It’s clever that it goes straight to linear algebra.
I think building the graph requires data that isn’t publicly available, like the identity of voters and viewers. It might be hard to get a similar dataset to test whether a method works. Some of the “clustering techniques” might not apply to other data.
Maybe there is a literature for this already.
‘I don’t fully understand what you’re saying, but my guess is that you’re suggesting we should take a user’s total karma and divide that by votes.’
Ya, that’s right: total karma divided by the number of votes.
‘I don’t understand what it means to “give weight to this”—does the resulting calculation become their strong vote power?’
What I proposed could determine strong vote power on its own, but strong vote power could also be based on a combination of things. By “give weight to this” I meant that it could be just one of multiple determinants of strong vote power.
‘A deeper issue is that people will be much more risk-averse in a system like this, which rewards their average karma rather than their total.’
This is why I was thinking it would be only one factor: we could use both total and average karma to determine strong vote power. But also, maybe a bit of risk-aversion is good? The EA Forum is getting used more, so it would be good if people’s comments and posts were high quality rather than noise.
Also, again, it wouldn’t penalize users for making comments that don’t get voted on; it would encourage them to chase strong upvotes and avoid downvotes (relative to regular upvotes or no votes).
I had thought about a meta karma system too, but one where the voting effect or output differs by context (so a strong upvote or strong downvote has a different effect on comments or posts depending on the situation).
This would avoid the current situation of “I need to make sure the comment I like ranks highly or has the karma I want” and also prevent certain kinds of hysteresis.
While the voting effect or output can change radically in this system based on context, IMO it’s important to make sure that the user’s latent underlying voting power accumulates in a fairly mundane/simple/gradual way; otherwise you get weird incentives.
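A toy sketch of that separation, where the visible effect varies by context but the latent power accrues slowly; the log schedule and the multipliers are illustrative assumptions:

```python
import math

def latent_vote_power(total_karma: float) -> float:
    """Mundane, gradual accumulation: grows with karma, but only slowly."""
    return 1 + math.log10(1 + max(total_karma, 0.0))

# Context multipliers are assumptions; a real system would define these carefully.
CONTEXT_MULTIPLIER = {"quiet_thread": 1.0, "frontpage_pileon": 0.5}

def applied_vote_effect(total_karma: float, context: str) -> float:
    """The visible effect can vary radically by context;
    the latent power underneath does not."""
    return latent_vote_power(total_karma) * CONTEXT_MULTIPLIER[context]

# A 1000-karma user has latent power ~4.0, halved in a pile-on context.
```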
Maybe this underweights the case where many people appreciate a given comment or post and regular-upvote it, but few appreciate it enough to strong-upvote it, relative to a post or comment that gets a few strong upvotes and at most a few regular votes?
I think having +6/7 votes seems too high, because it’s not clear who is voting when you see the breakdown of karma to votes. I end up never using my strong upvote, but I would if it were in the 3–5 range.
I think it would also help if we had separate upvotes for usefulness vs. “I agree with this” vs. “this applies to me”, among other things, and if we had in-post polls, especially for anecdotal posts.
I agree that a multi-dimensional voting system would have some advantages.
I’d be interested in an empirical experiment comparing posts under unweighted vs. weighted karma, though of course establishing “ground truth” might be hard.
‘Personally, I’d want the difference to be bigger, since I find what the best-informed users think much more informative.’
This seems very strange to me. I accept that there’s some correlation between upvoted posters and epistemic rigour, but there’s a huge amount of noise, both in reasons for upvotes and in subject areas. EA includes a huge diversity of subject areas, each requiring specialist knowledge. If I want to learn improv, I don’t go to a Fields Medalist or a Pulitzer Prize-winning environmental journalist, so why should the equivalent be true on here?
I think that a fairly large fraction of posts is of a generalist nature. Also, my guess is that people with a large voting power usually don’t vote on topics they don’t know (though no doubt there are exceptions).
I’d welcome topic-specific karma in principle, but I’m unsure how hard it is to implement/how much of a priority it is. And whether karma is topic-specific or not, I think that large differences in voting power increase accuracy and quality.
So much optimates energy. Strong upvoted.
I am not too convinced by the logical flow of your argument, which, if I understand correctly, is:
more karma = more informed = higher value on opinion
I think that at each of these steps (more karma --> more informed, more informed --> higher value of opinion), you lose a bunch of definition, such that I am a lot less convinced of this.
I’m not saying that there’s a perfect correlation between levels of karma and quality of opinion. The second paragraph of my comment points to quantity of comments/posts being a distorting factor; and there are no doubt others.
Downvoted because I think “more active on the forum for a longer time” may be a good proxy for “well informed about what other forum posters think” (and even that’s doubtful), but is a bad proxy for “well informed about reality”.
Ok, this comment is ideological (but really interesting).
This comment is pushing on a lot of things; please don’t take any of this personally.
(or take it personally if you’re one of the top people listed here and fighting me!)
So below is the actual list of top karma users.
BIG CAVEATS:
I am happy to ruthlessly, mercilessly attack the top-ranked person there, on substantive issues like their vision of the forum. Like, karma means nothing in the real world (but for onlookers, my choice is wildly not career optimal[1]).
Some people listed are sort of dumb-dumbs.
This list is missing the vast majority (95%) of the talented EAs and people just contributing in a public way, much less those who don’t post or blog.
The ranking and composition could be drastically improved.
But, yes, contra you, I would say that the people on this list do have better and more reasonable views of reality than the average person, and probably than the average EA.
More generally, this is positively correlated with karma.
Secondly, importantly, we don’t need EAs on the forum to have the best “view of reality”.
We need EAs to have the best views of generating impact, such as creating good meta systems, avoiding traps, driving good culture, allocating funding correctly, attracting real outside talent, and appointing EAs and others to be impactful in reality.
Another issue is that there isn’t really a good way to resolve “beef” between EAs right now.
Funders and other senior EAs, in the most prosocial, prudent way, are wary of this bad behavior and consequent effects (lock-in). So it’s really not career optimal to just randomly fight and be disagreeable.
I think I would disagree with this. At the very least, I think people on the list write pretty useful posts and comments.
Still, the ranking doesn’t really match who I think consistently makes the most valuable comments or posts, and I think it reflects volume too much. I’m probably as high as I am mainly because of a large number of comments that got 1 or 2 regular upvotes (I’ve made the 5th most comments among EA Forum users).
(I don’t think people should be penalized for making posts or comments that don’t get much or any votes; this would discourage writing on technical or niche topics, and commenting on posts that aren’t getting much attention anymore or never did. This is why I proposed dividing total karma by number of votes on your comments/posts, rather than dividing total karma by the number of your comments/posts.)
‘This list is missing the vast majority (95%) of the talented EAs and people just contributing in a public way, much less those who don’t post or blog.’
I agree.
‘The ranking and composition could be drastically improved.’
I agree that they could probably be improved substantially. I’m not sure about “drastically”, but I think “substantially” is enough to do something about it.
Thank you for the corrections, which I agree with. It is generous of you to graciously indulge my comment.
Ok, debate aside as it’s 2am here, where does one get these data?
It’s on Issa Rice’s site: https://eaforum.issarice.com/userlist?sort=karma
I’m not quite sure I follow your reasoning. I explicitly say in the second paragraph of my comment that “right now sheer quantity is awarded more than ideal”.
I think it’s probable that the answer to “whose voice should count the most” is either:
No one’s—all voices should be equal, or
Something not close to any internally-available metric.
The dialectic was as follows:
1. You said: “Downvoted because I think “more active on the forum for a longer time” may be a good proxy for “well informed about what other forum posters think” (and even that’s doubtful), but is a bad proxy for “well informed about reality”.”
2. I pointed out that the second paragraph of my comment made clear that I think quantity of comments/posts (what you call “more active on the forum for a longer time”) is a distorting factor. Thus, we seem to be in agreement on this point, which made your objection/downvote unclear to me.
3. Your new comment doesn’t seem to address 2, but makes an unrelated point.
This wasn’t an unrelated point, but I’ll try to make my argument more explicit.
Your original comment said:
“Personally, I’d want the difference to be bigger [than having 3 times the voting power], since I find what the best-informed users think much more informative.”
and your second paragraph implied that by “best-informed” you still mean something that’s measurable from their forum stats.
What I’m saying is that this is not a good idea, regardless of whether you mean the current metric or an “improved” one, since being actually best-informed probably has close to nothing to do with any of those, and you’ll just end up amplifying unwanted effects.
More generally, the idea that we can objectively and consistently judge which users are most useful to all other users—or which will have a predictably higher impact if their votes are listened to—seems wrong to me. The prioritisation ideas that we apply to e.g. global health interventions may only be very weakly applicable to discourse.
OK, thanks for explaining your reasoning.
On the object level issue, maybe we’ll have to agree to disagree. Fwiw I don’t think the karma system has to be perfect to be useful.