I’m not sure I follow how your 20% version relates to the original post/proposal about categorized voting: the summaries seem reasonable/good but unrelated, and the two points about tagging just seem to be “it would be nice if we used/had more tags.”
There are a lot of other points/responses I could address, but I think that it’s probably better to step back and summarize my big-picture concerns rather than continue narrowing in:
1. Time: How much time would this system require on the part of users?
2. Quality: At the estimated time input, will the quality/consistency reach a point where the system can actually be relied on enough to save time and improve understanding?
I think the answer to (1) is “probably a lot”:
Suppose there are 10 relevant posts per day on average.
Suppose that each of the 25 dimensions requires a minimum average of ~1 minute of thought to make a single passable evaluation (especially before users become familiar with the process, and even once they are familiar they “have to continuously reallocate scarce points”). We’ll just ignore the eighth vector.
This produces an estimate of ~250 minutes (>4 hours) per day for a single perspective on each article, on average.
It seems plausible that for the metric to have much value, it warrants at least 2–3 perspectives per article, effectively more than doubling the time commitment.
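The estimate above can be sketched as a quick back-of-envelope calculation (the post count, dimension count, and per-dimension time are the assumptions stated above, not measured figures):

```python
# Back-of-envelope time estimate for categorized voting, using the
# assumptions above: 10 relevant posts/day, 25 scored dimensions,
# ~1 minute of thought per dimension per post.
POSTS_PER_DAY = 10
DIMENSIONS = 25
MINUTES_PER_DIMENSION = 1

# One full evaluation of every post by a single person:
single_perspective = POSTS_PER_DAY * DIMENSIONS * MINUTES_PER_DIMENSION
print(f"single perspective: {single_perspective} min/day "
      f"(~{single_perspective / 60:.1f} h)")

# With 2-3 perspectives per article, the total community commitment:
for perspectives in (2, 3):
    total_hours = single_perspective * perspectives / 60
    print(f"{perspectives} perspectives: ~{total_hours:.1f} h/day")
```

At the stated assumptions this gives 250 minutes (over 4 hours) per day for a single perspective, and roughly 8–12.5 hours per day once multiple perspectives are required.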
I’m not going to go much deeper into (2), as I think the issue is fairly easy to see, but I will just highlight that quality scales with the time invested, so skimping on time will make the quality suffer.
Ultimately, I do not see this metric being sufficiently valuable to be worth a daily commitment of >5 hours of EA time; I would much rather people spend that time creating new posts, commenting on existing posts, etc.
1. For anyone who opts in to switch to, or add, voting matrices: about 30 minutes to learn on their favorite post, and after that roughly the same as single-score voting, multiplied by the number of categories/subcategories they want to vote on (if you intuitively assign an upvote, you would just intuitively assign maybe 3 upvotes by clicking on images).
2. Yes, depending on the learning curve, and assuming that people who would spend too much time learning would not opt in, this would be sufficiently accurate and quick. It would also provide aggregate data. However, it may be easier if experts who have seen many posts make the estimates: assuming that one or a few humans keep aware of posts and can assess what a person may like, then someone like an EA Librarian can recommend the posts an individual would benefit from most. Those recommendations could be higher-quality and more efficient. So you may be right: the quality/time ratio may be much worse than that of the best alternative.
Oh, yes, if there were a moderator who had to digitize their perspective, these categories would probably not capture the complexity of the post (the human brain is much better at this), so a reminder note could work better. But if you currently upvote only one post per week with a single click, then upvoting one post per week with 4×4 clicks, on average, is still OK. Yes, on the reallocation of points: surely users would not be so affected by these upvoting demands that they stop paying attention to FB or other media. Yes, at least 10 similar perspectives could be taken as saturation, unless new perspectives emerge?
Hm, I guess you are not counting so much on an intuitive understanding of these infographics. In general, when people develop something themselves, it is much easier for them to orient in the summary (including an image), so somehow everyone would need to be involved in developing the scoring metrics.
I would much rather people regularly paused their posting and commenting to reflect on where their actions are leading, why they do what they do, whether they are missing something, whether solutions have already been developed, what some of the problems are, and who in the community likes what. This could improve epistemics and cooperation efficiency.
I may agree with you that categorized scoring metrics are not the only way to achieve this objective. There may be much better ways, such as expert recommendations of posts and cooperation opportunities.
Hm, ok, maybe just more tags is the solution.
Thank you very much for the reply.