I understand, Henrik. Thanks for your reply.
Forum karma
The karma system similarly works to highlight information, but there are edge cases. Posts appear on and disappear from first page views depending on their karma. And by default, new comments that receive negative karma are not listed among the new comments on the homepage.
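A minimal sketch of those two edge cases, assuming a hypothetical forum backend (the names and the ten-slot front page are illustrative, not the forum's actual code):

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    karma: int

def front_page(posts: list[Item], slots: int = 10) -> list[Item]:
    # Posts appear and disappear from first page views as their karma
    # moves them in or out of the top slots.
    return sorted(posts, key=lambda p: p.karma, reverse=True)[:slots]

def new_comments(comments: list[Item], hide_negative: bool = True) -> list[Item]:
    # By default, comments with negative karma never show up in the
    # homepage's new-comments list.
    if hide_negative:
        return [c for c in comments if c.karma >= 0]
    return comments
```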
This forum in relation to the academic peer review system
The peer review system in scientific research is truly different from a forum where second-tier researchers offer summaries, arguments, or opinions. A forum should encourage access to disparate opinions and perspectives.
The value of disparate information and participants here
The content offered here includes recommendations for new information. I evaluate that information according to fairly conventional critical-thinking criteria: peer-reviewed, established science, good methodology. Disparate perspectives among researchers here give me access to multiple points of view found in academic literature and fields of study. For example, this forum helped me research a long-standing foresight conflict between climate economists and earth scientists (as well as related topics in climate modeling and scenario development).
NOTE: Peer-reviewed information can have problems as well, but not problems that a voting system relying on arbitrary participants will fix.
Forum perspectives should not converge without rigorous argument
Another system that bubbles up what I’d like to read? OK, but will it filter out divergence, unpopular opinions, evidence that a person has a unique background or point of view, or a new source of information that contradicts current information? Will your system make it harder to trawl through other researchers’ academic sources by making it less likely that forum readers ever read those researchers’ posts?
In this environment, among folks who go through summaries, arguments, and opinions for whatever reason, an information trend, once it appears, lets me course-correct, provided it's different and valid.
The trend could signal something that needs changing, like “Here’s new info that passes muster! Do something different now!” Or it could signal a large information gap, like “Whoa, this whole conversation is different! I either disagree with all of the conclusions or don’t understand them at all. What’s going on? What am I missing?”
A learning environment
Forum participants implicitly encourage me to explore Bayesianism and superforecasting. Given what I suspect are superforecasting’s problems (its aggregation algorithms and the bias in them and in forecast confirmations), I would be loath to explore it otherwise. But obviously smart people continue to assert its value in a way that I can digest as a forum participant. My minority opinion of superforecasting actually leads me to learn more about it, because I participate in conversation here. If my minority views were filtered out so strongly that no one ever conversed with me at all, I could just blog about how EA folks are really wrong and move on. Not the thing to do, but do you see why tolerance of my opinions here matters? It serves both sides.
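To make “aggregation algorithms” concrete: one approach from the forecasting literature averages individual forecasts in log-odds space and then extremizes the mean, and the choice of extremizing factor is one place where bias can creep in. A minimal sketch, assuming a simple version of that technique (the names and the factor are illustrative, not any platform’s actual algorithm):

```python
import math

def aggregate(probabilities: list[float], extremize: float = 1.5) -> float:
    # Average the forecasts in log-odds space, then push the result
    # away from 0.5 by an extremizing factor. That factor is a tuning
    # choice, and a biased choice skews every aggregate it produces.
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean * extremize))

print(aggregate([0.6, 0.7, 0.65]))  # ~0.72: pushed past the simple mean of 0.65
```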
From my perspective, it takes patience to study climate economists, superforecasting, Bayesian inference, and probabilism. Meanwhile, you folks, with different and maybe better knowledge of these topics than mine, but a different perspective, provide that learning environment. If there can be reciprocation, that’s good; EA folks deserve helpful outside perspectives.
My experience as other people’s epistemic filter
People ignore my recommendations, or the intent behind them. They either don’t read what I recommend or, much more rarely, read it but dismiss it without any discussion. If those people use anonymized voting as their augmentation approach, then I don’t want to be their filter. They need less highlighting of the information they want to find, not more.
Furthermore, at this level of information processing, secondary or tertiary sources, posts already act like filters. Ranking those filters to decide whether to even read them is a bit much. I wouldn’t want to attempt to provide that service.
Conclusion
ChatGPT, and the new focus on conversational interfaces, makes it possible that future forum participants will be AI, not people. If so, they could be productive participants rather than spam bots.
Meanwhile, the forum could get rid of the karma system altogether, or add a configuration option that lets a user turn off karma voting and ranking. That would be a pleasant alternative for someone like me, who rarely gets much karma anyway, and it would offer even less temptation to focus on popular topics or feign popular perspectives.
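If the forum ever offered that option, it could be a per-user preference consulted wherever posts are ordered. A hypothetical sketch, with assumed names rather than any real forum API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    title: str
    karma: int
    posted_at: datetime

@dataclass
class UserPrefs:
    # Hypothetical per-user toggle; no forum exposes exactly this today.
    rank_by_karma: bool = True

def ordered(posts: list[Post], prefs: UserPrefs) -> list[Post]:
    # With the toggle off, fall back to plain chronological order, so
    # popularity never decides what this reader sees first.
    key = (lambda p: p.karma) if prefs.rank_by_karma else (lambda p: p.posted_at)
    return sorted(posts, key=key, reverse=True)
```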