Hmm. I’ve watched the scoring of topics on the forum, and have not seen much interest in topics that I thought were important for you, whether because the perspective, the topic, or the users were unpopular. The forum appears to be functioning in accordance with the voting of users, for the most part, because you folks don’t care to read about certain things or hear from certain people. It comes across in the voting.
I filter your content, but only for myself. I wouldn’t want my peers, no matter how well informed, deciding what I shouldn’t read, though I don’t mind them recommending information sources, and I don’t mind recommending sources of my own, on a per-source basis. I try to follow the rule that I read anything I recommend before I recommend it. By “source” here I mean a specific body of content, not a specific producer of content.
Ironically, I actually hesitate to strong-vote. I don’t like being part of a trust system, in a way. It’s pressure on me without a solution.
I prefer “friends” who reveal things that I couldn’t find on my own rather than ones who, for their lack of “trust”, hide things from me. More likely, their lack of trust will prove to be a mistake about what I’d like to read. No one assumes I will accept everything I read, as far as I know, so why should they be protecting me from genuine content? I understand spam; AI spam would be a real pain, all of it leading to how I need viagra to improve my epistemics.
If this were about peer review and scientific accuracy, I would want to allow that system to continue to work, but still be able to hear minority views, particularly as my background knowledge of the science deepens. Then I fear incorrect inferences (and even incorrect data) a bit less. I still prefer scientific research to be as correct as possible, but scientific research is not what you folks do. You folks do shallow dives into various topics and offer lots of opinions. Once in a while there’s some serious research but it’s not peer-reviewed.
You referred to AI gaming the system, voting rings, citation rings, or what have you. It all sounds bad, and there should be ways of screening such things out, but I don’t think the problem should be handled with a trust system.
An even stronger trust system will just soft-censor some people or some topics more effectively. You folks have a low tolerance for being filters on your own behalf, I’ve noticed. You continue to rely on systems, like karma or your self-reported epistemic statuses, to qualify content before you’ve read it. You absolutely indulge false reasons to reject content out of hand. You must be very busy, and so you make that systematic mistake.
Implementing an even stronger trust system will just make you folks even more marginalized in some areas, since EA folks are mistaken in a number of ways. With respect to studies of inference methods, forecasting, and climate change, for example, the posting majority’s view here appears to be wrong.
I think it’s baffling that anyone would ever trust a voting system to decide the importance of controversial topics open to argument. I can see voting working on Stack Overflow, where answers are easy to test and to give “yes, works well” or “no, doesn’t work well” feedback about, at least in the software sections. There, expertise does filter up via the voting system.
Implementing a more reliable trust system here will just make you folks more insular from folks like me. I’m aware that you mostly ignore me. Well, I develop knowledge for myself by using you folks as a silent, absorbing, disinterested sounding board. However, if I do post or comment, I offer my best. I suppose you have no way to recognize that, though.
I’ve read a lot of funky stuff from well-intentioned people, and I’m usually OK with it. It’s not my job, but there’s usually something to gain from reading weird things even if I continue to disagree with their content. At the very least, I develop pattern recognition useful for understanding and disagreeing with arguments: false premises, bogus inferences, poor information-gathering, unreliable sources, and so on. A trust system will deprive you folks of experiencing your own reading that way.
What is in fact a feature must seem like a bug:
“Hey, this thing I’m reading doesn’t fit what I’d like to read and I don’t agree with it. It is probably wrong! How can I filter this out so I never read it again? Can my friends help me avoid such things in future?”
Such an approach is good for conversation. Conversation is about what people find entertaining and reaffirming to discuss, and it does involve developing trust. If that’s what this forum should be about, your stronger trust system will fragment it into tiny conversations, like a party in a big house with different rooms for every little group. Going from room to room would be hard, though. A person like me could adapt by simply offering affirmations and polite questions, and develop an excellent model of every way that you’re mistaken, without ever offering any correction or alternative point of view, all while you trust that I think just like you. That would actually have served me very well in the last several months. So, hey, I have changed my mind. Go ahead. Use your trust system. I’ll adapt.
Or ignore you ignoring me. I suppose that’s my alternative.
I think maybe the word “filter”, which I use, gives the impression that it is about hiding information. The system is more likely to be used to rank-order information, so that information deemed valuable by people you trust is more likely to bubble up to you. It is supposed to be a way to augment your ability to sort through information and social cues to find competent people and trustworthy information, not a system to replace it.
I understand, Henrik. Thanks for your reply.
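If I understand the rank-ordering you describe, a toy sketch of it might look like the following. To be clear, the names, weights, and default values here are all invented for illustration; this is not the forum’s actual algorithm, just the general shape of trust-weighted ranking as I read your description.

```python
# Toy model: rank posts by votes weighted by how much the reader
# trusts each voter. Purely illustrative; not the forum's algorithm.

def trust_weighted_score(votes, trust):
    """votes: list of (voter, +1 or -1); trust: voter -> weight in [0, 1]."""
    return sum(direction * trust.get(voter, 0.1)  # small default weight for strangers
               for voter, direction in votes)

def rank(posts, trust):
    """Return post ids sorted so trusted-endorsed content bubbles up first."""
    return sorted(posts,
                  key=lambda p: trust_weighted_score(posts[p], trust),
                  reverse=True)

posts = {
    "contrarian-take": [("alice", +1), ("bob", -1), ("carol", -1)],
    "popular-summary": [("alice", +1), ("bob", +1)],
}
trust = {"alice": 0.9, "bob": 0.5, "carol": 0.2}
print(rank(posts, trust))  # → ['popular-summary', 'contrarian-take']
```

Notice what the sketch makes concrete: the contrarian post isn’t hidden, but it always sorts below whatever the trusted circle endorses, which is exactly the soft-censorship effect I’m worried about.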
Forum karma
The karma system works similarly to highlight information, but there are edge cases. Posts appear and disappear from first-page views based on karma. New comments that get negative karma are not listed among the new comments on the homepage, by default.
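The edge case I mean can be shown with a toy filter. The threshold value and field names are made up; I don’t know the forum’s actual cutoffs, only that a default like this exists somewhere in the listing logic.

```python
# Toy illustration of the edge case: a default karma threshold
# silently drops negative-karma comments from a homepage listing.
# The cutoff value is invented for illustration.

DEFAULT_MIN_KARMA = 0

def visible_comments(comments, min_karma=DEFAULT_MIN_KARMA):
    """Return ids of comments at or above the cutoff, in original order."""
    return [c["id"] for c in comments if c["karma"] >= min_karma]

comments = [
    {"id": "new-dissent", "karma": -2},
    {"id": "consensus-reply", "karma": 7},
]
print(visible_comments(comments))                  # → ['consensus-reply']
print(visible_comments(comments, min_karma=-10))   # dissent visible only if defaults change
```

The point is that a reader who never changes the default never learns the dissenting comment existed, which is filtering by omission rather than by judgment.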
This forum in relation to the academic peer review system
The peer review system in scientific research is truly different from a forum of second-tier researchers producing summaries, arguments, or opinions. A forum should encourage access to disparate opinions and perspectives.
The value of disparate information and participants here
Inside the content offered here are recommendations for new information. I evaluate that information according to more conventional critical-thinking criteria: peer-reviewed, established science, good methodology. Disparate perspectives among researchers here give me access to multiple points of view found in academic literature and fields of study. For example, this forum helped me research a long-standing foresight conflict between climate economists and earth scientists (as well as related topics in climate modeling and scenario development).
NOTE: Peer-reviewed information might have problems as well, but not ones to fix with a voting system that relies on arbitrary participants.
Forum perspectives should not converge without rigorous argument
Another system that bubbles up what I’d like to read? OK, but will it filter out divergence, unpopular opinions, evidence that a person has a unique background or point of view, or a new source of information that contradicts current information? Will your system make it harder to trawl through other researchers’ academic sources by making it less likely that forum readers ever read those researchers’ posts?
In this environment, among folks who go through summaries, arguments, and opinions for whatever reason, once an information trend appears, if it’s different and valid, it lets me course-correct.
The trend could signal something that needs changing, like “Here’s new info that passes muster! Do something different now!”, or signal that there’s a large information gap, like “Whoa, this whole conversation is different! I either seem to disagree with all of the conclusions or not understand them at all. What’s going on? What am I missing?”
A learning environment
Forum participants implicitly encourage me to explore bayesianism and superforecasting. Given what I suspect are superforecasting’s problems (bias in its aggregation algorithms and in forecast confirmations), I would be loath to explore it otherwise. However, obviously smart people continue to assert its value in a way that I digest as a forum participant. My minority opinion of superforecasting actually leads me to learn more about it, because I participate in conversation here. If my minority views were filtered out so strongly that no one ever conversed with me at all, I could just blog about how EA folks are really wrong and move on. Not the thing to do, but do you see why tolerance of my opinions here matters? It serves both sides.
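By aggregation algorithms I mean roughly this kind of thing: a toy average-then-extremize step. The extremizing exponent here is invented for illustration; real platforms use more elaborate weighting, but the worry is the same — design choices like this one bake assumptions into the aggregate.

```python
# Toy forecast aggregation: average individual probabilities, then
# "extremize" the mean (push it away from 0.5). The exponent is an
# illustrative choice, and choices like it are where bias can enter.

def aggregate(probabilities, extremize=1.5):
    p = sum(probabilities) / len(probabilities)   # simple mean of forecasts
    num = p ** extremize
    return num / (num + (1 - p) ** extremize)     # renormalized, extremized mean

forecasts = [0.6, 0.7, 0.65]
print(round(aggregate(forecasts), 3))  # → 0.717, pushed above the 0.65 mean
```

A mean of 0.65 comes out above 0.7: the algorithm, not the forecasters, supplied that extra confidence, which is the sort of thing I’d want scrutinized rather than trusted.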
From my perspective, it takes patience to study climate economists, superforecasting, bayesian inference, and probabilism. Meanwhile, you folks, with different and maybe better knowledge than mine on these topics, but a different perspective, provide that learning environment. If there can be reciprocation, that’s good; EA folks deserve helpful outside perspectives.
My experience as other people’s epistemic filter
People ignore my recommendations, or the intent behind them. They either don’t read what I recommend, or much more rarely, read it but dismiss it without any discussion. If those people use anonymized voting as their augmentation approach, then I don’t want to be their filter. They need less highlighting of information that they want to find, not more.
Furthermore, at this level of processing information, secondary or tertiary sources, posts already act like filters. Ranking the filtering to decide whether to even read it is a bit much. I wouldn’t want to attempt to provide that service.
Conclusion
ChatGPT and the new focus on conversational interfaces make it possible that future forum participants will be AIs, not people. If so, they could be productive participants rather than spam bots.
Meanwhile, the forum could get rid of the karma system altogether, or add a configuration option that lets a user turn off karma voting and ranking. That would be a pleasant alternative for someone like me, who rarely gets much karma anyway, and would offer even less temptation to focus on popular topics or feign popular perspectives.