Re: “filtering”, I really was only talking about “clearly uninteresting/bad” claims—i.e., things that almost no reasonable person would take seriously even before reading counterarguments. I can’t think of many great examples off the top of my head, and such moderation might rarely be needed among most EAs, but one example might be conspiracy-theory claims like “Our lizard overlords will forever prevent AGI...” or non-sequiturs like “The color of the sky reflects a human passion for knowledge and discovery, and this love of knowledge can never be instilled in a machine that does not already understand the color blue.”
In contrast, I do think it would probably be a good idea to allow heterodox claims like “AGI/human-level artificial intelligence will never be possible”—especially since such claims would likely be well-rebutted and thus downvoted.
Yes, de-duplication is a major reason why I support using these kinds of platforms: it just seems so wasteful to me that there are people out there who have probably already researched questions of interest to others, but whose findings are either not public or not easy for another researcher to find.
Yes, that is the thing: the culture in EA is key. Overall good intentions, cooperation, and responsiveness to feedback (alongside EA principles) can go a long way. It can also serve as training in developing good ideas by building on the ongoing discourse: “You mean that if animals with relatively limited (apparent) cognitive capacity are in power, then AGI can never develop?” or “Machines do not need to love knowledge; they can be indifferent to it or dislike it. Besides, machines do not need to recognize blue to achieve their objectives.” Responses like these advance some thinking.
The quality of arguments, including those about crucial considerations, should be assessed on their merit in contributing to good idea development (impartially welfarist, unless something better is developed?).
Yes, but the de-duplication is a real issue. Under the current system, it seems to me that many people are thinking in very similar ways about doing the most good, which is very inefficient.