I’m not sure how I never saw this response (perhaps I saw the notification but forgot to read it), but thank you for the response!
I’m not familiar with the 6x6x6 synthesis; would it not require 216 participants, though? (That seems quite demanding) Or am I misunderstanding? (Also, the whole 666 thing might not make for the best optics in light of e.g., cult accusations, lol)
I’m not sure what you’re referring to regarding “curated,” but if you mean the collection of ideas/claims on something like Kialo, my point was just that you can have moderators filter out the ideas that seem clearly uninteresting/bad, duplicative, etc.
Ok, yes, it is 5^3 if you exclude the ‘facilitator.’ And yes, it is demanding, although some events are for even more people.
Hm, but can’t filtering bias or limit innovation, and motivate by fear rather than support (further limiting critical thinking)? This is why overall brainstorming, while keeping EA-related ideas in mind, can be better: even initial ideas (including ones that are not cost-effective!) can be valuable, because they support the development of more optimal ideas. ‘Curation’ should instead be exercised as a form of internal complaint (e.g., if someone’s responsiveness to feedback is limited: ‘others are offering more cost-effective solutions and they are not engaging in a dialogue’). This could be prevented by great built-in feedback-mechanism infrastructure, and addressed by some expert evaluation of ideas, such as via EA Funds, which already exists.
Duplicative ideas should be identified, and even complementary ones. Then, people can 1) stop developing ideas that others have already developed and do something else, 2) work with others to develop these ideas further, or 3) work with others who have similar ideas on projects.
re: “filtering”, I really was only talking about “clearly uninteresting/bad” claims—i.e., things that almost no reasonable person would take seriously even before reading counterarguments. I can’t think of many great examples off the top of my head—and in fact it might rarely ever require such moderation among most EAs—but perhaps one example may be conspiracy-theory claims like “Our lizard overlords will forever prevent AGI...” or non-sequiturs like “The color of the sky reflects a human passion for knowledge and discovery, and this love of knowledge can never be instilled in a machine that does not already understand the color blue.”
In contrast, I do think it would probably be a good idea to allow heterodox claims like “AGI/human-level artificial intelligence will never be possible”—especially since such claims would likely be well-rebutted and thus downvoted.
Yes, de-duplication is a major reason why I support using these kinds of platforms: it just seems so wasteful to me that there are people out there who have probably already done research on questions of interest to others, but whose findings are either not public or not easy for a researcher to find.
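To make the de-duplication point concrete, here is a minimal sketch of how a platform could flag candidate duplicate submissions for a moderator to review. I’m assuming ideas are short text claims and using off-the-shelf TF-IDF cosine similarity purely as an illustration (the example claims, threshold, and method are placeholders, not a description of what Kialo or any existing platform actually does):

```python
# Purely illustrative: flag candidate duplicate idea submissions by text similarity
# so a human moderator can review them, rather than auto-merging anything.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ideas = [
    "AGI/human-level artificial intelligence will never be possible",
    "Artificial general intelligence at human level will never be possible",
    "Machines do not need to recognize blue to achieve their objectives",
]

# Represent each claim as a TF-IDF vector and compute pairwise cosine similarity.
vectors = TfidfVectorizer(stop_words="english").fit_transform(ideas)
similarity = cosine_similarity(vectors)

# An arbitrary threshold; pairs above it get surfaced for human review.
THRESHOLD = 0.5
for i in range(len(ideas)):
    for j in range(i + 1, len(ideas)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Possible duplicates (score {similarity[i, j]:.2f}):")
            print(f"  - {ideas[i]}")
            print(f"  - {ideas[j]}")
```

The point is only that flagging near-duplicates (and near-complements) is cheap to automate, while the decision about merging or collaborating stays with the people involved.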
Yes, that is the thing: the culture in EA is key. Overall great intentions, cooperation, responsiveness to feedback, etc. (alongside EA principles) can go a long way. Well, ok, it can also be training in developing good ideas by building on the ongoing discourse: ‘you mean, like, if animals with relatively limited (apparent) cognitive capacity are in power, then AGI can never develop?’ or ‘well, machines do not need to love knowledge; they can feel indifferent to it or dislike it. Plus, machines do not need to recognize blue to achieve their objectives.’ This advances some thinking.
The quality of arguments, including those about crucial considerations, should be assessed on how well they contribute to the development of good ideas (impartially welfarist ones, unless something better is developed?).
Yes, but the de-duplication is a real issue. With the current system, it seems to me that there are people thinking in very similar ways about doing the most good, so it is very inefficient.