yes, that is the thing: the culture in EA is key. overall great intentions, cooperation, responsiveness to feedback, etc. (alongside EA principles) can go a long way. well, ok, it can also serve as training in developing good ideas by building on the ongoing discourse: ‘you mean, like, if animals with relatively limited (apparent) cognitive capacity are in power, then AGI can never develop?’ or ‘well, machines do not need to love knowledge; they can feel indifferent toward it or dislike it. plus, machines do not need to recognize blue to achieve their objectives.’ this advances some thinking.
the quality of arguments, including those about crucial considerations, should be assessed on the merit of their contribution to good idea development (impartially welfarist, unless something better is developed?).
yes, but de-duplication is a real issue. with the current system, it seems to me that many people are thinking in very similar ways about doing the most good, which is very inefficient.