What do you think about technical interventions on these problems, and “moral uncertainty expansion” as a more cooperative alternative to “moral circle expansion”?
Working on these problems makes a lot of sense; to be clear, I'm not saying that the philosophical issues around what "human values" means will likely be solved by default.
I think increasing philosophical sophistication (or “moral uncertainty expansion”) is a very good idea from many perspectives. (A direct comparison to moral circle expansion would also need to take relative tractability and importance into account, which seems unclear to me.)