What are your thoughts on these questions from page 20 of the Global Priorities Institute research agenda?
How likely is it that civilisation will converge on the correct moral theory given enough time? What implications does this have for cause prioritisation in the nearer term?
How likely is it that the correct moral theory is a ‘Theory X’, a theory radically different from any yet proposed? If likely, how likely is it that civilisation will discover it, and converge on it, given enough time? While it remains unknown, how can we properly hedge against the associated moral risk?
How important do you think these questions are for the value of existential risk reduction vs. (other) trajectory change work? (The idea for this question comes from the informal piece listed after each of the two paragraphs quoted above in the research agenda.)
Edited to add: What is your credence in there being a correct moral theory? Conditional on there being no correct moral theory, how likely do you think it is that current humans, after reflection, would approve of the values of our descendants far in the future?