While there are many different value functions, I believe there is a best possible value function. It may exist at the level of physics, perhaps having something to do with qualia that we don't yet understand; I think it would be useful to have an information theory of consciousness, which is something I have been thinking about.
But ultimately, even if it doesn't exist at the level of physics, I believe that in theory you can postulate a meta-social choice theory which evaluates every possible social choice theory, under all possible circumstances, for every possible mind or value function. You could then find some game-theoretic equilibrium on which all value functions, all social choice theories for evaluating those functions, and all meta-social choice theories for deciding between choice theories converge as the universal best possible set of moral principles. I think this is fundamentally a question of axiology: which moral choice, in any given situation, creates the most value across the greatest number of minds, entities, value functions, and moral theories? I believe this question has an objective answer. There is actually a best thing to do, as well as good things and bad things to do, even if we don't know what these are. Moral progress is possible and real, not a meaningless concept.
Jordan Arel comments on Debate: Morality is Objective