You’re assuming there’s a unique, coherent, (e.g. vNM) rational value system there to find or settle on. Instead, there may be multiple (possibly incoherent) systems to try to weigh against each other, with no uniquely best (most satisfying) way to combine them into a single coherent, vNM-rational system. Assuming a unique system exists assumes away most of the problem.
Maybe this post can help illustrate better: https://reducing-suffering.org/two-envelopes-problem-for-brain-size-and-moral-uncertainty/
FWIW, I also find estimating unique/precise probabilities objectionable and unjustifiable for similar reasons, although that seems less bad than assuming away the hard problem of moral uncertainty.