> I can notice that Open Philanthropy’s funding comes from one person
One person may well have multiple different parts, or subscribe to multiple different worldviews!
> asking oneself how much one values outcomes in different cause areas relative to each other, and then pursuing a measure of aggregate value with more or less vigor
I think your alternative implicitly assumes that, as a single person, you can just “decide” how much you value different outcomes. In fact, I think of worldview diversification as a pretty good approximation of the process I’d go through internally if I were asked this question.
You’re assuming there’s a unique, coherent, and (e.g. vNM) rational value system there to find or settle on, rather than multiple (possibly incoherent) systems to weigh against each other, with no uniquely best (most satisfying) way to combine them into a single coherent, rational system. That’s assuming away most of the problem.
FWIW, I also find estimating unique/precise probabilities objectionable and hard to justify for similar reasons, although less bad than assuming away the hard problem of moral uncertainty.
> One person may well have multiple different parts, or subscribe to multiple different worldviews!

> I think your alternative implicitly assumes that, as a single person, you can just “decide” how much you value different outcomes. In fact, I think of worldview diversification as a pretty good approximation of the process I’d go through internally if I were asked this question.
Not “decide”, but “introspect”, “reflect upon”, or “estimate”, in the same way that I can estimate probabilities.
> You’re assuming there’s a unique, coherent, and (e.g. vNM) rational value system there to find or settle on, rather than multiple (possibly incoherent) systems to weigh against each other, with no uniquely best (most satisfying) way to combine them into a single coherent, rational system. That’s assuming away most of the problem.
Maybe this post can help illustrate better: https://reducing-suffering.org/two-envelopes-problem-for-brain-size-and-moral-uncertainty/
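The core arithmetic of the two-envelopes problem for moral uncertainty is easy to see with a toy calculation. The numbers below are made up for illustration (they are not taken from the linked post): two theories about how much an animal matters relative to a human, held with equal credence. Whether the expected-value comparison favors the animal or the human flips depending on which side you fix as the common unit of value.

```python
# Two-envelopes-style problem for moral uncertainty (toy numbers, purely
# illustrative). Two theories about the value of one animal relative to one
# human, each held with 50% credence.
credence = 0.5
ratio_theory_1 = 0.01   # theory 1: one animal is worth 0.01 humans
ratio_theory_2 = 10.0   # theory 2: one animal is worth 10 humans

# Fixing "1 human" as the common unit, the expected value of one animal:
animal_in_human_units = credence * ratio_theory_1 + credence * ratio_theory_2
# 0.5 * 0.01 + 0.5 * 10 = 5.005 humans, so the animal comes out ahead.

# Fixing "1 animal" as the common unit instead, the expected value of one human:
human_in_animal_units = credence * (1 / ratio_theory_1) + credence * (1 / ratio_theory_2)
# 0.5 * 100 + 0.5 * 0.1 = 50.05 animals, so now the human comes out ahead.

print(animal_in_human_units)  # 5.005
print(human_in_animal_units)  # 50.05
```

Both conclusions can’t be right at once, yet each is the straightforward expected value under one choice of unit. That is the sense in which there is no uniquely best way to aggregate across the theories: the answer depends on a normalization choice the theories themselves don’t fix.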
FWIW, I also find estimating unique/precise probabilities objectionable and hard to justify for similar reasons, although less bad than assuming away the hard problem of moral uncertainty.