> it seems like you’re committed to some minimal moral realism
I don’t think I have. In particular, from a moral relativist perspective, I can notice that Open Philanthropy’s funding comes from one person, notice that they have some altruistic & consequentialist inclinations, and then wonder whether worldview diversification is really the best way to go about satisfying those.
Or, even more simply, I could be saying something like: “as a moral relativist with consequentialist sympathies, this is not how I would spend my billions if I had them, because I find the dangling relative values thing inelegant.”
> I can notice that Open Philanthropy’s funding comes from one person
One person may well have multiple different parts, or subscribe to multiple different worldviews!
> asking oneself how much one values outcomes in different cause areas relative to each other, and then pursuing a measure of aggregate value with more or less vigor
I think your alternative implicitly assumes that, as a single person, you can just “decide” how much you value different outcomes. Whereas in fact, I think of worldview diversification as a pretty good approximation of the process I’d go through internally if I were asked this question.
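To make the contrast under discussion concrete, here is a minimal sketch with made-up numbers and cause areas (none of these figures come from Open Philanthropy): worldview diversification fixes budget shares per worldview, while the quoted alternative estimates relative values in a common unit and then pursues aggregate value.

```python
# Toy sketch of the two allocation procedures discussed above.
# All numbers and cause areas are hypothetical, for illustration only.

value_per_dollar = {"global_health": 1.0, "animal_welfare": 3.0, "x_risk": 5.0}
budget = 100.0

# Worldview diversification: fix a budget share per worldview, regardless of
# how the cause areas compare in a common unit.
shares = {"global_health": 1 / 3, "animal_welfare": 1 / 3, "x_risk": 1 / 3}
diversified = {cause: budget * share for cause, share in shares.items()}

# The quoted alternative: estimate relative values in one unit, then pursue
# aggregate value. With constant (non-diminishing) returns, everything goes
# to the cause with the highest estimated value per dollar.
best = max(value_per_dollar, key=value_per_dollar.get)
aggregated = {cause: (budget if cause == best else 0.0) for cause in value_per_dollar}

print(diversified)  # each cause gets ~33.3
print(aggregated)   # the single highest-scoring cause gets 100.0
```

The disagreement in this thread is largely about whether the single relative-value numbers in `value_per_dollar` are something one person can meaningfully settle on at all.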
You’re assuming there’s a unique, coherent, and (e.g. vNM) rational value system there to find or settle on, rather than multiple (possibly incoherent) systems to try to weigh, with no uniquely best (most satisfying) way to combine them into a single coherent (vNM) rational system. That’s assuming away most of the problem.
FWIW, I also find estimating unique/precise probabilities objectionably unjustifiable for similar reasons, although less bad than assuming away the hard problem of moral uncertainty.
> I don’t think I have. In particular, from a moral relativist perspective, I can notice that Open Philanthropy’s funding comes from one person, notice that they have some altruistic & consequentialist inclinations, and then wonder whether worldview diversification is really the best way to go about satisfying those.
> Or, even more simply, I could be saying something like: “as a moral relativist with consequentialist sympathies, this is not how I would spend my billions if I had them, because I find the dangling relative values thing inelegant.”
> One person may well have multiple different parts, or subscribe to multiple different worldviews!
> I think your alternative implicitly assumes that, as a single person, you can just “decide” how much you value different outcomes. Whereas in fact, I think of worldview diversification as a pretty good approximation of the process I’d go through internally if I were asked this question.
Not “decide”, but “introspect”, “reflect upon”, or “estimate”, in the same way that I can estimate probabilities.
> You’re assuming there’s a unique, coherent, and (e.g. vNM) rational value system there to find or settle on, rather than multiple (possibly incoherent) systems to try to weigh, with no uniquely best (most satisfying) way to combine them into a single coherent (vNM) rational system. That’s assuming away most of the problem.
Maybe this post can help illustrate better: https://reducing-suffering.org/two-envelopes-problem-for-brain-size-and-moral-uncertainty/
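For readers who don’t follow the link, here is a minimal made-up-numbers sketch of the two-envelopes issue it discusses; the 50/50 credences and the chicken/human weights below are assumptions for illustration, not figures from the post.

```python
# Two-envelopes problem for moral weights (toy numbers).
# Two value systems, each held with 50% credence:
#   Theory A: one chicken counts for 0.01 humans
#   Theory B: one chicken counts for 2 humans
p_a, p_b = 0.5, 0.5

# Normalization 1: fix the value of a human at 1 in both theories.
chicken_in_human_units = p_a * 0.01 + p_b * 2              # = 1.005 -> chickens look more valuable than humans

# Normalization 2: fix the value of a chicken at 1 in both theories.
human_in_chicken_units = p_a * (1 / 0.01) + p_b * (1 / 2)  # = 50.25 -> humans look ~50x more valuable

print(chicken_in_human_units, human_in_chicken_units)
# The expected relative value flips depending on which scale is held fixed,
# which is one way of seeing why there is no obviously privileged method for
# combining the two value systems into a single number.
```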
> FWIW, I also find estimating unique/precise probabilities objectionably unjustifiable for similar reasons, although less bad than assuming away the hard problem of moral uncertainty.
Fair.
(I still stand by the rest of my comment, and by the claim that you’re assuming away some moral/normative uncertainty, and maybe most of the problem.)