I like the Property Rights Approach, which formalizes allocating separate resource buckets to representatives of the theories, in proportion to one’s credences in the theories they represent. The representatives can trade with or borrow from each other based on urgency, if they agree to it. I don’t think it necessarily has any issues with Pareto optimality, as long as you can force Pareto-optimal cooperation. That being said, I think there are still some issues with the approach that need to be worked out.
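A minimal sketch of how the bucket mechanism could work, with hypothetical credences and a deliberately simple consent rule (a transfer only executes if both representatives agree, which is what makes executed trades Pareto improvements by construction):

```python
# Illustrative sketch of the Property Rights Approach (hypothetical numbers).
# Each theory's representative gets a budget proportional to credence, and a
# transfer between buckets only executes if both parties consent.

total_budget = 100.0
credences = {"utilitarianism": 0.6, "deontology": 0.4}

# Buckets allocated in proportion to credence in each theory.
buckets = {t: c * total_budget for t, c in credences.items()}

def propose_trade(buckets, lender, borrower, amount,
                  lender_consents, borrower_consents):
    """Move `amount` from lender's bucket to borrower's bucket,
    but only if both representatives agree to the trade."""
    if lender_consents and borrower_consents and buckets[lender] >= amount:
        buckets[lender] -= amount
        buckets[borrower] += amount
        return True
    return False

# Suppose deontology faces an urgent opportunity and utilitarianism agrees
# to lend 10 units (e.g. expecting repayment later).
executed = propose_trade(buckets, "utilitarianism", "deontology", 10.0,
                         lender_consents=True, borrower_consents=True)
print(executed, buckets)  # True {'utilitarianism': 50.0, 'deontology': 50.0}
```

The consent check is the whole point: a trade that either representative vetoes never happens, so any executed trade leaves both at least as well off by their own lights.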
Alternatively, worldview diversification can be understood as an attempt to approximate expected value given a limited ability to estimate relative values. If so, then the answer might be to notice that worldview diversification is a fairly imperfect approximation of any kind of utilitarian/consequentialist expected value maximization, and to try to approximate that maximization more faithfully. This would involve estimating the relative values of projects in different areas, and attempting to equalize marginal values across cause areas and across years.
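To picture the “equalize marginal values” condition, here is a toy model with made-up diminishing-returns curves: allocate the budget in small increments, each going to whichever cause area currently offers the highest marginal value, and the final marginal values come out approximately equal across areas:

```python
# Greedy equalization of marginal value across cause areas (toy model).
# The marginal value curves are hypothetical; each has diminishing returns.
marginal_value = {
    "global_health": lambda x: 10.0 / (1.0 + x),
    "animal_welfare": lambda x: 8.0 / (1.0 + x),
    "x_risk": lambda x: 12.0 / (1.0 + x),
}

allocations = {area: 0.0 for area in marginal_value}
budget, step = 30.0, 0.01

spent = 0.0
while spent < budget:
    # Fund whichever area currently offers the highest marginal value.
    best = max(marginal_value, key=lambda a: marginal_value[a](allocations[a]))
    allocations[best] += step
    spent += step

# After greedy allocation, marginal values are approximately equalized.
finals = {a: marginal_value[a](allocations[a]) for a in marginal_value}
print(allocations)
print(finals)
```

With these particular curves the allocation ends up roughly 10 / 7.8 / 12.2, with all three final marginal values near 0.91; any other split would leave value on the table at the margin.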
I think this would assume away one of the main theoretical challenges of moral/normative uncertainty, which is the absence of an uncontroversial common scale to use across normative theories to measure them all against. If you expect such a common scale to exist and be uncontroversial when found, it seems like you’re committed to some minimal moral realism, at least about the scale. Whether or not you’re committed to that, you’d still have the problem of deciding which scale to use now given multiple candidates, which is basically the same as the original problem for moral/normative uncertainty.
> it seems like you’re committed to some minimal moral realism
I don’t think I am. In particular, from a moral relativist perspective, I can notice that Open Philanthropy’s funding comes from one person, notice that they have some altruistic & consequentialist inclinations, and then wonder whether worldview diversification is really the best way to go about satisfying those.
Or even simpler, I could be saying something like: “as a moral relativist with consequentialist sympathies, this is not how I would spend my billions if I had them, because I find the dangling relative values thing inelegant.”
> I can notice that Open Philanthropy’s funding comes from one person
One person may well have multiple different parts, or subscribe to multiple different worldviews!
> asking oneself how much one values outcomes in different cause areas relative to each other, and then pursuing a measure of aggregate value with more or less vigor
I think your alternative implicitly assumes that, as a single person, you can just “decide” how much you value different outcomes, whereas in fact I think worldview diversification is actually a pretty good approximation of the process I’d go through internally if I were asked this question.
You’re assuming there’s a unique, coherent and (e.g. vNM) rational value system there to find or settle on, rather than multiple (possibly incoherent) systems to weigh against one another, with no uniquely best (most satisfying) way to combine them into a single coherent, (vNM) rational system. That’s assuming away most of the problem.
FWIW, I also find estimating unique/precise probabilities objectionably unjustifiable for similar reasons, although less bad than assuming away the hard problem of moral uncertainty.
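The analogous move for empirical uncertainty can be made concrete: represent credence as an interval rather than a single precise number, and note that expected value then may not determine a ranking at all. The numbers below are made up:

```python
# Toy illustration: with an imprecise (interval-valued) probability,
# expected value may fail to single out a best option.
p_low, p_high = 0.2, 0.6   # hypothetical interval credence that an intervention works

payoff_if_works, payoff_if_fails = 100.0, 0.0
safe_option_value = 40.0

ev_low = p_low * payoff_if_works + (1 - p_low) * payoff_if_fails     # 20.0
ev_high = p_high * payoff_if_works + (1 - p_high) * payoff_if_fails  # 60.0

# Under one end of the interval the risky option loses to the safe one;
# under the other end it wins, so the comparison is left indeterminate.
print(ev_low < safe_option_value < ev_high)  # True
```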
On the lack of a common scale across theories, see discussions of “intertheoretic comparisons”.
Not “decide” how much you value different outcomes, but “introspect”, “reflect upon”, or “estimate”, in the same way that I can estimate probabilities.
On the difficulty of weighing multiple value systems without a common scale, maybe this post can help illustrate: https://reducing-suffering.org/two-envelopes-problem-for-brain-size-and-moral-uncertainty/
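The linked post’s flip can be reproduced with made-up numbers: two theories that disagree about the relative value of a chicken and a human welfare unit, and an expected-value comparison whose answer depends entirely on which theory’s unit you normalize by:

```python
# Two-envelopes-style illustration (hypothetical numbers): the ranking of
# options under moral uncertainty can flip depending on which unit you
# normalize by, since there is no agreed common scale across theories.

credence = {"A": 0.5, "B": 0.5}
# Theory A: one human welfare unit = 100 chicken welfare units.
# Theory B: one human welfare unit = 1 chicken welfare unit.
human_per_chicken = {"A": 0.01, "B": 1.0}

helped_chickens, helped_humans = 50, 1

# Normalization 1: fix the value of a human unit at 1 in both theories.
ev_chickens_h = sum(credence[t] * helped_chickens * human_per_chicken[t]
                    for t in credence)
ev_humans_h = sum(credence[t] * helped_humans * 1.0 for t in credence)

# Normalization 2: fix the value of a chicken unit at 1 in both theories.
chicken_per_human = {t: 1.0 / human_per_chicken[t] for t in credence}
ev_chickens_c = sum(credence[t] * helped_chickens * 1.0 for t in credence)
ev_humans_c = sum(credence[t] * helped_humans * chicken_per_human[t]
                  for t in credence)

print(ev_chickens_h > ev_humans_h)  # True: chickens win on the human scale
print(ev_chickens_c > ev_humans_c)  # False: humans win on the chicken scale
```

Same credences, same theories, same options; only the choice of scale changed, and the recommendation reversed. That choice of scale is the part the original problem leaves undetermined.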
Fair.
(I still stand by the rest of my comment, and that you’re assuming away some moral/normative uncertainty, and maybe most of the problem.)