One reason to see “dangling” relative values as principled: utility functions are equivalent (i.e. produce the same preferences over actions) up to a positive affine transformation. This is why we often use voting systems to make decisions in cases where people’s preferences clash, rather than trying to extract a metric of utility which can be compared across people.
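One standard way to see this (a textbook expected-utility fact, not anything specific to the discussion here): if u′ is a positive affine transformation of u, expected-utility differences are scaled by the same positive constant, so every pairwise comparison of options comes out identically, while the scale and zero point (and hence naive cross-person comparability) are left undetermined.

$$u'(x) = a\,u(x) + b,\quad a > 0 \;\Longrightarrow\; \mathbb{E}[u'(A)] - \mathbb{E}[u'(B)] = a\,\bigl(\mathbb{E}[u(A)] - \mathbb{E}[u(B)]\bigr),$$

so $\mathbb{E}[u'(A)] \ge \mathbb{E}[u'(B)]$ exactly when $\mathbb{E}[u(A)] \ge \mathbb{E}[u(B)]$, for any two options or lotteries $A$ and $B$.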
I’m not sure whether the following example does anything for you; it could be that our intuitions about what is “elegant” are just very different:
Imagine that I kill all animals except one koala. Then, if your worldview diversification assigned some weight to animals, you would spend the remainder of that worldview’s budget on the koala. But per unit of animal welfare bought that way, you could instead have bought far more human QALYs or units of x-risk work.
More generally, setting things up such that you sometimes end up valuing, e.g., a salmon at 0.01% of a human and other times at 10% of a human just seems pretty inelegant.
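A toy sketch of that inelegance, with entirely made-up numbers and a hypothetical `implied_relative_value` helper: under fixed per-bucket budgets, the human-to-salmon exchange rate your spending implies at the margin is just the ratio of what the last dollar in each bucket buys, so it swings with funding conditions rather than with any considered view about moral weights.

```python
# Toy illustration (made-up numbers): with fixed per-bucket budgets, the species
# "exchange rate" implied by your marginal spending is set by marginal
# cost-effectiveness, not by any stated moral weight.

def implied_relative_value(human_qalys_per_dollar: float,
                           salmon_welfare_units_per_dollar: float) -> float:
    """Human QALYs forgone per salmon welfare unit bought at the margin."""
    return human_qalys_per_dollar / salmon_welfare_units_per_dollar

# Scenario A: cheap, scalable salmon interventions exist.
print(implied_relative_value(0.01, 100.0))  # ≈ 0.0001: a salmon unit "worth" 0.01% of a human QALY
# Scenario B: only a few hard-to-help salmon remain (cf. the lone koala).
print(implied_relative_value(0.01, 0.1))    # ≈ 0.1: a salmon unit "worth" 10% of a human QALY
```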
This is one reason why you should use normatively defined worldviews, not cause areas, to define your buckets. Animal welfare interventions currently dominate on some worldviews that include nonhuman animals and assign them substantial weight. But those worldviews are almost never going to assign the average nonhuman animal many times more weight than the average human. Animal-inclusive worldviews also care terminally at least as much about any given human as about the koala, and assuming they are impartial and maximizing in some way, the koala won’t get any help from us (unless helping it benefits humans substantially; humans might care more about the last koala than about the average human).
Each worldview (and buckets may reflect aggregates of such worldviews) makes its own claim about the relative average value of humans and salmon. Those claims can change with new information, but in the usual ways, not because there are fewer salmon or because fewer salmon are helped per dollar now than before.
There can be worldviews on which the marginal cost-effectiveness of GiveWell interventions is close to that of the best nonhuman-animal-focused interventions, so that small changes to the situation can shift resources from humans towards nonhumans or vice versa. I think most worldviews won’t be like that, though, and will instead have large gaps in marginal cost-effectiveness (going either way).
Open Phil has worldview buckets (https://www.openphilanthropy.org/research/update-on-cause-prioritization-at-open-philanthropy/), including an animal-inclusive bucket but no animal-only bucket. These might look like cause-area buckets instead, but what they’re trying to approximate are in fact worldview buckets.
Good point!