This article misses what I think is the biggest weakness of cross-cause prioritisation: uncertainty is hugely increased when comparing across causes rather than within them.
For example, in a comparison that pits two chicken welfare campaigns against each other, the uncertainties about chicken sentience or welfare ranges don’t matter, because you are comparing like with like.
The same goes for comparing two AI existential risk interventions. Whether AGI is imminent or not will often not affect the decision between the two—the better intervention often remains better whether AGI is 5 or 50 years away.
Whereas if you compare chicken welfare to human welfare, the enormous uncertainties kick in and it becomes very difficult to meaningfully compare the two. Whether the chicken’s relative moral weight is 0.01 or 1 makes all the difference in the comparison.
Because of this, if we are uncertainty averse, then cross-cause prioritisation becomes much less attractive.
Thanks for the comment Nick. The article discusses these difficulties, including the same specific example of cross-species comparisons, here:
With that broad a mandate, however, come significant challenges. One of the biggest issues is ethical commensurability – essentially, how do you compare ‘good done’ across wildly different spheres? Each cause tends to have its own metrics and moral values, and these don’t easily line up. Saving a child’s life can be measured in DALYs or QALYs, but how do we directly compare that to reducing the probability of human extinction, or to sparing chickens from factory farms? Cross-cause analysis must somehow weigh very different outcomes against each other, forcing thorny value judgments. One concrete example is comparing global health vs. existential risk. Global health interventions are often evaluated by cost per DALY or life saved, whereas existential risk reduction is about lowering a tiny probability of a huge future catastrophe. A cross-cause perspective has to decide how many present-day lives saved is “equivalent” to a 0.01% reduction in extinction risk – a deeply fraught question. Likewise, comparing human-centric causes to animal-focused causes requires assumptions about the relative moral weight of animal suffering vs. human suffering. If there’s no agreed-upon exchange rate (and people’s intuitions differ), the comparisons can feel too disparate. Researchers have attempted to resolve this by creating unified metrics or moral weight estimates (for instance, projects to estimate how many shrimp-life improvements rival a human-life improvement), but there’s often no escaping the subjective choices involved. This means cross-cause prioritization can be especially contentious and uncertain: small changes in moral assumptions or estimates can flip the ranking of causes, leading to debate.
...
Aggregating evidence across causes is very hard – the data and methodology you use to assess a poverty program vs. an AI research project are entirely different. Having worked on broad cross-domain analyses of this kind, we have previously noted how difficult it is to incorporate “the vast number of relevant considerations and the full breadth of our uncertainties within a single model” when comparing across domains.
We also discuss this in the context of cause prioritisation here. I think it’s important to note that these difficulties apply to any comparison across causes (not just intervention-level cross-cause prioritisation), and so can’t be dodged if you are interested in cause-neutrally identifying the best interventions.
However, in other ways, comparing the value of different causes can be especially challenging. Researchers must consider ethical trade-offs, uncertainty, and the potential for model errors. At its best, this means that cause prioritization can lead to the beneficial development of frameworks, metrics, and criteria that improve prioritization methods overall. At its worst, and sometimes more commonly, it just leads to lots of intuition-jousting between vague qualitative heuristics.
That said, I would encourage people to reflect carefully about their attitudes towards uncertainty before concluding that:
If we are uncertainty averse, then cross-cause prioritisation becomes much less attractive.
Whether this makes sense will depend on your specific attitudes towards uncertainty, and the specific circumstances of the case.
Specific kinds of uncertainty aversion might lead a person to favour focusing their resource allocation only on interventions where they are highly certain about the effect of the intervention. If these interventions are concentrated within a single cause, this might lead them to focus their prioritisation within that cause. Or, they might focus their prioritisation within a given cause because this will allow them to maximise their certainty about outcomes (due to domain-specific knowledge, as we discuss elsewhere).
But it’s not clear that such a person should focus their prioritisation within a single cause. It may be that the interventions about which they can be most certain are not concentrated within one cause, but rather spread across different causes. If so, their search for highly certain interventions should potentially spread across causes.
It’s also worth noting explicitly the difference between uncertainty about interventions and uncertainty about comparisons between interventions. Our observations above show that the comparison of interventions in different causes may often be particularly uncertain (e.g. being uncertain about the relative weight of human and chicken suffering). But it seems very unclear what normatively follows from this. Note that if you are uncertain about how to compare A and B, just deciding to focus your efforts on one doesn’t reduce your uncertainty about the comparison at all. And deciding to focus your prioritisation effort on one just doubles down on your ignorance, by electing not to conduct the prioritisation research that would resolve your uncertainty.
In addition, as we note elsewhere, concern about uncertainty could also push towards diversification, which likely recommends prioritisation across cause areas in order to identify less correlated interventions:
Concern over avoiding wasted efforts calls for diversifying resources across multiple causes to reduce the risk of correlated outcomes from overfocusing on one area. For instance, an organization might allocate funding across global health, AI, and biosecurity projects to ensure that a setback in one field does not derail all progress. Intervention and cause diversification, made possible through a blend of cause-level and cross-cause prioritization work, builds resilience and increases the probability of achieving impact.
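As a toy sketch of that correlation point (a made-up model, not from the article): interventions within one cause share a cause-level shock, so concentrating all grants there makes total impact swing together, while spreading the same grants over independent causes lets the shocks partly offset:

```python
import random
import statistics

random.seed(0)

# Toy model: each cause has a shared cause-level shock (e.g. a field-wide
# setback) plus per-grant noise. All numbers are invented for illustration.
def portfolio_outcome(n_causes, n_grants=6):
    """Total impact of n_grants spread evenly across n_causes causes."""
    total = 0.0
    for _ in range(n_causes):
        cause_shock = random.gauss(0, 1.0)      # hits every grant in the cause
        for _ in range(n_grants // n_causes):
            total += 1.0 + cause_shock + random.gauss(0, 0.3)
    return total

concentrated = [portfolio_outcome(1) for _ in range(20_000)]  # one cause
diversified = [portfolio_outcome(3) for _ in range(20_000)]   # three causes

print(statistics.stdev(concentrated))  # larger: one shock moves all six grants
print(statistics.stdev(diversified))   # smaller: independent shocks offset
```

The expected impact is identical in both portfolios; diversifying across causes only reduces the spread of outcomes, which is exactly what an uncertainty-averse allocator cares about.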
Thanks @David_Moss. I might be barking up the wrong tree here, but I think I’m talking about a different, perhaps more basic, point? I agree there’s the ethical commensurability factor, but I’m not talking about moral intuitions or domain-specific knowledge, but about concrete reality—even if that reality is unknown. I’m saying that cross-cause comparison increases objective (not subjective) uncertainty by orders of magnitude.
Let me try again and let’s see whether I’m bringing something new or not.
Let’s say (for argument’s sake) we are 90% sure that a chicken’s moral weight is between 0.01 and 0.9 that of a human’s. When comparing chicken welfare interventions, this uncertainty becomes irrelevant, or cancels out, because we are comparing like with like—whether the moral weight is 0.01 or 0.9 doesn’t matter, as it remains constant between both interventions, even though we don’t know what it is.
Again, if we compare human interventions that save kids’ lives, the exact DALY value of saving a life (whether 20 or 80) doesn’t matter, because this will remain constant between interventions.
But when we try to compare a chicken welfare intervention to a human intervention, the uncertainties of both compound in the comparison: the 4x uncertainty in the human intervention and the 100x uncertainty in the animal welfare intervention multiply together.
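This cancellation-versus-compounding point can be sketched with a quick Monte Carlo (using the made-up ranges above; the interventions and their relative sizes are hypothetical):

```python
import random

random.seed(0)

# Made-up ranges from the comment above: chicken moral weight 0.01-0.9,
# 20-80 DALYs per life saved. The interventions are hypothetical.
N = 100_000
ratios_within, ratios_across = [], []

for _ in range(N):
    w = random.uniform(0.01, 0.9)  # chicken moral weight (unknown but shared)
    d = random.uniform(20, 80)     # DALYs per life saved (unknown but shared)

    campaign_a = w * 1.0           # value of chicken campaign A
    campaign_b = w * 2.0           # campaign B helps twice as many chickens
    ratios_within.append(campaign_b / campaign_a)   # w cancels: always 2

    life_saving = d * 1.0          # value of a life-saving programme
    ratios_across.append(campaign_b / life_saving)  # w and d both survive

print(set(ratios_within))                       # {2.0}
print(max(ratios_across) / min(ratios_across))  # spans orders of magnitude
```

Within a cause the unknown weight `w` divides out, so the comparison is known exactly despite the uncertainty; across causes both unknowns survive, and the ratio ranges over a factor of several hundred.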
To put it another way, in many cases moral weight doesn’t matter for within-cause comparison, but it becomes critically important between causes.
Of course I’m not saying we shouldn’t do cross-cause comparison, but I think this huge objective increase in uncertainty is an important, if fairly basic, point to recognise.
I agree there’s the ethical commensurability factor, but I’m not talking about moral intuitions or domain-specific knowledge, but about concrete reality—even if that reality is unknown. I’m saying that cross-cause comparison increases objective (not subjective) uncertainty by orders of magnitude.
Thank you for the reply Nick. I think my remarks above apply across different kinds of uncertainty (we discuss ethical, empirical and other kinds of uncertainty above). That said, I’m not sure I follow your intended point about objective uncertainty (the example you give seems to be about subjective uncertainty about moral weight), but it seems to me my remarks would apply exactly the same to objective uncertainty.
To put it another way, in many cases moral weight doesn’t matter for within-cause comparison, but it becomes critically important between causes… this huge objective increase in uncertainty is an important if fairly basic point to recognise.
We make the point (using the same examples) that comparisons across causes introduce many huge uncertainties that do not apply within causes, at multiple points in the passages quoted above and elsewhere. So I fear we may be talking past each other if you see this point as missing from the article.
Thanks for the reply—I was just trying to make a small point, which I still think is missing from your analysis: that there is a large objective uncertainty difference when comparing between causes rather than within them. I might be missing it in your text, but I can’t see it mentioned at all; maybe you considered it but didn’t write about it explicitly?
All good either way, it’s not the biggest deal!