One thing I’ll add to this, which I think is important, is that it may matter significantly how people approach prioritisation within causes. I think it may be surprisingly common for within-cause prioritisation, even at the relatively high sub-cause level, not to help us form a cross-cause prioritisation to a significant extent.
To take your earlier example: suppose you have within-animal prioritisers prioritising farmed animal welfare vs wild animal welfare. They go back and forth on whether it’s more important that WAW is bigger in scale, or that FAW is more tractable, and so on. To what extent does that allow us to prioritise wild animal welfare vs biosecurity, which the GCR prioritisers have been comparing to AI Safety? I would suggest: potentially not very much.
It might seem like work that prioritises FAW vs WAW (within animals) and AI Safety vs biosecurity (within GCR) would allow us to compare any of the considered sub-causes to each other. If these ‘sub-cause’ prioritisation efforts gave us cost-effectiveness estimates in the same currency, then they might, in principle. But I think that very often:
- Such prioritisation efforts don’t give us cost-effectiveness estimates at all (e.g. they just evaluate cruxes relevant to making relative within-cause comparisons).
- Even if they did, the cost-effectiveness estimates would not be comparable across causes without much additional cross-cause work on moral weights and so on.
- There may be additional, incommensurable epistemic differences between the prioritisations conducted within the different causes that mean we can’t combine them (e.g. GHD favours well-evidenced, less speculative things and prioritises A>B; GCR favours higher-EV, more speculative things and prioritises C>D).
Someone doing within-cause prioritisation could object that most of the prioritisation they do is not like this: that it is more intervention-focused rather than high-level sub-cause-focused, and that it does give cost-effectiveness estimates. I agree that within-cause prioritisation that gives intervention-level cost-effectiveness estimates is potentially more useful for building up cross-cause prioritisations. But even these cases will typically still be limited by the second and third bullet points above (I think the Cross-Cause Model is a rare example of the kind of work needed to generate actual cross-cause prioritisations from the ground up, based on interventions).
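To make the ‘same currency’ point concrete, here is a toy sketch (all figures and moral weights are hypothetical, purely for illustration) of why intervention-level estimates only become comparable across causes after a cross-cause conversion step:

```python
# Toy illustration (all numbers hypothetical): within-cause cost-effectiveness
# estimates each come in their own "currency" and only become comparable after
# converting into a common unit, e.g. via an assumed moral-weight conversion.

# Within-cause estimates, in cause-specific units per $1k spent:
ghd_estimate = {"intervention": "bednets", "human_DALYs_per_1k_usd": 0.5}
faw_estimate = {"intervention": "corporate campaigns",
                "chicken_welfare_points_per_1k_usd": 300}

# Hypothetical cross-cause assumption: welfare points per averted human DALY.
# This is exactly the contested "moral weights" input referred to above.
WELFARE_POINTS_PER_HUMAN_DALY = 100  # assumption, not a real figure

def to_common_unit(estimate):
    """Convert a within-cause estimate into (toy) welfare points per $1k."""
    if "human_DALYs_per_1k_usd" in estimate:
        return estimate["human_DALYs_per_1k_usd"] * WELFARE_POINTS_PER_HUMAN_DALY
    return estimate["chicken_welfare_points_per_1k_usd"]

print(to_common_unit(ghd_estimate))  # 50.0
print(to_common_unit(faw_estimate))  # 300
```

The comparison (50 vs 300 in the toy unit) is driven almost entirely by the assumed conversion factor, which is the point: without that cross-cause input, the two within-cause numbers simply aren’t comparable.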
Yeah, good points. I think for exactly these reasons it is important that each (sub-)cause is included in not only one but several partial rankings. However, the resulting ranking needn’t be a total ranking; it could itself be partial. E.g. one partial ranking is ‘present people < farmed animals’ and another one is ‘farmed animals < wild animals’. From these, we could infer (by transitivity of “<”) that ‘present people < wild animals’, which already gets us closer to a total ranking. So I think one way that a partial ranking of (sub-)causes can help determine a total ranking—and hence the ‘best cause’ overall—is if there are several overlapping partial rankings.
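One way to make the transitivity step concrete: treat each partial ranking as a set of ‘<’ pairs and take the transitive closure. A minimal Python sketch (the cause labels are just the examples above):

```python
from itertools import product

# Overlapping partial rankings, each pair (a, b) meaning a < b:
partial_rankings = {("present people", "farmed animals"),
                    ("farmed animals", "wild animals")}

def transitive_closure(pairs):
    """Repeatedly add (a, d) whenever (a, b) and (b, d) are both present."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(tuple(closure), tuple(closure)):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

ranking = transitive_closure(partial_rankings)
# The inferred comparison from the two overlapping rankings:
print(("present people", "wild animals") in ranking)  # True
```

The result is still a partial order (it says nothing about, say, biosecurity), but each new overlapping partial ranking can unlock further inferred comparisons.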
(By the way, in case you didn’t see it: I wrote this other reply to your previous comment—no need to answer it, but I wanted to make sure you didn’t overlook it, since I wrote two separate replies.)