For example, there is a lot of discussion about whether we should focus on far-future people or present people. This seems to be an instance of CP, but you could still say that the two causes fall within the superordinate cause of “Human Welfare”.
Thanks Jakob!
This is a great example! I think there is a real tension here.
On the one hand, typically we would say that someone comparing near-term human work vs long-term human work (as broad areas) is engaged in cause prioritisation. And the character of the assessment of areas this broad will likely be very much like the character we ascribed to cause prioritisation (i.e. concerning abstract assessment of general characteristics of very broad areas). On the other hand, if we’re classifying the allocation of the movement as a whole across different types of prioritisation, it’s clear that prioritisation that only focused on near-term vs long-term human comparisons would be lacking something important in terms of actually trying to identify the best cause (across cause areas). To give a different example, if the movement only compared invertebrate non-humans vs vertebrate non-humans, I think it’s clear that we’d have essentially given up on cause prioritisation, in an important sense.[1]
I think what I would say here is something like: the term “cause (prioritisation)” is typically associated with multiple different features, which typically go together, but which in edge cases can come apart. And in those cases, it’s non-obvious how we should best describe the case, and there are probably multiple equally reasonable terminological descriptions. In our system, using just the main top-level EA cause areas, classification may be relatively straightforward, but if you divide things differently or introduce subordinate or superordinate causes, then you need to introduce some more complex distinctions like sub-cause-level within-cause prioritisation.
That aside, I think even if you descriptively divide up the field somewhat differently, the same normative points about the relative strengths and weaknesses of prioritisation focused on larger or smaller objects of analysis (more cause-like vs more intervention-like) and narrower or wider in scope (within a single area vs across more or all areas) can still be applied in the same way. And, descriptively, it still seems like the movement has relatively little prioritisation that is more broadly cross-area.
One thing this suggests is that you might think of this slightly differently when you are asking “What is this activity like?” at the individual level vs asking “What prioritisation are we doing?” at the movement level. A more narrowly focused individual project might be a contribution to wider cause prioritisation. But if, ultimately, no-one is considering anything outside of a single cause area, then we as a movement are not doing any broader cause prioritisation.
I also think that this is crucial for understanding the whole picture. Analogously to employees in a company (or indeed scientists) who work on some narrow task, members of EA could each work on prioritization in a narrow field while the output of the whole community is an unrestricted CP. But I agree that it is important to also have people who think more big-picture and prioritize across different cause areas.
Thanks Jakob!
One thing I’ll add to this, which I think is important, is that it may matter significantly how people are engaged in prioritisation within causes. I think it may be surprisingly common for within-cause prioritisation, even at the relatively high sub-cause level, not to help us form a cross-cause prioritisation to any significant extent.
To take your earlier example: suppose you have within-animal prioritisers prioritising farmed animal welfare vs wild animal welfare. They go back and forth on whether it’s more important that WAW is bigger in scale, or that FAW is more tractable, and so on. To what extent does that allow us to prioritise wild animal welfare vs biosecurity, which the GCR prioritisers have been comparing to AI Safety? I would suggest, potentially not very much.
It might seem like work that prioritises FAW vs WAW (within animals) and AI Safety vs biosecurity (within GCR) would allow us to compare any of the considered sub-causes to each other. If these ‘sub-cause’ prioritisation efforts gave us cost-effectiveness estimates in the same currency, then they might, in principle. But I think that very often:
Such prioritisation efforts don’t give us cost-effectiveness estimates at all (e.g. they just evaluate cruxes relevant to making relative within-cause comparisons).
Even if they did, the cost-effectiveness would not be comparable across causes without much additional cross-cause work on moral weights and so on.
There may be additional incommensurable epistemic differences between the prioritisations conducted within the different causes that mean we can’t combine their prioritisations (e.g. GHD favours more well-evidenced, less speculative things and prioritises A>B, GCR favours higher EV, more speculative things and prioritises C>D).
Someone doing within-cause prioritisation could complain that most of the prioritisation they do is not like this: that it is more intervention-focused rather than focused on high-level sub-causes, and that it does give cost-effectiveness estimates. I agree that within-cause prioritisation that gives intervention-level cost-effectiveness estimates is potentially more useful for building up cross-cause prioritisations. But even these cases will typically still be limited by the second and third bullet points above (I think the Cross-Cause Model is a rare example of the kind of work needed to generate actual cross-cause prioritisations from the ground up, based on interventions).
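To illustrate the “same currency” problem, here is a minimal sketch (in Python, with entirely made-up numbers and a hypothetical moral-weight parameter; nothing here reflects real estimates): intervention-level cost-effectiveness figures from different causes only become comparable once you commit to an explicit conversion, and the resulting ranking can hinge entirely on that assumption.

```python
# Hypothetical within-cause cost-effectiveness estimates, each in its own "currency".
# All numbers below are made up purely to illustrate the structure of the problem.
ghd_dalys_per_1000usd = 5.0             # a GHD intervention, in DALYs averted per $1000
faw_welfare_points_per_1000usd = 400.0  # a FAW intervention, in (say) chicken welfare points per $1000

# On their own, these two numbers are not comparable: DALYs and welfare points
# are different units, so neither a ranking nor a ratio is meaningful yet.

# A cross-cause comparison needs an explicit (and contestable) moral weight,
# e.g. how many chicken welfare points we treat as equivalent to one human DALY.
welfare_points_per_daly = 150.0  # purely hypothetical assumption

faw_daly_equivalents_per_1000usd = faw_welfare_points_per_1000usd / welfare_points_per_daly

print(f"GHD: {ghd_dalys_per_1000usd:.2f} DALY-equivalents per $1000")
print(f"FAW: {faw_daly_equivalents_per_1000usd:.2f} DALY-equivalents per $1000 (given the assumed weight)")
# With welfare_points_per_daly = 150 the FAW figure is ~2.7 (so GHD looks better);
# with 30 it would be ~13.3 (so FAW looks better). The cross-cause ranking hinges
# entirely on the assumed moral weight, not on the within-cause estimates themselves.
```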
Yeah, good points. I think for exactly these reasons it is important that each (sub-)cause is included in not only one but several rankings. However, those rankings needn’t be total rankings; they can be partial rankings themselves. E.g. one partial ranking is ‘present people < farmed animals’ and another one is ‘farmed animals < wild animals’. From these, we could infer (by transitivity of “<”) that ‘present people < wild animals’, which already gets us closer to a total ranking. So I think one way that partial rankings of (sub-)causes can help determine a total ranking—and hence the ‘best cause’ overall—is if there are several overlapping partial rankings.
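To make the inference pattern concrete, here is a minimal sketch in Python (the pairwise judgements and cause names are hypothetical, and it assumes the ‘<’ judgements are consistent and genuinely transitive): it simply computes the transitive closure of a few overlapping pairwise rankings.

```python
from itertools import product

# Each pair (a, b) encodes the judgement "a < b", i.e. cause b is ranked above cause a.
# The judgements below are hypothetical, purely for illustration.
partial_rankings = [
    ("present people", "farmed animals"),  # from one comparison
    ("farmed animals", "wild animals"),    # from another, overlapping comparison
]

def transitive_closure(pairs):
    """Return every ranking implied by transitivity of '<'."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        # Snapshot the current closure, then look for chains a < b and b < d.
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

for a, b in sorted(transitive_closure(partial_rankings)):
    print(f"{a} < {b}")
# The output includes the inferred 'present people < wild animals',
# even though no one compared those two causes directly.
```

The more the individual partial rankings overlap, the more comparisons the closure yields, which is the sense in which several overlapping partial rankings move us closer to a total ranking.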
(By the way, just in case you didn’t see it: I wrote a separate reply to your previous comment—no need to answer it, I just want to make sure you didn’t overlook it since I wrote two separate replies.)
Thanks, David!
It might be helpful to distinguish two related but distinct issues here: (a) there are edge cases of prio-work where it is (even) intuitively unclear whether they should be categorized as CP or WCP, and (b) my more theoretical point that this kind of categorization is fundamentally relative to cause individuations.
The second issue (b) seems to be the more damaging one to your results in principle, as it suggests that your findings may hold only relative to one of many possible individuations. But I think it’s plausible (although not obvious to me) that in fact it doesn’t make a big difference, because (i) a lot of actual prio-work orients itself around something like your cause individuation (i.e. there is not that much prio-work between your general causes and relatively specific interventions), and because (ii) your analysis seems not to apply the specific cause individuation mentioned at the beginning very strictly in the end—it seems that you rather think of causes as something like global health, animals, and catastrophic risks, but not necessarily these in particular? So I wonder if your results could be redescribed as holding relative to a cause individuation of roughly the generality / coarse-grainedness of the one you suggest, where the one you mention is only a proxy or example that could be replaced by similarly coarse-grained individuations. Then, for example, your result that only 8% of prio-work is CP would mean that 8% of prio-work operates at roughly the level of generality of causes like global health, animals, and catastrophic risks, although not all of that work compares these causes in particular.
So I think that your results are probably rather robust in the end. Still, it would be interesting to do the same exercise again with a medium-grained cause individuation that distinguishes, say, 15 causes (maybe similar to Will’s) and see if anything changes significantly.