Thanks for your comment Jakob! A few thoughts:
I think if we individuated “causes” in a more fine-grained way, e.g. “Animal Welfare” → “Plant-based meat alternatives”, “Corporate Campaigns” etc., this might not actually change our analysis that much. Why? Prima facie, there are some more people working on questions like PBMA vs corporate campaigns who would otherwise be counted as doing within-cause prioritisation in our current framework. But, crucially, these researchers are still only making prioritisations within the super-ordinate Animal Welfare cause. They’re not comparing e.g. PBMA to nuclear security initiatives. So I think you would need to say something like: these people are engaged in sub-cause-level but within-cause prioritisation. This is technically a kind of (sub-)cause-level prioritisation, but it lacks the cross-cause comparison that our CP and CCP categories have, since it is still constrained within a single cause.
The other thing I’d note is that we also draw attention to the characteristic styles, and strengths and weaknesses, of cause prioritisation and intervention-level prioritisation. So, we argue, cause prioritisation is characterised more by abstract consideration of general features of the cause, whereas intervention-level prioritisation can attend to, more closely evaluate, and potentially empirically study the specific details of the particular intervention in question. For example, it’s not possible to do a meaningful cost-effectiveness analysis of ‘Animals’ writ large,[1] but it is possible to do one for a particular animal intervention. I would speculate that as you individuated causes in an increasingly fine-grained way, their evaluation and prioritisation might become more intervention-like and less cause-like, as their evaluation becomes more tightly defined and more empirically tractable. My guess, though, is that a lot of even these more fine-grained sub-causes might still be much more like causes than interventions in our analysis, insofar as they will still contain heterogeneous groups of interventions and so need to be evaluated more in terms of general characteristics of the set.
I agree that if you individuated cause areas in an increasingly fine-grained way, so that each “cause” under consideration was an intervention (e.g. malaria nets in Uganda) or even a specific charity, then the cause/intervention distinction would collapse, in practice.
[1] Although you could do so for the best single intervention within the cause.
Many thanks for your reply! These are great points and I think there is some truth to them, but here is a bit of pushback against them (or, I guess, just against your first point).
But, crucially, these researchers are still only making prioritisations within the super-ordinate Animal Welfare cause. They’re not comparing e.g. PBMA to nuclear security initiatives.
But I think you could say something analogous about other CP work? For example, there is a lot of discussion on whether we should focus on far future people or present people. This seems to be an instance of CP, but still you could say that the two causes are within the super-ordinate cause of “Human Welfare”. So it seems unnecessary for genuine CP that a cause is compared to causes that cannot be categorized under the same super-ordinate cause. This would be too demanding as a condition for CP, since you can (almost) always find a common super-ordinate cause for the compared (sub-)causes.
But if that is true, the fine-grainedness of the cause individuation does seem to make a difference to whether something counts as CP. For example, work on whether we should prioritize wild animals or farmed animals would then be genuine CP according to a cause individuation that includes ‘wild animals’ and ‘farmed animals’ but not according to your cause individuation which only includes ‘animals’ as a more general category. Maybe work that only compares ‘wild animals’ with ‘farmed animals’ but not with other causes seems strange, but the ultimate goal of this work could well be to find out what is the best cause overall. A conclusion on this could be reached by putting this work together with other work with a similar level of generality, such as work on whether to prioritize ‘farmed animals’ or ‘global poverty’.
As a concrete example, maybe it’s helpful to look at Will’s recent suggestion that EA should acknowledge as cause areas: AI safety, AI character, AI welfare / digital minds, the economic and political rights of AIs, AI-driven persuasion and epistemic disruption, AI for better reasoning, decision-making and coordination, and the risk of (AI-enabled) human coups. Now imagine someone does research on the comparative effectiveness of Will’s AI causes. Should we consider this CP or WCP? It seems it is CP relative to Will’s cause individuation but WCP relative to the cause individuation that summarizes all of these under ‘AI’.
Thanks Jakob!
For example, there is a lot of discussion on whether we should focus on far future people or present people. This seems to be an instance of CP, but still you could say that the two causes are within the super-ordinate cause of “Human Welfare”.
This is a great example! I think there is a real tension here.
On the one hand, typically we would say that someone comparing near-term human work vs long-term human work (as broad areas) is engaged in cause prioritisation. And the character of the assessment of areas this broad will likely be very much like the character we ascribed to cause prioritisation (i.e. abstract assessment of general characteristics of very broad areas). On the other hand, if we’re classifying the allocation of the movement as a whole across different types of prioritisation, it’s clear that prioritisation focused only on near-term vs long-term human comparisons would be lacking something important in terms of actually trying to identify the best cause (across cause areas). To give a different example, if the movement only compared invertebrate non-humans vs vertebrate non-humans, I think it’s clear that we’d have essentially given up on cause prioritisation, in an important sense.[1]
I think what I would say here is something like: the term “cause (prioritisation)” is typically associated with multiple different features, which typically go together, but which in edge cases can come apart. And in those cases, it’s non-obvious how we should best describe the case, and there are probably multiple equally reasonable terminological descriptions. In our system, using just the main top-level EA cause areas, classification may be relatively straightforward, but if you divide things differently or introduce subordinate or superordinate causes, then you need to introduce some more complex distinctions like sub-cause-level within-cause prioritisation.
That aside, I think even if you descriptively divide up the field somewhat differently, the same normative points about the relative strengths and weaknesses of prioritisation focused on larger or smaller objects of analysis (more cause-like vs more intervention-like) and on narrower or wider scopes (within a single area vs across more or all areas) can still be applied in the same way. And, descriptively, it still seems like the movement has relatively little prioritisation that is more broadly cross-area.
One thing this suggests is that you might think of this slightly differently when you are asking “What is this activity like?” at the individual level vs asking “What prioritisation are we doing?” at the movement level. A more narrowly focused individual project might be a contribution to wider cause prioritisation. But if, ultimately, no-one is considering anything outside of a single cause area, then we as a movement are not doing any broader cause prioritisation.
I also think that this is crucial for understanding the whole picture. Analogously to employees in a company (or indeed scientists) who work on some narrow task, members of EA could each work on prioritization in a narrow field while the output of the whole community is an unrestricted CP. But I agree that it is important to also have people who think more big-picture and prioritize across different cause areas.
Thanks Jakob!
One thing I’ll add to this, which I think is important, is that it may matter significantly how people are engaged in prioritisation within causes. I think it may be surprisingly common for within-cause prioritisation, even at the relatively high sub-cause level, not to help us form a cross-cause prioritisation to a significant extent.
To take your earlier example: suppose you have within-animal prioritisers prioritising farmed animal welfare vs wild animal welfare. They go back and forth on whether it’s more important that WAW is bigger in scale, or that FAW is more tractable, and so on. To what extent does that allow us to prioritise wild animal welfare vs biosecurity, which the GCR prioritisers have been comparing to AI Safety? I would suggest, potentially not very much.
It might seem like work that prioritises FAW vs WAW (within animals) and AI Safety vs biosecurity (within GCR) would allow us to compare any of the considered sub-causes to each other. If these ‘sub-cause’ prioritisation efforts gave us cost-effectiveness estimates in the same currency, then they might, in principle. But I think that very often:
Such prioritisation efforts don’t give us cost-effectiveness estimates at all (e.g. they just evaluate cruxes relevant to making relative within-cause comparisons).
Even if they did, the cost-effectiveness would not be comparable across causes without much additional cross-cause work on moral weights and so on.
There may be additional incommensurable epistemic differences between the prioritisations conducted within the different causes that mean we can’t combine their prioritisations (e.g. GHD favours more well-evidenced, less speculative things and prioritises A>B, GCR favours higher EV, more speculative things and prioritises C>D).
Someone doing within-cause prioritisation could complain that most of the prioritisation they do is not like this, that it is more intervention-focused and not high-level sub-cause focused, and that it does give cost-effectiveness estimates. I agree that within-cause prioritisation that gives intervention-level cost-effectiveness estimates is potentially more useful for building up cross-cause prioritisations. But even these cases will typically still be limited by the second and third bulletpoints above (I think the Cross-Cause Model is a rare example of the kind of work needed to generate actual cross-cause prioritisations from the ground up, based on interventions).
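To make the second bullet point above concrete, here is a toy sketch of why within-cause cost-effectiveness numbers only become comparable after applying cross-cause moral weights. Every name and figure below is invented purely for illustration; none are real estimates:

```python
# Toy illustration only: all numbers are invented, not real estimates.

# Within-cause cost-effectiveness estimates, each in its own "currency".
within_cause = {
    "FAW corporate campaigns": (1500.0, "chicken-years improved per $1k"),
    "WAW field building":      (40.0,   "wild-animal welfare points per $1k"),
    "Malaria nets":            (0.025,  "human deaths averted per $1k"),
}

# As they stand, these numbers cannot be ranked against each other.
# A cross-cause comparison needs a common unit, via moral weights --
# which are themselves speculative and contested (hypothetical values):
moral_weights = {
    "chicken-years improved per $1k":     0.01,  # human-equivalents per unit
    "wild-animal welfare points per $1k": 0.05,
    "human deaths averted per $1k":       30.0,
}

# Convert everything into "human-equivalent welfare units per $1k".
common_currency = {
    name: value * moral_weights[unit]
    for name, (value, unit) in within_cause.items()
}

for name, score in sorted(common_currency.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f} human-equivalent units per $1k")
```

Note that the resulting ranking is driven entirely by the moral weights: halve or double any of them and the ordering can flip, which is the sense in which within-cause numbers alone do not settle cross-cause questions.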
Yeah, good points. I think for exactly these reasons it is important that each (sub-)cause is included in not only one but several rankings. However, those rankings needn’t be total rankings; they could be partial rankings themselves. E.g. one partial ranking is ‘present people < farmed animals’ and another one is ‘farmed animals < wild animals’. From these, we could infer (by transitivity of “<”) that ‘present people < wild animals’, which already gets us closer to a total ranking. So I think one way that partial rankings of (sub-)causes can help determine a total ranking (and hence the ‘best cause’ overall) is if there are several overlapping partial rankings.
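As a toy sketch of that transitivity point, here is how several overlapping pairwise rankings can be mechanically combined into all the comparisons they jointly imply (the pairs are just the examples above; this assumes the rankings really are consistent and transitive):

```python
# Combine overlapping pairwise rankings into all implied comparisons
# by computing the transitive closure of the "<" relation.
# Pairs are (worse, better).

def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Two overlapping partial rankings from separate prioritisation efforts:
pairs = {
    ("present people", "farmed animals"),  # present people < farmed animals
    ("farmed animals", "wild animals"),    # farmed animals < wild animals
}

implied = transitive_closure(pairs)
# The inferred comparison: present people < wild animals
print(("present people", "wild animals") in implied)  # → True
```

The more the partial rankings overlap, the more comparisons the closure recovers; disjoint rankings (as in the FAW/WAW vs AI/biosecurity case above) yield no new comparisons at all.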
(By the way, just in case you didn’t see that one, I had written this other reply to your previous comment—no need to answer it, but to make sure you didn’t overlook it since I wrote two separate replies.)
Thanks, David!
It might be helpful to distinguish two related but distinct issues here: (a) there are edge cases of prio-work where it is unclear, even intuitively, whether they should be categorized as CP or WCP, and (b) my more theoretical point that this kind of categorization is fundamentally relative to cause individuations.
The second issue (b) seems, in principle, the more damaging one to your results, as it suggests that your findings may hold only relative to one of many possible individuations. But I think it’s plausible (although not obvious to me) that in fact it doesn’t make a big difference, because (i) a lot of actual prio-work orients itself around something like your cause individuation (i.e. there is not that much prio-work between your general causes and relatively specific interventions), and also because (ii) in the end, your analysis seems not to apply the specific cause individuation mentioned at the beginning very strictly: it seems that you rather think of causes as something like global health, animals, and catastrophic risks, but not necessarily these in particular? So I wonder if your results could be redescribed as holding relative to a cause individuation of roughly the generality/coarse-grainedness of the one you suggest, where the one you mention is only a proxy or example which could be replaced by similarly coarse-grained individuations. Then, for example, your result that only 8% of prio-work is CP would mean that 8% of prio-work operates at roughly the level of generality of causes like global health, animals, and catastrophic risks, although not all of that work compares these causes in particular.
So I think that your results are probably rather robust in the end. Still, it would be interesting to do the same exercise again based on a moderately fine-grained cause individuation that distinguishes between, say, 15 causes (maybe similar to Will’s) and see if anything changes significantly.