In practice, Open Philanthropy Project (which is apparently doing cause prioritization) has fixed a list of cause areas, and is prioritizing among much more specific opportunities within those cause areas. (I’m actually less sure about this as of 2021, since Open Phil seems to have made at least one recent hire specifically for cause prioritization.)
Open Phil definitely does have a list of cause areas, and definitely does spend a lot of their effort prioritising among much more specific opportunities within those cause areas.
But I think they also spend substantial effort deciding how many resources to allocate to each of those broad cause areas (and not just via the 2021 hire(s)). Specifically, I think their worldview investigations are, to a substantial extent, intended to help with between-cause prioritisation. (Though it seems like they'd each also help with within-cause decision-making, e.g. how much to prioritise AI risk relative to other longtermist focuses and precisely how best to reduce AI risk.)