I share the view that EAs often seem unclear about precisely what they mean by "cause area", and that it seems like there are multiple somewhat different meanings floating around
This also therefore makes "cause prioritisation" a somewhat murky term as well
I think it would probably be valuable for some EAs to spend a bit more time thinking about and/or explaining what they mean by "cause area"
I personally think about cause areas mostly in terms of a few broad cause areas which describe what class of beneficiaries one is aiming to help
If future beings: Longtermism
If nonhuman animals (especially those in the near-term): Animal welfare
If people in developing countries: Global health & development
We can then subdivide those cause areas into narrower cause areas (e.g. human-centric longtermism vs animal-inclusive longtermism; farm animal welfare vs wild animal welfare)
This is somewhat similar to Owen Cotton-Barratt's "A goal, something we might devote resources towards optimising"
But I think "a goal" makes it much less clear how granular we're being (e.g., that could mean there's a whole cause area just for "get more academics to think about AI safety"), compared to "class of beneficiaries"
Caveats:
There are also possibilities other than those 3
e.g., near-term humans in the developed world
And there are also things I might normally call "cause areas" that aren't sufficiently distinguished just by the class of beneficiaries one aims to help
e.g., longevity/anti-ageing
I don't mean to imply that broad cause areas are just a matter of a person's views on moral patienthood; that's not the only factor influencing which class of beneficiaries one focuses on helping
E.g., two people might agree that it's probably good to help both future humans and chickens, but disagree about empirical questions like the current level of x-risk, or about methodological/epistemological questions like how much weight to place on chains of reasoning (e.g., the astronomical waste argument) vs empirical evidence
I'm very confident that it's useful to have the concept of "cause areas", to sometimes carve up the space of all possible altruistic goals into at least the above 3 cause areas, and to sometimes have the standard sorts of cause prioritisation research and discussion
I think the above-mentioned concept of "cause areas" should obviously not be the only unit of analysis
E.g., I think most EAs should spend most of their lifetime altruistic efforts prioritising and acting within broad cause areas like longtermism or animal welfare
E.g., deciding whether to work on reducing risks of extinction, reducing other existential risks, or improving the longterm future in other ways
And also much narrower decisions, like precisely how best to craft and implement some specific nuclear security policy
I'll add some further thoughts as replies to this answer.
[I think the following comment sounds like I'm disagreeing with you, but I'm not sure whether/how much we really have different views, as opposed to just framing and emphasising things differently.]
So it feels like "cause prioritization" is just a first step, and by the end it might not even matter what cause areas are. It seems like what actually matters is producing a list of individual tasks ranked by how effective they are.
I agree that cause prioritization is just a first step. But it seems to me like a really useful first step.
It seems to me like it'd be very difficult, inefficient, and/or unsuccessful to try to produce a ranked list of individual tasks without first narrowing our search down by something like "cause area". And the concept of "cause area" also seems useful to organise our work and help people find other people who might have related knowledge, values, goals, etc.
To illustrate: I think it's a good idea for most EAs to:
Early on, spend some significant amount of time (let's say 10 to 1,000 hours) thinking about considerations relevant to which broad cause area to prioritise
E.g., the neglectedness of efforts to improve lives in developing vs developed countries, the astronomical waste argument, arguments about the sentience or lack thereof of nonhuman animals
Then gradually move to focusing more on considerations relevant to prioritising and "actually acting" within a broad cause area, as well as focusing more on "actually acting"
And I think it'd be a much less good idea for most EAs to:
Start out brainstorming a list of tasks that might be impactful without having been exposed to any considerations about how the scale, tractability, and neglectedness of improving wellbeing among future beings compares to that of improving wellbeing among nonhumans etc.
What would guide this brainstorming?
I expect by default this would involve mostly thinking of the sort of tasks or problems that are commonly discussed in general society
Then try to evaluate and/or implement those tasks
I'm again not really sure how one would evaluate those things
I guess one could at this point think about things like how many beings the future might contain and whether nonhumans are sentient, and then, based on what one learns, adjust the promisingness of each task separately
But it would in many cases seem more natural to adjust the value of all/most future-focused interventions together, and of all/most animal-focused interventions together, etc.
All that said, as noted above, I don't think "cause areas" should be the only unit or angle of analysis; it would also be useful to think about things like intervention areas, as well as what fields one has or wants to develop expertise in and what specific tasks that expertise is relevant to.
This seems to be true if it is possible to gradually grow within a cause area, or if different tasks within a promising cause area are generally good. This might lead to a good working definition of cause areas
I'm not sure I understand. I don't think what I said above requires that it be the case that "[most or all] different tasks within a promising cause area are generally good" (it sounds like you were implying "most or all"?). I think it just requires that the mean prioritisation-worthiness of tasks in some cause area, or the prioritisation-worthiness of the identifiable positive outliers among its tasks, be substantially better than the equivalent for another cause area.
I think that phrasing is somewhat tortured, sorry. What I'm picturing in my head is bell curves that overlap, but one of which has a hump notably further to the right, or one of which has a tail that extends further. (Though I'm not claiming bell curves are actually the appropriate distribution; that's more like a metaphor.)
E.g., I think that one will do more good if one narrows one's search to "longtermist interventions" rather than "either longtermist or present-day developed-world human interventions". And I more tentatively believe the same when it comes to longtermist vs global health & dev. But I think it's likely that some interventions one could come up with for longtermist purposes would be actively harmful, and that others would be worse than some unusually good present-day-developed-world human interventions.
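To make that picture concrete, here's a minimal simulation sketch; the distributions and numbers are invented purely for illustration (in line with the caveat that bell curves are just a metaphor):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "impact scores" for tasks in two cause areas, modelled as
# overlapping normal distributions where cause A's hump sits further right.
# All numbers here are made up for illustration.
cause_a = rng.normal(loc=1.0, scale=1.0, size=100_000)
cause_b = rng.normal(loc=0.0, scale=1.0, size=100_000)

print(f"Mean impact: A = {cause_a.mean():.2f}, B = {cause_b.mean():.2f}")

# Some cause-A tasks are still actively harmful (negative impact)...
print(f"Share of A tasks with negative impact: {(cause_a < 0).mean():.0%}")

# ...and the best identifiable cause-B tasks beat many cause-A tasks.
b_99th = np.percentile(cause_b, 99)
print(f"Share of A tasks below B's 99th percentile: {(cause_a < b_99th).mean():.0%}")
```

In this toy setup, narrowing one's search to cause A raises the expected value of a randomly drawn task, even though the distributions overlap substantially and some cause-A tasks are net-negative.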
Yeah, sorry for rushing it and not being clear. The main point I took from what you said in the comment I replied to was something like "Early on in one's career, it is really useful to identify a cause area to work in and over time to filter the best tasks within that cause area". I think that it might be useful to understand better when that statement is true, and I gave two examples where it seems correct.
I think that there are two important cases where that is true:
If the cause area is one where generally working toward it will improve understanding of the whole cause area and improve one's ability to identify and shift direction to the most promising tasks later on.
For example, Animal Welfare might arguably not be such a cause because it is composed of at least three different clusters which might not intersect much in their related expertise and reasons for prioritization (alternative proteins, animal advocacy and wild animal welfare). However, these clusters might score well on that factor as sub-cause areas.
If it is generally easy to find promising tasks within that cause area.
Here I mostly agree with the overlapping bell curves picture, but want to explicitly point out that we are talking about task-prioritization done by novices.
A contrasting approach is to choose the next steps in a career based on opportunities rather than causes, as Shay wrote:
Another important point that I wish to emphasize is that I was looking for promising options or opportunities, rather than promising cause areas. I believe that this methodology is much better suited when looking at the career options of a single person. That is because while some cause area might rank fairly low in general, specific options which might be a great fit for the person in question could be highly impactful (for example, climate change and healthcare [in the developed world] are considered very non-neglected in EA, while I believe that there are promising opportunities in both areas). That said, it surely is natural to look for specific options within a promising cause area.
As explained (EA Forum link; HT Edo Arad) by Owen Cotton-Barratt back in 2014, there are at least two meanings of "cause area". My impression is that since then, effective altruists have not really distinguished between these different meanings, which suggests to me that some combination of the following things is happening: (1) the distinction isn't too important in practice; (2) people are using "cause area" as a shorthand for something like "the established cause areas in effective altruism, plus some extra hard-to-specify stuff"; (3) people are confused about what a "cause area" even is, but lack the metacognitive abilities to notice this.
As noted above, personally, I usually find it most useful to think about cause areas in terms of a few broad cause areas which describe what class of beneficiaries one is aiming to help.
I think it'd be useful to also "revive" Owen's suggested term/concept of "An intervention area, i.e. a cluster of interventions which are related and share some characteristics", as clearly distinguished from a cause area.
E.g., I think it'd be useful to be able to say something like "Political advocacy is an intervention area that could be useful for a range of cause areas, such as animal welfare and longtermism. It might be valuable for some EAs to specialise in political advocacy in a relatively cause-neutral way, lending their expertise to various different EA-aligned efforts." (I've said similar things before, but it will probably be easier now that I have the term "intervention area" in mind.)
I really agree with this kind of distinction. It seems to me that there are several different kinds of properties by which to cluster interventions, including:
Type of work done (say, Political Advocacy)
Instrumental subgoals (say, Agriculture R&D (which could include supporting work, not just research)). (I'm not sure if it's reasonable to separate these from cause areas as goals)
Epistemic beliefs (say, interventions supported by RCTs for GH&D)
(It seems harder than I thought to think about different ways to cluster. Absent contrary arguments, I might propose defining intervention areas as the type of work done; see the toy sketch below)
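To make these clustering axes concrete, here's a toy sketch; the interventions, tags, and the cluster_by helper are hypothetical examples made up for illustration, not a real taxonomy:

```python
# Toy illustration: the same interventions can be clustered along several axes.
interventions = [
    {"name": "Corporate cage-free campaigns", "work_type": "political/corporate advocacy",
     "subgoal": "farm animal welfare reform", "epistemic_basis": "track record",
     "cause_area": "animal welfare"},
    {"name": "AI governance policy work", "work_type": "political/corporate advocacy",
     "subgoal": "safe AI development", "epistemic_basis": "chains of reasoning",
     "cause_area": "longtermism"},
    {"name": "Bednet distribution", "work_type": "direct delivery",
     "subgoal": "malaria reduction", "epistemic_basis": "RCTs",
     "cause_area": "global health & development"},
]

def cluster_by(items, axis):
    """Group intervention names by one clustering axis."""
    clusters = {}
    for item in items:
        clusters.setdefault(item[axis], []).append(item["name"])
    return clusters

# Clustering by type of work done cuts across cause areas:
print(cluster_by(interventions, "work_type"))
print(cluster_by(interventions, "cause_area"))
```

Clustering by "work_type" groups the two advocacy interventions together even though they fall under different cause areas, which is the sense in which an intervention area can cut across cause areas.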
In practice, Open Philanthropy Project (which is apparently doing cause prioritization) has fixed a list of cause areas, and is prioritizing among much more specific opportunities within those cause areas. (I'm actually less sure about this as of 2021, since Open Phil seems to have made at least one recent hire specifically for cause prioritization.)
Open Phil definitely does have a list of cause areas, and definitely does spend a lot of their effort prioritising among much more specific opportunities within those cause areas.
But I think they also spend substantial effort deciding how many resources to allocate to each of those broad cause areas (and not just with the 2021 hire(s)). Specifically, I think their worldview investigations are, to a substantial extent, intended to help with between-cause prioritisation. (Though it seems like they'd each also help with within-cause decision-making, e.g. how much to prioritise AI risk relative to other longtermist focuses and precisely how best to reduce AI risk.)
Two links with relevant prior discussion:
This post's comments section
This post's appendix