I share the view that EAs often seem unclear about precisely what they mean by “cause area”, and that it seems like there are multiple somewhat different meanings floating around
This therefore makes “cause prioritisation” a somewhat murky term as well
I think it would probably be valuable for some EAs to spend a bit more time thinking about and/or explaining what they mean by “cause area”
I personally think about cause areas mostly in terms of a few broad cause areas which describe what class of beneficiaries one is aiming to help
If future beings: Longtermism
If nonhuman animals (especially those in the near-term): Animal welfare
If people in developing countries: Global health & development
We can then subdivide those broad cause areas into narrower cause areas (e.g., human-centric longtermism vs animal-inclusive longtermism; farm animal welfare vs wild animal welfare)
This is somewhat similar to Owen Cotton-Barratt’s “A goal, something we might devote resources towards optimising”
But I think “a goal” makes it much less clear how granular we’re being (e.g., that could mean there’s a whole cause area just for “get more academics to think about AI safety”), compared to “class of beneficiaries”
Caveats:
There are also possibilities other than those 3
e.g., near-term humans in the developed world
And there are also things I might normally call “cause areas” that aren’t sufficiently distinguished just by the class of beneficiaries one aims to help
e.g., longevity/anti-ageing
I don’t mean to imply that broad cause areas are just a matter of a person’s views on moral patienthood; that’s not the only factor influencing which class of beneficiaries one focuses on helping
E.g., two people might agree that it’s probably good to help both future humans and chickens, but disagree about empirical questions like the current level of x-risk, or about methodological/epistemological questions like how much weight to place on chains of reasoning (e.g., the astronomical waste argument) vs empirical evidence
I’m very confident that it’s useful to have the concept of “cause areas”, to sometimes carve up the space of all possible altruistic goals into at least the above 3 cause areas, and to sometimes have the standard sorts of cause prioritisation research and discussion
I think the above-mentioned concept of “cause areas” should obviously not be the only unit of analysis
E.g., I think most EAs should spend most of their lifetime altruistic efforts prioritising and acting within broad cause areas like longtermism or animal welfare
E.g., deciding whether to work on reducing risks of extinction, reducing other existential risks, or improving the longterm future in other ways
And also much narrower decisions, like precisely how best to craft and implement some specific nuclear security policy
I’ll add some further thoughts as replies to this answer.
[I think the following comment sounds like I’m disagreeing with you, but I’m not sure whether/how much we really have different views, as opposed to just framing and emphasising things differently.]
So it feels like “cause prioritization” is just a first step, and by the end it might not even matter what cause areas are. It seems like what actually matters is producing a list of individual tasks ranked by how effective they are.
I agree that cause prioritization is just a first step. But it seems to me like a really useful first step.
It seems to me like trying to produce a ranked list of individual tasks without first narrowing our search down by something like “cause area” would be very difficult, inefficient, and/or unsuccessful. And the concept of “cause area” also seems useful to organise our work and help people find other people who might have related knowledge, values, goals, etc.
To illustrate: I think it’s a good idea for most EAs to:
Early on, spend some significant amount of time (let’s say 10 to 1,000 hours) thinking about considerations relevant to which broad cause area to prioritise
E.g., the neglectedness of efforts to improve lives in developing vs developed countries, the astronomical waste argument, arguments about the sentience or lack thereof of nonhuman animals
Then gradually move to focusing more on considerations relevant to prioritising and “actually acting” within a broad cause area, as well as focusing more on “actually acting”
And I think it’d be a much less good idea for most EAs to:
Start out brainstorming a list of tasks that might be impactful without having been exposed to any considerations about how the scale, tractability, and neglectedness of improving wellbeing among future beings compares to that of improving wellbeing among nonhumans etc.
What would guide this brainstorming?
I expect by default this would involve mostly thinking of the sort of tasks or problems that are commonly discussed in general society
Then try to evaluate and/or implement those tasks
I’m again not really sure how one would evaluate those things
I guess one could at this point think about things like how many beings the future might contain and whether nonhumans are sentient, and then, based on what one learns, adjust the promisingness of each task separately
But it would in many cases seem more natural to adjust the value of all/most future-focused interventions together, and of all/most animal-focused interventions together, etc.
All that said, as noted above, I don’t think “cause areas” should be the only unit or angle of analysis; it would also be useful to think about things like intervention areas, as well as what fields one has or wants to develop expertise in and what specific tasks that expertise is relevant to.
This seems to be true if it is possible to gradually grow within a cause area, or if different tasks within a promising cause area are generally good. This might lead to a good working definition of cause areas
I’m not sure I understand. I don’t think what I said above requires that it be the case that “[most or all] different tasks within a promising cause area are generally good” (it sounds like you were implying “most or all”?). I think it just requires that the mean prioritisation-worthiness of tasks in some cause, or the prioritisation-worthiness of the identifiable positive outliers among tasks in some cause, are substantially better than the equivalent things for another cause area.
I think that phrasing is somewhat tortured, sorry. What I’m picturing in my head is bell curves that overlap, but one of which has a hump notably further to the right, or one of which has a tail that extends further. (Though I’m not claiming bell curves are actually the appropriate distribution; that’s more like a metaphor.)
E.g., I think that one will do more good if one narrows one’s search to “longtermist interventions” rather than “either longtermist or present-day developed-world human interventions”. And I more tentatively believe the same when it comes to longtermist vs global health & dev. But I think it’s likely that some interventions one could come up with for longtermist purposes would be actively harmful, and that others would be worse than some unusually good present-day-developed-world human interventions.
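The overlapping-distributions picture above can be sketched in a few lines of code. This is a purely illustrative simulation (the distribution shapes and all numbers are my own assumptions, not estimates of any real cause area): “cause A” has its hump further to the right, while “cause B” has a lower mean but a longer tail, so the two comparisons mentioned above (means vs identifiable positive outliers) can point in opposite directions.

```python
import random
import statistics

random.seed(0)

# Illustrative task "prioritisation-worthiness" in two hypothetical cause areas,
# modelled as overlapping normal distributions (just a metaphor, per the text).
# Cause A: hump notably further to the right (higher mean, modest spread).
# Cause B: lower mean but a longer right tail (higher variance).
cause_a = [random.gauss(1.0, 1.0) for _ in range(100_000)]
cause_b = [random.gauss(0.0, 2.0) for _ in range(100_000)]

def top_percentile(values, p=0.99):
    """Value at the p-th quantile: an 'identifiable positive outlier'."""
    return sorted(values)[int(p * len(values))]

# Comparing means favours cause A...
print(statistics.mean(cause_a) > statistics.mean(cause_b))
# ...while comparing outliers favours cause B's longer tail.
print(top_percentile(cause_b) > top_percentile(cause_a))
# And both areas contain some negative-value (actively harmful) tasks.
print(min(cause_a) < 0 and min(cause_b) < 0)
```

Note that cause B also contains many negative-value tasks, matching the point that some interventions one could come up with would be actively harmful.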
Yea, sorry for trying to rush it and not being clear. The main point I took from what you said in the comment I replied to was something like “Early on in one’s career, it is really useful to identify a cause area to work in and over time to filter the best tasks within that cause area”. I think that it might be useful to understand better when that statement is true, and I gave two examples where it seems correct.
I think that there are two important cases where that is true:
If the cause area is one where generally working toward it will improve understanding of the whole cause area and improve one’s ability to identify and shift direction to the most promising tasks later on.
For example, Animal Welfare might arguably not be such a cause because it is composed of at least three different clusters which might not intersect much in their related expertise and reasons for prioritization (alternative proteins, animal advocacy and wild animal welfare). However, these clusters might score well on that factor as sub-cause areas.
If it is generally easy to find promising tasks within that cause area.
Here I mostly agree with the overlapping bell curves picture, but want to explicitly point out that we are talking about task-prioritization done by novices.
A contrasting approach is to choose the next steps in a career based on opportunities rather than causes, as Shay wrote:
Another important point that I wish to emphasize is that I was looking for promising options or opportunities, rather than promising cause areas. I believe that this methodology is much better suited when looking at the career options of a single person. That is because while some cause area might rank fairly low in general, specific options which might be a great fit for the person in question could be highly impactful (for example, climate change and healthcare [in the developed world] are considered very non-neglected in EA, while I believe that there are promising opportunities in both areas). That said, it surely is natural to look for specific options within a promising cause area.
As explained (EA Forum link; HT Edo Arad) by Owen Cotton-Barratt back in 2014, there are at least two meanings of “cause area”. My impression is that since then, effective altruists have not really distinguished between these different meanings, which suggests to me that some combination of the following things are happening: (1) the distinction isn’t too important in practice; (2) people are using “cause area” as a shorthand for something like “the established cause areas in effective altruism, plus some extra hard-to-specify stuff”; (3) people are confused about what a “cause area” even is, but lack the metacognitive abilities to notice this.
As noted above, personally, I usually find it most useful to think about cause areas in terms of a few broad cause areas which describe what class of beneficiaries one is aiming to help.
I think it’d be useful to also “revive” Owen’s suggested term/concept of “An intervention area, i.e. a cluster of interventions which are related and share some characteristics”, as clearly distinguished from a cause area.
E.g., I think it’d be useful to be able to say something like “Political advocacy is an intervention area that could be useful for a range of cause areas, such as animal welfare and longtermism. It might be valuable for some EAs to specialise in political advocacy in a relatively cause-neutral way, lending their expertise to various different EA-aligned efforts.” (I’ve said similar things before, but it will probably be easier now that I have the term “intervention area” in mind.)
I really agree with this kind of distinction. It seems to me that there are several different kinds of properties by which to cluster interventions, including:
Type of work done (say, Political Advocacy)
Instrumental subgoals (say, Agriculture R&D (which could include supporting work, not just research)). (I’m not sure if it’s reasonable to separate these from cause areas as goals)
Epistemic beliefs (say, interventions supported by RCTs for GH&D)
(It seems harder than I thought to think of different ways to cluster. Absent contrary arguments, I might propose defining intervention areas as the type of work done)
In practice, Open Philanthropy Project (which is apparently doing cause prioritization) has fixed a list of cause areas, and is prioritizing among much more specific opportunities within those cause areas. (I’m actually less sure about this as of 2021, since Open Phil seems to have made at least one recent hire specifically for cause prioritization.)
Open Phil definitely does have a list of cause areas, and definitely does spend a lot of their effort prioritising among much more specific opportunities within those cause areas.
But I think they also spend substantial effort deciding how much resources to allocate to each of those broad cause areas (and not just with the 2021 hire(s)). Specifically, I think their worldview investigations are, to a substantial extent, intended to help with between-cause prioritisation. (Though it seems like they’d each also help with within-cause decision-making, e.g. how much to prioritise AI risk relative to other longtermist focuses and precisely how best to reduce AI risk.)
Two links with relevant prior discussion:
This post’s comments section
This post’s appendix