[I think the following comment sounds like I'm disagreeing with you, but I'm not sure whether/how much we really have different views, as opposed to just framing and emphasising things differently.]
So it feels like "cause prioritization" is just a first step, and by the end it might not even matter what cause areas are. It seems like what actually matters is producing a list of individual tasks ranked by how effective they are.
I agree that cause prioritization is just a first step. But it seems to me like a really useful first step.
It seems to me like it'd be very difficult, inefficient, and/or unsuccessful (indeed, wildly impractical) to try to produce a ranked list of individual tasks without first narrowing our search down by something like "cause area". And the concept of "cause area" also seems useful to organise our work and help people find other people who might have related knowledge, values, goals, etc.
To illustrate: I think it's a good idea for most EAs to:
Early on, spend some significant amount of time (let's say 10 to 1,000 hours) thinking about considerations relevant to which broad cause area to prioritise
E.g., the neglectedness of efforts to improve lives in developing vs developed countries, the astronomical waste argument, arguments about the sentience or lack thereof of nonhuman animals
Then gradually move to focusing more on considerations relevant to prioritising and "actually acting" within a broad cause area, as well as focusing more on "actually acting" itself
And I think it'd be a much less good idea for most EAs to:
Start out brainstorming a list of tasks that might be impactful without having been exposed to any considerations about how the scale, tractability, and neglectedness of improving wellbeing among future beings compares to that of improving wellbeing among nonhumans etc.
What would guide this brainstorming?
I expect by default this would involve mostly thinking of the sort of tasks or problems that are commonly discussed in general society
Then try to evaluate and/or implement those tasks
I'm again not really sure how one would evaluate those things
I guess one could at this point think about things like how many beings the future might contain and whether nonhumans are sentient, and then, based on what one learns, adjust the promisingness of each task separately
But it would in many cases seem more natural to adjust the value of all/most future-focused interventions together, and of all/most animal-focused interventions together, etc.
All that said, as noted above, I don't think cause areas should be the only unit or angle of analysis; it would also be useful to think about things like intervention areas, as well as what fields one has or wants to develop expertise in and what specific tasks that expertise is relevant to.
This seems to be true if it is possible to gradually grow within a cause area, or if different tasks within a promising cause area are generally good. This might lead to a good working definition of cause areas.
I'm not sure I understand. I don't think what I said above requires that it be the case that "[most or all] different tasks within a promising cause area are generally good" (it sounds like you were implying "most or all"?). I think it just requires that the mean prioritisation-worthiness of tasks in some cause area, or the prioritisation-worthiness of the identifiable positive outliers among tasks in that cause area, be substantially higher than the equivalent figures for another cause area.
I think that phrasing is somewhat tortured, sorry. What I'm picturing in my head is bell curves that overlap, but one of which has a hump notably further to the right, or one of which has a tail that extends further. (Though I'm not claiming bell curves are actually the appropriate distribution; that's more like a metaphor.)
E.g., I think that one will do more good if one narrows one's search to "longtermist interventions" rather than "either longtermist or present-day developed-world human interventions". And I more tentatively believe the same when it comes to longtermist vs global health & dev. But I think it's likely that some interventions one could come up with for longtermist purposes would be actively harmful, and that others would be worse than some unusually good present-day developed-world human interventions.
Yeah, sorry for rushing it and not being clear. The main point I took from what you said in the comment I replied to was something like "Early on in one's career, it is really useful to identify a cause area to work in and over time to filter out the best tasks within that cause area". I think it might be useful to understand better when that statement is true, and I gave two examples where it seems correct.
I think that there are two important cases where that is true:
If the cause area is one where generally working toward it will improve understanding of the whole cause area and improve one's ability to identify and shift direction to the most promising tasks later on.
For example, Animal Welfare might arguably not be such a cause area, because it is composed of at least three clusters (alternative proteins, animal advocacy, and wild animal welfare) that might not intersect much in their related expertise and reasons for prioritization. However, each of these clusters might score well on that factor as a sub-cause area.
If it is generally easy to find promising tasks within that cause area.
Here I mostly agree with the overlapping bell curves picture, but want to explicitly point out that we are talking about task-prioritization done by novices.
A contrasting approach is to choose the next steps in a career based on opportunities rather than causes, as Shay wrote:
Another important point that I wish to emphasize is that I was looking for promising options or opportunities, rather than promising cause areas. I believe that this methodology is much better suited when looking at the career options of a single person. That is because while some cause area might rank fairly low in general, specific options which might be a great fit for the person in question could be highly impactful (for example, climate change and healthcare [in the developed world] are considered very non-neglected in EA, while I believe that there are promising opportunities in both areas). That said, it surely is natural to look for specific options within a promising cause area.
(That link seems to lead back to this question post itself; I'm guessing you meant to link to this other post?)
(thanks! fixed)