I’m a bit unclear on why you characterise 80,000 Hours as having a “narrower” cause focus than (e.g.) Charity Entrepreneurship. CE’s page cites the following cause areas:
Animal Welfare
Health and Development Policy
Mental Health and Happiness
Family Planning
Capacity Building (EA Meta)
Meanwhile, 80k provide a list of the world’s “most pressing problems”:
Risks from AI
Catastrophic Pandemics
Nuclear War
Great Power Conflict
Climate Change
These areas feel comparably “broad” to me? Likewise, Longview, which you list as part of the “AI x-risk community”, states six distinct focus areas for its grantmaking, only one of which is AI. Unless I’ve missed a recent pivot from these orgs, both Longview and 80k feel more similar to CE than to Animal Advocacy Careers in terms of breadth.
I agree that you need “specific values and epistemic assumptions” to agree with the areas these orgs have highlighted as most important, but I think you need specific values and epistemic assumptions to agree with more standard near-termist recommendations for impactful careers and donations, too. So I’m a bit confused about what the difference between “question” and “answer” communities is meant to denote aside from the split between near/longtermism.[1] Is the idea that (for example) CE is more skeptically focused on exploring the relative priorities of distinct cause areas, whereas organizations like Longview and 80k are more focused on funnelling people+money into areas which have already been decided as the most important? Or something else?
I do think it’s correct to note that the more ‘longtermist’ side of the community works with different values and epistemics than the more ‘neartermist’ side, and I think it would be beneficial to emphasise this more. But given that you note there are already distinct communities in some sense (e.g., there are x-risk-specific conferences), what other concrete steps would you like to see implemented in order to establish distinct communities?
[1] I’m aware that many people justify focus on areas like biorisk and AI in virtue of the risks posed to the present generation, and might not subscribe to longtermism as a philosophical thesis. I still think that the ‘longtermist’ moniker is useful as a sociological label — used to denote the community of people who work on cause areas that longtermists are likely to rate as among the highest priorities.
Hey, I think this is a pretty tricky thing to untangle, partly because organizations are not as transparent about their scope as would be ideal. However, I will try to describe why I view this as a pretty large difference, keeping 80k as the example.
1) Tier-based vs. prioritized order
Indeed, although both organizations list a number of cause areas, CE presents its list more in tiers: there is no suggested ranking that would encourage someone to lean towards health and development policy over family planning. My understanding of 80k’s list, on the other hand, is that they would have a strong preference for someone to go into AI over climate change. This means that although five areas might be listed by both, the net spread of people going into each of them might be very different. Overall, I think I/we should care more about the outcomes than what is written on a website: for example, if CE said it worked in these five areas but in practice 80% of our charities were animal-focused, I would consider us an animal organization.
2) Relative size of a given worldview
I think it’s easy to forget how niche some of these cause areas are compared to others, and I believe that makes a significant difference. An area like mental health or global health is orders of magnitude more common as a worldview than something like animal welfare or AI. Consider how many moral and epistemic views would count something like reducing lead paint exposure as a charitable action vs. working at an AI safety lab: these require markedly different levels of specificity in one’s views. The only area on 80k’s list that I would suggest is a major area outside of EA is climate change, the one listed last.
3) Likelihood of adding additional cause areas that are competitive with number 1
My understanding is that AI has been 80k’s top priority since close to its founding (2011), and that, internally, it’s not currently seen as highly likely that something will supersede it. CE, on the other hand, started with animals and GiveWell-style global development and has since added the cause areas listed above. Additionally, it has a continued goal of exploring new ones (e.g., we are incubating biorisk charities this year, and I expect we will tackle another area we have never worked on before in the next 12 months). This is fundamentally because the CE team expects that there are other great cause areas out there, comparable to our top ones, that we/EA have not yet identified.
I think a lot of this could be made clearer with more transparency. If, say, more than 50% of 80k’s funding, or of their staff’s views on what the top area is, pointed somewhere other than AI, I would be happy to revise the list and put them back into the exploratory camp. But I would be pretty surprised if this were the case, given my current understanding.
4) Funneling vs. exploring
I think the factor you describe is also relevant. If an organization directs most of its focus towards funneling people into a certain cause area, I would definitely categorize it more as an answer-based community. E.g., one could look at the ratio of budget an organization spends on outreach compared to exploration; I would not be surprised if that correlated well with a question- vs. answer-based approach.
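As a rough illustration of what that metric might look like, here is a minimal sketch in Python; the organisation names and budget figures are invented for illustration, not real data from any of the organisations discussed:

```python
# Hypothetical budget breakdowns (USD); illustrative numbers only.
budgets = {
    "Org A": {"outreach": 800_000, "exploration": 200_000},
    "Org B": {"outreach": 250_000, "exploration": 750_000},
}

for org, b in budgets.items():
    total = b["outreach"] + b["exploration"]
    outreach_share = b["outreach"] / total  # proxy for "funneling" emphasis
    leaning = "answer-based" if outreach_share > 0.5 else "question-based"
    print(f"{org}: {outreach_share:.0%} outreach -> leans {leaning}")
```

On this toy measure, Org A (80% outreach) would sit towards the answer-oriented end of the spectrum described below, and Org B towards the question-oriented end.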
Ultimately, I do think it’s a spectrum, and every organization is a bit answer-based and a bit question-based. However, I do think there is a significant and worthwhile difference between being 25% answer/75% question-oriented, and the reverse.
I feel similarly confused by this somewhat arbitrary categorisation, which also seems heavily flawed.
CE is by its nature narrow in career focus: it focuses just on entrepreneurs in the neartermist space and is highly biased towards thinking this is the most impactful career someone can pursue, whilst for many people starting a new charity would not be. It seems a large stretch to put CE in this category, and it also doesn’t seem to be where CE focuses its time and energy. HIP also focuses just on mid-career professionals, but it’s hard to know what they are doing, as they seem to change their activities and target audience relatively often.
80,000 Hours, Probably Good and Animal Advocacy Careers seem broader in their target audience and seem like the most natural fit for being the “most impactful career” community. They also advise people on how they can do the most effective thing, although obviously they all have their own biases based on their cause prioritisation.
Hey Anon, indeed, the categorisation is not aimed at the target audience. It’s aimed more at the number of cause areas an organisation considers and the specificity of the ethical and epistemic assumptions those areas require. I think another way to dive into things would be to consider how broad vs. narrow a given suggested career trajectory is, as something like CE or Effective Altruism might be broad cause-area-wise but narrow in terms of career category.
However, even in this sort of case, I think there is a way to frame things in a more answer- vs. question-based framework. For example, one might ask: “How highly does CE rank the career path it promotes (charity entrepreneurship) relative to five unrelated career paths that others see as promising?” The more unusual this rating is compared to what, for instance, an EA survey would suggest, the more I would place CE in the answer-based community. The distinction mentioned above, about how much time an organisation spends on funnelling vs. exploring, could be another relevant characteristic when considering how question- vs. answer-based an organisation is.
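To make that comparison concrete, here is a minimal sketch of the kind of check I have in mind; the six career paths and all rankings below are invented for illustration, not actual CE or EA-survey data:

```python
# Invented example rankings (1 = most promising); not real survey data.
org_rank = {"charity founding": 1, "global health R&D": 2, "policy": 3,
            "AI safety": 4, "earning to give": 5, "academia": 6}
survey_rank = {"charity founding": 4, "global health R&D": 3, "policy": 2,
               "AI safety": 1, "earning to give": 5, "academia": 6}

# Spearman footrule: sum of absolute rank differences between the two orderings.
# The larger the divergence, the more "unusual" the organisation's ranking is
# relative to the broad community view.
divergence = sum(abs(org_rank[p] - survey_rank[p]) for p in org_rank)
print(f"Rank divergence: {divergence}")  # 0 would mean identical orderings
```

The absolute number matters less than the comparison across organisations: an org whose ranking diverges from the broad survey far more than its peers’ rankings do would land further towards the answer-based end.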
What concrete actions might this suggest?
I think the two most salient are connected to the other two posts I made: organizations should have a transparent scope (especially those whose current focus might surprise people), and they should not use polarizing techniques. I think there are tons of further steps that could be taken; a conference for EA global health and development seems like a pretty obvious example of something that is missing in EA.