Hey, I think this is a pretty tricky thing to contemplate, partly because organizations are not as transparent about their scope as would be ideal. However, I will try to describe why I view this as a pretty large difference, keeping 80k as the example.
1) Tier-based vs. prioritized order
So indeed, although both organizations list a number of cause areas, I think CE presents its list more as a single tier: there is no suggested ranking that would encourage someone to lean towards health and development policy over family planning. On the other hand, my understanding of 80k’s list is that they would have a strong preference for someone to go into AI over climate change. This means that although both might list five areas, the net spread of people going into each of them could look very different. Overall, I think I/we should care more about the outcomes than what is written on a website. For example, if CE said it worked in these five areas but in practice 80% of our charities were animal-focused, I would consider us an animal organization.
2) Relative size of a given worldview
I think it’s easy to forget how niche some of these cause areas are compared to others, and I believe that makes a significant difference. Areas like mental health or global health represent worldviews that are orders of magnitude more common than something like animal welfare or AI. Consider how many moral and epistemic views would count reducing lead paint as a charitable action versus working at an AI safety lab: these require markedly different levels of specificity in one’s views. The only area on 80k’s list that I would say is a major cause area outside of EA is climate change, the one listed last.
3) Likelihood of adding additional cause areas that are competitive with number 1
My understanding is that AI has been 80k’s top priority since close to its founding (2011), and that right now it’s not seen internally as highly likely that anything will supersede it. CE, on the other hand, started with animals and GW-style global development and has since added the cause areas listed above. It also has a continued goal of exploring new ones (e.g., we are incubating bio-risk charities this year, and I expect we will tackle another area we have never worked on before in the next 12 months). This is fundamentally because the CE team expects there are other great cause areas out there, comparable to our top ones, that we/EA have not yet identified.
I think a lot of this could be made clearer with more transparency. If, say, 50%+ of 80k’s funding, or of their staff’s views on what the top area is, pointed to something other than AI, I would be happy to revise the list and put them back into the exploratory camp. But given my current understanding, I would be pretty surprised if that were the case.
4) Funneling vs. exploring
I think the factor you describe is also relevant. If an organization puts most of its focus into funneling people towards a certain cause area, I would definitely categorize it as more of an answer-based community. For example, one could look at the ratio of an organization’s budget spent on outreach versus exploration; I would not be surprised if that correlated well with a question- vs. answer-based approach.
Ultimately, I do think it’s a spectrum, and every organization is a bit answer-based and a bit question-based. However, I do think there is a significant and worthwhile difference between being 25% answer/75% question-oriented, and the reverse.