I sometimes say, in a provocative/hyperbolic sense, that the concept of “neglectedness” has been a disaster for EA. I do think the concept is significantly over-used (ironically, it’s not neglected!), and people should just look directly at the importance and tractability of a cause at current margins.
Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it’s just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be impactful at the margin, because more resources mean it’s more likely that the most cost-effective solutions have already been tried or implemented. But these resources are often deployed ineffectively, such that it’s often easier to just directly assess the impact of resources at the margin than to do what the formal ITN framework suggests, which is to break this hard question into two hard ones: you have to assess something like the abstract overall solvability of a cause (namely, “percent of the problem solved for each percent increase in resources,” as if this is likely to be a constant!) and the neglectedness of the cause.
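For readers who haven’t seen it: the formal decomposition being criticized here is usually presented (e.g., in 80,000 Hours’ write-ups) as a product of three ratios whose units cancel into marginal cost-effectiveness. This is a standard rendering of it, not a quote from any single source:

\[
\frac{\text{good done}}{\text{extra \$}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Importance}}
\times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}}
\times \underbrace{\frac{\text{\% increase in resources}}{\text{extra \$}}}_{\text{Neglectedness}}
\]

The middle factor is exactly the “percent of the problem solved for each percent increase in resources” quantity quoted above, and treating it as roughly constant is the assumption being questioned.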
That brings me to another problem: assessing neglectedness might sound easier than abstract tractability, but how do you weigh up the resources in question, especially if many of them are going to inefficient solutions? I think EAs have indeed found lots of surprisingly neglected (and important, and tractable) sub-areas within extremely crowded overall fields when they’ve gone looking. Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion, and that program has supported Nobel Prize-winning work on computational design of proteins. US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-value work that wasn’t already being done within that cause. And so on.
What I mean by “disaster for EA” (despite the wins/exceptions in the previous paragraph) is that I often encounter “but that’s not neglected” as a reason not to do something, whether at a personal or organizational or movement-strategy level, and it seems again like a decent initial heuristic but easily overridden by taking a closer look. Sure, maybe other people are doing that thing, and fewer or zero people are doing your alternative. But can’t you just look at the existing projects and ask whether you might be able to improve on their work, or whether there still seems to be low-hanging fruit that they’re not taking, or whether you could be a force multiplier rather than just an input with diminishing returns? (Plus, the fact that a bunch of other people/orgs/etc are working on that thing is also some evidence, albeit noisy evidence, that the thing is tractable/important.) It seems like the neglectedness heuristic often leads to more confusion than clarity on decisions like these, and people should basically just use importance * tractability (call it “the IT framework”) instead.
Upvoted and disagree-voted. I still think neglectedness is a strong heuristic. I cannot think of any good (in my evaluation) interventions that aren’t neglected.
Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion
I wouldn’t think about it that way because “scientific research” is so broad. That feels kind of like saying shrimp welfare isn’t neglected because a lot of money goes to animal shelters, and those both fall under the “animals” umbrella.
US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-value work that wasn’t already being done within that cause.
If you’re talking about polling on AI safety, that wasn’t being done at all IIRC, so it was indeed highly neglected.
Fair enough on the “scientific research is super broad” point, but I think this also applies to other fields that I hear described as “not neglected” including US politics.
Not talking about AI safety polling, agree that was highly neglected. My understanding, reinforced by some people who have looked into the actually-practiced political strategies of modern campaigns, is that it’s just a stunningly under-optimized field with a lot of low-hanging fruit, possibly because it’s hard to decouple political strategy from other political beliefs (and selection effects where especially soldier-mindset people go into politics).
But neglectedness as a heuristic is very good precisely for narrowing down what you think the good opportunity is. Every neglected field is a subset of a non-neglected field. So pointing out that great grants have come in some subset of a non-neglected field doesn’t tell us anything.
To be specific, it’s really important that EA identifies the area within that non-neglected field where resources aren’t flowing, to minimize funging risk. Imagine that AI safety polling had not been neglected and that in fact there were tons of think tanks who planned to do AI safety polling and tons of funders who wanted to make that happen. Then even though it would be important and tractable, EA funding would not be counterfactually impactful, because those hypothetical actors would lead to AI safety polling happening with or without us. So ignoring neglectedness would lead to us having low impact.
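A toy model of this funging point (a sketch with made-up numbers, not anyone’s actual grantmaking arithmetic):

```python
def counterfactual_impact(value, p_happens_without_us):
    """Toy model of funging risk: a grant only adds value to the
    extent the project would not have happened anyway."""
    return value * (1 - p_happens_without_us)

# Crowded niche: other funders would very likely cover it anyway.
print(counterfactual_impact(value=100, p_happens_without_us=0.9))   # ~10
# Neglected niche: almost nobody else would fund it.
print(counterfactual_impact(value=100, p_happens_without_us=0.05))  # ~95
```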
I agree that a lot of EAs seem to make this mistake, but I don’t think the issue is with the neglectedness measure. In my experience, people often incorrectly scope the area they are analysing, and fail to notice that a specific area can be highly neglected whilst also being tractable and important, even if the wider area it’s part of is not very neglected.
For example, working on information security in USG is imo not very neglected but working on standards for datacentres that train frontier LMs is.
Disagree-voted. I think there are issues with the Neglectedness heuristic, but I don’t think the N in ITN is fully captured by I and T.
For example, one possible rephrasing of ITN (certainly not covering all the ways in which it is used) is:
1. Would it be good to solve problem P?
2. Can I solve P?
3. How many other people are trying to solve P?
I think this is a great way to decompose some decision problems. For instance, it seems very useful for thinking about prioritizing research, because (3) helps you answer the important question “If I don’t solve P, will someone else?” (even if this is also affected by 2).
(edited. Originally, I put the question “If I don’t solve P, will someone else?” under 3., which was a bit sloppy)
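A toy replaceability model along these lines (probabilities invented for illustration, and attempts assumed independent, which real research attempts aren’t):

```python
def p_solved_by_others(n_others, p_each):
    """Chance that at least one of n_others independently solves P."""
    return 1 - (1 - p_each) ** n_others

def my_counterfactual_value(value, p_me, n_others, p_each=0.1):
    """Value of solving P, times the extra probability it gets solved
    because I also try (question 2 = p_me, question 3 = n_others)."""
    return value * p_me * (1 - p_solved_by_others(n_others, p_each))

print(my_counterfactual_value(value=100, p_me=0.5, n_others=0))   # 50.0
print(my_counterfactual_value(value=100, p_me=0.5, n_others=20))  # ~6.1
```

Same importance and same personal tractability in both calls; only the answer to question 3 changes, and the expected counterfactual value drops by nearly an order of magnitude.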
What is gained by adding the third thing? If the answer to #2 is “yes,” then why does it matter if the answer to #3 is “a lot,” and likewise in the opposite case, where the answers are “no” and “very few”?
Edit: actually yeah the “will someone else” point seems quite relevant.
Very much agree.
Also, some of the more neglected topics tend to be more intellectually interesting, and especially appealing if you have a bit of a contrarian temperament. One can make the mistake of essentially going all in on neglectedness and mostly working on the most fringe and galaxy-brained topics imaginable.
I’ve been there myself: I think I’ve spent too much time thinking about lab universes, acausal trade, descriptive population ethics, etc.
Perhaps it connects to a deeper “silver bullet worldview bias”: I’ve been too attracted to worldviews according to which I can have lots of impact. Very understandable given how much meaning and self-worth I derive from how much good I believe I do.
The real world is rather messy and crowded, so elegant and neglected ideas for having impact can become incredibly appealing, promising both outsized impact and intellectual satisfaction.
I think this depends crucially on how, and to what object, you are applying the ITN framework:
Applying ITN to broad areas in the abstract, treating what one would do in them as something of a black box (a common approach in earlier cause prioritisation IMO), one might reason:
Malaria is a big problem (Importance)
Progress is easily made against malaria (Tractability)
… It seems clear that Neglectedness should be added to these considerations to avoid moving resources into an area where all the resources needed to solve the problem are already in place.
Applying ITN to a specific intervention or action, it’s more common to be able to reason like so:
Malaria is a big problem (Importance)
Me providing more malaria nets [does / does not] easily increase progress against malaria, given that others [are / are not] already providing them (Tractability)
… In this case it seems that all you need from Neglectedness is already accounted for in Tractability, because you were able to account for whether the actions you could take were counterfactually going to be covered.
On the whole, it seems to me that the further you move away from abstract evaluations of broad cause areas, and more towards concrete interventions, the less it’s necessary or appropriate to depend on broad heuristics and the more you can simply attempt to estimate expected impact directly.
I think the opposite might be true: when you apply it to broad areas, you’re likely to mistake low neglectedness for a signal of low tractability, and you should just look at “are there good opportunities at current margins?” When you start looking at individual solutions, it starts being quite relevant whether they have already been tried. (This point was already made here.)
That’s interesting, but seems to be addressing a somewhat separate claim from mine.
My claim was that broad heuristics are more often necessary and appropriate when engaged in abstract evaluation of broad cause areas, where you can’t directly assess how promising concrete opportunities/interventions are, and less so when you can directly assess concrete interventions.
If I understand your claims correctly, they are that:
1. Neglectedness is more likely to be misleading when applied to broad cause areas.
2. When considering individual solutions, it’s useful to consider whether the intervention has already been tried.
I generally agree that applying broad heuristics to broad cause areas is more likely to be misleading than when you can assess specific opportunities directly. Implicit in my claim is that where you don’t have to rely on broad heuristics but can assess specific opportunities directly, this is preferable. I agree that considering whether a specific intervention has been tried before is useful and relevant information, but don’t consider that an application of the Neglectedness/Crowdedness heuristic.
I love this take and I think you make a good point, but on balance I still think we should keep neglectedness under “ITN”. It’s just a framework; it ain’t clean and perfect. You’re right that an issue doesn’t have to be neglected to be a potentially high-impact cause area. I like the way you put it here:
“Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it’s just a heuristic for tractability.”
That’s good enough for me though.
I would also say that, especially in global development, relative “importance” might become a less “necessary” part of the framework as well. If we can spend small amounts of money solving relatively smallish issues cost-effectively, then why not?
Your examples are exceptions too; most of the big EA causes were highly neglected before EA got involved.
When explaining EA to people who haven’t heard of it, neglectedness might be the part which makes the most intuitive sense, and what helps people click. When I explain the outsized impact EA has had on factory farming, or lead elimination, or AI Safety because “those issues didn’t have so much attention before”, I sometimes see a lightbulb moment.
I agree and made a similar claim previously. While I believe that many currently effective interventions are neglected, I worry that there are many potential interventions that could be highly effective but are overlooked because they are in cause areas not seen as neglected.