Even before I looked over your model, I agreed with you that we should be wary about assuming higher marginal impact from donations to more "neglected" causes. There's a lot of noise in the EA funding landscape, partly because organizations' values differ and mostly because noise is inevitable in a context with few funders giving to a large number of organizations, many of which are quite new and/or have little funding from non-EA orgs.
That said, I like the concept of neglectedness for a few reasons you didn't mention, at least not in so many words:
When an organization doesn't have much funding, extra funding may not have unusually high marginal impact, but it will often provide unusually high marginal information:
If I donate to GiveWell so that they can run an up-to-date literature review on a topic they've studied for years, I'll learn a little bit more about that topic.
If I give the same funding to a researcher studying some highly unusual topic (e.g. alternative food sources as a strategy to counter global famine), I might learn a lot about something I knew nothing about.
Of course, it may be that a well-funded research team also produces better research than a team with no track record, but if we focus on just funding research or work into a cause we don't know much about, it seems likely that we'll proportionally increase our knowledge of that cause more than we would of a better-studied cause. (We can then adjust our beliefs about impact and tractability, so that we no longer need to rely on neglectedness as a tiebreaker.)
Given the number of causes/projects in EA, and the limited number of funders, many ideas haven't been very well-studied. So if I'm worried about an opportunity being weak because it's not well-funded, even if I've checked carefully and it seems high-impact to me, I should consider that I might be one of the world's best-informed people on that opportunity.
In financial markets, something being priced low signals that many experts have examined it and estimated its true value to be low. That's why it's hard to make money buying cheap stocks. But in charitable markets, especially the tiny "market" of EA, something being ill-funded could be a sign that almost no one has examined it.
This isn't always the case, of course, and high funding could also be taken as a signal of quality, but it does seem good to remember that "neglectedness" sometimes means "newness" or "no-one-has-looked-at-this-yet-ness".
I don't disagree with GiveWell that good opportunities are very likely to find good funders, but I've seen enough counterexamples over the last few years that I'm aware of how many gaps remain in the nonprofit funding space.
I think there's reason to be cautious with the "highest marginal information comes from studying neglected interventions" line of reasoning, because of the danger of studies not replicating. If we only ever test new ideas, and then suggest funding the new ideas whose first study appears to show the highest marginal impact, it's very easy to end up funding several false positives that don't actually work particularly well.
In fact, in some sense the opposite argument could be made: it's possible that the highest marginal information gain will come from research into a topic which is already receiving lots of funding. Mass deworming is the first example that springs to mind, mostly because there's such a lack of clarity at the moment, but the marginal impact of finding new evidence about an intervention with lots of money in it could still be very large.
I guess the rather sad thing is that the biggest impact comes from bad news: if an intervention is currently receiving lots of funding because the research picture looks positive, and a large study fails to replicate, a promising intervention now looks less so. If funding moves towards more promising causes as a result, this is a big positive impact, but it feels like a loss. It certainly feels less like good news than a promising initial study on a new cause area, but I'm not sure it actually results in a smaller impact.
I agree that non-replication is a danger. But I don't think that positive results are specifically high-value; instead, I care about studies that tell me whether or not an intervention is worth funding.
I'd expect most studies of neglected interventions to turn up "negative" results, in the sense that those interventions will seem less valuable than the best ones we already know about. But it still seems good to conduct these studies, for a couple of reasons:
If an intervention does look high-value, there's a chance we've stumbled onto a way to substantially improve our impact (pending replication studies).
Studying an intervention that is different from those currently recommended may help us gain new kinds of knowledge we didn't have from studying other interventions.
For example, if the community is already well-versed in public health research, we might learn less from research on deworming than from research on Chinese tobacco taxation policy (which could teach us about generally useful topics like Chinese tax policy, the Chinese legislative process, and how Chinese corporate lobbying works).
Of course, doing high-quality China research might be more difficult and expensive, which cuts against it (and other research into "neglected" areas), but I do like the idea of EA having access to strong data on many different important areas.
That said, you do make good points, and I continue to think that neglectedness is less important as a consideration than scale or tractability.
Related: I really like GiveWell's habit of telling us not only which charities they like, but which ones they looked at and deprioritized. This is helpful for understanding the extent to which the "deliberate neglect by rational funders" model pans out for a given intervention/charity, and I wish more grantmaking organizations did the same.
Thanks for the comment. I agree that considering the marginal value of information is important. This may be another source of diminishing marginal total value (where total value = direct impact + value of information). It seems, though, that this is also subject to the same criticism I outline in the post. If other funders also know that neglected causes give more valuable information at the margin, then the link between neglectedness and marginal value will be weakened. The important step, then, is to determine whether other funders are considering the value of information when making decisions. This may vary by context.
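To make the "value of information diminishes at the margin" claim concrete, here is a toy sketch of my own (not from the post), using standard conjugate-normal updating. The prior variance `v0` and study-noise variance `s` are made-up numbers purely for illustration; "information gained" is measured as the reduction in posterior variance from one more study.

```python
# Toy model: an intervention's true impact is a normal unknown with prior
# variance v0; each funded study is an independent noisy measurement with
# noise variance s. (Hypothetical numbers, for illustration only.)

def posterior_variance(v0, s, n):
    """Posterior variance after n independent studies (conjugate normal)."""
    return 1.0 / (1.0 / v0 + n / s)

v0, s = 4.0, 1.0  # made-up prior and study-noise variances
gains = [posterior_variance(v0, s, n) - posterior_variance(v0, s, n + 1)
         for n in range(5)]
print([round(g, 3) for g in gains])
# → [3.2, 0.356, 0.137, 0.072, 0.045]
# Each successive study reduces our uncertainty by less than the last:
# under these assumptions, the marginal value of information falls as a
# cause accumulates study, which is one way of cashing out "diminishing
# marginal total value" when total value = direct impact + value of info.
```

Of course this ignores the point raised below about who can act on the information; it only illustrates the within-cause diminishing-returns mechanism.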
Also, could you give me some more justification for why we would expect the value of information to be higher for neglected causes? That doesn't seem obvious to me. I realize that you might learn more by trying new things, but it seems that what you learn would be more valuable if there were a lot of other funders that could act on the new information (so the information would be more valuable in crowded cause areas like climate change).
On your second point, I agree that when you're deciding between causes and you're confident that other funders of these causes have no significant information that you don't, and you're confident that there are diminishing returns, then we would expect neglectedness to be a good signal of marginal impact. Maybe this is a common situation to be in for EA-type causes, but I'm not so sure. A lot of the causes on 80,000 Hours' page are fairly mainstream (climate change, global development, nuclear security), so a lot of other smart people have thought about them. Alternatively, in cases where we can be confident that other funders are poorly informed or irrational, there's the worry about increasing returns to scale.
I think the argument is that additional information showing that a cause has high marginal impact might divert funding towards it from causes with less marginal impact. And getting this kind of information does seem more likely for causes without a track record that allows for a somewhat robust estimation of their (marginal) impact.
This is essentially what I was thinking. If we're to discover that the "best" intervention is something we aren't funding much now, we'll need to look more closely at interventions which are currently neglected.
I agree with the author that neglectedness isn't a perfect measure, since neglected causes may already have been examined by others who were unimpressed, but I don't know how often that "previous examination" actually happens (probably not too often, given the low number of organizations within EA that conduct in-depth research on causes). I'd still think that many neglected causes have received very little serious attention, especially attention to the most up-to-date research (maybe GiveWell said no five years ago, but five years is a lot of time for new evidence to emerge).
(As I mentioned in another comment, I wish we knew more about which interventions EA orgs had considered but decided not to fund; that knowledge is the easiest way I can think of to figure out whether or not an idea really is "neglected".)