Against neglectedness

tl;dr

80,000 Hours’ cause priorities framework focuses too heavily on neglectedness, at the expense of individuals’ traits. Neglectedness is inapplicable in causes where progress yields comparatively little or no ‘good done’ until everything is tied together at the end, it is insensitive to the slope of diminishing returns from which it draws its relevance, and as an a priori heuristic it has much lower evidential weight than even a shallow dive would provide.

Abstract

For some time I’ve been uneasy about 80,000 Hours’ stated cause prioritisation, and about the broader EA movement’s valorisation of neglectedness on what I see as very little justification. 80K recently updated their article ‘How to compare different global problems in terms of impact’ with an elegant new format, which makes it easier to see the parts I agree with and critique those I don’t. This essay is intended as a constructive exploration of the latter, to wit their definitions of solvability and neglectedness, and why I think they overweight the importance of neglectedness.

80K’s framework

80K offer three factors which they think, multiplied together, give the value of contributing to an area. Here are the factors along with their definition of each:

Scale (of the problem we’re trying to solve) = Good done / % of the problem solved

Solvability = % of the problem solved / % increase in resources

Neglectedness = % increase in resources / extra person or $

Let’s look at them in turn.

Scale

I’ll skim over this one—for most problems the definition seems to capture decently what we think of as scale. That said, there is a class of issues to which—depending on how you interpret it—it’s misleading or inapplicable. These are what I’ll call ‘clustered value problems’: those where progress yields comparatively little or no ‘good done’ until everything is tied together at the end.1

Examples of this might be getting friendly AI research right (if any deviation from perfect programming might still generate a paperclipper), eliminating an infectious disease, or developing any important breakthrough technology (vaccines, cold fusion, mass-producible in vitro meat, a space elevator, etc).

In such cases it wouldn’t make much sense to look only at the value of a ‘% of the problem solved’ unless it was the last percentage (and then that would make it look disproportionately good).

In such cases we should treat scale as ‘good done / average % of the problem solved’, distinguishing them from what I’ll call ‘distributed value problems’ (ie any that aren’t clustered), where ‘(marginal) % of the problem solved’ is the better metric (while noting that distributedness/clusteredness is really a spectrum).
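
To make the distinction concrete, here’s a minimal sketch (in Python, with entirely invented numbers) of why the marginal reading of scale breaks down for a clustered value problem, while the average reading behaves sensibly:

```python
# Toy contrast between the 'marginal' and 'average' readings of scale for a
# clustered value problem: all of the (invented) 1000 units of good arrive
# only once the final 1% of the problem is solved.

TOTAL_GOOD = 1000

def good_done(pct_solved):
    """Good realised once pct_solved % of the problem is complete."""
    return TOTAL_GOOD if pct_solved >= 100 else 0

# Good done per marginal % solved: zero for the first 99 steps, then 1000.
marginal = [good_done(p) - good_done(p - 1) for p in range(1, 101)]
print(marginal[:3], marginal[-1])   # [0, 0, 0] 1000

# Good done per average % solved: the value is spread evenly, 10 per point.
print(TOTAL_GOOD / 100)             # 10.0
```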

‘Solvability’ and neglectedness

Solvability is clearly crucial in thinking about prioritising problems. Here’s 80K’s definition of it again:

% of the problem solved / % increase in resources

And of neglectedness:

% increase in resources / extra person or $

There are, I think, three independently fatal problems that make these factors useless:

  1. As presented, ‘solvability’ would imply that any problem that no-one has worked on is unsolvable, since it would require an infinite % increase in resources to make any progress (or rather, the value would just be undefined, since it’s equivalent to a division by 0).

  2. By making this all a nice cancellable equation, it makes the actual values of all but the first numerator (‘good done’) and the last denominator (‘extra person or $’) irrelevant (unless they happen to be 0 or infinity, per above). This is really just the equation ‘good done / extra person or $’ in fancy clothing, so the real-world values of ‘% of the problem solved’ and ‘% increase in resources’ are no more pertinent to how much good per $ we’ll do than the interloping factor would be in ‘good done / dragons in Westeros * dragons in Westeros / extra person or $’ (see the sketch after this list).

    Perhaps 80K didn’t intend the framework to be taken too literally, but inasmuch as they use it at all, it seems reasonable to criticise it on its own terms.

  3. Intuitively, we can see how the real-world value of the ‘% of the problem solved’ factor might have a place in a revised equation—eg ‘% of the problem solved / extra person or $’ (higher would obviously indicate more value from working on the problem, all else being equal). But ‘% increase in resources’ has no such use, because it’s a combination of two factors, 100 × (absolute contribution / resources already contributed to the problem), the latter of which is totally irrelevant to what we mean by ‘solving a problem’ (and is also the source of the potential division by 0). Because it accounts for the contributions of people before me, this factor can increase even if my actual contribution shrinks, and vice versa.2 So it can’t be a reliable multiplier in any list of desirable traits for a high priority cause.

    By using ‘% increase in resources’ as the denominator instead of ‘absolute increase in resources’, I think 80K mean to capture the notion of diminishing returns. But diminishing returns in this regard is the hypothesis that people working on related areas of a potentially multifarious problem, often in competition with each other, will tend to achieve less than the people who worked on it before them. It’s not a clear example of the economic notion, since multiple factors are constantly changing within most cause areas (whereas the economic notion is about increasing a single variable while holding the others constant), and even if it were, it shouldn’t be hard-coded into our very definition of problem-solving.
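
To illustrate point 2 above, here’s a minimal sketch (in Python, with arbitrary invented values) showing that the intermediate terms cancel, so the product of the three factors depends only on ‘good done’ and ‘extra person or $’:

```python
# Toy illustration of the cancellation in 80K's framework.
# All numbers below are invented for demonstration only.

def product_of_factors(good_done, pct_solved, pct_resource_increase, extra_dollars):
    scale = good_done / pct_solved                         # good done / % of problem solved
    solvability = pct_solved / pct_resource_increase       # % solved / % increase in resources
    neglectedness = pct_resource_increase / extra_dollars  # % increase / extra $
    return scale * solvability * neglectedness

# Whatever values the intermediate terms take, only good_done and
# extra_dollars affect the result.
print(product_of_factors(good_done=1000, pct_solved=0.5, pct_resource_increase=2.0, extra_dollars=100))
print(product_of_factors(good_done=1000, pct_solved=0.25, pct_resource_increase=64.0, extra_dollars=100))
# Both print 10.0, ie good_done / extra_dollars.
```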

So although it doesn’t fit into a nice cancellable equation, I think we need to model diminishing returns separately—and then check the plausibility of our model for our purposes. Before we think about that, let’s quickly revisit clustered value problems.

Clustered value problems

Because work on clustered value problems (by definition) yields the hypermajority of its value at the point where the last piece is placed in the jigsaw, diminishing returns are largely irrelevant. People might work on the easiest parts of the solution first, but assuming that a fixed number of work-hours—or at least a fixed amount of ingenuity—is necessary to reach the breakthrough, someone will have to plough through the hard stuff to the end, and the value of every resource contributed is equal throughout.

Diminishing returns

So distributed value problems are the important case for diminishing returns, and I assume they are what the 80K piece is addressing.

For them, a more plausible approach to marginal solvability, one that captures diminishing marginal returns, follows from the following pair of claims: a) we can estimate the rate of diminishing returns D by looking at multiple points in the history of the project, comparing (% of the problem solved / absolute resources spent) at each and selecting an appropriate function.3 Therefore… b) we could apply that function to the amount of resources already used, R, to figure out the contribution of adding a marginal resource:

Marginal contribution per resource = (R + 1)^D – R^D

And we could now define marginal solvability per resource:

Marginal solvability = % of the problem solved / ((R + 1)^D – R^D)

On this definition, ‘neglectedness’ (or rather, its inverse—heededness?) is just the value of R. And all else being equal, if these claims are true, then we can expect to solve a smaller % of a problem the higher the value of R.
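
As an illustrative sketch only (assuming, purely for the example, a power-law returns function with exponent D, rather than anything estimated from real data), here’s how the marginal contribution shrinks as R grows when D < 1:

```python
# Sketch of diminishing marginal returns under an assumed power-law returns
# function: % of problem solved after R resources = k * R**D, with 0 < D < 1.
# The exponent and resource counts below are illustrative, not estimates.

def marginal_contribution(R, D, k=1.0):
    """Extra % of the problem solved by the (R + 1)th resource."""
    return k * ((R + 1) ** D - R ** D)

D = 0.5  # assumed rate of diminishing returns
for R in [1, 10, 100, 1000]:
    print(R, marginal_contribution(R, D))
# The marginal contribution falls as R (the resources already committed, ie
# the inverse of 'neglectedness') rises -- but how fast it falls depends
# entirely on D, which has to be estimated rather than assumed.
```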

But these claims make two assumptions that we shouldn’t take for granted:

  1. Diminishing returns will apply due to problem prioritisation: if each new resource added (roughly speaking, each new hire4) within a cause has equivalent competence to those before them (where ‘competence’ is a measure of ‘ability to solve the problem to which the organisation applies itself, directly or indirectly’), they will tend to achieve less than those recruited before them.

  2. Resources will be at best fungible: each new hire within a cause tends to have equivalent or lesser competence than those before them. More precisely, each dollar spent on a marginal hire tends to have equivalent or lesser value than the dollars spent on previous hires. That is, in the new marginal solvability equation, each individual would contribute approximately M marginal resources, where M is a constant.

Looking at these in turn...

Diminishing returns due to problem prioritisation

This seems like a workable approximation in many cases, but the extent to which it applies could vary enormously from cause to cause—sometimes it might even reverse. To the extent that an organisation is perfectly rational, its staff will be picking the lowest hanging fruit to work on, but there are possible caveats, eg (in very roughly ascending order of importance):

  1. Economies of scale, which mean that even if a hire can’t achieve as much as the hire before them, they might be sufficiently cheaper that they’re equal or better value.

    Anticipating the value of economies of scale seems sufficiently difficult even from month to month that it’s unlikely to be worth making life plans based on them. However, they might be relevant to someone investigating a new job rather than a new career – or to someone investigating where to donate.

  2. High-risk roles. A new organisation might need to reliably produce results early on to justify its existence, but once established it might be able to take on projects that are less likely to succeed but have higher expected value. This could potentially happen in medium or even very large organisations, depending on the cost and risk of the new projects (the Open Philanthropy Project and Google X are real-life examples of this).

    High-risk roles (distinct from individuals with high-risk strategies, whom we’ll come to shortly) are by their nature substantially rarer than normal-risk roles. They also might not require any particularly different skills from other direct work, so again they seem more relevant to people considering donating or changing jobs than to those deciding career paths.

  3. Diminishing returns assumes that all factors except the number of resources you’re adding stay constant. But the bigger the cause, the more variables within it will constantly be changing, so the less this will be the case. For example, new technologies are constantly emerging, many of which require specialist skills to use effectively, if only to see their potential—eg cheap satellites (and cheap orbital access for them) that allow rapid responses to deforestation.

  4. Some groundwork—possibly a great deal of groundwork—might be required to determine what the low-hanging fruit are, or to make them pickable when they’re identified. In such cases the actual value of working in a field might increase as such work is done.

    This could be a huge factor, especially given the relative novelty of RCTs and cost-benefit analysis. Even many purportedly saturated fields still have huge uncertainty around the best interventions. Also, as our social scientific tools improve we can reasonably hope to uncover high value interventions (ie low hanging fruit) we’ve hitherto missed.

  5. Most people aren’t perfectly rational altruists—and many of the incentives they face within their field won’t be perfectly altruistic.

    Bizarrely, the nonrationality of people and organisations didn’t even occur to me until draft 2 of this essay—and I suspect in many cases this is a very strong effect. If it weren’t, there would be no need for an EA movement. Looking at Wikipedia’s list of animal charities based in the UK, for example, I count 4 of 76 which appear to have a focus on factory farming, which most of us would consider by far the most important near-term cause within the area. I won’t single out others for negative comment, and no doubt our specific views will vary widely, but I imagine most EAs would agree with me that the list bears little to no resemblance to an optimal allocation of animal-welfare-oriented resources. In some causes people are provably acting irrationally, since they’re working at directly cross purposes—the climate change activists advocating nuclear power or geoengineering and those arguing against them can’t all be acting rationally, for example.5

    In some cases irrationality could increase the value of more rational people entering the field—eg by creating another RCT in a field that has plenty, or simply by creating the option of project managing or otherwise directing someone toward a more valuable area (this essentially seems to have been the situation that Givewell’s and Giving What We Can’s founders walked into when they first founded those orgs). It might even tend to do so, if that kind of redirection turned out to be a major factor.

  6. The more work fully solving the problem requires, the slower we would expect returns to diminish. The density of sub-problems of any given tractability will be higher, so it will take more resources to get through the low-hanging fruit. There’s just orders of magnitude more to do in, eg, creating a robust system of global governance than in eliminating schistosomiasis, so we would expect returns to diminish at a correspondingly small fraction of the rate in the former.6

    We should expect a priori that problem areas will tend to be larger the more people they have working on them. Smaller areas with several people working on them quickly cease to be problem areas, and recruitment into an area would probably slow – if not reverse – as the amount of effective work left to be done within it diminishes.

The hypothesis of diminishing returns due to problem prioritisation is ultimately an empirical claim, so it shouldn’t be assumed to hold within any given field without at least some checking against plausible metrics.

Fungible marginal resources

As a sufficiently broad tendency, this is surely true. Just as a rational organisation would pick the lowest hanging fruit to work on, so they would aim to pick the lowest hanging fruit when considering each new staff hire. Nonetheless, there’s one big problem with this assumption.

Problems are disproportionately solved by small numbers of people doing abnormal things, rather than by throwing sufficient fungible resources at them – ie the value of individual contributions follows a power law distribution.

This can be due to a number of reasons, both personal and circumstantial, that might not transfer across causes, so an individual could offer far more to one cause area than to another. This is a grandiose way of saying ‘personal fit is very important’, but I think it is a potentially massive factor, one that can dwarf any diminishing returns, and it means that—at least for direct work—we should think of cause-person pairings, rather than individual causes, as our unit of investigation.

The consequences of devaluing neglectedness

When I’ve discussed whether the EA movement overemphasises neglectedness, one response I’ve often heard is that it’s really just a heuristic to help decide which causes to look into in the first place. I think there’s some truth to that, especially for ‘top’ EA areas—I certainly don’t think, for example, that Givewell’s recommendations for combating various neglected tropical diseases are based (even in significant fraction) on them being neglected, that ACE recommend factory farming campaigns because so few animal welfare charities address them, or that FHI are so concerned about AI because other academics weren’t. These areas all seem to have reasonable independent arguments to recommend them.

That said, this essay is partly a response to 80K’s priorities framework, which explicitly describes neglectedness as a core part of their cause assessment. If they were ultimately just using it as a top-level heuristic, I would expect them to say as much, rather than rating it as a multiplier with equal weighting to scale and solvability.

And what worries me more is that we can see neglectedness invoked explicitly to deter EAs from getting involved in causes that would otherwise seem very promising. This is clearest in 80K’s climate change profile. In the ‘overall view’, they present the three components (‘scale’, ‘solvability’ and ‘neglectedness’) as visual equals. And in the ‘major arguments against working on it’ section they present information like ‘the US government spends about $8 billion per year on direct climate change efforts’ as a negative in itself.

It seems very strange to me to treat this as a strong deterrent. Effective altruists often compare commercial thinking favourably with the ways we think about doing good7 - but it would be hard to imagine an entrepreneur concluding that a large injection of cash into an area was a reason not to go into it.

80K’s profile on nuclear security similarly discounts its value in part due to it being ‘likely less neglected than other problems’. And so on in their biosecurity, global health and anti-smoking profiles.

And anecdatally, as someone concerned that climate change might be a much more pressing issue than EAs typically credit, I’ve repeatedly had conversations with people who dismiss it as a cause with nothing more than some variation of the phrase ‘well, it’s not very neglected’.

We should change this type of thinking.


1. Throughout this essay, you should take it as read that by any change to a value I mean ‘counterfactually adjusted expected change, all else being equal’.

2. For example, if I give $100 to AMF and $100 000 had been given before mine, my % increase in resources would be 0.1% - but if I gave $200 and $1 000 000 had been given before mine, it would be only 0.02%.

When I showed 80K an early draft of this essay, they pointed out that the original solvability factor, ‘% of the problem solved / % increase in resources’, is really the elasticity of solvability, and thought that anyone who noticed the problem above would have recognised that. But since my criticisms are of the components of the factor, I don’t see how relabelling it would matter.

3. This is easier said than done, and I won’t attempt it here. In Defining Returns Functions and Funding Gaps, Max Dalton looks at a few possible categories of model for the slope of diminishing returns, and in the follow-up he and Owen Cotton-Barratt give some reasons for choosing between them. I suspect in some fields someone clever enough could throw the huge amounts of data on resources spent and outcomes generated at a machine learning program and get some surprisingly cogent output. If it were possible to do so for whole cause areas, it could yield huge insights into cause prioritisation.
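
For what it’s worth, a minimal sketch of the simplest version of this (assuming a single power-law model and entirely invented data points, rather than figures from any real cause area) might look like:

```python
# Hypothetical sketch: estimating the diminishing-returns exponent D by
# fitting % of problem solved = k * resources**D to historical data points.
# The data below are invented purely to show the shape of the approach.
import numpy as np

resources = np.array([1e5, 5e5, 1e6, 5e6, 1e7])     # cumulative $ spent (invented)
pct_solved = np.array([2.0, 4.5, 6.0, 13.0, 18.0])  # cumulative % solved (invented)

# Fit log(pct_solved) = log(k) + D * log(resources) by least squares.
D, log_k = np.polyfit(np.log(resources), np.log(pct_solved), 1)
print('Estimated D:', D)  # D < 1 would suggest diminishing returns
```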

4. In general this essay discusses individuals working for organisations (which we can extrapolate to individuals working on cause areas), but virtually identical reasoning could extend to organisations if there were any particularly irreplaceable/irreducible ones. Ie if you replaced all phrases like ‘person hired at an organisation’ with ‘organisation founded to work on a cause’, the argument would be essentially the same.

Throughout the essay I tend to treat individuals as unitary, to be either added to or subtracted from a cause as a whole, but this is for convenience and not strictly accurate. For people it’s a decent approximation: people’s work hours (especially effective altruists’) will rarely diverge by more than a factor of two.

For organisations, it would be more important to account for the difference, since one can be orders of magnitude larger than another.

5. HT Goodwin Gibbins for these examples. There has been political advocacy both to promote geoengineering, in order to mitigate the effects of climate change, and to prevent it, because of fears that it will reduce the political will to solve climate change or risk even worse harm. Nuclear power clashes are so ubiquitous they have their own Wikipedia pages: in the red corner, https://en.wikipedia.org/wiki/Anti-nuclear_movement; in the blue corner, https://en.wikipedia.org/wiki/Pro-nuclear_movement.

On 80K’s current definition, even if all this adverse advocacy had perfectly cancelled itself out, the problem of climate change would have become more solvable and, in broader EA parlance, less neglected.

6. This is closely related to Givewell’s room for more funding metric, which I’ve heard people equate with the idea of neglectedness, but which is functionally quite different.

7. Eg Will MacAskill’s discussion in the ‘Overhead Costs, CEO Pay and Confusions’ chapter of Doing Good Better, and the ideas discussed, especially by Michael Faye, in this talk.


Thanks to Nick Krempel (not a member of the EA community, but a supersmart guy) for a great deal of help thinking through this, and to Michael Plant (whom I originally forgot to thank!) for a lot of helpful feedback. Needless to say, errors and oversights are thoroughly mine.