Biases in our estimates of Scale, Neglectedness and Solvability?

I suspect that using logarithmic scales for our estimates of Scale, Neglectedness and Solvability can make us prone to a few errors, although I'm not sure how bad this is in practice in the EA community. I describe three such possible errors, as well as a fourth related to correlations between the factors rather than to the log scale itself. In fact, as a rule, we aren't accounting for possible correlations between the factors (or we're effectively assuming the factors aren't correlated), and there's no general constant upper bound on how much this could bias our estimates. As a general trend, the more uncertainty in the factors, the greater the bias can be.

These could affect not just cause area analyses, but also grant-making and donations informed by these factors, as done by the Open Philanthropy Project.


Background

Scale/​Importance, Neglectedness/​Crowdedness and Solvability/​Tractability are estimated on a logarithmic scale (log scale) and then added together. See the 80,000 Hours article about the framework. What we really care about is their product, on the linear scale (the regular scale), but since $\log(xy) = \log(x) + \log(y)$, the logarithm of the product of the factors is the sum of the log scale factors, so using the sum of log scale factors makes sense:

$\log(\text{Scale} \times \text{Neglectedness} \times \text{Solvability}) = \log(\text{Scale}) + \log(\text{Neglectedness}) + \log(\text{Solvability})$

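As a minimal sketch of this identity (with made-up linear-scale values, and treating a score as the base-10 logarithm of the linear value; the particular base doesn't matter here):

```python
import numpy as np

# Hypothetical linear-scale factors for a cause (illustrative numbers only).
scale, neglectedness, solvability = 1e4, 1e2, 1e1

def score(x):
    # Log scale score; base 10 here, but any base gives the same identity.
    return np.log10(x)

print(score(scale) + score(neglectedness) + score(solvability))  # 7.0
print(score(scale * neglectedness * solvability))                # 7.0, the same
```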
1. Using the expected values of the logarithms instead of the logarithms of the expected values, biasing us against high-risk high-reward causes.

If there's uncertainty in any of the linear scale factors (or their product), say $X$, then using the expected value of the logarithm underestimates the quantity we care about, $\log(E[X])$, because the logarithm is a strictly concave function, so we have (the reverse) Jensen's inequality:

$E[\log(X)] \leq \log(E[X]).$
When there is a lot of uncertainty, we may be prone to underestimating the log scale factors. The difference can be significant. Consider a 10% probability of 1 billion and a 90% probability of 1 million: the expected value of the natural logarithm is about 14.5, while the natural logarithm of the expected value is about 18.4. 80,000 Hours uses base $\sqrt{10}$, incrementing by 2 for every factor of 10, so this would be closer to 13 vs 16, which is quite significant, since some of the top causes differ by around this much or less (according to 80,000 Hours' old estimates).
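A minimal numerical sketch of that example (the probabilities and values are just the illustrative numbers above):

```python
import numpy as np

# 10% chance the linear-scale value is 1 billion, 90% chance it's 1 million.
probs = np.array([0.1, 0.9])
values = np.array([1e9, 1e6])

expected_log = np.sum(probs * np.log(values))  # E[log X] ~ 14.5 (natural log)
log_expected = np.log(np.sum(probs * values))  # log(E[X]) ~ 18.4

# Convert to a 2-points-per-factor-of-10 scale (base sqrt(10)), as 80,000 Hours uses.
to_points = 2 / np.log(10)
print(expected_log * to_points, log_expected * to_points)  # ~12.6 vs ~16.0
```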

People could also make this mistake without actually doing any explicit expected value calculations. They might think “this cause looks like somewhere between a 3 and a 5 on Tractability, so I’ll use 4 as the average”, while having a symmetric distribution centred at 4 in mind (i.e. the distribution looks the same if you reflect it left and right through 4). On the linear scale, this actually corresponds to a distribution that's skewed towards lower values, whereas a uniform distribution over the corresponding linear interval would give you about 4.7 on the log scale (taking the logarithm of the expected value, as in 1). That being said, I think it does make sense to generally have these distributions more skewed towards lower values on the linear scale and we might otherwise be biased towards more symmetric distributions over the linear scale, so these two biases could work in opposite directions. Furthermore, we might already be biased in favour of high-risk high-reward interventions, since we aren’t sufficiently skeptical and are subject to the optimizer’s curse.
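To make that concrete, here's a rough sketch treating the score as the base-10 logarithm of the linear value, so that scores between 3 and 5 correspond to linear values between 1,000 and 100,000; the uniform distributions are just one way of making the two beliefs concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Symmetric belief on the log scale: scores uniform between 3 and 5, mean 4.
scores = rng.uniform(3, 5, n)
print(scores.mean())                    # ~4.0, the "average" score one might use
print(np.log10(np.mean(10 ** scores)))  # ~4.33, the log of the expected linear value

# A uniform belief over the corresponding linear interval [1e3, 1e5] instead.
values = rng.uniform(1e3, 1e5, n)
print(np.log10(values.mean()))          # ~4.7, the log of the expected linear value
```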

The solution is to always deal with the uncertainty before taking logarithms, or at least be aware that a symmetric distribution over the log scale corresponds to a distribution on the linear scale that's relatively more skewed towards lower values.


2. Upwards bias of log scale factors, if the possibility of negative values isn’t considered.

Logarithms can be negative, but I’ve never once seen a negative value for any of the log scale factors (except in comments on 80,000 Hours’ problem framework page). If people mistakenly assume that negative values are impossible, this might push up their view of what kind of values are “reasonable”, i.e. “range bias”, or cause us to prematurely filter out causes that would have negative log scale factors.

80,000 Hours does provide concrete examples for the values of the log scale factors, which can prevent this. It’s worth noting that according to their Crowdedness score, $100 billion in annual spending corresponds to a 0, but Health in poor countries gets a 2 on Neglectedness with “The least developed countries plus India spend about $300 billion on health each year (PPP).” So, should this actually be negative? Maybe it shouldn’t be, if these resources aren’t generally being spent effectively, or there aren’t a million people working on the problem.
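For illustration, if the Crowdedness/Neglectedness score were simply linear in the logarithm of annual spending, anchored at 0 for $100 billion per year and increasing by 2 for every factor of 10 less (my reading of the scoring described above, not necessarily 80,000 Hours' exact rubric), then $300 billion would indeed come out slightly negative:

```python
import numpy as np

def crowdedness_score(annual_spending_usd):
    """Score anchored at 0 for $100 billion/year, +2 for every factor of 10 less."""
    return -2 * np.log10(annual_spending_usd / 100e9)

print(crowdedness_score(100e9))  # 0.0
print(crowdedness_score(10e9))   # 2.0
print(crowdedness_score(300e9))  # ~-0.95, i.e. slightly negative
```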

We could also be biased away from further considering causes (or actions) that have negative log scale factors that make up for them with other factors. In particular, some small acts of kindness/​decency or helping an individual could be very low in Scale, but when they’re basically free in terms of time or resources spent, like thanking people when they do things for you, they’re still worth doing. However, I do expect that writing an article defending small acts of kindness probably has much less impact than writing one defending one of the typical EA causes, and may even undermine EA if it causes people to focus on small acts of kindness which aren’t basically free. $1 is not free, and is still generally better given to an EA charity, or to make doing EA more sustainable for you or someone else.

Furthermore, when you narrow the scope of a cause area further and further, you expect the Scale to decrease and the Neglectedness to increase. At the level of individual decisions, the log scale Scale score could often be negative. At the extreme end,

  • New cause area: review a specific EA article and provide feedback to the author.

  • New cause area: donate this $1 to the Against Malaria Foundation.


3. Neglecting the possibility that work on a cause could do more harm than good, if the possibility of undefined logarithms isn’t considered.

In this case, where the argument is negative (or 0), the logarithm is not defined. If people are incorrectly taking expectations after taking logarithms instead of before, as in 1, then they should run into undefined values whenever there's any probability of net harm. The fact that we aren’t seeing them could be a sign that they aren’t seriously considering the possibility of net harm. If log scale values are incorrectly assumed to always be defined, this is similar to the range bias in 2, and could bias our estimates upwards.

On the other hand, if they are correctly taking expectations before logarithms, then since the logarithm is undefined if and only if its argument is negative (or 0), an undefined value would mean that work on the cause does more harm than good in expectation, and we wouldn’t want to pursue it anyway. So, as above, if log scale values are incorrectly assumed to always be defined, this may also prevent us from considering the possibility that work on a cause does more harm than good in expectation, and could also bias our estimates upwards.
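A minimal sketch with made-up numbers: if there's enough probability of harm that the expected linear-scale value is negative, the log score simply isn't defined:

```python
import numpy as np

# Hypothetical distribution over the linear-scale value of working on a cause:
# a 30% chance of substantial harm and a 70% chance of a modest benefit.
probs = np.array([0.3, 0.7])
values = np.array([-1000.0, 100.0])

expected_value = np.sum(probs * values)
print(expected_value)          # -230.0: net harm in expectation
print(np.log(expected_value))  # nan (with a warning): the log score is undefined
```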

Note: I’ve rewritten this section since first publishing this post on the EA Forum to consider more possibilities of biases.


Bonus: Ignoring correlations between factors.

Summary: When there’s uncertainty in the factors and they correlate positively, we may be underestimating the marginal cost-effectiveness. When there’s uncertainty in the factors and they correlate negatively, we may be overestimating the marginal cost-effectiveness.

What we care about is the expected value of the product of the linear scale factors, $E[\text{Scale} \times \text{Neglectedness} \times \text{Solvability}]$, but the expected value of the product is not in general equal to the product of the expected values, i.e. in general

$E[\text{Scale} \times \text{Neglectedness} \times \text{Solvability}] \neq E[\text{Scale}] \times E[\text{Neglectedness}] \times E[\text{Solvability}].$

They are equal if the terms are uncorrelated, e.g. independent. If it's the case that the higher in Scale a problem is, the lower in Solvability it is, i.e. the better it is to solve, the harder it is to make relative progress, we might overestimate the overall score. As an example, let $X$ be a uniform random variable on $[0, 1]$, and $Y = 1 - X$, also uniform over the same interval, but perfectly anticorrelated. Then we have

$E[XY] = \frac{1}{2} - \frac{1}{3} = \frac{1}{6} < \frac{1}{4} = E[X]\,E[Y].$
Of course, this is less than a factor of 2 in the linear scale, so less than a difference of 1 on the log scale.
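A quick Monte Carlo check of the example above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 1_000_000)
y = 1 - x  # perfectly anticorrelated, also uniform on [0, 1]

print(np.mean(x * y))                              # ~1/6
print(np.mean(x) * np.mean(y))                     # ~1/4: overestimates E[XY]
print((np.mean(x) * np.mean(y)) / np.mean(x * y))  # ~1.5, less than a factor of 2
```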

(EDIT: everything below was added later.)

However, the quotient $\frac{E[X]\,E[Y]}{E[XY]}$ can be arbitrarily large (or arbitrarily small), so there's no constant upper bound on how wrong we could be. For example, let $X = \frac{1}{p}$ and $Y = p$ with probability $p$, and $X = p$ and $Y = \frac{1}{p}$ with probability $1 - p$. Then,

$E[X]\,E[Y] = \left(1 + p(1 - p)\right)\left(p^2 + \frac{1 - p}{p}\right) > 1 = E[XY],$

and the quotient goes to $\infty$ as $p$ goes to $0$, since the left-hand side goes to $\infty$ and the right-hand side stays at $1$.
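And a quick numerical check of this two-point example, showing the quotient growing without bound as $p$ shrinks:

```python
import numpy as np

def quotient(p):
    # X = 1/p, Y = p with probability p; X = p, Y = 1/p with probability 1 - p.
    e_x = p * (1 / p) + (1 - p) * p
    e_y = p * p + (1 - p) * (1 / p)
    e_xy = p * ((1 / p) * p) + (1 - p) * (p * (1 / p))  # always 1
    return e_x * e_y / e_xy

for p in [0.5, 0.1, 0.01, 0.001]:
    print(p, quotient(p))  # grows roughly like 1/p as p -> 0
```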

On the other hand, if the distributions are more concentrated over the same interval, the gap is lower. Comparing to the first example, let $X$ have probability density function $6x(1 - x)$ over $[0, 1]$ and again $Y = 1 - X$. Then, we have:

$E[XY] = \frac{1}{5} < \frac{1}{4} = E[X]\,E[Y],$

a ratio of $1.25$ rather than $1.5$.
With anticorrelation, I think this would bias us towards higher-risk higher-reward causes.

On the other hand, positive correlations lead to the opposite bias and underestimation. If $Y = X$, identically, and $X$ is uniform over $[0, 1]$, we have:

$E[XY] = E[X^2] = \frac{1}{3} > \frac{1}{4} = E[X]\,E[Y].$

The sign of the difference between $E[XY]$ and $E[X]\,E[Y]$ is the same as the sign of the correlation, since we have for the covariance between $X$ and $Y$,

$\mathrm{Cov}(X, Y) = E[XY] - E[X]\,E[Y],$

and the correlation is

$\mathrm{Corr}(X, Y) = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y},$

where $\sigma_X = \sqrt{\mathrm{Var}(X)}$ and $\sigma_Y = \sqrt{\mathrm{Var}(Y)}$ are the standard deviations.

So, as a general trend, the more uncertainty in the factors, the larger the biases can be.


Possible examples.

One example could be that we don’t know the exact Scale of wild animal suffering, in part because we aren’t sure which animals are actually sentient, and if it does turn out that many more animals are sentient than expected, that might mean that relative progress on the problem is harder. It could actually turn out to be the opposite, though; if we think we could get more cost-effective methods to address wild invertebrate suffering than wild vertebrate suffering (invertebrates are generally believed to be less (likely to be) sentient than vertebrates, with a few exceptions), then the Scale and Solvability might be positively correlated.

Similarly, there could be a relationship between the Scale of a global catastrophic risk or x-risk and its Solvability. If advanced AI can cause value lock-in, how long the effects last might be related to how difficult it is to make relative progress on aligning AI, and more generally, how powerful AI will be is probably related to both the Scale and Solvability of the problem. How bad climate change or a nuclear war could be might be related to its Solvability, too, if worse risks are relatively more or less difficult to make progress on.


Some independence?

It might sometimes be possible to define a cause, its scope and the three factors in such a way that Scale and Neglectedness are independent (and uncorrelated), or at least independent (or uncorrelated) given Solvability. For example, in the case of wild animal suffering, we should include all funding towards invertebrate welfare towards Neglectedness even if it turns out to be the case that invertebrates aren’t sentient. However, Solvability is defined in terms of the other two factors, so should not in general be expected to be independent from or uncorrelated with either. The independence of Scale and Neglectedness given Solvability and the law of iterated expectations allow us to write

$E[S \cdot N \cdot T] = E\big[E[S \cdot N \cdot T \mid T]\big] = E\big[E[S \mid T]\, E[N \mid T]\, T\big],$

where $S$, $N$ and $T$ denote the linear scale Scale, Neglectedness and Solvability, respectively.
However, Neglectedness should, in theory, consider future work, too, and the greater in Scale or Solvability a problem is, the more resources we might expect to go to it in the future, so Neglectedness might actually be anticorrelated with the other two factors.

This is something to keep in mind for marginal cost-effectiveness analysis, too.


Bounding the error.

Without actually calculating the covariance and thinking about how the factors may depend on one another, we have the following bound on the difference in terms of the variances (by the Cauchy-Schwarz inequality, which is also why correlations are bounded between −1 and 1):

$\big|E[XY] - E[X]\,E[Y]\big| = \big|\mathrm{Cov}(X, Y)\big| \leq \sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)},$

and this bounds the bias in our estimate. We can further divide each side by $E[X]\,E[Y]$ to get

$\left|\frac{E[XY]}{E[X]\,E[Y]} - 1\right| \leq \frac{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}}{E[X]\,E[Y]},$

and so

$1 - \frac{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}}{E[X]\,E[Y]} \leq \frac{E[XY]}{E[X]\,E[Y]} \leq 1 + \frac{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}}{E[X]\,E[Y]},$

which bounds our bias as a ratio without considering any dependence between $X$ and $Y$. Taking logarithms bounds the difference $\log(E[XY]) - \log(E[X]\,E[Y])$.
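As a quick numerical check of this ratio bound, here's a toy example with two made-up, negatively correlated distributions (the bound holds, though it can be loose):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)
y = 1 / x  # strongly negatively correlated with x

ratio = np.mean(x * y) / (np.mean(x) * np.mean(y))
bound = np.sqrt(np.var(x) * np.var(y)) / (np.mean(x) * np.mean(y))

print(ratio)                 # ~0.37: the expected product is well below the product of expectations
print(1 - bound, 1 + bound)  # Cauchy-Schwarz interval containing the ratio (wide here)
```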

Other bounds can be derived using the maximum values (or essential suprema) and minimum values (or essential infima) of $X$ and $Y$, or Hölder's inequality, which generalizes the Cauchy-Schwarz inequality. For example, assuming $Y$ is always positive (or nonnegative), because $X \leq \max(X)$, we have

$E[XY] \leq \max(X)\,E[Y],$

and so, dividing by $E[X]\,E[Y]$,

$\frac{E[XY]}{E[X]\,E[Y]} \leq \frac{\max(X)}{E[X]}.$

The inequality is reversed if $\max(X)$ is replaced by $\min(X)$.

Similarly, if $Z$ and the other two factors are always positive (or nonnegative, with positive expected values), then

$E[XYZ] \leq \max(X)\,\max(Y)\,E[Z],$

and so, dividing by $E[X]\,E[Y]\,E[Z]$,

$\frac{E[XYZ]}{E[X]\,E[Y]\,E[Z]} \leq \frac{\max(X)\,\max(Y)}{E[X]\,E[Y]}.$

And again, the inequality is reversed if the maxima are replaced by minima.

Here, we can take $Z$ to be any of Scale, Neglectedness or Solvability, and $X$ and $Y$ to be the other two. So, if at most one factor ranges over multiple orders of magnitude, then taking the product of the expected values should be within a few orders of magnitude of the expected value of the product, since we can use the above bounds with $Z$ as the factor with the widest log scale range.
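As a rough numerical illustration of this (with made-up toy distributions: two factors confined to about one order of magnitude, and a third, correlated factor spanning six), the ratio of the expected product to the product of expectations stays within bounds that depend only on the two narrow factors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

latent = rng.uniform(-1, 1, n)
x = 10 ** rng.uniform(0, 1, n)  # narrow factor: between 1 and 10, independent
y = 10 ** (0.5 * latent)        # narrow factor: between ~0.32 and ~3.2
z = 10 ** (3.0 * latent)        # wide factor: six orders of magnitude, correlated with y

ratio = np.mean(x * y * z) / (np.mean(x) * np.mean(y) * np.mean(z))
lower = (x.min() * y.min()) / (np.mean(x) * np.mean(y))
upper = (x.max() * y.max()) / (np.mean(x) * np.mean(y))
print(lower, ratio, upper)  # the ratio stays between bounds set by the two narrow factors
```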

With some care, I think similar bounds can be derived when the factors are not always positive (or nonnegative), i.e. can be negative.

Finally, if one of the three factors, say $Z$, is independent of the other two (or of the product of the other two), then we have $E[XYZ] = E[Z]\,E[XY]$, and it suffices to bound $\frac{E[XY]}{E[X]\,E[Y]}$, since

$\frac{E[XYZ]}{E[X]\,E[Y]\,E[Z]} = \frac{E[XY]}{E[X]\,E[Y]}.$