Make your own cost-effectiveness Fermi estimates for one-off problems
In some recent work (particularly this article) I built models for estimating the cost effectiveness of work on problems when we don’t know how hard those problems are. The estimates they produce aren’t perfect, but they can get us started where it’s otherwise hard to make comparisons.
Now I want to know: what can we use this technique on? I have a couple of applications I am working on, but I’m keen to see what estimates other people produce.
There are complicated versions of the model which account for more factors, but we can start with a simple version. This is a tool for initial Fermi calculations: it’s relatively easy to use but should get us around the right order of magnitude. That can be very useful, and we can build more detailed models for the most promising opportunities.
The model is given by:

    p · B / (R(0) · log(y/z))

where log denotes the natural logarithm.
This expresses the expected benefit of adding another unit of resources to solving the problem. You can denominate the resources in dollars, researcher-years, or another convenient unit. To use this formula we need to estimate four variables:
- R(0) denotes the current resources going towards the problem each year. Whatever units you measure R(0) in, those are the units we’ll get an estimate for the benefit of. So if R(0) is measured in researcher-years, the formula will tell us the expected benefit of adding a researcher-year.
  - You want to count all of the resources going towards the problem. That includes the labour of those who work on it in their spare time, and some weighting for the talent of the people working in the area (if you doubled the budget going to an area, you couldn’t get twice as many people who are just as good; ideally we’d use an elasticity here).
  - Some resources may be aimed at something other than your problem, but be tangentially useful. We should count some fraction of those, according to how many resources devoted entirely to the problem they seem equivalent to.
- B is the annual benefit that we’d get from a solution to the problem. You can measure this in its own units, but whatever you use here will be the units of value that come out in the cost-effectiveness estimate.
- p and y/z are parameters that we will estimate together: p is the probability of getting a solution by the time y resources have been dedicated to the problem, given that z resources have been dedicated so far. Note that we only need the ratio y/z, so we can estimate this directly.
  - Although y/z is hard to estimate, we only take its (natural) logarithm, so don’t worry too much about making this term precise.
  - I think it will often be best to use middling values of p, perhaps between 0.2 and 0.8.

And that’s it.
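If you want to automate the arithmetic, the formula is a one-liner. Here is a minimal Python sketch (the function and argument names are my own, not from the original model write-up):

```python
import math

def marginal_benefit(p, B, R0, y_over_z):
    """Expected benefit of one extra unit of resources,
    under the simple model: p * B / (R(0) * ln(y/z))."""
    return p * B / (R0 * math.log(y_over_z))
```

Whatever units you feed in for R0 and B come straight back out: dollars of R0 and dollars of B give benefit per marginal dollar.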
Example: How valuable is extra research into nuclear fusion? Assume:
- R(0) = $5 billion (after a quick google turns up $1.5B for current spending, and adjusting upwards to account for non-financial inputs);
- B = $1000 billion (guesswork; a bit over 1% of the world economy, and a fraction of the current energy sector);
- There’s a 50% chance of success (p = 0.5) by the time we’ve spent 100 times as many resources as today (log(y/z) = log(100) ≈ 4.6).
Putting these together would give an expected societal benefit of (0.5*$1000B)/(5B*4.6) = $22 for every dollar spent. This is high enough to suggest that we may be significantly under-investing in fusion, and that a more careful calculation (with better-researched numbers!) might be justified.
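As a sanity check on the arithmetic, here are the same numbers run in Python (just the plain calculation, nothing beyond the standard library):

```python
import math

# Fusion example: p = 0.5, B = $1000B, R(0) = $5B, y/z = 100.
p, B, R0 = 0.5, 1000e9, 5e9
benefit_per_dollar = p * B / (R0 * math.log(100))
print(round(benefit_per_dollar, 1))  # 21.7, i.e. roughly $22 per dollar
```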
Caveats
To get the simple formula, the model made a number of assumptions. Since we’re just using it to get rough numbers, it’s okay if we don’t fit these assumptions exactly, but if they’re totally off then the model may be inappropriate. One restriction in particular I’d want to bear in mind:
-
It should be plausible that we could solve the problem in the next decade or two.
It’s okay if this is unlikely, but I’d want to change the model if I were estimating the value of e.g. trying to colonise the stars.
Request for applications
So—what would you like to apply this method to? What answers do you get?
To help structure the comment thread, I suggest attempting only one problem in each comment. Include the value of p, and the units of R(0) and units of B that you’d like to use. Then you can give your estimates for R(0), B, and y/z as a comment reply, and so can anyone else who wants to give estimates for the same thing.
I’ve also set up a google spreadsheet where we can enter estimates for the questions people propose. For the time being anyone can edit this.
Have fun!
A research area with a great deal of uncertainty but potentially high payoff is anti-ageing medicine. But how good is it to put more resources into?
To be concrete, let’s look at the problem of being able to stop a majority of the ageing processes in cells. Let’s:
- Measure R(0) (current resources for the area) in $
- Measure B (annual benefits) in QALYs
- Take p = 0.2
So the estimate for y/z should be how many times historical efforts to solve the problem we’ll need before there’s a 20% total chance of success.
I think this is a particularly uncertain problem in various ways: our error bars on estimates are likely to be large, and the model is not a perfect fit. But it’s also a good example of how we might begin with really no idea about how cost-effective we should think it is, and so produce a first number which can be helpful.
My estimates.
R(0): The SENS Foundation has an annual budget of around $4m, plus extra resources in the form of labour. Stem cell research has a global annual budget probably in the low billions, although it’s not all directly relevant. Some basic science may be of relevance, but this is likely to be fairly tangential. Overall I will estimate $1b here, although this could be out by an order of magnitude in either direction.
B: Around 100m people die every year. It’s unclear exactly what the effects of success would be on this figure, but providing a quarter of them with an extra 10 years of life seems conservative but not extremely so. So I estimate 250m QALYs/year.
y/z: Real head-scratching time. I think 10 times historical resources wouldn’t get us up to a 20% chance of success, but 10,000 times historical resources would be more than enough. I’m going to split the difference and say 300.
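Plugging these estimates into the simple formula (my own arithmetic, not part of the original comment) gives a rough cost per QALY:

```python
import math

# Estimates above: p = 0.2, B = 250m QALYs/year, R(0) = $1B, y/z = 300.
p, B, R0, y_over_z = 0.2, 250e6, 1e9, 300
qalys_per_dollar = p * B / (R0 * math.log(y_over_z))
cost_per_qaly = 1 / qalys_per_dollar
print(round(cost_per_qaly))  # ~114 dollars per QALY
```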
Anti-aging seems like a plausible area for effective altruists to consider giving to, so thank you for raising this thought. It looks like GiveWell briefly looked into this area before deciding to focus its efforts elsewhere.
I’ve seen a few videos of Aubrey de Grey speaking about how SENS could make use of $100 million per year to fund research on rejuvenation therapies, so presumably SENS has plenty of room for more funding. SENS’s Form 990 tax filings show that the organization’s assets jumped by quite a lot in 2012, though this was because of de Grey’s donations during this year, and though I can’t find SENS’s Form 990 for 2013, I would naively guess that they’ve been able to start spending the money donated in 2012 during the last couple of years. I still think that it would be worthwhile to ask someone at SENS where the marginal donation to the foundation would go in the short term—maybe a certain threshold of donations needs to be reached before rejuvenation research can be properly begun in the most cost-effective way.
I agree with Aubrey that too much money is spent researching cures to specific diseases, relative to the amount spent researching rejuvenation and healthspan-extension technology. I’ve focused this response on SENS because, as a person with a decent science background, I feel like Aubrey’s assertion that (paraphrased from memory) “academic research is constrained in a way that rewards low expected value projects which are likely to yield results quickly over longer term, high expected value projects” is broadly true, and that extra research into rejuvenation technologies is, on the margin, more valuable than extra research into possible treatments for particular diseases.
Thanks, lots of useful things here. I absolutely agree with your last paragraph.
I agree that looking more carefully at SENS would be the right move for a deeper investigation of the area. I think before that step it’s worth having some idea of roughly how valuable the area is (which is what I was very crudely doing).
I don’t put too much stock in the particular numbers I produced here. They make anti-ageing look just slightly less promising than the best direct health interventions we know of (hence indeed better than a lot of medical research), but the previous time I came up with numbers for this problem—for a conference talk—I must have been in a more optimistic mood, because my estimate was a couple of orders of magnitude better. I wouldn’t be surprised if the truth is somewhere in the middle.
I would like to see more people provide estimates, even if not carefully justified, as I think we can get some wisdom of the crowds coming through, and to understand which figures are the most controversial or would benefit most from careful research.
I really like this work. I was originally going to mention a lack of consideration of the time value of money, but when I started reading the linked-to paper I realized it was your starting point.
Have you thought much about how to use it for x-risk reduction work? I assume that x-risks may be quite different than the examples you specified, with near-infinite values of B.
Yes, in fact I’ll be releasing an article soon which applies this framework to prioritisation of different methods for reducing existential risk.
We’ll need some extra tools to compare between reducing existential risk and other goods. This is something I hope to come back to.
Stumbling on this today: did this article ever get published? Would be keen to read it.
I’m not certain I remember what I was referring to here, but my best guess is that it was this article: https://forum.effectivealtruism.org/posts/HENbwrDYnTktRtNdE/report-allocating-risk-mitigation-across-time
Thanks!
It’s been argued that open borders could give a massive boost to the world economy. Let’s see what happens if we try to apply this cost-effectiveness model to the problem of successfully lobbying for open borders. (The lack of inevitability means the model is a worse fit here than for research problems, but it still seems a reasonable approximation.)
We’ll look at the problem of getting to a political situation which permits emigration levels of 5% of the population of poor countries. This might produce a 20% increase in the population of rich countries.
Let’s:
- Measure R(0) in $; remember to count any free press or similar that the cause gets.
- Measure B in $
- Take p = 0.3
My estimates:
R(0): I’m very unsure. It seems like it’s at least in the hundreds of millions of dollars, and not higher than tens of billions of dollars. So I will guess $5 billion. I’ve put little research into this and this number could easily update a lot.
B: Based on some of the estimates in this paper, emigration of 5% might add in the region of $2.5 trillion to the world economy.
y/z: If $1,000 of resources were dedicated annually to this for each of the ~2 billion people living in the rich world, I’d be happy that there was a significant chance of success. So I’ll estimate that y/z = 2000/5 = 400 (roughly $2 trillion per year against the current $5 billion).
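Combining these estimates under the post’s formula (my own arithmetic) gives roughly $25 of benefit per marginal dollar:

```python
import math

# Estimates above: p = 0.3, B = $2.5 trillion, R(0) = $5B, y/z = 400.
p, B, R0, y_over_z = 0.3, 2.5e12, 5e9, 400
benefit_per_dollar = p * B / (R0 * math.log(y_over_z))
print(round(benefit_per_dollar))  # ~25 dollars of benefit per dollar spent
```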