Math Pedantic/Computer Science Honours interested in Cause Prioritization. Currently working with QURI on Squiggle and Pedant.
SamNolan
Quantifying Uncertainty in GiveWell’s GiveDirectly Cost-Effectiveness Analysis
Quantifying Uncertainty in GiveWell Cost-Effectiveness Analyses
Type Checking GiveWell’s GiveDirectly Cost Effective Analysis
Pedant, a type checker for Cost Effectiveness Analysis
Have your community notified of new EA Jobs
A list of technical EA projects
A Prototype Application for allocating people to Effective Projects
Hello! My goodness I love this! You’ve really written this in a super accessible way!
Some citations: I have previously Quantified the Uncertainty in the GiveDirectly CEA (using Squiggle). I believe the Happier Lives Institute has done the same thing, as has cole_haus, who didn’t do an analysis but built a framework for uncertainty analysis (much like I think you did). I just posted a simple example of calculating the Value of Information on GiveWell models. There’s also a question about why GiveWell doesn’t quantify uncertainty.
My partner Hannah currently has a grant where she’s working on quantifying the uncertainty of other GiveWell charities using techniques similar to mine, starting with New Incentives. Hopefully, we’ll have fruit to show for other GiveWell charities! There is a lot of interest in this type of work.
I’d love to chat with you (or anyone else interested in uncertainty quantification) about current methods and how we can improve them. You can book me on Calendly. I’m still learning a lot about how to do this sort of thing properly, mainly by trying, so I would love to have a chat about ways to improve.
A General Treatment of the Moral Value of Information
An engineer’s approach to personal finance for effective altruists
Value of Information, an example with GiveDirectly
[Question] How many chickens are being saved by corporate campaigns?
Sorry for the late comment, but I was wondering:
We think the 2018 FP estimate of 10 hen-years/$ is likely a slight underestimate. Across the different tabs on the spreadsheet, we model four scenarios: 1, 10, 30 and 100 hen-years affected per dollar.
Why do you think it’s an underestimate?
My Hecking Goodness! This is the coolest thing I have seen in a long time! You’ve done a great job! I am literally popping with excitement and joy. There’s a lot you can do once you’ve got this!
I’ll have to go through the model with a finer comb (and look through Nuno’s recommendations) and probably contribute a few changes, but I’m glad you got so much utility out of using Squiggle! I’ve got a couple of ideas on how to manage the multiple demographics problem, but honestly I’d love to have some chats with you about next steps for these models.
That’s true! $\eta$ could easily be something other than 1.5. In London, it was found to be 1.5; across 20 OECD countries, it was found to be about 1.4. James Snowden assumes 1.59.
I could, but currently don’t, represent $\eta$ with actual uncertainty! This could be an improvement.
Would love to! I’m in communication to set up an EA Funds grant to continue building these for other GiveWell charities. I’d also like to do this with ACE, but I’ll need to talk with them about it first.
Hey! Love the post. Just putting my comments here as they go.
Tl;dr: This seems to be a special case of the more general theory of Value of Information. There’s a lot to be said about value of information, and there are a couple of parameter choices I would question.
The EA Forum supports both Math and Footnotes now! Would be lovely to see them included for readability.
I’m sure you’re familiar with Value of Information. It has a tag on the EA Forum. It seems as if you have presupposed the calculations around value of information (for instance, you have given a probability $p$ and a better-than-top-charity ratio $n$, both of which can be explicitly calculated with Value of Information). The rest of the calculations seem valid and interesting.
For instance, when the total budget is 1 billion dollars, then this equation entails that a research project that costs 1 million dollars (c/(m-c)=0.001) is worth funding if it has at least a 1% chance (p=0.01) of producing an intervention that is at least 10% more cost-effective (n=1.1) than the best existing intervention. This is a surprisingly low bar relative to how hard it is to get funding for EA-aligned academic research
I might be wrong, but I think this assumes that this is the only research project happening. I could easily assume that EA spends more than 0.1% of its resources on identifying/evaluating new interventions. Although, I don’t yet know how to do the math with multiple research projects; it’s currently a bit beyond me.
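As a sanity check on the quoted arithmetic, here’s a small sketch. I’m assuming the funding condition is $p(n-1) \ge c/(m-c)$, which is my reading of the quoted example rather than anything stated explicitly:

```python
# Hypothetical reading of the quoted bar: research is worth funding
# when p * (n - 1) >= c / (m - c). The smallest p that clears it:
def min_success_probability(n, c, m):
    """Smallest success probability p for which research clears the bar."""
    return (c / (m - c)) / (n - 1)

m = 1_000_000_000  # total budget: 1 billion dollars
c = 1_000_000      # research cost: 1 million dollars
n = 1.1            # intervention 10% more cost-effective than the best

p_min = min_success_probability(n, c, m)
print(round(p_min, 4))  # roughly 0.01, matching the "1% chance" in the quote
```

Under this reading, the quoted numbers check out: a 1% success chance is almost exactly the break-even point.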
There’s a common bias to choose numbers within a fairly narrow, comfortable range, and that may bias this investigation. For instance, this happened to me when I calculated the value of information on GiveDirectly. If you are unsure about whether a charity is cost-effective, often the tails of your certainty can drop fast.
Your “lower bound” is entirely of your own construction. It’s derived from your declaration at the start that investing $c$ dollars into the research generates an intervention that is at least $n$ times as effective as the best existing intervention with probability $p$. If I were to call your construction the “minimum value of information”, it’s also possible to calculate the “expected value of [perfect|imperfect] information”, which I feel might be a more useful number. Guesstimate can do this as well; I could provide an example if you’d like.
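To show what I mean, here’s a toy Monte Carlo sketch of the expected value of perfect information. All the numbers are made up for illustration, not taken from your post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: choose between a "safe" option with known value per dollar
# and an uncertain new intervention (distribution chosen arbitrarily).
safe_value = 1.0
new_value = rng.lognormal(mean=-0.2, sigma=0.5, size=100_000)

# Without more information, we pick whichever is better in expectation.
value_now = max(safe_value, new_value.mean())

# With perfect information, we would learn new_value first, then pick
# the better option in every state of the world.
value_informed = np.maximum(safe_value, new_value).mean()

evpi = value_informed - value_now
print(evpi)  # value (per dollar) of learning the truth before deciding
```

The EVPI is always non-negative, and it’s an upper bound on what any real (imperfect) research project is worth, which is why I find it a more natural quantity than a constructed lower bound.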
We have to remember that we are still uncertain about the cost-effectiveness of the new intervention, which means it would need to be expected to be more cost-effective even after considering all priors. This may increase or decrease $p$. However, this is probably irrelevant to the argument.
Amusingly, we seem to come at this from two very different angles: I have a bias that I’d like EA to spend less on research (or less on research in specific directions), and you’re here to try and convince EA to spend more on research! Love your work; I’ll get onto your next post and we’ll chat soon.
Thank you so much for the post! I might communicate it as:
People are asking the question “How much money do you have to donate to get an expected value of 1 unit of good?”, which could be formulated as:

$E[U(x)] = 1$

where $x$ is the amount you donate and $U(x)$ is the amount of utility you get out of it.

In most cases, this is linear, so $U(x) = Cx$. And $E[U(x)] = E[C]\,x$.

Solving for $x$ in this case gets $x = 1/E[C]$, but the mistake is to solve it and get $x = E[1/C]$.
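To illustrate (assuming my reading is right that the mistake being described is confusing $1/E[C]$ with $E[1/C]$), here’s a quick Monte Carlo sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# C is an uncertain cost-effectiveness in units of good per dollar
# (distribution chosen arbitrarily for illustration).
C = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)

correct = 1 / C.mean()    # solve E[C] * x = 1 for x
mistake = (1 / C).mean()  # averaging dollars-per-unit instead

print(correct, mistake)
```

By Jensen’s inequality the two always differ for a genuinely uncertain $C$, with $E[1/C] \ge 1/E[C]$, so the mistaken version overstates the donation needed.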
Please correct me if this is a bad way to formulate the problem! Can’t wait to see your future work as well
Because this comes up when googling street outreach, as President of EA Melbourne (the EA group that ran the above-mentioned event), I’d love to tell you how it went.
Interestingly, members of the public seem open to the ideas of effective altruism. However, the conversion rate is truly tiny: no one we spoke to that day came to any future event. In the end, we decided that this was not a worthwhile activity.
Some interesting notes however:
People, especially in the current political climate (referring to Russia invading Ukraine here), are actually quite supportive of longtermist ideas! This is probably because longtermist problems are the only ones of these that people in developed nations face directly (in comparison to Animal Welfare and Global Health and Development). We ran a giving game between animal welfare, global poverty and longtermist causes, and the money was spread fairly evenly.
Almost no one puts any thought into where their money goes, although this may just be because they didn’t want to strike up a conversation with a stranger. Many people went with whatever seemed like a good idea, sometimes confusing cause areas (for instance, thinking “Global Health” is about environmentalism, or that “Animal Welfare” is about helping pets).
As is usual with street outreach, younger people are much more open to discussion.
Hello! Thanks for showing interest in my post.
First of all, I don’t represent GiveWell or anyone else but myself, so all of this is more or less speculation.
My best guess as to why GiveWell does not quantify uncertainty in their estimates is that the technology to do so is still somewhat primitive. The most mature candidate I see is Causal, but even then it’s difficult to see how one might do something like run multiple parallel analyses of the same program in different countries. GiveWell has a lot of requirements that their host platform needs to meet. Google Sheets has the benefit that it can be used, understood, and edited by anyone. I’m currently working on Squiggle with QURI to sweeten the deal for quantifying uncertainty explicitly, but there’s a long way to go before it becomes something as readily understood and trusted to be stable as Google Sheets.
On a second note, I would also say that providing lower and upper estimates of cost-effectiveness for its top charities wouldn’t actually be that valuable, in the sense that it doesn’t influence any real-world decisions. I decided to spend hours making the GiveDirectly quantification, but in truth the information gained from it directly is extremely little. The main reason I did it is that it makes a great proof of concept for fields beyond GiveWell’s that need it much more.
There are two reasons why there is so little information gained from it:
The uncertainty of GiveDirectly and other GiveWell-supported charities is not actually that high (about an order of magnitude for GiveDirectly; I expect 2–3 orders of magnitude for the others). For instance, in my quantification of uncertainty in GiveDirectly, I never expected there to be practically any probability mass on it being more effective than AMF. At least before accounting for things like moral uncertainty.
My uncertainty about my chosen uncertainties is really high. If you strip away how fancy my work looks and just look at what I’ve contributed compared to what GiveWell has done, I’ve practically copied GiveWell’s work and pulled some numbers out of thin air for uncertainty, with Nuno’s help. Some Bayesian analysis is done under questionable assumptions, etc.
I see much more value in quantifying uncertainty when we might expect the uncertainty to be much larger, for instance, when dealing with moral uncertainty, or animal welfare/longtermist interventions.