Math Pedantic/Computer Science Honours interested in Cause Prioritization. Currently working with QURI on Squiggle and Pedant.
SamNolan
Thanks a lot!
If I was to flesh this out further, it would likely involve a way of proposing EA projects that we could then curate. The form would likely be accessible via the browser, but yes, it’s currently just a very modest proof of concept.
I’ve been seeing you around and have loved some of your posts! The project is meant to try and find both highly skilled but also beginners in EA. I’m not sure what direction it needs to go in, as I kind of want to talk to the people that have proposed this idea in the past to try and get their thoughts on what it should look like. I should probably get in contact with them soon.
This is currently just a prototype, with many, many bugs. I’ve actually joined the EA CoLabs team, which is a proper application of the concepts here.
Thanks for pointing that out! I just fixed it up.
Thanks for your considerations!
Yes, I agree. I can very much add tuple-style function application, and it will probably be more intuitive if I do so; it’s just that the theory works out a lot more easily with Haskell-style functions.
It does seem to be a priority, however, so I’ve added an issue for it.
The web interface should let people write Pedant code without actually installing Pedant. Needing to install custom software is definitely a barrier.
I definitely was considering adding some form of exporting feature to Pedant at some point. I’m not sure that it’s within the current scope/roadmap of Pedant, but maybe at some point in the future!
Causal is amazing, and if I could introduce Causal into this mix, this would save a lot of my time in developing, and I would be massively appreciative. It would likely help enable many of the things I’m trying to do.
Hopefully Pedant ends up pretty much being a continuation and completion of Squiggle, that’s the dream anyway. Basically Squiggle plus more abstraction features, and more development time poured into it.
Hello Michael!
Yes, I’ve heard of Idris (I don’t know it, but I’m a fan, I’m looking into Coq for this project). I’m also already a massive fan of your work on CEAs, I believe I emailed you about it a while back.
I’m not sure I agree with you about the DSL implementation issue. You seem to be mainly citing development difficulties, whereas I would think that embedding Pedant in an existing language may put a stop to some interesting features, and it would definitely restrict the number of applications. For instance, I’m seriously considering Pedant being simply a serialization format for Causal, which would be difficult if it were embedded within an existing language.
Making a language server that checks for dimensional errors would be very difficult in a non-custom language. It may just be possible in a language like Coq or Idris, but I think Coq and Idris are not particularly user-friendly, in the sense that someone with no programming background could just “pick them up”.
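To make concrete the kind of dimensional checking I mean, here is a toy Python sketch (my own illustration, not how Pedant actually works): units are tracked as maps from base unit to exponent, multiplication adds exponents, and addition demands identical units.

```python
# Toy dimensional checker: a unit is a dict mapping base-unit name -> exponent.
# Illustrative sketch only; this is NOT Pedant's implementation.

class Quantity:
    def __init__(self, value, units):
        self.value = value
        self.units = dict(units)  # e.g. {"usd": 1, "person": -1}

    def __mul__(self, other):
        combined = dict(self.units)
        for unit, exp in other.units.items():
            combined[unit] = combined.get(unit, 0) + exp
        # Drop units whose exponents cancelled to zero.
        return Quantity(self.value * other.value,
                        {u: e for u, e in combined.items() if e != 0})

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError(f"dimension error: {self.units} + {other.units}")
        return Quantity(self.value + other.value, self.units)

cost_per_person = Quantity(4.5, {"usd": 1, "person": -1})
people = Quantity(1000, {"person": 1})
total = cost_per_person * people  # fine: "person" cancels, leaving {"usd": 1}
```

A language server would essentially run this kind of check continuously as you type and surface the dimension error as a diagnostic rather than raising it at runtime.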
I may be interested in writing your CEAs into Pedant in the future, because I find them very impressive!
Maybe, your work there is definitely interesting.
However, I don’t fully understand your project. Is it possible to refine a Cost Effectiveness Analysis from this? I’d probably need to see a worked example of your methodology before being convinced it could work.
Hey Neil,
How is this different from EA CoLabs? That team is working to connect people with projects and needs as much help as it can get. Would it be worth joining them rather than starting a new project?
Would love to! I’m in communication to set up an EA Funds grant to continue building these for other GiveWell charities. I’d also like to do this with ACE, but I’ll need to talk with them about it.
Oh no, I’ve missed this consideration! I’ll definitely fix this as soon as possible.
Now that I’ve realised this, I will remove the entire baseline consumption consideration, since projecting forward I assume GiveDirectly will just get better at selecting poor households, counteracting the fact that they should be richer. Thanks for pointing this out!
That’s true! Eta could easily be something other than 1.5. In London, it was found to be 1.5; in 20 OECD countries, it was found to be about 1.4. James Snowden assumes 1.59.
I could, but currently don’t, represent eta with actual uncertainty! This could be an improvement.
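For anyone following along: eta here is the elasticity in the standard isoelastic utility model, where the welfare weight on a household at a fraction f of reference consumption is f^(−eta). A small Python sketch (my own illustration; only the eta values 1.4, 1.5, and 1.59 come from the discussion above) shows how sensitive the weights are to the choice:

```python
# Isoelastic (CRRA) marginal utility: u'(c) = c**(-eta).
# Welfare weight on a household at fraction f of reference consumption: f**(-eta).
# The eta values 1.4, 1.5 and 1.59 are the ones mentioned above; everything
# else is my own illustration.

def welfare_weight(consumption_fraction, eta):
    return consumption_fraction ** (-eta)

# How much extra weight does a household at half the reference consumption get?
for eta in (1.4, 1.5, 1.59):
    print(f"eta = {eta}: weight = {welfare_weight(0.5, eta):.2f}")
```

Even this small range of eta values shifts the weight on a half-consumption household noticeably, which is why representing eta with a distribution could matter.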
Thank you so much for the post! I might communicate it as:
People are asking the question “How much money do you have to donate to get an expected value of 1 unit of good?”, which could be formulated as: find x such that E[U(x)] = 1,
where x is the amount you donate and U(x) is the amount of utility you get out of it.
In most cases, this is linear, so U(x) = cx for some uncertain cost-effectiveness c, and E[U(x)] = E[c] · x.
Solving for x in this case gets x = 1/E[c], but the mistake is to solve it and get x = E[1/c].
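To make the gap concrete, here is a quick Monte Carlo sketch in Python (my own illustration, with a made-up lognormal distribution for the cost-effectiveness c): 1/E[c], the correct dollars-per-expected-unit figure, differs substantially from the mistaken E[1/c] whenever c is uncertain.

```python
import random

random.seed(0)

# c: uncertain cost-effectiveness (units of good per dollar), here a
# made-up lognormal distribution purely for illustration.
cs = [random.lognormvariate(0, 1) for _ in range(100_000)]

correct = 1 / (sum(cs) / len(cs))            # 1 / E[c]
mistaken = sum(1 / c for c in cs) / len(cs)  # E[1 / c]

# By Jensen's inequality E[1/c] >= 1/E[c], with a large gap when c is spread out.
print(correct, mistaken)
```

For this distribution the mistaken figure comes out more than twice the correct one, so the two formulations give genuinely different donation advice.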
Please correct me if this is a bad way to formulate the problem! Can’t wait to see your future work as well
Haha, I came up with that example as well. You’re thinking about this in the same way I did!
I think saying that one is the “actual objective” is not very rigorous, although I say this having made that same argument myself. It does answer a valid question, “how much money should one donate to get an expected 1 unit of good?” (which is also really easy to communicate: dollars per life saved is much easier to talk about than lives saved per dollar). I’ve been thinking about it for a while and put a comment under Edo Arad’s.
As for the second point, about the simpler calculation: I agree that this is likely an error, and you have a good counterexample.
Hello! Thanks for showing interest in my post.
First of all, I don’t represent GiveWell or anyone else but myself, so all of this is more or less speculation.
My best guess as to why GiveWell does not quantify uncertainty in their estimates is that the technology to do this is still somewhat primitive. The most mature candidate I see is Causal, but even then it’s difficult to see how one might do something like maintain multiple parallel analyses of the same program in different countries. GiveWell has a lot of requirements that their host platform needs to meet, and Google Sheets has the benefit that it can be used, understood, and edited by anyone. I’m currently working on Squiggle with QURI to sweeten the deal for quantifying uncertainty explicitly, but there’s a long way to go before it becomes something that could be as readily understood, and trusted to be as stable, as Google Sheets.
On a second note, I would also say that providing lower and upper estimates for the cost-effectiveness of its top charities wouldn’t actually be that valuable, in the sense that it doesn’t influence any real-world decisions. I know that I decided to spend hours making the GiveDirectly quantification, but in truth, the information gained from it directly is extremely little. The main reason I did it is that it makes a great proof of concept for fields outside GiveWell which need it much more.
There are two reasons why there is so little information gained from it:
The uncertainty in GiveDirectly and other GiveWell-supported charities is not actually that high (about an order of magnitude for GiveDirectly; I expect 2–3 orders of magnitude for the others). For instance, in my quantification of uncertainty in GiveDirectly, I never expected there to be practically any probability mass of it being more effective than AMF, at least before accounting for things like moral uncertainty.
My uncertainty about my chosen uncertainties is really high. If you strip away how fancy my work looks and just compare what I’ve contributed to what GiveWell has done, I’ve practically copied GiveWell’s work and pulled some numbers out of thin air for the uncertainty, with the help of Nuno. Some Bayesian analysis is done under questionable assumptions, and so on.
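On the first reason: to illustrate what “practically no probability mass” can look like, here is a Python sketch with entirely made-up numbers (lognormal cost-effectiveness for each charity, roughly an order of magnitude of spread, and an assumed ~10× gap in medians; these are not GiveWell’s figures):

```python
import random

random.seed(1)
N = 100_000

# Entirely hypothetical cost-effectiveness distributions (value per dollar).
# Medians and spreads are made up: GiveDirectly median 1, AMF median ~10
# (e**2.3), each spanning roughly an order of magnitude on the log scale.
givedirectly = [random.lognormvariate(0.0, 0.6) for _ in range(N)]
amf = [random.lognormvariate(2.3, 0.6) for _ in range(N)]

p_gd_beats_amf = sum(g > a for g, a in zip(givedirectly, amf)) / N
print(p_gd_beats_amf)  # a tiny fraction: almost no mass where GiveDirectly wins
```

With spreads this size and a 10× gap in medians, the crossover probability is well under 1%, which is the sense in which the quantification adds little decision-relevant information.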
I see much more value in quantifying uncertainty when we might expect the uncertainty to be much larger, for instance, when dealing with moral uncertainty, or animal welfare/longtermist interventions.
Hello! My goodness I love this! You’ve really written this in a super accessible way!
Some citations: I have previously Quantified the Uncertainty in the GiveDirectly CEA (using Squiggle). I believe the Happier Lives Institute has done the same thing, as has cole_haus, who didn’t do an analysis but built a framework for uncertainty analysis (much like I think you did). I just posted a simple example of calculating the Value of Information on GiveWell models. There’s also a question about why GiveWell doesn’t quantify uncertainty.
My partner Hannah currently has a grant where she’s working on quantifying the uncertainty of other GiveWell charities using techniques similar to mine, starting with New Incentives. Hopefully, we’ll have fruit to show for other GiveWell charities! There is a lot of interest in this type of work.
I’d love to chat with you (or anyone else interested in Uncertainty Quantification) about current methods and how we can improve them. You can book me on Calendly. I’m still learning a lot about how to do this sort of thing properly, and am mainly learning by trying, so I would love to have a chat about ways to improve.
My Hecking Goodness! This is the coolest thing I’ve seen in a long time! You’ve done a great job! I am literally popping with excitement and joy. There’s a lot you can do once you’ve got this!
I’ll have to go through the model with a finer comb (and look through Nuno’s recommendations) and probably contribute a few changes, but I’m glad you got so much utility out of using Squiggle! I’ve got a couple of ideas on how to manage the multiple demographics problem, but honestly I’d love to have some chats with you about next steps for these models.
Hello!
I’ve read this article aloud for the EA Forum Podcast, in case you wanted an audio version.