Kat Woods
Co-founder of Nonlinear, Charity Entrepreneurship, Charity Science Health, and Charity Science.
Done! Hope others do the same so we can get lots of interesting, juicy data. Thanks Tom for putting this together. :)
The most recent EA survey might be a good thing to include.
Firstly, glad you guys are trying to solve this problem systematically. It looks like great work.
Secondly, you might be able to get more people to follow your recommendation if you explain the reasoning for it in a less mathematical way. Mainly because: a) Explaining ideas non-mathematically tends to make them propagate further, even in such an analytical crowd, and b) It would make it easier for people to understand and critique the idea, even if they are mathematically conversant.
Better ways to categorize or tag articles would be really helpful.
Great post. Completely agree with the general concept and have a few positive updates on the Charity Entrepreneurship front.
We are working with another team to get one of the other promising ideas from our initial CE research founded. A public post on this will come out sometime in the next month or so.
Additionally, we are in fact working on expanding the model we used at Charity Entrepreneurship for health to a much wider set of causes and crucial considerations, to end up with some charities we or others can start in broader areas. Our first post on this, which is going up publicly very soon, is on explore/exploit and optimal stopping in the context of starting charities. We also discuss multi-armed bandit problems in it.
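The forthcoming post isn't reproduced here, but the explore/exploit trade-off it mentions can be illustrated with a minimal epsilon-greedy bandit simulation. Everything below (arm values, epsilon, step count) is an illustrative assumption, not content from the post:

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, steps=1000, seed=0):
    """Simulate an epsilon-greedy agent on a multi-armed bandit.

    true_means: hidden expected payoff of each 'charity idea' (arm).
    With probability epsilon we explore a random arm; otherwise we
    exploit the arm with the best observed average so far.
    """
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon or not all(counts):
            arm = rng.randrange(len(true_means))   # explore
        else:
            arm = estimates.index(max(estimates))  # exploit best-so-far
        reward = rng.gauss(true_means[arm], 1.0)   # noisy payoff
        counts[arm] += 1
        # incremental running-average update of the arm's estimate
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return estimates, total

estimates, total = epsilon_greedy([0.1, 0.5, 0.9])
print("estimated arm values:", [round(e, 2) for e in estimates])
```

The optimal-stopping question in the post is the same trade-off in another guise: how long to keep sampling arms (researching ideas) before committing to the best one found.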
This response could be a whole post in and of itself, but briefly, there were three big reasons:
1) We thought it's generally quite hard to start an extremely effective charity, quite hard to influence pre-existing ones, and quite easy to start something ineffective. GiveWell gives even us and New Incentives only a 10-20% chance of successfully starting a charity, and I think those are relatively high rates compared to what I would expect if we had only tried to inspire others (our team already had experience founding an EA meta-charity, for example).

2) We were in a pretty good position to start something. We had a strong team that worked well together, the timing seemed right for starting a direct charity in the poverty space, and we thought this space was very high impact.

3) We figured that once we had started something ourselves, we would be much stronger mentors and know the process a lot better. We have already found this to be very true as we coach other projects through the process.

In general, I could imagine switching to a strategy that is more hands-off and tries to inspire folks in a more meta way (e.g. an incubator or heavy mentoring). If we see a few people pick up our CE ideas and take a good shot at them, the probability of us doing something like this would go up a lot.
What if you’re working on the wrong cause? Preliminary thoughts on how long to spend exploring vs exploiting.
Thanks. Good question. While researching this I did include a probability that I would be convinced of far future causes, but given the monster length of my post as is, decided not to include it. :P
My 95% confidence interval for the value of the best option actually runs from 80% as good as my current one (i.e. making a poorer decision than the one I'd previously come to) to one million times better, because I put a greater than 2.5% chance on being convinced of a far future cause. However, the bulk of my probability mass lies closer to the smaller values, averaging out to the ~500x range.
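The way a small tail probability on an enormous multiplier can dominate the average can be sketched as a simple two-component mixture. All the specific numbers below are illustrative assumptions chosen only to match the shape described, not figures from my actual model:

```python
# Two-component mixture: most probability mass on a modest improvement,
# plus a small chance of being convinced of a far future cause whose
# value is vastly higher. All inputs here are illustrative assumptions.
p_far_future = 0.03          # a bit above the stated >2.5% chance
modest_multiplier = 3        # typical improvement if no far-future shift
far_future_multiplier = 15_000  # stand-in for the mean of the heavy tail

expected = ((1 - p_far_future) * modest_multiplier
            + p_far_future * far_future_multiplier)
print(round(expected))  # lands on the order of the ~500x figure
```

The point of the sketch: the tail term contributes almost all of the expectation, which is why ruling the far-future scenarios in or out would move the value of further research so much.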
However, I think this will change dramatically over the years. I am trying to prioritize considerations that rule out large swathes of the decision space. Early in the research, I think I will be able to make some calls that rule out the higher values, narrowing my confidence interval so it no longer includes the extremely high numbers. That would lower the value of a marginal year of research quite a lot. It's hard to include in the calculations, though, and it very well might not happen; the interval may even become wider or gain a higher upper bound.
I echo Michael Plant’s sentiments. I’m glad you’re quantifying the benefits of this potential intervention.
I started looking through the CEA and thought it seemed optimistic in various ways, but then I realized I could just look at the end number and see if, even without adjustments, it beat GiveWell-recommended charities. Unfortunately it doesn't. You said that GiveWell's charities are in the range of hundreds of dollars per DALY, and that didn't gel with my memory. I looked it up: AMF is around $1,965 per life saved, equivalent to 36 DALYs, so $1,965/36 = ~$55/DALY. SCI was $1,080 per life saved, so $1,080/36 = $30/DALY. GiveDirectly is $11,663/36 = ~$324/DALY, but GiveWell recommends GiveDirectly in part because, being a very "direct" intervention, it has a lower evidentiary bar to clear. (source: https://docs.google.com/spreadsheets/d/13b_qt-G_TQtoYNznNak3_5dzvzgCSUPJnk3l5dMisJo/edit#gid=1034883018) These numbers hold up roughly against GiveWell's estimates from 2012, when they were still using DALYs.
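The conversions above can be reproduced in a couple of lines, taking the quoted cost-per-life figures and the stated assumption that one life saved is equivalent to 36 DALYs averted:

```python
# Cost-per-DALY conversions using the figures quoted above and the
# stated assumption that one life saved ~= 36 DALYs averted.
DALYS_PER_LIFE = 36

cost_per_life = {
    "AMF": 1965,
    "SCI": 1080,
    "GiveDirectly": 11663,
}

cost_per_daly = {org: cost / DALYS_PER_LIFE
                 for org, cost in cost_per_life.items()}

for org, c in cost_per_daly.items():
    print(f"{org}: ${c:.2f} per DALY")
```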
So if you are considering where to donate, your best-guess estimate is less cost-effective than GiveWell's interventions, and that's before it goes through the rigors of a GiveWell-style CEA, which would almost certainly yield less optimistic numbers, especially given the thin evidence base.
I’d like to end on a note: posting new cause areas to the EA movement is scary because it’s a critically minded bunch, so hats off to you for having the courage to do so; I commend you for it. Unfortunately, even if the cause is defended as well as it can be, it might not win compared to the existing top charities. But if nobody does this work, no new causes will ever be “discovered,” so even when it doesn’t win, this sort of work is very likely net positive in expectation. Keep trying.
The reason interventions like AR or x-risk are accepted by the EA movement (though not by all EAs) is that from a CEA perspective they do better than GiveWell's top charities. The reason many people still don't accept them is that people discount weak evidence bases differently, with some taking non-evidence-based CEAs more seriously than others. If drug policy does worse from a CEA perspective than GiveWell, AR, and x-risk, and worse from an evidence perspective than GiveWell charities, where is its advantage?
You could make a case that it's better from a metrics perspective (i.e. it prevents unhappiness from depression, whereas the DALY metric has issues, such as over-valuing the prevention of death according to many value systems), but deworming improves lives rather than preventing death, and the same goes for GiveDirectly.
As for detailed feedback on the CEA, I unfortunately just don't have the energy to go through the whole thing, and since the final number still isn't enough to make me switch from GiveWell charities, it doesn't make sense for me to dig into the details. However, one thing that jumped out at me, which others also mentioned, was the chance of the ballot measure passing. I'd recommend looking up the historical rate at which ballot initiatives pass.
This is fantastic! Thank you for writing this. I think that far too often people see a problem and say, "It's not tractable because I can't think of anything you can do about it," before they've even given it ten minutes' thought. And often causes require far more than ten minutes' thought to come up with good potential solutions!
Fair point that deworming and cash transfers increase consumption rather than directly increasing well-being, or at least that's what GiveWell's main analysis rests on. I do recall that the GiveDirectly study actually did look at subjective well-being; on page 4 (bit.ly/2B97A1Y) it reports improvements on several happiness metrics (depression, stress, happiness, and life satisfaction). However, judged on that effect alone, GiveDirectly may not be very cost-effective. I haven't investigated it much from that angle.
In terms of preventing infant mortality, it seems unlikely that losing a child wouldn't cause immense suffering to the parents, especially the mother. People often assume this wouldn't happen because parents just "get used to" babies dying, but the odds that a given child will die are actually quite low nowadays, even in the developing world. In India, where I have the most experience, mortality is measured in deaths per 1,000 live births, not per 100, because it's that rare. Additionally, because I don't think death is nearly as bad as DALYs would have it, I looked a lot into parental mourning before choosing SMS reminders. I don't have a formal write-up I can point to (though I might at some point), but my research found that most parents are depressed for around a year after losing a child, with a tail of people who never appear to recover.
If it's the metrics issue that's leading you to drug policy reform, I would recommend looking into preventing iron deficiency (through supplementation or fortification) as an alternative. It's more evidence-based, and iron deficiency causes massive unhappiness. Anecdotally, I've had friends who transformed from sad, grumpy monsters into happy, productive members of society after realizing they were deficient. Additionally, there's evidence it increases income, raises IQ if taken during pregnancy, and decreases mortality in certain circumstances, so it's pretty robust no matter which metrics you care about.
Lastly, I’ll admit that I haven’t read all of your posts / critiques of AMF’s effectiveness, so I’ll have to go and do that :)
Good question. We currently have $375k raised, and we have determined we could likely run a good-quality study for less than originally estimated, so if we continue to run our operations at the same scale, we still have around $200k of room for more funding (RFMF). However, there's a decent probability it will all be filled around July, so we are not actively seeking funding until that sum is confirmed one way or the other. We recommend that those who can afford to wait hold off until then, and fill the gap if it remains, or donate to their second choice if it doesn't. This of course depends on how urgently the second choice needs funds and a lot of individual factors, so it will vary with people's personal circumstances.
Additionally, depending on how you take donor coordination into account, it may still make sense to donate now regardless, and we are indeed still accepting funds. More donations now make it more likely that all of our RFMF is filled in July, and make the organization more robust by reducing reliance on a small number of large donors. Furthermore, other donation options are likely in a similar situation; they just might not mention it.
The best way to get the most up to date information is to contact us directly and ask for the latest.
“EA” doesn’t have a talent gap. Different causes have different gaps.
How to have cost-effective fun
Fair point. I actually generally model it as points of happiness (out of ten) per hour, but I didn't want to go into tons of detail in the post. I definitely think you could make a much more complex model of maximizing fun than I do. Especially if you find that sort of activity fun!
I think EA Forum karma isn't the best metric, because many of the people most engaged in EA don't spend much time on the forum, instead focusing on more action-relevant work for their orgs. The forum will be biased toward people interested in research and community topics rather than direct action. For example, New Incentives is a very EA-aligned org in direct poverty, but they spend most of their time delivering cash transfers in Nigeria rather than posting on the forum.
To build on your idea, though, I think forming some sort of involvement index would keep any one metric from biasing the results. Karma could go into the index, along with length of involvement, hours per week spent on EA, percent of income donated, etc.
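One simple way to build such an index is a weighted average of metrics that have each been normalized to a common scale, so a high-variance metric like karma can't swamp the rest. Everything in this sketch (the metric names, the normalization maxima, the equal weights, and the example person) is an illustrative assumption:

```python
def involvement_index(person, weights):
    """Weighted index of EA involvement. Each metric is scaled to [0, 1]
    against an assumed maximum before weighting, so no single metric
    (e.g. karma) dominates the index. All names and maxima are
    illustrative assumptions, not a proposed standard."""
    maxima = {"karma": 5000, "years_involved": 10,
              "hours_per_week": 40, "percent_donated": 100}
    score = 0.0
    for metric, weight in weights.items():
        normalized = min(person.get(metric, 0) / maxima[metric], 1.0)
        score += weight * normalized
    return score / sum(weights.values())  # overall index in [0, 1]

# Hypothetical example: equal weights, one made-up community member.
weights = {"karma": 1, "years_involved": 1,
           "hours_per_week": 1, "percent_donated": 1}
alice = {"karma": 250, "years_involved": 5,
         "hours_per_week": 20, "percent_donated": 10}
print(round(involvement_index(alice, weights), 3))
```

Capping each normalized metric at 1.0 means an outlier on one dimension (say, very high karma) can raise the index by at most that metric's weight.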
Thanks for posting this! I always love seeing people posting potential new areas of action and this certainly looks like it could be promising.
I think there are a lot of different ways to calculate impact. We generally try to follow GiveWell’s model as closely as possible as we have found them to be consistently stronger at truthful impact reporting. If we had used other meta-charities’ ways of calculating our impact, it would have been far higher (e.g. if we had included projections of future donations, or did not exclude counterfactuals as harshly as we did).
As for calculating our opportunity costs, I think this is an interesting question and a good thing to take into account. In our next impact report we will be sure to include these figures. For the time being we will put the calculations in this comment.
We based our calculations on what would have happened if we had done earning to give instead of running Charity Science. We took into account estimated earnings based on each of our ages, time worked at CS, degree level, applicable tax rates, cost of living, the percentage each individual would donate, etc. We estimate we would have donated $45,500 in the first year. The true figure may be substantially lower, as Joey and I might have needed some capacity-building to enter the for-profit sector, which would have cost time and money; it also assumes we find jobs immediately and have no extra job expenses such as clothing or travel. This gives a ratio of about 1:3 over the year, or 1:6 over the last 6 months.
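The shape of that counterfactual estimate can be sketched as follows. The $45,500 total comes from the comment above, but every input in this sketch (salaries, tax rate, cost of living, donation percentages) is a hypothetical placeholder, not the actual figures we used:

```python
# Hedged sketch of a counterfactual earning-to-give estimate.
# All inputs below are hypothetical placeholders for illustration.
def counterfactual_donation(gross_salary, tax_rate,
                            cost_of_living, percent_donated):
    """Annual donation if earning to give: a fraction of after-tax
    income above cost of living."""
    after_tax = gross_salary * (1 - tax_rate)
    disposable = max(after_tax - cost_of_living, 0)
    return disposable * percent_donated

# e.g. two co-founders with different (made-up) salaries and pledges:
total = (counterfactual_donation(65_000, 0.25, 24_000, 0.75)
         + counterfactual_donation(60_000, 0.25, 24_000, 0.90))
print(round(total))
```

The reported 1:3 ratio then falls out of dividing money moved over the year by a counterfactual total of this kind.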
In terms of replacements, I currently expect minimum-wage non-EA workers to be hired to do a large amount of the future fundraising (with either one of us staying on as ED, or just an EA-heavy board). That is also worth keeping in mind if someone wants to calculate it from that perspective.
I really like people asking these sorts of questions. I hope to see this sort of rigour and these sorts of strong questions applied to all meta-charities consistently.
-Xio