I’m a doctor working towards the dream that every human will have access to high quality healthcare. I’m a medic and director of OneDay Health, which has launched 35 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.
NickLaing
Can we help individual people cost-effectively? Our trial with three sick kids
Clean Water—the incredible 30% mortality reducer we can’t explain
Soaking Beans—a cost-effectiveness analysis
Remote Health Centers In Uganda—a cost effective intervention?
Bill Gates’ 400 million dollar bet—First Tuberculosis Vaccine in 100 years?
I understand the sentiment, but there’s a lot here I disagree with. I’ll discuss mainly one. In the case of global health, I disagree that “thoughtful people trying very hard to address a serious problem still almost always dramatically underrate the scale of technological progress.”
This doesn’t fit with the history of malaria and other infectious diseases, where the opposite has happened: optimism about technological progress has often exceeded reality.
About 60 years ago humanity was optimistic about eradicating malaria through technological progress. We had used (non-political) swamp draining and DDT spraying to massively reduce the global burden of malaria, wiping it out from countries like the USA and India. If you had run a prediction market in 1970, many malaria experts would have predicted we would have eradicated malaria by now—including potentially with vaccines. In fact it was a vibrant topic of conversation at the time, with many in the 60s believing a malaria vaccine would arrive before now.
Again in 1979 after smallpox was eradicated, if you asked global health people how many human diseases we would eradicate by 2023, I’m sure the answer would have been higher than zero—the current situation.
Many diseases have had disturbingly slow technological progress, despite decades passing and billions of dollars spent. Tuberculosis for example could be considered a better vaccine candidate than malaria, yet it took 100 years to get a vaccine which might be 50% effective at best. We still have no good point of care test for the disease.
There are also rational reasons that led us to believe that in this specific case, making a malaria vaccine would be extremely difficult or even impossible—not just because we were luddites who lacked optimism. The malaria vaccine is the first ever invented that works against a large parasite—all previous vaccines worked against bacteria and viruses—and this while we have failed to invent reliable vaccines for many diseases that should be far easier targets than malaria. In the specific case of malaria you might be correct that we underrated the scale of progress, but I don’t think this can be generalised across the global health field, and certainly not “dramatically”.
A couple of other notes. First, I don’t think anyone has mentioned that the vaccine trials have all been undertaken in the context of widespread mosquito net use. Their efficacy would be far, far worse, and maybe not even over useful thresholds, without the widespread net distributions which Effective Altruists have pushed so hard. Vaccine rollouts may have been partially made possible, or sped up, by fairly ubiquitous mosquito net use, rather than (as you seem to suggest) progress being hampered by resources diverted away from vaccine development towards nets.
I also think there are some misunderstandings about the fundamentals of malaria here as well. For example @John G. Halstead mentioned countries eradicating malaria through draining swamps, but this was only possible because they were at the edges of the malaria map, where cutting off malaria transmission is much easier. This isn’t a magic bullet closer to the equator. Draining Sub-Saharan African swamps would not wipe out malaria there (although it might improve the situation somewhat).
I don’t think you need to be mournful in this case, because:
There’s still a decent chance, even with 20/20 hindsight, that this wasn’t a failure on the EA front, given that mosquito nets may aid vaccine efficacy, and also see @Linch’s and others’ comments below.
Even if we did get this bet wrong, and money would have been better spent on vaccine development in this case, it may be an outlier, not evidence that global health people generally underestimate technological progress.
Should 80,000 hours have more near-termist career content?
DISCLAIMER (perhaps a double-edged sword): I’ve lived here in Uganda for 10 years working in healthcare.
Thanks Michael for all your efforts. I love StrongMinds and am considering donating myself. I run health centers here in Northern Uganda and have thought about getting in touch with you to see if we can use something like the StrongMinds approach in the health centers we manage. Working as a doctor here, my estimate from experience is that for perhaps between 1 in 5 and 1 in 10 of our patients, depression or anxiety is the biggest medical problem in their lives. I feel bad every year that we do nothing at all to help these people.
Point 1
First, I read a reply below that seriously doubted that improving depression could have more positive psychological effect than preventing the grief of the death of a child. On this front I think it’s very hard to make a call in either direction, but it seems plausible to me that lifting someone out of depression could have a greater effect in many cases.
Point 2
I, however, strongly disagree with your statement here about self-reporting. Sadly I think it is not a good measure, especially as a primary outcome measure.
“Also, what’s wrong with the self-reports? People are self-reporting how they feel. How else should we determine how people feel? Should we just ignore them and assume that we know best? Also, we’re comparing self-reports to other self-reports, so it’s unclear what bias we need to worry about.”
Self-reporting doesn’t work because poor people, here in Northern Uganda at least, are primed to give low marks when reporting how they feel before an intervention, and high marks afterwards—whether the intervention did anything or not. I have seen it personally here a number of times with fairly useless aid projects. I even asked people one time, after a terrible farming training, whether they really thought the training helped as much as they had reported on the piece of paper. A couple of people laughed and said something like “No of course it didn’t help, but if we give high grades we might get more and better help in future”. This is an intelligent and rational response by recipients of aid, as good reports of an intervention of course increase their chances of getting more help in future, useful or not.
Dambisa Moyo says it even better in her book “Dead Aid”, but I couldn’t find the quote. There might also be good research papers and other effective altruism posts that describe this failing of self-reporting better than I have, so apologies if this is the case.
You also said “Also, we’re comparing self-reports to other self-reports”, which doesn’t help the matter, because those who don’t get help are likely to keep scoring the survey lowly precisely because they feel they didn’t get help.
Because of this I struggle to get behind any assessment that relies on self-reporting, especially in low-income countries like Uganda where people are often reliant on aid, and desperate for more. Ironically, perhaps, I have exactly the same criticism of GiveDirectly. I think that GiveDirectly researchers should use exclusively (or almost exclusively) objective measures of improved life (hemoglobin levels, kids’ school grades, weight-for-height charts, assets at home) rather than the before-and-after surveys they do. To their credit, recent GiveDirectly research seems to be using more objective measures in their effectiveness research.
https://www.givedirectly.org/research-at-give-directly/
We can’t ignore how people feel, but we need to try and find objective ways of assessing it, especially in contexts like here in Uganda where NGOs have wrecked any chance of self-reporting being very accurate. I feel like measuring improvement in the physical aspects of depression could be a way forward. Just off the top of my head, you could measure before-and-after mental agility scores, which should improve as depression improves, or quality of sleep before and after using a smart watch or phone. Perhaps you could even use continuous body monitoring for a small number of people, as they did here:
https://www.vs.inf.ethz.ch/edu/HS2011/CPS/papers/sung05_measures-depression.pdf
Alternatively I’d be VERY interested in a head-to-head cash transfer vs StrongMinds RCT—it should be pretty straightforward, even potentially using your same subjective before-and-after scores. Surely this would answer some important questions.
A similar comparative RCT of cash transfers vs. psychotherapy was done in Kenya in 2020, and the cash transfers clearly came out on top: https://www.nber.org/papers/w28106.
Anyway, I think StrongMinds is a great idea and probably works well, to the point that I really want to use it myself in our health centers, but I don’t like the way you measure its effectiveness and therefore doubt whether it is as effective as stated here.
Thanks for all the good work!
EA and SBF on the front page of BBC… and it’s OK!
If this is true, I will update even further in the direction of the creation of Anthropic being a net negative for the world.
Amazon is a massive multinational driven almost solely by profit, which will continuously push for more and more while paying less and less attention to safety.
It surprised me a bit that Anthropic would allow this to happen.
Nice one. The success rate is quite phenomenal—especially how committed the founders are to bringing their concepts to fruition. Your biggest strength might be in selecting people even more than selecting causes.
My one slight issue with the data presentation is the use of “people reached” or “animals reached” as a headline metric. To some extent I understand using it outside of EA circles, as we know the biggest numbers sound the most impressive, but I don’t think it’s an impact metric with integrity. Basically any org that does mass media will reach millions very fast, which is great, but it doesn’t necessarily translate to impact.
Endless NGOs spend millions on fairly useless media messages here. Give me 10,000 dollars tomorrow in Uganda and I can reach 1 million people with whatever message you like—that’s not an impact measurement. What matters is the result of that message, which looks to be great with CE orgs. What sets your orgs apart is that their approach is backed by evidence and is likely to lead to real positive impact, not the fact that they can reach millions over the radio; anyone can do that!
Not the biggest deal, but I think within EA we can do better with our headline metrics.
A small question also: why no mention of Fortify Health, which I think is the CE org that has received the most funding to date and has done an amazing job?
The Happier Lives Institute have helped many people (including me) open their eyes to subjective wellbeing, and perhaps even updated us on the potential value of SWB. The recent heavy discussion (60+ comments) on their fundraising thread disheartened me. Although I agree with much of the criticism against them, the hammering they took felt at best rough and perhaps even unfair. I’m not sure exactly why I felt this way, but here are a few ideas.
(High certainty) HLI have openly published their research and ideas, posted almost everything on the forum and engaged deeply with criticism which is amazing—more than perhaps any other org I have seen. This may (uncertain) have hurt them more than it has helped them.
(High certainty) When other orgs are criticised or asked questions, they often don’t reply at all, or get surprisingly little criticism for what I and many EAs might consider poor epistemics and defensiveness in their posts (out of charity I’m not going to link to the handful I can think of). Why does HLI get such a hard time while others get a pass? Especially when HLI’s funding is less than that of many orgs that have not been scrutinised as much.
(Low certainty) The degree of scrutiny and analysis of development orgs like HLI seems to exceed that of AI orgs, funding orgs and community building orgs. This scrutiny has been intense: more than one amazing statistician has picked apart their analysis. This expert-level scrutiny is fantastic; I just wish it could be applied to other orgs as well. Very few EA orgs (at least that have posted on the forum) produce full papers with publishable-level deep statistical analysis like HLI have at least attempted to do. Does there need to be a “scrutiny rebalancing” of sorts? I would rather other orgs got more scrutiny than development orgs getting less.
Other orgs might see threads like the HLI funding-thread hammering and compare them with threads where criticised orgs don’t engage, so the thread falls off the frontpage. Orgs might reasonably decide that high degrees of transparency and engagement do them net harm rather than good. This would not be good for anyone.
Do you agree/disagree? And what could we do to make the situation better?
I disagree with fairly high confidence with this comment. “it’s shitty being a poor person in the poorest countries in the world.”
For a start, your comment here is misleading: “When you ask them how happy they are on a life satisfaction/ happiness scale, they’ll give you around 4/10”. They were not asked how happy they are; what they were asked was this:
“Please imagine a ladder, with steps numbered from 0 at the bottom to 10 at the top. The top of the ladder represents the best possible life for you and the bottom of the ladder represents the worst possible life for you. On which step of the ladder would you say you personally feel you stand at this time?” The answers they give make perfect sense: of course there are far better counterfactual lives for them, especially when they compare themselves with people from higher-income countries, but this doesn’t mean they aren’t happy. For Burkina Faso, the next graphic shows that 80% of people said they were either very happy or rather happy, which should answer the happiness question.
https://ourworldindata.org/happiness-and-life-satisfaction
Besides that, the examples you gave that I agree are “likely” in Burkina Faso are FGM and your school comment, which I think is very accurate and actually underappreciated. All the others (child marriage, stunting, mental illness) I would not consider “likely”, as their prevalence is well below 50%.
To answer your comment “you have to work out whether you think this life you’ve saved is more likely or not to be net positive”: we have worked it out, and the answer is YES, a resounding yes. Yes, your life might be worse than for people in richer countries; that’s why us global health people do what we do. But that doesn’t mean that people’s lives are “shitty”, nor that we should not speak with great care and dignity when we consider hypothetical people in low-income countries.
I’m a little confused as to why we consider the leaders of AI companies (Altman, Hassabis, Amodei etc.) to be “thought leaders” in the field of AI safety in particular. Their job descriptions are to grow the company and increase shareholder value, so their public personas and statements have to reflect that. Surely they are far too compromised for their opinions to be taken too seriously; they couldn’t make strong statements against AI growth and development even if they wanted to, because of their job and position.
The recent post “Sam Altman’s chip ambitions undercut OpenAI’s safety strategy” seems correct and important, while also almost absurdly obvious—the guy is trying to grow his company and they need more and better chips. We don’t seriously listen to big tobacco CEOs about the dangers of smoking, or oil CEOs about the dangers of climate change, or factory farming CEOs about animal suffering, so why do we seem to take the opinions of AI bosses about safety even in moderate good faith? The past is often the best predictor of the future, and the past here says that CEOs will grow their companies, while trying however possible to maintain public goodwill so as to minimise the backlash.
I agree that these CEOs could be considered thought leaders in AI in general, and in the future and potential of AI, and their statements about safety and the future are critically important practically and should be engaged with seriously. But I don’t really see the point of engaging with them as thought leaders in the AI safety discussion; it would make more sense to me to engage with intellectuals and commentators who can fully and transparently share their views without crippling levels of compromise.
I’m interested, though, to hear arguments in favour of taking their thoughts more seriously.
I appreciate the impressive epistemic humility it must have taken for one of the original and most prestigious alignment research orgs to decide that right now prioritising policy and communications work over research might be the best course to follow. I would imagine that might be a somewhat painful decision for technical people who have devoted their life to finding a technical solution. Nice one!
“Although we plan to pursue all three of these priorities, it’s likely that policy and communications will be a higher priority for MIRI than research going forward.”
This is one of the most tragically beautiful posts I think I have read on the forum. I wouldn’t usually just copy-paste quotes, but I felt some of the comments hit unusually deep. The word “wisdom” even comes to mind...
“Much though I might value the personal freedom that comes with early retirement, I struggle to come up with any moral or practical argument that suggests it is worth more than what those donations accomplished. ”
“Almost everyone I know outside EA, from my parents to my colleagues to my neighbours, is not seeking to improve the wider world with any significant fraction of their resources. They’re just getting on with their lives and trying to do right by the people they meet.”
“When I look, I see a fair amount of frivolous expenditure and minimal attention given to non-financial ways of doing good; the choice is less ‘banker who donates’ vs. ‘doctor’ and more ‘banker who donates’ vs. ‘banker’. ”
“In the face of all this there is more than a slight temptation to throw up one’s hands and say ‘Fine! You think my money is worthless? I guess I’ll keep it then; it’s definitely worth something to me.’”
-
Manifest should blow up in some unexpected way.
-
Elon Musk should announce he is giving all his money to EA causes.
-
EA should fund SBF’s appeal process.
-
Will MacAskill should launch a new cryptocurrency, “AskCoin”, where rich people buy large amounts of the cryptocurrency for the poorest people on earth, driving up the value.
Love it
-
Thanks so much for bringing this degree of honesty, openness and detail about a decision this big. As someone not deeply embroiled in the longtermist/rationalist world, your uncertainty about whether you and others are doing net harm vs good on the AI alignment front is pretty chilling. I’m looking forward to responses, hoping the picture is not quite as bleak as you paint!
One question on something I do know a little about (which could be answered in a couple of sentences, or even perhaps a link): what’s your issue with Will MacAskill as a public intellectual? I’ve watched TED talks, heard him do interviews etc., and on shallow thought he seemed to be a good advocate for EA stuff in general.
As an unsuccessful applicant, I was impressed by MCF’s straightforward, non-onerous application, quick turnaround (at least by the foundation/NGO standards I’m used to), and the great specific feedback I received as to why our initiative didn’t get funded.
I hope these initiatives go well, and the donor circle can further grow.
Great job!
Great work Charity entrepreneurship!
As a public health doctor in a low-income country, I read the initial cause areas and had quite a few concerns about implementation. Then when I read the longer summaries I saw you had thought about almost all of them, which is impressive—you’ve clearly done your homework ;)
Have a few comments
On the kangaroo care rollout front, I have four thoughts:
1. My instinct is that a generalist could well struggle with this initiative. They would be dealing at a high level with hospital management and senior staff, and without medical expertise or at least a public health background they might not be taken very seriously and could struggle to make headway. As you’ve obviously researched yourselves, and as you’ve seen in the GiveWell review, sustaining kangaroo care in facilities is extremely difficult for a range of factors. That 2014 study managed only 5% sustainable practice in 4 African countries. A few NGOs have come around to our Ugandan facilities training midwives, and we are still quite bad at it (I haven’t pushed it as hard as I should either).
2. I believe (moderate uncertainty) that cultural resistance, or even cultural norms, are an underappreciated barrier to kangaroo care. You’ll notice most of the stated barriers on GiveWell are operational/practical, not cultural. There is a bit of a myth I’ve heard that kangaroo care is “natural” or similar to “traditional practice”, which might make implementation easier. In my experience, modern-day cultural norms around birth, both at home and in healthcare, often differ wildly from kangaroo care.
3. I strongly agree with your statement that “we note that it relies on very favorable stakeholder relations and management that may not be easy to find in some contexts. It is, therefore, promising but riskier relative to other interventions in global health.”
4. Working through a partner NGO can be a good plan, but as we all know, the vast majority of NGOs are both hopelessly inefficient and not very effective. A meticulous in-country check of whether what an NGO claims to have already implemented or achieved is true is essential before considering working with them. Usually an advantage of Charity Entrepreneurship orgs (I think) is that the new NGO can do most of the intervention itself.
On the syphilis test front I have a simple logistics question/suggestion.
Might it not be cheaper and easier to just add a separate syphilis test rather than trying to do the dual test?
Doing a combined test is easier and would be the best solution assuming no resource or logistical restrictions, but it has two disadvantages:
1. The cost of the dual test can be considerably higher than the cost of separate HIV and syphilis tests combined. Stand-alone syphilis tests are very cheap. Our health centers do separate tests as part of our standard antenatal work-up: HIV, malaria and syphilis tests for all in the first trimester.
2. HIV programs are usually standardised and rigid. Convincing them to change their whole system to dual testing might be difficult or even impossible in some cases. The flipside is that if you did convince a country or HIV treatment provider to do this, the impact could be huge. Obviously some countries already do dual testing, but you wouldn’t be working there.
On the free ORS distribution front.
I really like this, and just have a couple of thoughts.
First, an important challenge might be finding the best place to trial this intervention, where there is both a high prevalence of diarrhoea and poor ORS coverage. In our OneDay Health center communities, for example, serious cases of childhood diarrhoea are now surprisingly uncommon, and not nearly as much of a problem as they were 10 years ago. And that’s in the most remote parts of Uganda. Clean water and ORS+zinc use have massively reduced the diarrhoea death burden in many places (more so than for malaria and pneumonia), so I feel this intervention needs to be especially well targeted. Not only at the country level, but focused on the more remote, underserved parts of those countries.
Second, I want to push back a bit on “An organization could also work with local producers, health officials, and stakeholders to improve product design, market awareness, availability, and use of ORS.” This sounds a little like generalist ineffective-NGO speak to me. I think this could be a real money sink with limited value. Perhaps focusing on just getting the ORS to the people who need it is a better approach.
Anyway keep up the great work! As someone “on the ground”, I’m always impressed by how realistically tractable your cause suggestions seem to be.