It currently seems likely to me that we’re going to look back on the EA promotion of bednets as a major distraction from focusing on scientific and technological work against malaria, such as malaria vaccines and gene drives.
I don’t know very much about the details of either. But it seems important to highlight how even very thoughtful people trying very hard to address a serious problem still almost always dramatically underrate the scale of technological progress.
I feel somewhat mournful about our failure on this front; and concerned about whether the same is happening in other areas, like animal welfare, climate change, and AI risk. (I may also be missing a bunch of context on what actually happened, though—please fill me in if so.)
I understand the sentiment, but there’s a lot here I disagree with. I’ll discuss mainly one.
In the case of global health, I disagree that “thoughtful people trying very hard to address a serious problem still almost always dramatically underrate the scale of technological progress.”
This doesn’t fit with the history of malaria and other infectious diseases, where the opposite has happened: optimism about technological progress has often exceeded reality.
About 60 years ago, humanity was optimistic about eradicating malaria through technological progress. We had used (non-political) swamp draining and DDT spraying to massively reduce the global burden of malaria, wiping it out in countries like the USA and India. If you had run a prediction market in 1970, many malaria experts would have predicted we would have eradicated malaria by now, potentially including with vaccines. In fact, it was a vibrant topic of conversation at the time, with many in the 60s believing a malaria vaccine would arrive before now.
Again, in 1979, after smallpox was eradicated, if you had asked global health people how many human diseases we would eradicate by 2023, I’m sure the answer would have been higher than zero—the current situation.
Many diseases have seen disturbingly slow technological progress, despite decades passing and billions of dollars spent. Tuberculosis, for example, could be considered a better vaccine candidate than malaria, yet it took 100 years to get a vaccine which might be 50% effective at best. We still have no good point-of-care test for the disease.
There are also rational reasons that led us to believe that in this specific case, making a malaria vaccine would be extremely difficult or even impossible—not just because we were Luddites who lacked optimism. The malaria vaccines are the first ever invented that work against a large parasite; all previous vaccines worked against bacteria and viruses. And this even while we have failed to invent reliable vaccines for many diseases that should be far easier than malaria. In the specific case of malaria you might be correct that we underrated the scale of progress, but I don’t think this can be generalised across the global health field, and certainly not “dramatically”.
A couple of other notes:
First, I don’t think anyone has mentioned that the vaccine trials have all been undertaken in the context of widespread mosquito net use. The vaccines’ efficacy would be far, far worse, and maybe not even over useful thresholds, without the widespread net distributions which effective altruists have pushed so hard for. Vaccine rollouts may have been partially made possible, or sped up, by fairly ubiquitous mosquito net use, rather than, as you seem to suggest, progress having been hampered by resources diverted away from vaccine development towards nets.
I also think there are some misunderstandings about the fundamentals of malaria here as well. For example, @John G. Halstead mentioned countries eradicating malaria through draining swamps, but this was only possible because they were at the edges of the malaria map, where cutting off malaria transmission is much easier. This isn’t a magic bullet closer to the equator. Draining Sub-Saharan African swamps would not wipe out malaria there (although it might improve the situation somewhat).
I don’t think you need to be mournful in this case, because:
There’s still a decent chance, even with 20/20 hindsight, that this wasn’t a failure on the EA front, given that mosquito nets may aid vaccine efficacy; also see @Linch’s and others’ comments below.
Even if we did get this bet wrong, and money would have been better spent on vaccine development in this case, it may be an outlier, rather than evidence that global health people generally underestimate technological progress.
The article I linked above has changed my mind back again. Apparently the RTS,S vaccine has been in clinical trials since 1997. So the failure here wasn’t just an abstract lack of belief in technology: the technology literally already existed the whole time that the EA movement (or anyone who’s been in this space for less than two decades) has been thinking about it.
Do you think that if GiveWell hadn’t recommended bednets/effective altruists hadn’t endorsed bednets it would have led to more investment in vaccine development/gene drives etc.? That doesn’t seem intuitive to me.
To me GiveWell fit a particular demand, which was for charitable donations that would have reliably high marginal impact. Or maybe to be more precise, for charitable donations recommended by an entity that made a good faith effort without obvious mistakes to find the highest reliable marginal impact donation. Scientific research does not have that structure since the outcomes are unpredictable.
I don’t think it makes sense to think of EA as a monolith which both promoted bednets and is enthusiastic about engaging with the kind of reasoning you’re advocating here. My oversimplified model of the situation is more like:
Some EAs don’t feel very persuaded by this kind of reasoning, and end up donating to global development stuff like bednets.
Some EAs are moved by this kind of reasoning, and decide not to engage with global development because this kind of reasoning suggests higher impact alternatives. They don’t really spend much time thinking about how to best address global development, because they’re doing things they think are more important.
(I think the EAs in the latter category have their own failure modes and wouldn’t obviously have gotten the malaria thing right (assuming you’re right that a mistake was made) if they had really tried to get it right, tbc.)
Thanks a lot, that makes sense. This comment no longer stands after the edits, so I have retracted it. Really appreciate the clarification!
(I’m not sure it’s intentional, but this comes across as patronizing to global health folks. Saying folks “don’t want to do this kind of thinking” is both harsh and wrong. It seems like you suggest that “more thinking” automatically leads people down the path of “more important” things than global health, which is absurd.
Plenty of people have done plenty of thinking through an EA lens and decided that bed nets are a great place to spend lots of money which is great.
Plenty of people have done plenty of thinking through an EA lens and decided to focus on other things which is great.
One group might be right and the other might be wrong, but it is far from obvious or clear, and the differences of opinion certainly don’t come from a lack of thought.
I think it helps to be kind and give folks the benefit of the doubt.)
I think you’re right that my original comment was rude; I apologize. I edited my comment a bit.
I didn’t mean to say that the global poverty EAs aren’t interested in detailed thinking about how to do good; they definitely are, as demonstrated e.g. by GiveWell’s meticulous reasoning. I’ve edited my comment to make it sound less like I’m saying that the global poverty EAs are dumb or uninterested in thinking.
But I do stand by the claim that you’ll understand EA better if you think of “promote AMF” and “try to reduce AI x-risk” as results of two fairly different reasoning processes, rather than as results of the same reasoning process. Like, if you ask someone why they’re promoting AMF rather than e.g. insect suffering prevention, the answer usually isn’t “I thought really hard about insect suffering and decided that the math doesn’t work out”, it’s “I decided to (at least substantially) reject the reasoning process which leads to seriously considering prioritizing insect suffering over bednets”.

(Another example of this is the “curse of cryonics”.)
Nice one, makes much more sense now. Appreciate the change a lot :) I have retracted my comment now (I think it can still be read; I haven’t mastered the forum even after hundreds of comments...)
I think this has been thought about a few times since EA started.
In 2015 Max Dalton wrote about medical research and said the following:
“GiveWell note that most funders of medical research more generally have large budgets, and claim that ‘It’s reasonable to ask how much value a new funder – even a relatively large one – can add in this context’. Whilst the field of tropical disease research is, as I argued above, more neglected, there are still a number of large foundations, and funding for several diseases is on the scale of hundreds of millions of dollars. Additionally, funding the development of a new drug may cost close to a billion dollars.

For these reasons, it is difficult to imagine a marginal dollar having any impact. However, as MacAskill argues at several points in Doing Good Better, this appears to only increase the riskiness of the donation, rather than reducing its expected impact.”
In 2018 Peter Wildeford and Marcus A. Davis wrote about the cost effectiveness of vaccines and suggested that a malaria vaccine is competitive with other global health opportunities.
I think I’d be more convinced if you backed your claim up with some numbers, even loose ones. Maybe I’m missing something, but imo there just aren’t enough zeros for this to be a massive fuckup.
Fairly simple BOTEC:
2 billion people at significant risk of malaria (WHO says 3 billion “at risk” but I assume the first 2 billion is at significantly higher risk than the last billion).
note that Africa has ~95% of cases/deaths and a population of 1.2 billion; I assume you can get a large majority of the benefits if you ignore northern Africa too.
LLINs last 3 years.
a bednet covers ~1.5 people (can’t find a source so just a guess; note that the main protected population for bednets are mothers and their young children, who usually sleep in the same bed).
Say LLINs cost ~$4.50 for simple math (AMF says $2, GiveWell says $5-6; I think it depends on how you do moral accounting)
So it costs $2B/year to cover almost all vulnerable people with bednets at current margins.
and likely <$1B/year if we are fine with just covering the most vulnerable 5/6 of Africa.
At 5-10%/year cost of capital, this is equivalent to $20B-$40B to have bednets forever.
even less if it’s more targeted.
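The arithmetic above can be sketched as a quick script. All inputs are the rough guesses from this comment (2 billion people, ~1.5 people per net, 3-year net lifetime, ~$4.50 per net, 5–10% cost of capital), not authoritative figures:

```python
# Back-of-the-envelope cost of covering everyone at significant malaria risk
# with long-lasting insecticidal nets (LLINs). Inputs are rough guesses from
# the comment above, not authoritative figures.

people_at_risk = 2e9        # ~2 billion people at significant risk
people_per_net = 1.5        # guess: one net covers ~1.5 people
net_lifetime_years = 3      # LLINs last ~3 years
cost_per_net = 4.50         # ~$4.50 (between AMF's $2 and GiveWell's $5-6)

nets_needed = people_at_risk / people_per_net
annual_cost = nets_needed * cost_per_net / net_lifetime_years
print(f"Annual cost: ${annual_cost / 1e9:.1f}B")  # ~$2B/year

# Capitalize at a 5-10%/year cost of capital to get "bednets forever"
for rate in (0.05, 0.10):
    print(f"At {rate:.0%} cost of capital: ${annual_cost / rate / 1e9:.0f}B")
# ~$40B at 5%, ~$20B at 10%
```

This reproduces the $2B/year and $20B–$40B figures; changing any input by a factor of two shifts the totals proportionally, which is why the comment argues the ratios are within an order of magnitude either way.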
Given how much money has already gone into malaria R&D, we’re already at less than one OOM difference. (This is assuming that future R&D costs and implementation costs are a rounding error, which seems very unlikely to me.)
Getting rid of malaria forever is a lot better than bednets forever, but given how effective bednets are, and noting that even gene drives and vaccines are unlikely to be a completely “clean” solution either, adding half an OOM sounds about right, maybe 1 OOM is the upper bound.
Meanwhile the diminishing marginal returns curve for R&D is likely to be a lot sharper than the diminishing marginal returns curve for bednets.
I’ve never drawn out the curve so I don’t know what it looks like but I can easily see >1 OOM difference here.
So at least the view from 10,000 feet up doesn’t give you an obvious win for research vs bednets; and on balance I think it tilts in the other direction locally on EV grounds, even if you don’t adjust for benefits of certainty.
This analysis lacks a bunch of fairly important considerations in both directions (e.g. economic growth pushes in favor of wanting “band-aid” solutions now, because richer countries are better equipped to deal with their own systemic problems, while climate change pushes in favor of eradication), which might be enough to flip the direction of the inequality, but very unlikely to flip it by >1 OOM. And I suspect ballparking numbers or analyses like the above is the core reason why some of the more quant-y committed global health/poverty EAs aren’t sold on the “obviously technological solutions are better than band-aid solutions” argument that SV people sometimes make unreflectively.
A different BOTEC: 500k deaths per year, at $5000 per death prevented by bednets, we’d have to get a year of vaccine speedup for $2.5 billion to match bednets.
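This break-even point is a single multiplication, sketched below with the comment’s rough inputs (500k deaths/year, ~$5000 per death averted by bednets):

```python
# Break-even BOTEC: what is one year of vaccine speedup worth, if bednets
# avert deaths at ~$5000 each? Rough inputs from the comment above.

deaths_per_year = 500_000       # annual malaria deaths, roughly
cost_per_death_averted = 5_000  # GiveWell-style bednet figure, roughly

breakeven = deaths_per_year * cost_per_death_averted
print(f"A year of vaccine speedup is worth ~${breakeven / 1e9:.1f}B")  # ~$2.5B
```

So any intervention that buys a year of speedup for less than ~$2.5B beats marginal bednet spending on this crude accounting, which is the comparison the following paragraph makes.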
I agree that $2.5 billion to speed up development of vaccines by a year is tricky. But I expect that $2.5 billion, or $250 million, or perhaps even $25 million to speed up deployment of vaccines by a year is pretty plausible. I don’t know the details but apparently a vaccine was approved in 2021 that will only be rolled out widely in a few months, and another vaccine will be delayed until mid-2024: https://marginalrevolution.com/marginalrevolution/2023/10/what-is-an-emergency-the-case-of-rapid-malaria-vaccination.html
So I think it’s less a question of whether EA could have piled more money on and more a question of whether EA could have used that money + our talent advantage to target key bottlenecks.
(Plus the possibility of getting gene drives done much earlier, but I don’t know how to estimate that.)
@Linch, see the article I linked above, which identifies a bunch of specific bottlenecks where lobbying and/or targeted funding could have been really useful. I didn’t know about these when I wrote my comment above, but I claim prediction points for having a high-level heuristic that led to the right conclusion anyway.
Do you want to discuss this in a higher-bandwidth channel at some point? Eg next time we’re in an EA social or something, have an organized chat with a moderator and access to a shared monitor? I feel like we’re not engaging with each other’s arguments as much in this setting, but we can maybe clarify things better in a higher-bandwidth setting.
(No worries if you don’t want to do it; it’s not like global health is either of our day jobs)
Global development EAs were very much looking into vaccines around 2015 and then and now it seemed that the malaria vaccine is just not crazy cost-effective, because you have to administer it more than once and it’s not 100% effective—see
This seems like significant evidence for the tractability of speeding things up. E.g. a single (unjustified) decision by the WHO in 2015 delayed the vaccine by almost a decade, four years of which were spent in fundraising. It seems very plausible that even 2015 EA could have sped things up by multiple years in expectation either lobbying against the original decision, or funding the follow-up trial.
Retracted my last comment, since as joshcmorrison pointed out, the vaccines aren’t mRNA-based.
Still, “Total malaria R&D investment from 2007 to 2018 was over $7 billion, according to data from Policy Cures Research in the report. Of that total, about $1.8 billion went to vaccine R&D.”
Moreover, I think there are structural reasons for relatively more of that funding to come from, e.g., Gates than from at least early-stage EA. Although COVID is an exception, vaccine work has traditionally taken many years. I think it is more likely that we’d see the right people approaching this work in an optimal manner if they were offered stable, multi-year funding. And I’m not sure whether at least early “EA” was in a position to offer that kind of funding on a basis that would seem reliable.
So it’s plausible to me that vaccine and similar funding was the highest EV option on the table in theory, and that it nevertheless made sense for EA to focus on bednet distribution and other efforts better suited to the funding flows it could guarantee.
I’m sympathetic to this. I also think it is interesting to look at how countries that eradicated malaria did so, and it wasn’t with bednets, it was through draining swamps etc.
(fwiw, I don’t think that criticism applies to EA work on climate change. Johannes Ackva is focused on policy change to encourage neglected low carbon technologies.)
The new malaria vaccines are mRNA vaccines, and mRNA vaccines were largely developed in response to COVID. I think billions were spent on mRNA R&D. That could have been too expensive for Open Phil, and they might not have been able to foresee the promise of mRNA in particular to invest so much specifically in it and not waste substantially on other vaccine R&D.
EDIT: By the US government alone, $337 million was invested in mRNA R&D pre-pandemic over decades (and the authors found $5.9 billion in indirect grants), and after the pandemic started, “$2.2bn (7%) supported clinical trials, and $108m (<1%) supported manufacturing plus basic and translational science”
https://www.bmj.com/content/380/bmj-2022-073747
Moderna also spent over a billion on R&D, and their focus is mRNA. (May be some or substantial overlap with US funding.) Pfizer and BioNTech also developed mRNA COVID vaccines together.
Maybe I’m misunderstanding your point, but the two malaria vaccine that were recently approved (RTS,S and R21/Matrix M) are not mRNA vaccines. They’re both protein-based.
That’s very useful info, ty. Though I don’t think it substantively changes my conclusion because:
Government funding tends to go towards more legible projects (like R&D). I expect that there are a bunch of useful things in this space where there are more funding gaps (e.g. lobbying for rapid vaccine rollouts).
EA has sizeable funding, but an even greater advantage in directing talent, which I think would have been our main source of impact.
There were probably a bunch of other possible technological approaches to addressing malaria that were more speculative and less well-funded than mRNA vaccines. Ex ante, it was probably a failure not to push harder towards them, rather than focusing on less scalable approaches which could never realistically have solved the full problem.
To be clear, I think it’s very commendable that OpenPhil has been funding gene drive work for a long time. I’m sad about the gap between “OpenPhil sends a few grants in that direction” and “this is a central example of what the EA community focuses on” (as bednets have been); but that shouldn’t diminish the fact that even the former is a great thing to have happen.
There’s a version of your argument that I agree with, but I’m not sure you endorse, which is something like:
If all the core EAs reoriented their perspective on global health away from trying to mostly do the right thing with scope-sensitive ethics while also following a bunch of explicit and illegible norms to something more like I will do everything in my power[1] and move heaven and earth to end malaria as soon as possible, I expect that there’s a decently large chance (less than 50% but still significant) that we’d see a lot more visible EA-led progress on malaria than what we currently observe.
To be concrete, things I can imagine a more monomaniacal version of global health EA might emphasize (note that some of them are mutually exclusive, and others might be seen as bad, even under the monomaniacal lens, after more research):
Targeting a substantially faster EA growth rate than in our timeline
Potentially have a tiered system of outreach where the cultural onboarding in EA is in play for a more elite/more philosophically minded subset but the majority of people just hear the “end malaria by any means possible” message
Lobbying the US and other gov’ts to a) increase foreign aid and b) to increase aid effectiveness, particularly focused on antimalarial interventions.
(if politically feasible, which it probably isn’t) potentially advocate that foreign aid must be tied to independently verified progress on malaria eradication.
Advocate more strongly, and more early on, for people to volunteer in antimalarial human challenge trials
Careful, concrete, and detailed CBEs (measuring the environmental and other costs to human life against malarial load) on when and where DDT usage is net positive
(if relevant) lobbying in developing countries with high malarial loads to use DDT for malaria control
Attempting to identify and fund DDT analogues that pass the CBE for countries with high malarial (or other insect-borne) disease load, even while the environmental consequences are pretty high (e.g. way too high to be worth the CBE for America).
(if relevant) lobbying countries to try gene drives at an earlier point than most conservative experts would recommend, maybe starting with island countries.
Write academic position papers on why the current vaccine approval system for malaria vaccines is too conservative
Be very willing to do side channel persuasion to emphasize that point
Write aggressive, detailed, and widely-disseminated posts whenever a group in your orbit (charities or WHO or Gates Foundation) is fucking up in your lights
etc
Framed that way, I think the key considerations look less like “people are just too focused on certainty and unwilling to make low-probability, high-EV plays” and “maybe EAs are underestimating the ability of science and technology to solve key problems” and more like “there’s a ton of subtle and illegible tradeoffs people are implicitly making, and trying to bulldoze over them just has a bunch of unexpected costs.” I can see a lot of ways the more monomaniacal version could backfire, but it’s definitely possible that in a counterfactual world EA would’ve done a lot more to visibly end malaria by now.
Hmm, your comment doesn’t really resonate with me. I don’t think it’s really about being monomaniacal. I think the (in hindsight) correct thought process here would be something like:
“Over the next 20 or 50 years, it’s very likely that the biggest lever in the space of malaria will be some kind of technological breakthrough. Therefore we should prioritize investigating the hypothesis that there’s some way of speeding up this biggest lever.”
I don’t think you need this “move heaven and earth” philosophy to do that reasoning; I don’t think you need to focus on EA growth much more than we did. The mental step could be as simple as “Huh, bednets seem kinda incremental. Is there anything that’s much more ambitious?”
(To be clear I think this is a really hard mental step, but one that I would expect from an explicitly highly-scope-sensitive movement like EA.)
I think part of my disagreement is that I’m not sure what counts as “incremental.” Like, bednets are an intervention that, broadly speaking, can solve ~half the malaria problem forever at ~$20-40 billion, with substantial cobenefits. And attempts at “non-incremental” malaria solutions have already cost mid-to-high single-digit billions. So it’s not like the ratios are massively off. Importantly, “non-incremental” solutions like vaccines likely still require fairly expensive development, distribution, and ongoing maintenance. So small mistakes might be there, but I don’t see enough room left for us to be making large mistakes in the space.
That’s what I mean by “not enough zeroes.”
To be clear my argument is not insensitive to numbers. If the incremental solutions to the problem have a price tag of >1T (eg global poverty, or aging-related deaths), and non-incremental solutions have had a total price tag of <1B, then I’m much more sympathetic to the “the EV for trying to identify more scalable interventions is likely higher than incremental solutions now, even without looking at details”-style arguments.
Ah, I see. I think the two arguments I’d give here:
Founding 1DaySooner for malaria 5-10 years earlier is high-EV and plausibly very cheap; and there are probably another half-dozen things in this reference class.
We’d need to know much more about the specific interventions in that reference class to confidently judge that we made a mistake. But IMO if everyone in 2015-EA had explicitly agreed “vaccines will plausibly dramatically slash malaria rates within 10 years” then I do think we’d have done much more work to evaluate that reference class. Not having done that work can be an ex-ante mistake even if it turns out it wasn’t an ex-post mistake.
Great comment, thank you :) This changed my mind.
The article I linked above has changed my mind back again. Apparently the RTS,S vaccine has been in clinical trials since 1997. So the failure here wasn’t just an abstract lack of belief in technology: the technology literally already existed the whole time that the EA movement (or anyone who’s been in this space for less than two decades) has been thinking about it.
Do you think that if GiveWell hadn’t recommended bednets/effective altruists hadn’t endorsed bednets it would have led to more investment in vaccine development/gene drives etc.? That doesn’t seem intuitive to me.
To me GiveWell fit a particular demand, which was for charitable donations that would have reliably high marginal impact. Or maybe to be more precise, for charitable donations recommended by an entity that made a good faith effort without obvious mistakes to find the highest reliable marginal impact donation. Scientific research does not have that structure since the outcomes are unpredictable.
I don’t think it makes sense to think of EA as a monolith which both promoted bednets and is enthusiastic about engaging with the kind of reasoning you’re advocating here. My oversimplified model of the situation is more like:
Some EAs don’t feel very persuaded by this kind of reasoning, and end up donating to global development stuff like bednets.
Some EAs are moved by this kind of reasoning, and decide not to engage with global development because this kind of reasoning suggests higher impact alternatives. They don’t really spend much time thinking about how to best address global development, because they’re doing things they think are more important.
(I think the EAs in the latter category have their own failure modes and wouldn’t obviously have gotten the malaria thing right (assuming you’re right that a mistake was made) if they had really tried to get it right, tbc.)
Thanks a lot that makes sense, this comment no longer stands after the edits so have retracted really appreciate the clarification!
(I’m not sure its intentional, but this comes across as patronizing to global health folks. Saying folks “don’t want to do this kind of thinking” is both harsh and wrong. It seems like you suggest that “more thinking” automatically leads people down the path of “more important” things than global health, which is absurd.
Plenty of people have done plenty of thinking through an EA lens and decided that bed nets are a great place to spend lots of money which is great.
Plenty of people have done plenty of thinking through an EA lens and decided to focus on other things which is great.
One group might be right and the other might be wrong, but it is far from obvious or clear, and the differences of opinion certainly don’t come from a lack of thought.
I think it helps to be kind and give folks the benefit of the doubt.)
I think you’re right that my original comment was rude; I apologize. I edited my comment a bit.
I didn’t mean to say that the global poverty EAs aren’t interested in detailed thinking about how to do good; they definitely are, as demonstrated e.g. by GiveWell’s meticulous reasoning. I’ve edited my comment to make it less sound like I’m saying that the global poverty EAs are dumb or uninterested in thinking.
But I do stand by the claim that you’ll understand EA better if you think of “promote AMF” and “try to reduce AI x-risk” as results of two fairly different reasoning processes, rather than as results of the same reasoning process. Like, if you ask someone why they’re promoting AMF rather than e.g. insect suffering prevention, the answer usually isn’t “I thought really hard about insect suffering and decided that the math doesn’t work out”, it’s “I decided to (at least substantially) reject the reasoning process which leads to seriously considering prioritizing insect suffering over bednets”.
(Another example of this is the “curse of cryonics”.)
Nice one, this makes much more sense now. I appreciate the change a lot :) and have retracted my comment (I think it can still be read; I haven’t mastered the forum even after hundreds of comments...).
Makes sense, though I think that global development was enough of a focus of early EA that this type of reasoning should have been done anyway.
I’m more sympathetic about it not being done after, say, 2017.
I think this has been thought about a few times since EA started.
In 2015 Max Dalton wrote about medical research and said the below.
“GiveWell note that most funders of medical research more generally have large budgets, and claim that ‘It’s reasonable to ask how much value a new funder – even a relatively large one – can add in this context’. Whilst the field of tropical disease research is, as I argued above, more neglected, there are still a number of large foundations, and funding for several diseases is on the scale of hundreds of millions of dollars. Additionally, funding the development of a new drug may cost close to a billion dollars.
For these reasons, it is difficult to imagine a marginal dollar having any impact. However, as MacAskill argues at several points in Doing Good Better, this appears to only increase the riskiness of the donation, rather than reducing its expected impact.”
In 2018 Peter Wildeford and Marcus A. Davis wrote about the cost-effectiveness of vaccines and suggested that a malaria vaccine is competitive with other global health opportunities.
Related: early discussion of gene drives in 2016.
I think I’d be more convinced if you backed your claim up with some numbers, even loose ones. Maybe I’m missing something, but imo there just aren’t enough zeros for this to be a massive fuckup.
Fairly simple BOTEC:
2 billion people at significant risk of malaria (WHO says 3 billion “at risk” but I assume the first 2 billion is at significantly higher risk than the last billion).
note that Africa has ~95% of cases/deaths and a population of 1.2 billion; I assume you can get a large majority of the benefits if you ignore northern Africa too.
LLINs last 3 years.
a bednet covers ~1.5 people (can’t find a source so just a guess; note that the main protected population for bednets are mothers and their young children, who usually sleep in the same bed).
Say LLINs cost ~$4.50 for simple math (AMF says $2, GiveWell says $5-6; I think it depends on how you do moral accounting)
So it costs $2B/year to cover almost all vulnerable people with bednets at current margins.
and likely <$1B/year if we are fine with just covering the most vulnerable 5/6 of Africa.
At 5-10%/year cost of capital, this is equivalent to $20B-$40B to have bednets forever.
even less if it’s more targeted.
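The arithmetic above can be sketched in a few lines. All inputs are the hedged assumptions stated in this BOTEC (2 billion people, ~1.5 people per net, ~$4.50 per net, 3-year lifetime, 5-10% cost of capital), not measured data:

```python
# BOTEC sketch using the rough assumptions stated above (not measured data).
people_at_risk = 2e9          # people at significant malaria risk
people_per_net = 1.5          # rough guess: people covered per net
net_cost = 4.50               # dollars per LLIN, for simple math
net_lifetime_years = 3        # LLINs last ~3 years

nets_needed = people_at_risk / people_per_net
annual_cost = nets_needed * net_cost / net_lifetime_years
print(f"annual cost: ${annual_cost / 1e9:.1f}B")   # ~$2B/year

# Capitalize at a 5-10%/year cost of capital for a "bednets forever" lump sum.
for rate in (0.05, 0.10):
    lump_sum = annual_cost / rate
    print(f"at {rate:.0%} cost of capital: ${lump_sum / 1e9:.0f}B")
```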
Given how much money has already gone into malaria R&D, we’re already at less than one OOM difference. (This assumes that future R&D costs and implementation costs are a rounding error, which seems very unlikely to me.)
Getting rid of malaria forever is a lot better than bednets forever, but given how effective bednets are, and noting that even gene drives and vaccines are unlikely to be a completely “clean” solution either, adding half an OOM sounds about right, maybe 1 OOM is the upper bound.
Meanwhile the diminishing marginal returns curve for R&D is likely to be a lot sharper than the one for bednets.
I’ve never drawn out the curve so I don’t know what it looks like but I can easily see >1 OOM difference here.
So at least the view from 10,000 feet up doesn’t give you an obvious win for research vs bednets; and on balance I think it tilts in the other direction locally on EV grounds, even if you don’t adjust for benefits of certainty.
This analysis lacks a bunch of fairly important considerations in both directions (eg economic growth pushes in favor of wanting “band-aid” solutions now, because richer countries are better equipped to deal with their own systemic problems, while climate change pushes in favor of eradication). These might be enough to flip the direction of the inequality, but are very unlikely to flip it by >1 OOM. And I suspect ballparking numbers or analysis like the above is the core reason why some of the more quant-y committed global health/poverty EAs aren’t sold on the “obviously technological solutions are better than band-aid solutions” argument that SV people sometimes make unreflectively.
A different BOTEC: 500k deaths per year, at $5000 per death prevented by bednets, we’d have to get a year of vaccine speedup for $2.5 billion to match bednets.
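That breakeven is just the product of the two figures, both of which are rough numbers from the comment above:

```python
# Sketch of the breakeven: value of one year of vaccine speedup,
# priced at the bednet cost-effectiveness rate (both figures are rough).
deaths_per_year = 500_000          # annual malaria deaths
cost_per_death_averted = 5_000     # dollars per death averted via bednets
breakeven = deaths_per_year * cost_per_death_averted
print(f"${breakeven / 1e9:.1f}B per year of speedup")  # $2.5B
```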
I agree that $2.5 billion to speed up development of vaccines by a year is tricky. But I expect that $2.5 billion, or $250 million, or perhaps even $25 million to speed up deployment of vaccines by a year is pretty plausible. I don’t know the details but apparently a vaccine was approved in 2021 that will only be rolled out widely in a few months, and another vaccine will be delayed until mid-2024: https://marginalrevolution.com/marginalrevolution/2023/10/what-is-an-emergency-the-case-of-rapid-malaria-vaccination.html
So I think it’s less a question of whether EA could have piled more money on and more a question of whether EA could have used that money + our talent advantage to target key bottlenecks.
(Plus the possibility of getting gene drives done much earlier, but I don’t know how to estimate that.)
@Linch, see the article I linked above, which identifies a bunch of specific bottlenecks where lobbying and/or targeted funding could have been really useful. I didn’t know about these when I wrote my comment above, but I claim prediction points for having a high-level heuristic that led to the right conclusion anyway.
Do you want to discuss this in a higher-bandwidth channel at some point? Eg next time we’re in an EA social or something, have an organized chat with a moderator and access to a shared monitor? I feel like we’re not engaging with each other’s arguments as much in this setting, but we can maybe clarify things better in a higher-bandwidth setting.
(No worries if you don’t want to do it; it’s not like global health is either of our day jobs)
Global development EAs were very much looking into vaccines around 2015, and both then and now it seemed that the malaria vaccine is just not crazily cost-effective, because it has to be administered more than once and it’s not 100% effective—see
Public health impact and cost-effectiveness of the RTS,S/AS01 malaria vaccine: a systematic comparison of predictions from four mathematical models
Modelling the relative cost-effectiveness of the RTS,S/AS01 malaria vaccine compared to investment in vector control or chemoprophylaxis
An article on why we didn’t get a vaccine sooner: https://worksinprogress.co/issue/why-we-didnt-get-a-malaria-vaccine-sooner
This seems like significant evidence for the tractability of speeding things up. E.g. a single (unjustified) decision by the WHO in 2015 delayed the vaccine by almost a decade, four years of which were spent in fundraising. It seems very plausible that even 2015 EA could have sped things up by multiple years in expectation either lobbying against the original decision, or funding the follow-up trial.
Retracted my last comment, since as joshcmorrison pointed out, the vaccines aren’t mRNA-based.
Still, “Total malaria R&D investment from 2007 to 2018 was over $7 billion, according to data from Policy Cures Research in the report. Of that total, about $1.8 billion went to vaccine R&D.”
https://www.devex.com/news/just-over-600m-a-year-goes-to-malaria-r-d-can-covid-19-change-that-98708/amp
Moreover, I think there are structural reasons for relatively more of that funding to come from, e.g., Gates than from at least early-stage EA. Although COVID is an exception, vaccine work has traditionally taken many years. I think it is more likely that we’d see the right people approaching this work in an optimal manner if they were offered stable, multi-year funding. And I’m not sure whether at least early “EA” was in a position to offer that kind of funding on a basis that would seem reliable.
So it’s plausible to me that vaccine and similar funding was the highest EV option on the table in theory, and that it nevertheless made sense for EA to focus on bednet distribution and other efforts better suited to the funding flows it could guarantee.
I’m sympathetic to this. I also think it is interesting to look at how countries that eradicated malaria did so, and it wasn’t with bednets, it was through draining swamps etc.
(fwiw, I don’t think that criticism applies to EA work on climate change. Johannes Ackva is focused on policy change to encourage neglected low carbon technologies.)
The new malaria vaccines are mRNA vaccines, and mRNA vaccines were largely developed in response to COVID. I think billions were spent on mRNA R&D. That could have been too expensive for Open Phil, and they might not have been able to foresee the promise of mRNA in particular to invest so much specifically in it and not waste substantially on other vaccine R&D.
Open Phil has been funding R&D on malaria for some time, including gene drives, but not much on vaccines until recently. https://www.openphilanthropy.org/grants/?q=malaria&focus-area[]=scientific-research
EDIT: By the US government alone, $337 million was invested in mRNA R&D pre-pandemic over decades (and the authors found $5.9 billion in indirect grants), and after the pandemic started, “$2.2bn (7%) supported clinical trials, and $108m (<1%) supported manufacturing plus basic and translational science” https://www.bmj.com/content/380/bmj-2022-073747
Moderna also spent over a billion on R&D, and their focus is mRNA. (May be some or substantial overlap with US funding.) Pfizer and BioNTech also developed mRNA COVID vaccines together.
Maybe I’m misunderstanding your point, but the two malaria vaccine that were recently approved (RTS,S and R21/Matrix M) are not mRNA vaccines. They’re both protein-based.
Oh, you’re right. My bad.
That’s very useful info, ty. Though I don’t think it substantively changes my conclusion because:
Government funding tends to go towards more legible projects (like R&D). I expect that there are a bunch of useful things in this space where there are more funding gaps (e.g. lobbying for rapid vaccine rollouts).
EA has sizeable funding, but an even greater advantage in directing talent, which I think would have been our main source of impact.
There were probably a bunch of other possible technological approaches to addressing malaria that were more speculative and less well-funded than mRNA vaccines. Ex ante, it was probably a failure not to push harder towards them, rather than focusing on less scalable approaches which could never realistically have solved the full problem.
To be clear, I think it’s very commendable that OpenPhil has been funding gene drive work for a long time. I’m sad about the gap between “OpenPhil sends a few grants in that direction” and “this is a central example of what the EA community focuses on” (as bednets have been); but that shouldn’t diminish the fact that even the former is a great thing to have happen.
There’s a version of your argument that I agree with, but I’m not sure you endorse, which is something like:
To be concrete, here are things I can imagine a more monomaniacal version of global health EA might emphasize (note that some of them are mutually exclusive, and others might be seen as bad, even under the monomaniacal lens, after more research):
Targeting a substantially faster EA growth rate than in our timeline
Potentially have a tiered system of outreach where the cultural onboarding in EA is in play for a more elite/more philosophically minded subset but the majority of people just hear the “end malaria by any means possible” message
Lobbying the US and other gov’ts to a) increase foreign aid and b) to increase aid effectiveness, particularly focused on antimalarial interventions.
(if politically feasible, which it probably isn’t) potentially advocate that foreign aid must be tied to independently verified progress on malaria eradication.
Advocate more strongly, and more early on, for people to volunteer in antimalarial human challenge trials
Careful, concrete, and detailed CBEs (measuring the environmental and other costs to human life against malarial load) on when and where DDT usage is net positive
(if relevant) lobbying in developing countries with high malarial loads to use DDT for malaria control
Attempting to identify and fund DDT analogues that pass the CBE for countries with high malarial (or other insect-borne) disease load, even while the environmental consequences are pretty high (e.g. way too high to be worth the CBE for America).
(if relevant) lobbying countries to try gene drives at an earlier point than most conservative experts would recommend, maybe starting with island countries.
Write academic position papers on why the current vaccine approval system for malaria vaccines is too conservative
Be very willing to do side channel persuasion to emphasize that point
Write aggressive, detailed, and widely-disseminated posts whenever a group in your orbit (charities or WHO or Gates Foundation) is fucking up in your lights
etc
Framed that way, I think the key considerations look less like “people are just too focused on certainty and unwilling to make low-probability, high-EV plays” or “maybe EAs are underestimating the ability of science and technology to solve key problems”, and more like “there’s a ton of subtle and illegible tradeoffs people are implicitly making, and trying to bulldoze over them just has a bunch of unexpected costs.” I can see a lot of ways the more monomaniacal version could backfire, but it’s definitely possible that in a counterfactual world EA would’ve done a lot more to visibly end malaria by now.
Or everything in my power that’s legal and not breaking any obvious key ethical norms, since these things tend to backfire pretty fast.
Hmm, your comment doesn’t really resonate with me. I don’t think it’s really about being monomaniacal. I think the (in hindsight) correct thought process here would be something like:
”Over the next 20 or 50 years, it’s very likely that the biggest lever in the space of malaria will be some kind of technological breakthrough. Therefore we should prioritize investigating the hypothesis that there’s some way of speeding up this biggest lever.”
I don’t think you need this “move heaven and earth” philosophy to do that reasoning; I don’t think you need to focus on EA growth much more than we did. The mental step could be as simple as “Huh, bednets seem kinda incremental. Is there anything that’s much more ambitious?”
(To be clear I think this is a really hard mental step, but one that I would expect from an explicitly highly-scope-sensitive movement like EA.)
Yeah so basically I contest that this alone will actually have higher EV in the malaria case; apologies if my comment wasn’t clear enough.
I think part of my disagreement is I’m not sure what counts as “incremental.” Bednets are an intervention that, broadly speaking, can solve ~half the malaria problem forever at ~$20-40 billion, with substantial cobenefits. And attempts at “non-incremental” malaria solutions have already cost mid-to-high single-digit billions. So it’s not like the ratios are massively off. Importantly, “non-incremental” solutions like vaccines likely still require fairly expensive development, distribution, and ongoing maintenance. So small mistakes might be there, but I don’t see enough room left for us to be making large mistakes in the space.
That’s what I mean by “not enough zeroes.”
To be clear, my argument is not insensitive to numbers. If the incremental solutions to a problem have a price tag of >$1T (eg global poverty, or aging-related deaths), and non-incremental solutions have had a total price tag of <$1B, then I’m much more sympathetic to the “the EV of trying to identify more scalable interventions is likely higher than that of incremental solutions, even without looking at details”-style arguments.
Ah, I see. I think the two arguments I’d give here:
Founding 1DaySooner for malaria 5-10 years earlier is high-EV and plausibly very cheap; and there are probably another half-dozen things in this reference class.
We’d need to know much more about the specific interventions in that reference class to confidently judge that we made a mistake. But IMO if everyone in 2015-EA had explicitly agreed “vaccines will plausibly dramatically slash malaria rates within 10 years” then I do think we’d have done much more work to evaluate that reference class. Not having done that work can be an ex-ante mistake even if it turns out it wasn’t an ex-post mistake.