That’s very useful info, ty. Though I don’t think it substantively changes my conclusion because:
Government funding tends to go towards more legible projects (like R&D). I expect that there are a bunch of useful things in this space where there are more funding gaps (e.g. lobbying for rapid vaccine rollouts).
EA has sizeable funding, but an even greater advantage in directing talent, which I think would have been our main source of impact.
There were probably a bunch of other possible technological approaches to addressing malaria that were more speculative and less well-funded than mRNA vaccines. Ex ante, it was probably a failure not to push harder towards them, rather than focusing on less scalable approaches which could never realistically have solved the full problem.
To be clear, I think it’s very commendable that OpenPhil has been funding gene drive work for a long time. I’m sad about the gap between “OpenPhil sends a few grants in that direction” and “this is a central example of what the EA community focuses on” (as bednets have been); but that shouldn’t diminish the fact that even the former is a great thing to have happen.
There’s a version of your argument that I agree with, but that I’m not sure you endorse, which is something like:
If all the core EAs reoriented their perspective on global health away from trying to mostly do the right thing with scope-sensitive ethics while also following a bunch of explicit and illegible norms, to something more like “I will do everything in my power[1] and move heaven and earth to end malaria as soon as possible,” I expect that there’s a decently large chance (less than 50% but still significant) that we’d see a lot more visible EA-led progress on malaria than what we currently observe.
To be concrete, here are things I can imagine a more monomaniacal version of global health EA might emphasize (note that some of them are mutually exclusive, and others might be seen as bad, even under the monomaniacal lens, after more research):
Targeting a substantially faster EA growth rate than in our timeline
Potentially have a tiered system of outreach, where the cultural onboarding into EA is reserved for a more elite/more philosophically minded subset, while the majority of people just hear the “end malaria by any means possible” message
Lobbying the US and other gov’ts to a) increase foreign aid and b) increase aid effectiveness, particularly focused on antimalarial interventions
(if politically feasible, which it probably isn’t) potentially advocate that foreign aid be tied to independently verified progress on malaria eradication
Advocate more strongly, and earlier on, for people to volunteer in antimalarial human challenge trials
Careful, concrete, and detailed cost-benefit estimates (CBEs), weighing the environmental and other costs to human life against malarial load, on when and where DDT usage is net positive (a toy sketch of this kind of comparison appears after this list)
(if relevant) lobbying in developing countries with high malarial loads to use DDT for malaria control
Attempting to identify and fund DDT analogues that pass the CBE for countries with high malarial (or other insect-borne) disease load, even where the environmental consequences are pretty high (e.g. way too high to pass the CBE for America)
(if relevant) lobbying countries to try gene drives at an earlier point than most conservative experts would recommend, maybe starting with island countries.
Write academic position papers on why the current approval system for malaria vaccines is too conservative
Be very willing to do side-channel persuasion to emphasize that point
Write aggressive, detailed, and widely disseminated posts whenever a group in your orbit (charities, the WHO, the Gates Foundation) is fucking up by your lights
etc.
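As a toy illustration of the DDT point above, here is a minimal sketch of what such a CBE might compute. Every number and parameter here is a hypothetical placeholder for illustration, not a real estimate:

```python
# Toy per-country cost-benefit estimate (CBE) for DDT use.
# Every figure below is a hypothetical placeholder, not a real estimate.

def ddt_net_benefit(deaths_averted_per_year: float,
                    value_per_death_averted: float,
                    environmental_cost_per_year: float,
                    health_cost_per_year: float) -> float:
    """Positive result => DDT use passes this (toy) CBE."""
    benefit = deaths_averted_per_year * value_per_death_averted
    cost = environmental_cost_per_year + health_cost_per_year
    return benefit - cost

# Hypothetical high-burden vs. low-burden country, same environmental costs:
print(ddt_net_benefit(10_000, 5_000, 20e6, 5e6))  # 25,000,000.0 -> passes
print(ddt_net_benefit(10, 5_000, 20e6, 5e6))      # -24,950,000.0 -> fails
```

The point of the sketch is just that the same intervention can pass or fail depending on the malarial load term, which is why the answer could differ between, say, a high-burden country and America.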
Framed that way, I think the key considerations look less like “people are just too focused on certainty and unwilling to make low-probability, high-EV plays” or “maybe EAs are underestimating the ability of science and technology to solve key problems,” and more like “there are a ton of subtle and illegible tradeoffs people are implicitly making, and trying to bulldoze over them just has a bunch of unexpected costs.” I can see a lot of ways the more monomaniacal version could backfire, but it’s definitely possible that in a counterfactual world EA would’ve done a lot more to visibly end malaria by now.

[1] Or everything in my power that’s legal and doesn’t break any obvious key ethical norms, since these things tend to backfire pretty fast.
Hmm, your comment doesn’t really resonate with me. I don’t think it’s really about being monomaniacal. I think the (in hindsight) correct thought process here would be something like:
“Over the next 20 or 50 years, it’s very likely that the biggest lever in the space of malaria will be some kind of technological breakthrough. Therefore we should prioritize investigating the hypothesis that there’s some way of speeding up this biggest lever.”
I don’t think you need this “move heaven and earth” philosophy to do that reasoning; I don’t think you need to focus on EA growth much more than we did. The mental step could be as simple as “Huh, bednets seem kinda incremental. Is there anything that’s much more ambitious?”
(To be clear I think this is a really hard mental step, but one that I would expect from an explicitly highly-scope-sensitive movement like EA.)
Yeah, so basically I contest that this alone would actually have higher EV in the malaria case; apologies if my comment wasn’t clear enough.

I think part of my disagreement is that I’m not sure what counts as “incremental.” Like, bednets are an intervention that, broadly speaking, can solve ~half the malaria problem forever at ~$20-40 billion, with substantial co-benefits. And attempts at “non-incremental” malaria solutions have already cost mid-to-high single-digit billions. So it’s not like the ratios are massively off. Importantly, “non-incremental” solutions like vaccines likely still require fairly expensive development, distribution, and ongoing maintenance. So there may be small mistakes here and there, but I don’t see enough room left for us to be making large mistakes in the space.
That’s what I mean by “not enough zeroes.”
To be clear, my argument is not insensitive to numbers. If the incremental solutions to a problem have a price tag of >$1T (e.g. global poverty, or aging-related deaths), while non-incremental solutions have had a total price tag of <$1B, then I’m much more sympathetic to arguments in the style of “the EV of trying to identify more scalable interventions is likely higher than that of incremental solutions now, even without looking at details.”
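A minimal back-of-envelope sketch of that “zeroes” comparison, using the rough malaria figures above (the poverty-side numbers are purely illustrative):

```python
# Back-of-envelope version of the "not enough zeroes" comparison (dollars).
incremental_malaria    = 30e9  # bednets: ~$20-40B to solve ~half the problem
nonincremental_malaria = 5e9   # vaccines etc.: mid-to-high single-digit $B so far

incremental_poverty    = 1e12  # >$1T-scale incremental price tag (illustrative)
nonincremental_poverty = 1e9   # <$1B on non-incremental bets (illustrative)

print(incremental_malaria / nonincremental_malaria)  # 6.0 -- under one order of magnitude
print(incremental_poverty / nonincremental_poverty)  # 1000.0 -- three orders of magnitude
```

A ~6x gap is the kind of thing small mistakes live in; a ~1000x gap is where the “look for something more scalable, even without checking details” argument starts to bite.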
Ah, I see. I think these are the two arguments I’d give here:
Founding 1DaySooner for malaria 5-10 years earlier would have been high-EV and plausibly very cheap (see the toy sketch below); and there are probably another half-dozen things in this reference class.
We’d need to know much more about the specific interventions in that reference class to confidently judge that we made a mistake. But IMO if everyone in 2015-EA had explicitly agreed “vaccines will plausibly dramatically slash malaria rates within 10 years” then I do think we’d have done much more work to evaluate that reference class. Not having done that work can be an ex-ante mistake even if it turns out it wasn’t an ex-post mistake.
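For what it’s worth, here is a toy version of the EV claim in the first point. Every parameter except the rough annual malaria death toll is a made-up placeholder, so treat this as a shape-of-the-argument sketch rather than an estimate:

```python
# Toy EV sketch for "found 1DaySooner-for-malaria 5-10 years earlier".
# All parameters except deaths_per_year are assumptions for illustration only.
cost_usd          = 5e6      # hypothetical founding + operating cost
p_success         = 0.05     # hypothetical chance it meaningfully speeds rollout
years_accelerated = 2        # hypothetical acceleration if it works
deaths_per_year   = 600_000  # rough annual global malaria deaths

expected_deaths_averted = p_success * years_accelerated * deaths_per_year
print(expected_deaths_averted)             # 60000.0
print(cost_usd / expected_deaths_averted)  # ~83.3 dollars per expected death averted
```

Even with a small success probability, the cost per expected death averted can come out competitive with bednets, which is what makes “cheap, low-probability accelerants” worth evaluating as a reference class.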