This seems like a good direction to think about, but I’m skeptical it’s more useful to form organizations to do this rather than simply having EA people coordinate to hire people. For example, Eliezer hired a community matchmaker for a short period. For this type of idea, if the service is useful enough, I suspect there will be a relatively easy-to-sustain funding model from donors who care about and value it. This model doesn’t cover the “brainstorming” phase, but I also think that’s the part which is hardest not to do directly with the funders / people interested, and it makes almost as much sense to have them pay someone to come up with ideas as a separate phase—there is little reason to think the people who are good at figuring out what people want also have the skills to do that thing.
I strongly dislike the idea of having tons more predictions—it increases effort both for people looking up forecasts and for forecasters. Forecasting effort is a commons, in the economic sense: forecaster effort is limited, and overuse of forecasts effectively dilutes the available forecaster resources among more questions.
If so, several people at Open Phil are on Metaculus, and can write comments—but I’m skeptical that they will, or that they have very concrete ideas about what they will be doing—I think the plan is essentially to fund everything that seems very promising, and keep looking for more very promising things.
I think RAND is a good case study for interdisciplinary approaches to problem solving, though I’m biased. The key there, as in industry and most places other than academia, but unlike Santa Fe and the ARPAs, is a focus on solving concrete, specific problems regardless of the tools used. Also, big +1 to cybernetics, which is an interesting case study for two reasons: first because of what worked, and second because of how it was supplanted / co-opted into narrow disciplines, and largely fizzled out as its own thing.
There are trivial examples, like when the decay of a given uranium atom will occur, but it seems likely there are macroscopic phenomena that are also irreducibly uncertain over time. For instance, it’s probably the case that long-term weather prediction is fundamentally impossible past some point. Currently, we use roughly 10-kilometer grids for simulating atmospheric dynamics, and have decent precision out to 2 weeks. But if we knew the positions / velocities / temperatures of every particle in the atmosphere as of today, to, say, 2 decimal places (alongside future solar energy input fluctuations, the temperature of the earth, etc.), we could in theory simulate it in full detail to know what things would be like in, say, a month—but we would lose precision over time, and because weather is a chaotic system, more than a couple of months into the future the loss of precision would be so severe that we would have essentially no information. And at some point, the degree of precision needed to extend how long we can predict hits hard limits due to quantum uncertainties, at which point we have fundamental reasons to think it’s impossible to know more.
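To make the precision-loss point concrete, here is a minimal sketch using the logistic map, a standard textbook chaotic system (not a weather model; the 1e-10 starting difference and the 0.1 divergence threshold are arbitrary choices of mine for illustration):

```python
# Toy illustration of sensitive dependence on initial conditions: two starting
# states that differ by 1e-10 stay close for a while, then diverge until the
# difference is as large as the states themselves -- the analogue of a
# forecast losing all skill past some horizon.

def logistic(x, r=4.0):
    """One step of the logistic map; r=4 is the fully chaotic regime."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10  # "true" state vs. state measured with a tiny error
divergence_step = None
for step in range(1, 201):
    x, y = logistic(x), logistic(y)
    if divergence_step is None and abs(x - y) > 0.1:
        divergence_step = step  # error has grown ~9 orders of magnitude
        break
```

Because the per-step error growth is roughly exponential, shaving another decimal place off the measurement error only buys a few more steps of predictability, which is the sense in which quantum-scale precision limits eventually bind.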
I’ve mentioned in a different thread that we could refer to them as (1) aleatory versus (2) epistemic.
I would propose reusing the closely related ideas of aleatory and epistemic uncertainty for cluelessness. Type 1 is aleatory, i.e. truly random, impossible to reduce, and fundamental. Type 2 is epistemic, i.e. we do not yet have the tools to reduce it, but in theory it can be reduced. And this relates to what I’ve called the aleatory baseline problem in forecasting—it’s unclear how much of a prediction’s uncertainty is irreducible, and how much is just expensive to forecast.
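As a toy illustration of the split (my own sketch, not anything from the thread): for a coin with unknown bias, the law of total variance separates predictive uncertainty into an aleatory term, E[p(1-p)], and an epistemic term, Var[p], and only the epistemic term shrinks as data accumulates:

```python
# Hedged Monte Carlo sketch: a coin with unknown bias p, uniform prior, so the
# posterior after observing flips is Beta(heads+1, tails+1). The flip itself
# is aleatory randomness; our ignorance of p is epistemic and reducible.

import random

def uncertainty_split(heads, tails, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    ps = [rng.betavariate(heads + 1, tails + 1) for _ in range(n_samples)]
    mean_p = sum(ps) / n_samples
    epistemic = sum((p - mean_p) ** 2 for p in ps) / n_samples  # Var[p]
    aleatory = sum(p * (1 - p) for p in ps) / n_samples         # E[p(1-p)]
    return aleatory, epistemic

# With little data the epistemic part is large; with lots of data it shrinks,
# while the aleatory part stays near p(1-p) -- the "baseline" that no amount
# of further forecasting effort can remove.
a_small, e_small = uncertainty_split(heads=3, tails=3)
a_big, e_big = uncertainty_split(heads=3000, tails=3000)
```

The aleatory baseline problem in forecasting is that, unlike in this toy model, we don't get to see the decomposition directly: we only observe total predictive error, not how it splits between the two terms.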
Good find—but it seems pretty sparsely populated, and most consultants at large firms would be tricky to grab one-at-a-time.
[re: FHI Bio] Nonetheless, I’m somewhat surprised by the size of the team. In particular, I imagine that to meaningfully reduce bio-risk, one would need a bigger team. It’s therefore possible that failing to expand is a mistake.
Specifically on the point about FHI’s bio team, as a semi-insider providing information that isn’t on the website but isn’t private, I’ll note that the team is actually larger, in several ways. First, they have summer fellows and Oxford PhD students, not officially hired by FHI, that they work with. They also have people working jointly at/with other organizations (e.g. Piers is at iGEM, as is Tessa, and they both talk with FHI folks a lot. Greg is a CHS ELBI fellow, and works with lots of people at CHS/NTI/etc. I work with/for them as a contractor. And Andrew Snyder-Beattie at OpenPhil used to be in charge of the team, and has coordinated projects with other groups.) Lastly, they have also recently brought on board additional people, not reflected on the web page. (Not sure about timing or announcements, so I won’t say anything.)
There has been discussion about this for FHI, and I have spoken to a number of people there. They do have some specific ideas, but I agree that it would be beneficial for it to be 1) public, 2) explicit, and 3) actually used for evaluation. Unfortunately, I think that doing so would require a lot of work on their part, and it hasn’t been a big priority.
No, but I suspect there is interest in supporting someone to do this, if they are interested and at least somewhat qualified.
Definitely not expert enough to confidently answer this, but I thought the answer was obviously yes—I don’t think there are any diseases where it doesn’t happen as a natural part of response. (Even HIV is mostly fought off quickly, but in cases where it spreads, it infects enough immune cells that it persists and eventually destroys the immune system.)
That’s a good point. I seem to recall that the efficacy of (most) antivirals as prophylaxis against most diseases is approximately nil, and we can’t easily do COMPARE-style studies for prophylaxis, so I’m unsure if, in general, this is a good strategy. (And I don’t think HCTs for trying this out early on using a battery of drugs would be ethical, even ignoring sample size requirements, though perhaps animal studies could be done quickly.) But I definitely think post-exposure prophylaxis is potentially promising, if it’s likely to work. The two challenges are that 1) it requires contact tracing far better than what we saw during COVID—though we often manage such contact tracing for HIV, so it’s not at all impossible—and 2) in most countries, I can’t imagine that the prescription / medical system would adapt fast enough to allow such prescriptions, short of it actually becoming super-competent at response. So if this is a good idea, we need lots of preparation to actually make sure it can be used. Alternatively, I guess it could be used very early on to slow / stop initial spread, but for the cases I’m most concerned about, I don’t know how we’d know enough to try the strategy then.
Thanks so much for the excellent feedback. I’ve updated a bit, but I don’t think we disagree as much as it seems at first glance, or I’m not understanding your position. In general, I think you’re responding about antivirals in general, while I was talking about antivirals specifically as a response option during a nascent pandemic. But I do see a few points of clear disagreement.

1) Biological diversity & over-updating from one disease

Antivirals work poorly everywhere. The “best” antivirals we have for flu, like Tamiflu, don’t have any significant clinical impact, according to all of the studies not run by the company making them. And yes, antivirals are relatively more important for diseases that don’t have vaccines, but as I noted, HIV antiretrovirals are weak and only work slowly and in combinations, and “highly successful” seems like a weird claim given how long it took and how complex it is. And I agree that vaccines aren’t always practical for all diseases, at least yet. But that doesn’t lead me to think that we might be successful with antivirals.

[Edit to add: “The success of COVID vaccines… does not, in my view, imply that they will be a sufficient defense against most or all possible threats.” No, but finding that vaccines don’t work says nothing about the success of other approaches—nothing guarantees that anything works, so pessimism on one front doesn’t justify optimism on another, even if it causes us to invest differently.]

2) Future promise of antivirals vs. current performance

I think I agree with all of this, which is why I think antiviral work should continue to be funded. But none of this makes me think it’s a valuable target for emergency response.

3) Portfolio theory and scientific innovation
Agreed on our inability to pick winners, and the difficulty of choosing exact relative investment amounts—but again, I’m not talking about foundational research, where a diversity of approaches is really important; I’m talking about last-ditch emergency response. We need more and faster COMPARE-like trials for extant treatments, but new drugs seem like a dumb place to put money if we need results this year.
Given the failure of antivirals to work even prophylactically, and the fundamental issues I mentioned, I don’t think that is the key issue.
Yes, treatments definitely ameliorate risks from not finding vaccines—but it seems that effective new treatments were far harder to find than vaccines. And yes, clearly symptomatic treatment with extant drugs is important—dexamethasone, but also prone positioning, and basic parts of treatment like pulse oximetry and ensuring sufficient fluids. But these don’t need 100-day crash research programs for new treatments, which is what was proposed; they need RECOVERY-like trials (perhaps more expansive, covering more parts of clinical care) to start on day 1, instead of waiting months to start.
It’s still unclear, and detection and survival rates in the developing world are a bit uncertain. I think you could probably get a decent approximation by looking at test positivity rates and testing volume compared to death rates over time in different countries, but I’m not going to put together the model to do it. We’re doing something related with IFR estimates by age at 1DaySooner, but using seroprevalence data, i.e. only where there is really good data for the estimate. I don’t have results of that yet.
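For a sense of scale, the crudest version of this kind of approximation (much simpler than a real model, and every number below, including the IFR, is hypothetical) just compares deaths-implied infections to confirmed cases:

```python
# Back-of-the-envelope detection-rate sketch with made-up numbers. Assumes a
# fixed IFR and ignores lags from infection to death; both assumptions
# dominate the answer, so treat the output as order-of-magnitude only.

def implied_detection_rate(confirmed_cases, deaths, ifr=0.005):
    """Deaths / IFR implies total infections; detection = confirmed / implied."""
    implied_infections = deaths / ifr
    return confirmed_cases / implied_infections

# Hypothetical country: 100k confirmed cases, 2,500 deaths, assumed IFR 0.5%.
rate = implied_detection_rate(confirmed_cases=100_000, deaths=2_500)
# rate == 0.2, i.e. roughly 1 in 5 infections detected under these assumptions
```

The test-positivity and testing-volume approach described above is a refinement of this: high positivity with low volume suggests the deaths-implied infection count is the more trustworthy denominator.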
Seems unlikely that <1/3 of all cases were detected at this point, since the recent outbreaks had far higher detection rates than the initial ones.
“And yet lots and lots of people far less credentialed than CHS epidemiologists had correctly figured out by the first week of March that it was smart to wear a mask”Not sure how much this is an answer—as I said in a different response, the question isn’t whether CHS was right (much less right about one specific thing,) but whether they did better overall than the other policy-influencing organizations.