NPI & IFR: thanks, it’s now explained in the text.
Re: Rigour
I think much of the problem is due not to our methods being “unrigorous” in any objective sense, but to interdisciplinarity. For example, in the survey case, we used mostly standard methods from a field called “discrete choice modelling” (btw, some EAs should learn it—it’s a pretty significant body of knowledge on “how to determine people’s utility functions”).
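To give a flavour of what that body of knowledge looks like: the workhorse of discrete choice modelling is the conditional (multinomial) logit, which recovers the weights of a utility function from observed choices. The sketch below is purely illustrative—the data are simulated, the attribute count and coefficients are made up, and a real analysis would use a library such as statsmodels or biogeme rather than hand-rolled gradient descent.

```python
import numpy as np

# Illustrative conditional-logit fit: respondents choose among J
# alternatives; each alternative has k observed attributes; utility
# is linear in attributes plus Gumbel noise (the standard logit setup).
rng = np.random.default_rng(0)
n, J, k = 2000, 3, 2                 # respondents, alternatives, attributes
X = rng.normal(size=(n, J, k))       # attribute matrix (simulated)
beta_true = np.array([1.0, -0.5])    # "true" utility weights (made up)

# Simulate choices: each respondent picks the alternative with
# the highest realized utility.
utility = X @ beta_true + rng.gumbel(size=(n, J))
choice = utility.argmax(axis=1)

def choice_probs(beta):
    """Softmax choice probabilities under the logit model."""
    v = X @ beta
    v -= v.max(axis=1, keepdims=True)         # numerical stability
    expv = np.exp(v)
    return expv / expv.sum(axis=1, keepdims=True)

# Maximum likelihood via plain gradient descent on the analytic score.
beta = np.zeros(k)
for _ in range(500):
    p = choice_probs(beta)
    # Gradient of the negative log-likelihood:
    # sum_i (E_p[x_i] - x_{i, chosen})
    grad = ((p[..., None] * X).sum(axis=1) - X[np.arange(n), choice]).sum(axis=0)
    beta -= 0.1 * grad / n

print(beta)  # estimated weights; close to beta_true when the model is well-specified
```

The point is that recovering “people’s utility functions” from stated or revealed choices is a well-understood estimation problem with standard tooling—just not tooling that epidemic modellers typically encounter.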
Unfortunately, it’s not something commonly found in, for example, the field of “mathematical modelling of infectious diseases”. This makes such a paper harder for journals to review, because ideally they would need several different reviewers for the different parts of the paper. That is unlikely to happen in practice, so reviewers tend either to evaluate everything by the conventions of their own field, or to be critical and dismissive of the parts they don’t understand.
Something similar happens with “forecasting”-based methods. There is published scientific literature on their use, and their track record is good, but before the pandemic there was almost no published literature on using them in combination with epidemic modelling (there is now!).
The second part of the problem is that we were ultimately more interested in “what is actually true” than in what “looks rigorous”. A paper containing a few pages of equations, lots of complex modelling, and many simulations can look “rigorous” (in the sense of the stylized dialogue). If, at the same time, it contains completely and obviously wrong assumptions about the IFR of covid, it will still pass many tests of “rigorousness”, because all it really shows is that “under assumptions that do not hold in our world, we reach conclusions that are irrelevant to our world” (the implication is true). Yet it can have disastrous consequences if used by policymakers who assume something like “research tracks reality”.
Ex post, we can demonstrate that some of our methods (relying on forecasters) were much closer to reality (as judged, e.g., by serological studies) than a lot of the published work.
Ex ante, it was clear this would be the case to many people who understand both academic research and forecasting.
Re: Funding
For the record, EpiFor is a project that has ended and is not seeking any funding. Also, as noted in the post, we were actually offered some funding: just not in a form the university was able to accept, etc.
It’s not like there is one funder evaluating whether to fund IHME or EpidemicForecasting. In my view the problems pointed to here are almost completely unrelated, and I don’t want them to get conflated in some way.