Why did EA organizations fail at preventing the COVID-19 pandemic?

The COVID-19 pandemic was likely due to a lab leak in Wuhan. The question is still up for public debate, but it will likely be settled when the US intelligence community reports on its attempts to gather information about what happened at the Wuhan Institute of Virology between September and December of 2019 and about suspicious activities there around that time.

However, even in the remote chance that this particular pandemic did not happen as a downstream consequence of gain-of-function research, we had good reason to believe that the research was extremely dangerous.

Marc Lipsitch, who should be on the radar of the EA community given that he spoke at EA Boston in 2017, wrote in 2014 about the risks of gain-of-function research:

A simulation model of an accidental infection of a laboratory worker with a transmissible influenza virus strain estimated about a 10 to 20% risk that such an infection would escape control and spread widely (7). Alternative estimates from simple models range from about 5% to 60%. Multiplying the probability of an accidental laboratory-acquired infection per lab-year (0.2%) or full-time worker-year (1%) by the probability that the infection leads to global spread (5% to 60%) provides an estimate that work with a novel, transmissible form of influenza virus carries a risk of between 0.01% and 0.1% per laboratory-year of creating a pandemic, using the select agent data, or between 0.05% and 0.6% per full-time worker-year using the NIAID data.

If we make the conservative assumption that 20 full-time people work on gain-of-function research and take his lower bound of 0.05% per full-time worker-year, that gives us a 1% chance per year of gain-of-function research causing a pandemic. These are conservative numbers, and there's a good chance that the real risk is higher than that.
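To make the arithmetic explicit, here is a minimal sketch in Python using the figures from the quoted passage; the 20-person workforce is this post's own conservative assumption, not a number from Lipsitch's paper.

```python
# Per-worker-year figures from the quoted Lipsitch passage; the headcount
# of 20 is this post's assumption, not a number from the paper.

p_infection = 0.01    # lab-acquired infections per full-time worker-year (NIAID data, 1%)
p_spread_low = 0.05   # lower bound: chance an infection escapes control and spreads (5%)
p_spread_high = 0.60  # upper bound (60%)

# Lipsitch's per-worker-year pandemic risk: 0.05% to 0.6%
risk_low = p_infection * p_spread_low    # 0.0005
risk_high = p_infection * p_spread_high  # 0.006

workers = 20  # assumed full-time gain-of-function workforce

# Chance of at least one pandemic per year, treating worker-years as
# independent trials; at probabilities this small it is close to the
# simple product workers * risk.
low = 1 - (1 - risk_low) ** workers
high = 1 - (1 - risk_high) ** workers
print(f"lower bound: {low:.2%}")   # ~1.0% per year, the figure used above
print(f"upper bound: {high:.1%}")  # ~11.3% per year
```

Even the lower bound gives roughly a 1% annual chance of a pandemic from this one research program; taking the upper bound of the quoted range pushes the figure above 10% per year.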

When the US moratorium on gain-of-function research was lifted in 2017, the big EA organizations that promise their donors to act against Global Catastrophic Risks were asleep and didn't react. That's despite those organizations counting pandemics as a possible Global Catastrophic Risk.

Looking back, this seems like it was easy mode, given that a person in the EA community had already done the math. Why didn't the big EA organizations listen more closely?

Given that they got this so wrong, why should we believe that their other analyses of Global Catastrophic Risks aren't also extremely flawed?