One last guess:
My ideology-of-all-public-officials guess is pretty weak compared to an obvious alternative: simple public-choice herding at the executive level. (200 units instead of a million.)
If governments were each minimising their own reputation loss by (correctly) predicting that they wouldn’t be punished for doing what everyone else was doing, this could be enough to prevent ~all innovation. As much as you want safety in numbers, you doubly don’t want to be the first to take a risk and lose. No entrainment needed, let alone intentional coordination.
(What could explain Israel’s contract being redacted? The dodgy data-sharing agreement? No, that came out. The reputational risk of being seen to have rushed something out in [Autumn 2020]? No: it worked, so why not unredact now?)
I don’t recognise my post in this description. I openly acknowledge that there are bottlenecks, including unknown bottlenecks. I put a 150% interval on the key uncertainty. (I am protected somewhat from Hofstadter’s law there by the reference point of the Braintree facility, with its roughly known lead time.)
It’s not unrealistic to pay weekend overtime or make new weekend hires for regulators, in the biggest health crisis of the century. It’s not unrealistic for a single Chinese scientist to just decide on his own to release the genome he already sequenced, two weeks early. It’s not unrealistic to decide to stop wasting three-quarters of the precious vaccine, once you realise you are doing so and that the rules you set blindly a year ago were arbitrary choices. Approval sharing between rich countries is not unrealistic. It is not totally unrealistic to imagine that a suitably alarmed establishment might throw whole-number fractions of their wealth at solutions, rapidly; the Manhattan project had much worse odds than any COVID vaccine effort. Challenge trials didn’t happen, afaict, because a quite small number of people said so.
To the extent that any of the above were unrealistic, it’s for tractable social reasons, and tractable social reasons serve my point about institutional decision-making.
Moreover, while reference class forecasting is a powerful element in prediction, it often fails when equilibria are not efficient. Policy seems like a perfect example. Yudkowsky’s updated view is closer to my post than to one-step thought-terminating modesty.
It is important to realise how slow vaccine development, and medical innovation in general, is. That’s why I say it. But it is also important to investigate whether that slowness is for hard-to-change reasons, or not.
If you had used your reference class in March 2020, you would have predicted (and many did predict) that the COVID vaccines were going to take 7 years. If that forecast had been taken seriously by the wrong people, as it may well have been, given the sluggishness of the “Warp Speed” initiatives, then a great deal of good would have been lost; and indeed it was.