Thanks for a really great, clear, informative, thoughtful summary!
I wonder what we could do to get some of the next steps happening sooner, since what typically happens is that we wait until we have a pandemic, and by then we won’t have the necessary data and it will be too late.
A big challenge I see (as an engineer) is doing the risk/benefit analysis fairly. As EAs, we tend to imagine a rational world where a technology which saves 100 lives but causes two people to suffer serious eye problems would count as a success. But in the real world, those 100 people won’t even realise their lives have been saved, while those two people will have 50 lawyers calling them up asking them to sue the far-UVC companies and the regulators and officials who approved the system, with sympathetic, uninformed juries ready to award them huge judgments.
The answer to this, for engineers, has always been getting things into standards (as you mention). Instead of arguing individual cases, engineers defend themselves by proving they followed the standards, and the creators of the standards demonstrate the technical rationale and the experimental data supporting it.
Standards, predictably, tend to be very conservative. Very few bridges collapse, and very few planes crash due to mechanical failure. If you did a cost-benefit analysis of relaxing safety standards to the point where we’d have twice as many bridge collapses and mechanical-failure plane crashes, and put all the money saved into GiveWell-recommended charities, it would surely come out massively positive, but no society would be willing to do this! The same challenge could be faced here: even a small risk of negative effects will carry huge weight in any standards. Your idea of far-UVC systems with two settings is intriguing …