Really interesting post! Some ideas:
1) It seems like a lot of the problems around dishonesty are facilitated by having the same organization responsible for determining what the situation requires, installing it, and verifying that it is installed correctly and/or performing adequately. Could we pull these apart into separate entities? For example, when we had hazardous material remediation on our house the verification was a completely separate organization.
2) With UV it seems like a big part of the problem is that it’s hard to tell if the system as installed is doing what it should. With fresh air you can check CO2 levels, but there isn’t something similar for UV. I wonder if it would be possible to develop a cheap test for this, that could be performed as part of standard acceptance testing for a new system? And ideally repeated on some schedule? Perhaps something like finding a gas that UV converts into a different one, releasing it into the building, and then monitoring levels of both that gas and the one UV converts it to?
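To make that concrete, here's a rough sketch of how you might evaluate such a tracer test, assuming a well-mixed room, a hypothetical tracer gas that breaks down by first-order photolysis under UV, and a separately measured air exchange rate (all names and numbers are illustrative, not real products or standards):

```python
import numpy as np

def estimate_uv_decay_rate(times_h, conc, air_exchange_per_h):
    """Estimate the UV-driven decay rate of a tracer gas.

    Assumes a well-mixed room where the tracer decays by first-order
    photolysis (rate k_uv) plus dilution from ventilation (rate lam):
        C(t) = C0 * exp(-(k_uv + lam) * t)
    Fitting log-concentration against time gives the total decay rate;
    subtracting the separately measured air exchange rate leaves k_uv.
    """
    slope, _ = np.polyfit(times_h, np.log(conc), 1)
    total_decay = -slope  # k_uv + lam, per hour
    return total_decay - air_exchange_per_h

# Illustrative numbers only: tracer sampled every 15 minutes
times = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # hours
conc = np.array([100.0, 47.0, 22.0, 11.0, 5.0])  # ppb
k_uv = estimate_uv_decay_rate(times, conc, air_exchange_per_h=1.0)
print(f"UV-driven decay: {k_uv:.2f} per hour")
```

In practice you'd presumably also want a control run with the UV off, so the air exchange rate comes from the same measurement setup rather than a separate estimate.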
3) This seems like it pushes for upper-room UV over in-duct UV, because in the former case you can check that it’s installed properly pretty easily: verify that UV levels are sufficiently low at head height and sufficiently high above that. Sometimes the need to test that lower-room UV levels are sufficiently low is described as a downside of upper-room UV, but maybe it isn’t: if you actually need testing regardless, then testing is more likely to happen and be done correctly in the upper-room case.
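As a sketch of what that acceptance check could look like: measure irradiance at several heights and compare against thresholds. The numbers below are placeholders (something like the ~0.2 µW/cm² eight-hour average often cited for 254 nm exposure at occupied height is the kind of figure you'd plug in, but the real limits would come from the applicable safety standard):

```python
def check_upper_room_uv(irradiance_by_height,
                        occupied_max_uw_cm2=0.2,
                        upper_min_uw_cm2=10.0,
                        occupied_zone_max_m=2.0):
    """Pass/fail check for an upper-room UV install.

    irradiance_by_height: {height in m: measured irradiance in uW/cm^2}
    Thresholds are illustrative placeholders, not standards values:
    irradiance must stay below occupied_max_uw_cm2 everywhere in the
    occupied zone and reach at least upper_min_uw_cm2 above it.
    """
    occupied_ok = all(v <= occupied_max_uw_cm2
                      for h, v in irradiance_by_height.items()
                      if h <= occupied_zone_max_m)
    upper_ok = any(v >= upper_min_uw_cm2
                   for h, v in irradiance_by_height.items()
                   if h > occupied_zone_max_m)
    return occupied_ok and upper_ok

# Example survey: irradiance readings at several heights in the room
readings = {1.0: 0.05, 1.7: 0.1, 2.3: 15.0, 2.6: 22.0}
print("PASS" if check_upper_room_uv(readings) else "FAIL")
```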
Thanks!
Re 1) This is probably a factor, but I’d guess it would have low tractability, and even if completely corrected would have limited impact. This was the basis for CheckMe that I mention in the post, and since then there have been many technological innovations that make crosschecking really simple but have had limited impact. For instance, if properly implemented, digital instruments integrated into platforms like measureQuick would fix a ton of problems, but I don’t see much happening irl with this.
2) Exactly! Not sure about the specific test, but requiring that systems demonstrably operate within parameters doesn’t seem like a crazy ask.
3) Yes, that’s definitely one of the points I was trying to make. If we’re choosing between systems that have the theoretical capacity to work in a highly optimized way but are failure-prone and opaque, vs. systems that work sub-optimally but are readily verified and less failure-prone, then I think we should choose the latter.