[Another hobbyist here]
I agree with Tsunayoshi’s answer.
Another thing to keep in mind is that even the best studies on rapid antigen tests usually compare against PCR tests; that is, if the antigen results agree with PCR in all cases, the sensitivity is reported as 100%. However, the sensitivity of PCR tests themselves is (as far as I can tell) not 100%, and can vary a lot based on factors such as how the sample is collected and transported.
Here’s an article on the issue. Key quote:
Whether a SARS-CoV-2 test detects clinical disease depends on biologic factors, pre-analytic factors, and analytic performance. Someone with a large amount of virus in their nose/throat will have a positive test with a nose/throat swab. However, someone with little to no virus in their nose or throat may have a negative test even if they have virus somewhere else (like the lungs). [...] If no virus is present at the site of collection, the collection fails to get virus in the sample, or the sample is severely degraded from storage or transport (for example baking in the sun on a car dash) then the test will be negative no matter how sensitive the test is.
Then there are studies like Kucirka et al., whose results a later paper summarizes via this graph of false-negative rates in PCR tests:
The study concludes:
If clinical suspicion is high, infection should not be ruled out on the basis of RT-PCR alone, and the clinical and epidemiologic situation should be carefully considered.
I don’t know how trustworthy the Kucirka et al. study is, since the false negative rates it reports are a lot worse than any I’ve seen elsewhere. But I think the upshot is that even “gold-standard” PCR testing is messy, and we shouldn’t take at face value studies that estimate antigen-test sensitivity by comparison to PCR (or we should at least adjust their numbers for PCR’s own imperfect sensitivity).
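To make that adjustment concrete, here’s a back-of-the-envelope sketch (my own illustration, not from any of the studies above). It assumes the antigen test only catches cases that PCR would also catch, so the two sensitivities roughly multiply; the numbers are made up:

```python
# Hypothetical numbers, for illustration only.
pcr_sensitivity = 0.85   # assumed P(PCR positive | truly infected)
antigen_vs_pcr = 0.95    # reported P(antigen positive | PCR positive)

# If antigen positives are (roughly) a subset of PCR positives,
# sensitivity against true infection is approximately the product:
antigen_vs_infection = antigen_vs_pcr * pcr_sensitivity
print(f"antigen sensitivity vs. true infection ~= {antigen_vs_infection:.0%}")  # ~81%
```

So a test advertised as “95% as sensitive as PCR” could plausibly be closer to ~80% sensitive against actual infection, and even the subset assumption is shaky, since an antigen test might occasionally catch a case that PCR misses.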
A different conclusion that I think is reasonable is that RT-PCR tests are a good baseline given competent administration and possibly re-testing. I don’t know enough about the mechanics of testing to evaluate whether a given study does well on this or not.
There’s a variant of attitude (1) which I think is worth pointing out:
(1b) Progress studies is good and we should put resources into it, because it is a good way to reduce X-risk on the margin.
Some arguments for (1b):
Progress studies helps us understand how tech progress is made, which is useful for predicting X-risk.
The wealthier and more stable we are as a civilization, the less likely we are to end up in arms-race-type dynamics.
Some technologies help us deal with X-risk (e.g. mRNA vaccines for pandemic risks, or intelligence augmentation for all risks). This argument only works if progress studies accelerates the ‘good’ types of progress more than the ‘bad’ ones, which seems possible.