I’d love to hear his thoughts on defensive measures for “fuzzier” threats from advanced AI, e.g. manipulation, persuasion, “distortion of epistemics”, etc. Since it seems difficult to delineate when these sorts of harms are occurring (as opposed to benign forms of advertising/rhetoric/expression), it seems hard to construct defenses.
This is related to the concept of mechanisms for collective epistemics, like prediction markets or community notes, which Vitalik praises here. But the harms from manipulation are broader, and could route through “superstimuli”, addictive platforms, etc., beyond just the spread of falsehoods. See the manipulation section here for related thoughts.
I love seeing posts from people making tangible progress towards preventing catastrophes—it’s very encouraging!
I know nothing about this area, so excuse me if my question doesn’t make sense or was addressed in your post. I’m curious what the returns are on spending more money on sequencing, e.g. running the machine more than once a week or running it on more samples. If we were spending $10M a year on sequencing instead of $1.5M, how much lower than 0.2% could the infected fraction be and still trigger an alert?
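To make that concrete, here’s a toy back-of-envelope sketch (the inverse-scaling assumption is mine, not from the post; only the $1.5M/year budget and 0.2% threshold come from the post) of how the detection threshold would shrink if it scaled inversely with sequencing spend:

```python
# Toy back-of-envelope, not from the post: assume the cumulative incidence
# needed to trigger an alert scales inversely with sequencing spend.
baseline_budget = 1.5e6      # $/year on sequencing (from the post)
baseline_threshold = 0.002   # 0.2% of people infected at the point an alert fires

def detection_threshold(budget_per_year):
    """Detectable cumulative incidence under the naive assumption
    threshold ∝ 1 / budget (linear returns to more sequencing)."""
    return baseline_threshold * baseline_budget / budget_per_year

for budget in (1.5e6, 5e6, 10e6):
    print(f"${budget / 1e6:.1f}M/yr -> alert at ~{detection_threshold(budget):.3%} infected")
```

Under that (probably too optimistic) linear assumption, $10M/year would push the threshold down to roughly 0.03%; my question is basically how far the real returns fall short of that.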
Some other questions:
How should I feel about 0.2%? Where does 0.2% fall on the value spectrum between no alert system at all and an alert system that triggers on a single infection?
How many people’s worth of wastewater can be tested with $1.5M of sequencing?
Thanks for the update; it was interesting even as a layperson.