“I think the argument for extinction level events from artificial pathogens is profoundly weak. These complex systems—things like transmissibility, lethality—that’s a hard thing to predict, to actually engineer. Just ask yourself how hard is it for record labels to predict which songs are going to be a hit. Until it hits the market, there’s really no way to know. And that’s a relatively simple situation. Which pathogen is going to be immunologically a hit? You might have dialed in this little detail just a little bit wrong—oh, it killed the patient too fast to transmit it.”
This sounded like an improvised answer rather than a succinct summary of the strongest argument against extinction level events being likely, which is what I think would have been more ideal to include in the video.
In particular, Michael’s answer leaves me wondering how he knows that future technological developments won’t make it much easier to predict which pathogens would be “immunologically a hit”—and the video doesn’t tell me.
A second objection: given a long enough time horizon, why wouldn’t a bad actor engineering many, many pathogens until one of them finally hits be a concern?
Michael Montague, 7:48: