Not sure how to interpret this question, but the interpretation that comes to mind is “there is some risk that bioweapons cause extinction”, in other words “there is a non-infinitesimal probability that bioweapons cause extinction”, in which case yes, that is certainly true.
Or, a slightly stronger interpretation could be “the risk from bioweapons is at least as large as the risk from asteroids”, which I am also pretty confident is true.
We should discuss the question however people interpret it, but when I wrote it, I was wondering whether bioweapons per se can cause extinction or an existential catastrophe. I.e., can bioweapons either:
a) Kill everyone
b) Kill enough of the population, permanently, that we can never achieve much as a species.
I’m not sure about the feasibility of either.
It seems like I interpreted this question pretty differently to Michael (and, judging by the votes, to most other people). With the benefit of hindsight, it probably would have been helpful to define what percentage risk the midpoint (between agree and disagree) corresponds to.[1] It sounds like Michael was taking it to mean ‘literally zero risk’ or ‘1 in 1 million’, whereas I was taking it to mean 1 in 30 (to correspond to Ord’s Precipice estimate for pandemic x-risk).
(Also, for what it’s worth, for my vote I’m excluding scenarios where a misaligned AI leverages bioweapons—I count that under AI risk. (But I am including scenarios where humans misuse AI to build bioweapons.) I would guess that different voters are dealing with this AI-bio entanglement in different ways.)
Though I appreciate that it was better to run the poll as is than to let details like this stop you from running it at all.
This is helpful. If this had actually been for a debate week, I’d have made it ‘more than 5% extinction risk this century’ and (maybe) excluded risks from AI.