[Question] AI+bio cannot be half of AI catastrophe risk, right?

Recently, I have heard an increasing number of prominent voices mention the risk of AI helping to create catastrophic bioweapons. This has been mentioned in the context of AI safety, not in the context of biosecurity. Anecdotally, then, it seems people see a significant portion of AI risk as the risk of an AI somehow being instrumental in causing a civilization-threatening pandemic. That said, I have failed to find even a cursory exploration of how much of AI risk is this risk of AI being instrumental in creating catastrophic bioweapons. Does anyone know of any attempts to quantify the “overlap” between AI and bio? Or could someone please try to do so?

One reason to quantify this overlap: if bio+AI is used as a primary, or at least prominent, example to the public, it seems useful to have some analysis underpinning such statements. Even showing that bio+AI is actually just a small portion of AI risk might be helpful, so that whoever uses this example can also mention that it is just one of many ways AI could end up contributing to harming us. Or, if it is indeed a large portion of AI risk, that could be stated with more clarity.

Other reasons to have such analysis might be:

  1. Helping grantmakers allocate funds. For example, if there is a large overlap, grantmakers currently investing in AI safety might also want to donate to biosecurity interventions likely to help in an “AI-assisted pandemic”.

  2. Helping talent decide which problem to work on. It might, for example, be that policy experts worried about AI safety also want to focus on legislation and policy around biosecurity.

  3. Fostering more cooperation between AI safety and biosecurity professionals.

  4. Helping both AI safety experts and biosecurity professionals know what types of scenarios to prepare for. For example, it could put more emphasis in AI safety work on preventing AI from becoming too capable in biology (e.g. by removing such training material).

  5. Probably other reasons I have not had time to think of.

Relatedly (and I would be very careful in drawing conclusions from this), I just went through the Metaculus predictions for the Ragnarök question series and found that they add up to 132%. Perhaps this indicates overlap between the categories, or perhaps it is just an effect of different forecasters answering the different questions (there seems to be large variation in how many people have forecast on each question).

Let us assume, for the sake of argument, that the “extra” 32 percentage points very roughly represent overlap between the different categories. Then, with very little understanding of the topic, I might guess that perhaps half of the 27% biorisk would also resolve as an AI-caused catastrophe, i.e. roughly 13–14%. Those ~14 percentage points would be counted in both the bio and the AI categories, which would reduce the 32-point excess (132% − 100%) to about 32% − 14% ≈ 18%. Perhaps that remaining ~18% is overlap between AI and nuclear, and possibly other categories. However, this would mean that close to half of the AI risk is biorisk. That seems suspiciously high, but it could at least explain why so many prominent voices use the example of AI + bio when talking about how AI can go wrong. Moreover, if all of these extra 32 points are indeed overlaps with AI, it means there is almost no “pure” AI catastrophe risk, which also seems suspicious. In any case, these are the only numbers I have come across that at least point towards some kind of overlap between AI and bio.
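To make the double-counting arithmetic above easy to check or tweak, here is a minimal sketch in Python. The 132% total and the 27% bio figure come from the Metaculus series as described above; the 50% overlap fraction is purely my guess, not a vetted number:

```python
# Back-of-the-envelope overlap arithmetic. All inputs are placeholder
# figures from the post, not vetted forecasts.

total_of_categories = 1.32  # sum of the Ragnarok sub-question probabilities
excess = total_of_categories - 1.00  # ~0.32: crude proxy for double-counting

p_bio = 0.27                     # catastrophic biorisk figure
assumed_overlap_fraction = 0.5   # guess: half of bio catastrophes are AI-assisted
p_ai_and_bio = p_bio * assumed_overlap_fraction  # ~0.135, counted in both categories

# Excess left to be explained by other overlaps (e.g. AI + nuclear)
remaining_excess = excess - p_ai_and_bio  # ~0.185

print(f"AI+bio overlap: {p_ai_and_bio:.1%}")
print(f"Excess left for other overlaps: {remaining_excess:.1%}")
```

Changing `assumed_overlap_fraction` shows how sensitive the conclusion is: the "almost no pure AI risk" worry only appears if nearly all of the 32-point excess is assigned to AI overlaps.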

Thanks for any pointers or thoughts on this!