AIxBio looks pretty bad and it would be great to see more people work on it
We’re pretty close to having a country of virologists in a data center with AI models that can give detailed and accurate instructions for all steps of a biological attack — with recent reasoning models, we might have this already
These models have safeguards, but they’re trivial to overcome: Pliny the Liberator manages to jailbreak every new model within 24 hours and open-sources the jailbreaks
Open source will continue to be just a few months behind the frontier given distillation and amplification, and open models can be fine-tuned to remove safeguards in minutes for less than $50
People say it’s hard to actually execute the biology work, but I don’t see any bottleneck to bioweapon production that couldn’t be overcome by a bio undergrad with limitless scientific knowledge. On my current understanding, the bottlenecks are knowledge bottlenecks, not manual-dexterity bottlenecks like playing a violin, which takes years of practice
Bio supply chain controls that make it harder to get ingredients aren’t working and aren’t on track to work
So it seems like we’re very close to democratizing (even bespoke) bioweapons. When I talk to bio experts about this, they often reassure me that few people want to conduct a biological attack, but I haven’t seen much analysis behind that claim, and it seems hard to be highly confident.
While we gear up for a bioweapon democracy, it seems that very few people are working on worst-case bio, and most of those who are work on access controls and evaluations. But I don’t expect access controls to succeed, and I expect evaluations to mostly be useful for scaring politicians, in part because the open-source issue means we just can’t give frontier models robust safeguards. The thing most likely to actually work is biodefense.
I suspect that too many people working on GCR have moved into AI alignment and reliability work and too few are working on bio. I suspect bad incentives: AI is the new technology frontier, working with AI builds career capital, and AI work is higher status.
When I talk to people at the frontier of biosecurity, I learn that there’s a clear plan and funding available, but the work is bottlenecked by entrepreneurial people who can pick up a big project and execute on it autonomously (these people don’t even need a bio background). My current guess is that the next 3-5 such people who are undecided about what to do should go into bio rather than AI, in part because AI seems to be more bottlenecked by less generalist skills, like machine learning, communications, and diplomacy.
I think the main reasons EAs are working on AI stuff over bio stuff are that there aren’t many good routes into worst-case bio work (afaict largely due to infohazard concerns around field building), and that the x-risk case for biorisk isn’t very compelling (maybe also due to infohazard concerns around threat models).
I think these are fair points: I agree the infohazard stuff has smothered a lot of talent development and field building, and I agree the case for x-risk from misaligned advanced AI is more compelling. At the same time, I don’t talk to many EAs and people in the broader ecosystem these days who are laser-focused on extinction over GCR; that seems like a small subset of the community. So I expect various social effects, making a bunch more money, and AI being really cool, interesting, and fast-moving are probably a bigger deal than x-risk compellingness simpliciter. Or at least they’ve had a bigger effect on my choices!
But insufficiently successful talent development / salience / comms is probably the biggest thing, I agree.
can you spell out the clear plan? feel free to DM me also
Yup! The highest-level plan is in Kevin Esvelt’s “Delay, Detect, Defend”: use access controls and regulation to delay worst-case pandemics, build a nucleic acid observatory and other tools to detect the genetic sequences of potential superpandemic pathogens, and defend by hardening the world against biological attacks.
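To make the “detect” step a bit more concrete, here’s a minimal sketch of sequence screening: compare incoming reads against a watchlist of flagged fragments by counting shared k-mers. Everything here is a toy assumption of mine (the k-mer size, the match threshold, and the sequences are all made up for illustration); real observatory-style systems use far more sophisticated matching.

```python
# Toy sketch of watchlist-based sequence screening via exact k-mer matching.
# All parameters and sequences below are illustrative assumptions, not real
# screening settings or real pathogen sequences.

K = 6  # toy k-mer length; real screens use much longer windows

def kmers(seq: str, k: int = K) -> set[str]:
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_watchlist(flagged_sequences: list[str]) -> set[str]:
    """Index every k-mer appearing in any flagged sequence."""
    index: set[str] = set()
    for seq in flagged_sequences:
        index |= kmers(seq)
    return index

def screen(read: str, watchlist: set[str], threshold: int = 3) -> bool:
    """Flag a read if it shares at least `threshold` k-mers with the watchlist."""
    return len(kmers(read) & watchlist) >= threshold

# Hypothetical fragment and reads, chosen only to show matching behavior
watch = build_watchlist(["ACGTACGTGGCCA"])
print(screen("TTACGTACGTGGCCATT", watch))  # contains the fragment -> True
print(screen("TTTTTTTTTTTTTTTTT", watch))  # shares no k-mers -> False
```

The design point is just that detection reduces to fast set lookups once the watchlist is indexed, which is what lets monitoring scale to large sequencing volumes.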
The basic defense, as per DDD, is:
Develop and distribute adequate PPE to all essential workers
Make the supply chain robust enough that essential workers can keep distributing food and other essential supplies in the event of a worst-case pandemic
Deploy environmental defenses, like far-UVC, that massively reduce the spread and replication of pandemic pathogens
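As a toy illustration of how environmental defenses act on spread: if an intervention blocks a fraction of transmission events (`efficacy`) in settings covering a fraction of all transmission (`coverage`), the reproduction number scales down proportionally. The numbers below are my own illustrative assumptions, not measured values for far-UVC or any pathogen.

```python
# Back-of-envelope model: an environmental intervention that blocks a
# fraction `efficacy` of transmission in a fraction `coverage` of settings
# scales the reproduction number linearly. Purely illustrative numbers.

def r_effective(r0: float, efficacy: float, coverage: float) -> float:
    """Reproduction number after partial-coverage transmission reduction."""
    return r0 * (1 - efficacy * coverage)

r0 = 4.0  # hypothetical baseline reproduction number
for coverage in (0.0, 0.5, 0.8):
    print(f"coverage={coverage:.1f} -> R_eff={r_effective(r0, 0.9, coverage):.2f}")
```

Even a highly effective intervention leaves R_eff above 1 at partial coverage in this toy model, which is one way to see why the biosecurity plan layers environmental defenses with PPE and supply-chain robustness rather than relying on any single measure.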
IMO “delay” has so far basically failed but “detect” has been fairly successful (though incompletely). Most of the important work now needs to rapidly be done on the “defend” side of things.
There are a lot more details on this, and the biosecurity community now has really good ideas about how to develop and distribute effective PPE and rapidly scale environmental defenses. There’s also growing interest in developing small-molecule countermeasures that can stop pandemics early but are general enough to stop many different kinds of biological attacks. A lot of this is bottlenecked by things like building industrial-scale capacity for defense production and solving logistics around supply-chain robustness and PPE distribution. Happy to chat more details or put you in touch with people better suited than me if it’s relevant to your planning.