I bet a number of generalist EAs (people who are good at operations, conceptual research / analysis, writing, generally getting shit done) should probably switch from working on AI safety and policy to working on biosecurity on the current margin.
While AI risk is a lot more important overall (on my view there's ~20-30% x-risk from AI vs. ~1-3% from bio), it seems like bio is a lot more neglected right now, and there's a lot of pretty straightforward object-level work to do that could take a big bite out of the problem (something that's much harder to come by in AI, especially outside of technical safety).
If you're a generalist working on AI because it's the most important thing, I'd encourage you to seriously consider making the switch. A good place to start could be applying to work with my colleague ASB to help our bio team seed and scale organizations working on stuff like pathogen detection, PPE stockpiling, and sterilization tech. IMO switching should be especially appealing if:
You find yourself unsatisfied by how murky the theories of change are in AI world and how hard it is to feel good about whether your work is actually important and net positive
You have a hard sciences or engineering background, especially mechanical engineering, materials science, physics, etc. (or of course a background in biology, though that's less necessary/relevant than you may assume!)
You want a vibe of solving technical problems with strong feedback loops rather than a vibe of doing communications and politics, but you’re not a good fit for ML research
To be clear, bio is definitely not my lane and I don’t have super deep thinking on this topic beyond what I’m sharing in this quick take (and I’m partly deferring to others on the overall size of bio risk). But from my zoomed-out view, the problem seems both very real and refreshingly tractable.
I'm largely deferring to ASB on these numbers, so he can potentially speak in more detail, but my guess is that they include AI-mediated misuse and accidents (people using LLMs or bio design tools to invent nastier bioweapons and then deliberately or accidentally releasing them) but exclude misaligned AIs using bioweapons as a tactic in an AI takeover attempt. Since the biodefense work could also help with the latter, the importance ratio here probably stacks the deck somewhat in favor of AI (though I don't think it's a giant skew, because bioweapons are just one path to AI takeover).
ASB has pretty short ASI timelines that are broadly similar to mine, and these numbers take that into account.
If you feel moved by these things and are a good fit to work on them, that's a much stronger reason to work on AI over bio than most people have. But the vast bulk of generalist EAs working on AI are working on AI takeover and more mundane misuse, where the comparison to bio feels pretty apples-to-apples.