Insofar as AI x-risk comes from LLM-like systems while awesome stuff like medicine (and robotics and autonomous vehicles and more) doesn’t depend on LLMs, caution on LLMs doesn’t delay the other awesome stuff.* So when you talk about slowing AI progress, make it clear that you only mean AI on the path to dangerous capabilities.
AI biologists seem extremely dangerous to me: something “merely” as good at viral genomes as GPT-4 is at language would already be an existential threat to human civilization, if not necessarily to Homo sapiens.