Great question! If AI kills us in the next few years, it seems likely it would be via a pathway to power that is currently accessible to humans, with the AI acting as a helper and accelerator of human actions.
The top two existential risks that meet that criterion are an engineered bioweapon and a nuclear exchange.
Currently, there is a great deal of research into how LLMs can assist work across a broad range of fields, with good results: performance comparable to a human specialist, including on creativity tasks that identify new possibilities. Nothing human researchers can't do, but the speed and low cost of these models are already surprising and likely to accelerate many fields.
For bioweapons, the risk I see is direct development, with the AI acting as an assistant to a human-led effort. The specific bioengineering skills needed to create an AI-designed pathogen are scarce, but the equipment isn't.
How could an AI accelerate nuclear risk? One path I could see is again AI-assisted and human-led, except here the AI shapes social media content and attitudes to increase global tensions. This seems less likely than the bioweapon option.
I like this take: if AI is dangerous enough to kill us within three years, no feasible amount of additional interpretability research would save us.
Our efforts should instead go to limiting the amount of damage that initial AIs could do. That might involve securing dangerous human-controlled technologies. It might involve creating clever honeypots to catch unsophisticated-but-dangerous AIs before they can fully get their act together. It might involve lobbying for processes or infrastructure to quickly shut down Azure or AWS.
What others are there?