Both examples show how we can act to reduce the catastrophe rate over time, but three key risk factors also apply upward pressure on it:
The lingering nature of present threats
Our ongoing ability to generate new threats
Continuously lowering barriers to entry/access
In the case of AI, it’s usually assumed that AI will be either aligned or misaligned, meaning this risk is either solved or not. It’s also possible that AI may be aligned initially and become misaligned later[11]. Protection from harmful AI would therefore need to be ongoing. In this scenario we’d need systems in place to stop AI being misappropriated or manipulated, similar to how we guard nuclear weapons from dangerous actors. This is what I term “lingering risk”.
I just want to flag one aspect of this I haven’t seen mentioned: much of this lingering risk naturally grows with population, since there are more potential actors. If the per-century risk from 10 BSL-4 labs is acceptable, the risk from 100 labs might be too much. If the risk from one pair of nuclear rivals in a cold war is acceptable, a 20-way cold war could require much heavier policing to keep risk at the same level. A toy model of this scaling is sketched below. I expanded on this in a note here.
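As a rough illustration of why risk grows with the number of actors, here is a minimal sketch assuming each actor independently carries some small per-century catastrophe probability (the independence assumption and the 0.1% figure are mine, purely for illustration, not from the original argument):

```python
# Toy model: with n independent actors each carrying a small per-century
# catastrophe probability p, the aggregate risk is 1 - (1 - p)**n, which
# grows roughly linearly in n while p is small. Rivalry-driven risk scales
# even faster, with the number of rival pairs n*(n-1)/2.
from math import comb


def aggregate_risk(p: float, n: int) -> float:
    """Probability of at least one catastrophe per century,
    given n independent actors each with per-century probability p."""
    return 1 - (1 - p) ** n


def rival_pairs(n: int) -> int:
    """Number of distinct rival pairs among n powers."""
    return comb(n, 2)


# 10 labs at an assumed 0.1% each vs 100 labs: aggregate risk rises
# roughly tenfold (about 1.0% -> 9.5% per century).
print(aggregate_risk(0.001, 10), aggregate_risk(0.001, 100))

# One pair of cold-war rivals vs a 20-way standoff: 1 pair -> 190 pairs.
print(rival_pairs(2), rival_pairs(20))
```

Under these assumptions, holding total risk constant while the number of actors grows requires driving each actor’s individual risk down roughly in proportion, which is the intuition behind “much heavier policing”.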