I completely agree with this position, but my take is different: nuclear war risk is high all the time, and all geopolitical and climate risks can increase it. It is perhaps not existential for the species, but it certainly is for civilization. Given this, it is the top risk for me, and to some extent all efforts toward progress, political stabilization, and climate risk mitigation are modestly important in themselves and massively important for their effect on nuclear war risk.
Now, the problem with AI risk is that our understanding of why and how AI works is limited. If my understanding is correct, we have built AlphaZero mainly by growing it, not by designing it. We really don’t understand “how it works”. The “black box risk” is huge, and until we have a better theoretical understanding of AI, all efforts will be mostly useless. The “information bottleneck principle” was an attempt at this, but interest in it faded. I don’t think other generalizing principles have been proposed, but I am a user, not a developer, so I could be wrong.
Would you mind writing a bit more about the connection between climate change and nuclear risk?
It increases large-scale migration, political instability, drought… and these create geopolitical instability and raise the probability of conventional war, revolution, etc., and those events can easily trigger a nuclear war as long as a nuclear power is involved. Do the math: around a third of the world’s population lives in nuclear-armed nations.
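As a quick back-of-the-envelope check of that “do the math” claim, here is a minimal sketch summing rough population figures for the nine nuclear-armed states. The numbers are my own order-of-magnitude approximations (roughly 2023, in millions), not figures from the original comment; with them the share actually comes out closer to half than a third, which only strengthens the point.

```python
# Back-of-the-envelope: share of world population living in nuclear-armed states.
# Population figures (millions, ~2023) are rough assumptions for illustration,
# not sourced data.
populations_millions = {
    "China": 1400,
    "India": 1430,
    "United States": 335,
    "Pakistan": 235,
    "Russia": 145,
    "United Kingdom": 67,
    "France": 68,
    "North Korea": 26,
    "Israel": 9,
}

world_population_millions = 8000  # roughly 8 billion people worldwide

nuclear_total = sum(populations_millions.values())
share = nuclear_total / world_population_millions

print(f"People in nuclear-armed states: ~{nuclear_total / 1000:.1f} billion")
print(f"Share of world population: ~{share:.0%}")
```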
Nuclear war turns a historical risk into something that can have consequences on geological timescales. I don’t believe there is really any other existential risk (on the timescale of decades) other than nuclear war. There is only one “precipice” but many ways to fall into it.