As for existential threats, my skepticism comes down to a few reasons; I will make a more detailed post about it later. First off, I believe very few things are existential threats to humanity itself. Humans are extremely resilient and live in every nook and cranny on Earth. Even total nuclear war would have plenty of survivors. As far as I can see, only an asteroid or aliens could wipe us out unexpectedly. AI could wipe out humanity, but I believe it would be a voluntary extinction in that case: future humans may come to believe that AI has qualia and is much more efficient at creating utility than biological life. I cannot imagine future humans being so stupid as to have an AI connected to the internet and, at the same time, a robot army able to be hijacked by that AI.
I do believe there is an existential threat to civilization, but it is not present yet, and we will be capable of self-sustaining colonies off Earth by the time it arises (meaning that accelerating spaceflight would itself be a form of existential threat reduction). Large portions of Africa, and smaller portions of the Americas and Asia, are not yet at a civilizational level where a collapse is possible, though they will likely cross that threshold this century. If there is a global civilizational collapse, I do not think civilization would ever return. However, there are far too many unknowns as to how to meaningfully avoid such a collapse. Want to prevent a civilization-ending nuclear war? You could try to bolster the power of the weaker side to force a cold war. Or maybe you want to make the sides more lopsided so that intimidation will be enough. But since we do not know which strategy is more effective, and they call for opposite courses of action, there is no way to know whether you would be increasing existential threats or reducing them.
Lastly, most existential threat reduction is political by nature. Politics is extremely unpredictable and extremely hard to influence even if you know what you are doing. It has incredibly strong driving forces behind it, such as nationalism, desperation/fear, and corruption, and these forces can easily drown out philosophy and the idea of long-term altruism. People want to win before they do good, and largely believe they must win in order to do the most good.
TLDR: I believe most “existential threats” are either not existential or not valid threats; those that do exist have no knowable way to be minimized; and the political nature of most forms of existential threat reduction makes them nearly impossible to influence in the name of long-term altruism.
Just because something is difficult doesn’t mean it isn’t worth trying to do, or at least trying to learn more about so you have some sense of what to do. Calling something “unknowable”, when the penalty for not knowing it is “civilization might end with unknown probability”, is a claim that should be challenged vociferously, because if it turns out to be wrong in any aspect, that’s very important for us to know.
I cannot imagine future humans being so stupid as to have an AI connected to the internet and, at the same time, a robot army able to be hijacked by that AI.
I’d recommend reading more about how people worried about AI conceive of the risk; I’ve heard zero people in all of EA say that this scenario is what worries them. There are many places you could start: Stuart Russell’s “Human Compatible” is a good book, but there’s also the free Wait But Why series on superintelligence (plus Luke Muehlhauser’s blog post correcting some errors in that series).
There are many good reasons to think that AI risk may be fairly low (this is an ongoing debate in EA), but before you say one side is wrong, you have to understand what they really believe.
I have not seen that, but I will check it out.