Let me rephrase this in a deliberately inflammatory way: if you’re under ~50, unaligned AI might kill you and everyone you know.
I’m not sure how we can expect the public, or even experts, to meaningfully engage a threat as abstract, speculative and undefined as unaligned AI when very nearly the entire culture, experts of all kinds included, relentlessly ignores nuclear weapons, a threat that is easy to understand and could literally kill us all right now, today, before we sit down to lunch.
What I learned from studying nuclear weapons as an average citizen is that there’s little evidence that intellectual analysis is capable of delivering us from this ever-present existential threat. Nearly everyone already knows the necessary basic facts about nuclear weapons, and yet we barely even discuss the threat, even in presidential campaigns, where we are selecting a single human being to have sole authority over the use of these weapons.
People like us are on the wrong channel when it comes to existential threats. Human beings don’t learn such huge lessons through intellectual analysis; we learn through pain, if we learn at all. For example, even though European culture represents a kind of pinnacle of rational thought, it relentlessly warred upon itself for centuries, and stopped only when the pain of WWII became too great to bear and the threat of nuclear annihilation left no room for further warring. Even then, some people didn’t get the message, and have returned to reckless land-grab warring today.
The single best hope for escaping the nuclear threat is a small-scale nuclear terrorist strike on a single city. Seventy years of failure proves that we’re never going to truly grasp the nuclear threat through facts and reason. We’re going to have to see it for ourselves. The answer is not reason, but pain.
This is bad news for the AI threat, because by the time that threat is converted from abstract to real, and we can see it with our own eyes and feel the pain, it will likely be too late to turn back.