Interesting tack at the problem!
As a skeptic of AI as the main (certainly as the only) existential risk to focus on, I do agree there are “vibe” similarities with apocalyptic religious claims. The same applies to climate change discussions outside of EA. My guess as to why would be that it is almost impossible to discuss existential risks purely rationally—we cannot help but feel anxiety and other strong negative emotions at the prospect of dying, which can cloud our judgment.
Ultimately, however, I do not view that as a sufficient reason to dismiss existential risk claims.
If you’ll pardon me for the somewhat somber example—take a hypochondriac who becomes obsessed with the idea they might die from lung cancer. They quit smoking and start doing all the right things. However, they eventually die in a car accident that could have been averted had they been wearing a seatbelt.
What can we conclude from this scenario?
1. That person was right to worry about existential risk. Yes, while they were alive, the very fact that they were talking about existential risk showed that their fears hadn’t yet materialized. But they did die from an existential risk in the end.
2. That person was right to worry about existential risks from lung cancer. Smoking does help cause lung cancer. Had they survived the accident but not quit smoking, who knows what would have happened.
3. That person was wrong not to worry about existential risks from car accidents.
4. That person was probably wrong to obsess over only one existential risk out of many.
5. That person probably would have been better off not living in fear. They could have enjoyed themselves while living wisely and prudently.
What we cannot conclude from this scenario:
1. Humans cannot die.
2. Humans cannot die from lung cancer.
3. I can smoke ten packs a day for decades without fear of consequence.
I don’t think the vibe of climate apocalyptic claims is analogous to apocalyptic religious claims in the way apocalyptic AI is. This is because climate change is a process, while religious apocalypses and apocalyptic AI are events. Whether or not that is enough reason to dismiss the claims is subjective, but it should decrease their credibility to some degree.
I am confused as to how your thought experiment interacts with the argument in this paper. If no one had ever died of lung cancer before, the man would be irrational, no?
To be clear, the argument of the thought experiment was more that “just because someone is being a bit of a maniac about an existential risk does not mean that they’re wrong or that the existential risk does not exist.” So that’s why I took an example of a risk we know can happen—the existential risk to one human. It was not an attempt at a full analogy to AI X-risk.
It is true that the difference between humans dying and the entire human species going extinct is that we can know that humans have died in the past without dying ourselves.
So if we’re going for an analogy here, it is a scenario in which no one has yet died of X, but there is a plausible reason to believe that X can happen, X is somewhat likely to happen (as likely as these things can be anyway), and if X happens, it can cause death.
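To make the structure of that claim explicit, it decomposes into two quantities (the numbers below are purely illustrative placeholders, not estimates of anything):

$$P(\text{death from } X) = P(X \text{ occurs}) \times P(\text{death} \mid X \text{ occurs})$$

So even a novel risk with, say, an illustrative $P(X \text{ occurs}) = 0.1$ and $P(\text{death} \mid X \text{ occurs}) = 0.5$ would carry a 5% chance of death—plenty to take seriously despite there being no prior fatalities.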
I would argue that if you can establish that well enough, the claim is worth taking seriously regardless of “weirdness”/apocalyptic overtones, which are somewhat excusable due to the emotions involved in fear of death. Of course, if the claim can be made without them, even better!
I agree with all of this. The argument is not that AI risk claims are wacky or emotional and therefore should be considered untrustworthy. A claim being wacky or emotional, in my opinion, doesn’t affect its epistemic credibility. It is that they directly parallel other apocalyptic claims with which they both share a history and co-exist within the same culture (those being Christian apocalyptic beliefs). Additionally, this is not proof that AI risk isn’t real; it is merely a reason to think it is less epistemically credible.
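To put the “less credible, but not disproven” structure in Bayesian terms (every number here is made up purely for illustration): if apocalyptic-style framing shows up more often around false doom claims than true ones, then observing that framing should lower your credence without driving it to zero. For instance, with a prior $P(\text{real}) = 0.2$, $P(\text{framing} \mid \text{real}) = 0.5$, and $P(\text{framing} \mid \text{not real}) = 0.9$:

$$P(\text{real} \mid \text{framing}) = \frac{0.5 \times 0.2}{0.5 \times 0.2 + 0.9 \times 0.8} \approx 0.12$$

The credence drops from 0.20 to roughly 0.12: evidence against, but nowhere near proof.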
Regardless of the reality of AI risk, this is a reason that people will justifiably distrust it.