I don’t think the vibe of climate apocalyptic claims is analogous to apocalyptic religious claims in the way apocalyptic AI is. This is because climate change is a process, while religious apocalypses and apocalyptic AI are events. Whether or not that is enough reason to dismiss the claims is subjective, but it should decrease their credibility to some degree.
I am confused as to how your thought experiment interacts with the argument in this paper. If no one had ever died of lung cancer before, the man would be irrational, no?
To be clear, the argument of the thought experiment was more that “just because someone is being a bit of a maniac about an existential risk does not mean that they’re wrong or that the existential risk does not exist.” So that’s why I took an example of a risk we know can happen—the existential risk to one human. It was not an attempt at a full analogy to AI X-risk.
It is true that the difference between humans dying and the entire human species going extinct is that we can know that humans have died in the past without dying ourselves.
So if we’re going for an analogy here, it is a scenario in which no one has yet died of X, but there is a plausible reason to believe that X can happen, X is somewhat likely to happen (as likely as these things can be anyway), and if X happens, it can cause death.
I would argue that if you can establish that well enough, the claim is worth taking seriously regardless of “weirdness”/apocalyptic overtones, which are somewhat excusable due to the emotions involved in the fear of death. Of course, if the claim can be made without them, even better!
I agree with all of this. The argument is not that AI risk claims are wacky or emotional and therefore should be considered untrustworthy. A claim being wacky or emotional, in my opinion, doesn’t affect its epistemic credibility. It is that they directly parallel other apocalyptic claims that they both share a history with and co-exist within the same culture (those being Christian apocalyptic beliefs). Additionally, this is not proof that AI risk isn’t real; it is merely a reason to think it is less epistemically credible.
Regardless of the reality of AI risk, this is a reason that people will justifiably distrust it.