I think you are one of the few people who disregard x-risk and have a well-considered probability estimate at which it makes sense to do so. (Modulo some debate around how to handle tiny probabilities of enormous outcomes.)
I was more intending to critique the sort of people who say “AI risk isn’t a concern” without having any particular P(doom) in mind, which in my experience is almost all such people.
Thanks, Michael. I agree AI risk should not be dismissed without looking into how large it is. On the other hand, there is no obvious relationship between existential risk and the cost-effectiveness of decreasing it. Cost-effectiveness falls as the risk rises, because a higher persistent risk decreases the expected value of the future being protected, unless the risk is concentrated in a time of perils. In addition, a higher risk of human extinction does not necessarily imply a higher existential risk, because some AI systems may well be sentient, in which case a future with AI but without humans would not necessarily be an existential catastrophe.
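To make the cost-effectiveness point concrete, here is a minimal sketch. The assumptions (a constant annual extinction risk r and a constant annual value v conditional on survival) are mine for illustration, not anything stated in the thread:

```latex
% Toy model (illustrative assumptions): constant annual extinction risk r,
% with value v accruing in each year humanity survives.
\[
  V(r) = \sum_{t=1}^{\infty} v\,(1-r)^{t}
       = v\,\frac{1-r}{r}
       \approx \frac{v}{r}
  \quad \text{for small } r.
\]
% The benefit of a small absolute reduction \delta in the annual risk is then
\[
  V(r-\delta) - V(r) \approx \frac{v\,\delta}{r^{2}},
\]
% which shrinks as r grows: under persistent risk, a riskier world has a less
% valuable future to protect. By contrast, if the risk is concentrated in a
% brief time of perils followed by a long low-risk era of value V_post, the
% benefit of the same reduction is roughly \delta * V_post, which does not
% diminish with higher background risk.
```

In this toy model the 1/r scaling of the future's value is what drives down the payoff of marginal risk reduction as persistent risk rises, while the time-of-perils case decouples the two.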