I think this misunderstands what people mean when they compare arguments about the importance of AI safety to a Pascal’s wager.
Pascal’s wager refers to situations where a tiny probability of enormous value seemingly leads to absurd conclusions if you try to do naive expected value calculations with it. When people say that strong longtermism is a Pascal’s wager, the “small probability” they are talking about is not the probability of extinction, which, as you point out, is significant. The small probability is the probability that the future will contain “septillions of future sapients”. That is the probability that is small. And it gets even smaller if the probability of extinction soon is high! So a large probability of extinction this century makes the Pascal’s wager comparison more relevant as a critique of strong longtermism, not less. It is multiplying this small probability by the value of those septillions of potential “sapients” that produces the astronomical expected value which says existential risk reduction should almost automatically dominate our concerns.
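To make that multiplication concrete, here is a minimal sketch with purely illustrative numbers (neither figure comes from the original discussion): suppose you assign only a one-in-a-million probability to a future containing $10^{24}$ (a septillion) sapients. The naive expected value calculation still yields

$$\mathbb{E}[\text{value}] = \underbrace{10^{-6}}_{P(\text{septillions of sapients})} \times \underbrace{10^{24}\ \text{lives}}_{\text{value if realized}} = 10^{18}\ \text{lives},$$

a figure so large that, on this logic, even a minuscule reduction in extinction risk would outweigh any benefit to the next few generations. It is this multiplication step, not the extinction probability itself, that the Pascal’s wager critique targets.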
I think you’re completely right to point out that people should care a lot about things that might carry a 10% chance of causing human extinction, regardless of their stance on longtermism. But some people believe that reducing existential risk has astronomically more value than just its impact on the next few generations, and that therefore tiny changes in the probability of existential risk almost automatically trump any other concern, however small those changes are. When people talk about Pascal’s wager in the context of strong longtermism or AI safety, I think it is this claim that they are challenging, not the claim that we should care about extinction at all. And that criticism is just as valid, indeed more valid, if the probability of extinction from AI is high (though I of course agree that anyone who uses the Pascal’s wager argument to dismiss all work on AI risk is making a serious mistake).