I think this is a reasonable point of disagreement. Though, as you allude to, it is separate from the point I was making.
I do think it is generally very important to distinguish between:
1. Advocacy for a policy because you think it would have a tiny impact on x-risk, and that this tiny impact outweighs all the policy’s other side effects, including potentially massive near-term harms, because reducing x-risk simply outweighs every other ethical priority by many orders of magnitude.
2. Advocacy for a policy because you think it would have a moderate or large effect on x-risk, and is therefore worth doing because reducing x-risk is an important ethical priority (even if it isn’t, say, one million times more important than every other ethical priority combined).
I’m happy to debate (2) on empirical grounds, and to debate (1) on ethical grounds. I think the ethical philosophy behind (1) is quite dubious: it resembles the kind of reasoning that is vulnerable to Pascal’s mugging. The ethical philosophy behind (2) seems sound, but its empirical basis is often uncertain.
I’m curious how many EAs believe this claim literally, and think a 10-million-year pause (assuming it’s feasible in the first place) would be justified if it reduced existential risk by a single percentage point. Given the disagree-votes on my other comments, it seems a fair number might in fact endorse the literal claim here.
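For concreteness, here is a back-of-the-envelope version of the calculation I take the literal reading to rest on. It is only a sketch: the assumption that future value accrues roughly uniformly over a horizon of $T$ years, and the illustrative values of $T$, are mine, not anyone’s considered estimate.

$$\underbrace{0.01 \cdot V}_{\text{benefit: 1 pp less x-risk}} \quad \text{vs.} \quad \underbrace{\tfrac{10^7}{T} \cdot V}_{\text{cost of a } 10^7\text{-year delay}}$$

With $V$ the total future value at stake, the two sides break even at $T = 10^9$ years; at cosmological horizons ($T \gtrsim 10^{11}$ years) the benefit dominates by two or more orders of magnitude, which is why the undiscounted calculation endorses the pause.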
Given that I disagree we should take these numbers literally, I think it might be worth writing a post about why we should adopt a pragmatic non-zero discount rate, even from a purely longtermist perspective.
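As a preview of the kind of argument I have in mind (with my own illustrative numbers, not a worked-out position): under standard exponential discounting, a payoff at time $t$ is weighted by

$$w(t) = e^{-rt}, \qquad \text{so for } r = 10^{-4}/\text{yr}: \quad w(10^7\ \text{yr}) = e^{-1000} \approx 10^{-434}.$$

Even that tiny $r$ makes value ten million years out effectively weightless, so the verdict of the undiscounted calculation above reverses: the near-term costs of the pause swamp the one-percentage-point reduction in x-risk.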