Hi Owen!
Re: inoculation against criticism. Agreed that it doesn’t make criticism impossible in every sense (otherwise my post wouldn’t exist). But if one reasons with numbers only (i.e., expected-value reasoning), longtermism becomes unavoidable. As soon as one adopts what I’m calling “Bayesian epistemology”, there’s very little room to argue with it. One can retort: well, yes, but there’s very little room to argue with General Relativity, and that is a strength of the theory, not a weakness. The difference is that GR is very precise: it’s hard to argue with because it aligns so well with observation, yet there are plenty of observations that would refute it (if light didn’t bend around stars, say). Longtermism is difficult to refute for a different reason, namely that it’s so easy to change the underlying assumptions. (I’m not trying to equate moral theories with empirical theories in every sense, but I think this example gets the point across.)
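To make the “numbers only” point concrete, here’s a toy calculation; every figure in it is an invented placeholder, not anything from my post or from Greaves and MacAskill:

```python
# Toy illustration of why pure expected-value reasoning tends to favour
# longtermist interventions. All numbers are invented placeholders.

expected_future_lives = 1e16   # assumed size of the long-run future
risk_reduction = 1e-10         # assumed tiny reduction in extinction risk
ev_longtermist = expected_future_lives * risk_reduction  # expected lives saved

ev_neartermist = 1e3           # assumed benefit of a strong near-term intervention

print(ev_longtermist, ev_neartermist)  # 1000000.0 vs 1000.0
# The longtermist option "wins" by three orders of magnitude, and if it ever
# didn't, the assumed inputs could simply be revised upward, which is exactly
# the difficulty with arguing against it.
```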
Your second point does seem correct to me. I think I try to capture this sentiment when I say:
Greaves and MacAskill argue that we should have no moral discount factor, i.e., a “zero rate of pure time preference”. I agree, but this is beside the point. While time is morally irrelevant, it is relevant for solving problems.
Here I’m granting that the moral view that future generations matter could be correct. But this, on my problem/knowledge-focused view of progress, is irrelevant for decision making. What matters is maintaining the ability to solve problems and correct our (inevitable) errors.
Well, far be it from me to tell others how to spend their time, but I guess it depends on what the goal is. If the goal is to literally put a precise number (or range) on the probability of nuclear war before 2100, then yes, I think that’s a fruitless and impossible endeavour. History is not an i.i.d. sequence of events. If there is such a war, it will be the result of complex geopolitical factors based on human beliefs, desires, and knowledge at the time. We cannot pretend to know what these will be. Even if you were to gather all the available evidence we have on nuclear near misses and generate some sort of probability from it, the answer would look something like:
“Assuming that between now and 2100 the world looks much as it did during the time of past nuclear near misses, and that near misses are distributionally similar to actual nuclear strikes, and [a bunch of other assumptions], the probability of a nuclear war before 2100 is x”.
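For concreteness, here’s roughly what such an exercise could look like. This is a deliberately crude sketch, not anyone’s actual model; the near-miss count, the observation window, the escalation fraction, and the independence/stationarity assumptions are all made up for illustration:

```python
# A deliberately crude sketch of the kind of model described above.
# Every number and modelling choice is an assumption made for illustration.

years_observed = 2024 - 1945   # assumed observation window since 1945
near_miss_years = 15           # assumed count of years containing a near miss

# Treat each year as an independent trial and apply Laplace's rule of
# succession: (successes + 1) / (trials + 2).
p_near_miss_per_year = (near_miss_years + 1) / (years_observed + 2)

# Further assumption: a fixed fraction of near misses would have escalated.
escalation_fraction = 0.1
p_war_per_year = p_near_miss_per_year * escalation_fraction

years_remaining = 2100 - 2024
p_war_by_2100 = 1 - (1 - p_war_per_year) ** years_remaining

print(f"P(nuclear war before 2100) = {p_war_by_2100:.2f}")
# The answer is driven almost entirely by the assumptions: stationarity of the
# geopolitical environment, independence across years, the escalation fraction,
# and the near-miss count itself. Change any of them and the output swings widely.
```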
We can debate the merits of such a model, but I think it’s clear that it would be of limited use.
None of this is to say that we shouldn’t be working on the nuclear threat, of course. There are good arguments for why this is a big problem that have nothing to do with probabilities and subjective credences.