Hi Jack,
I think you’re right that the comparison to astrology isn’t entirely fair. But sometimes one has to stretch a little to make a point. And the point, I think, is important: these estimates can be manipulated and changed all too easily to fit a narrative. Why not half a quadrillion, or 10 quadrillion people in the future?
On the falsifiability point, I agree that the claims are technically falsifiable. I struggled with the language for this reason while writing it (Max Heitmann helpfully tried to make this point before, but apparently I ignored him). In principle, all of their claims are falsifiable (if we go extinct, then sure, I guess we’ll know how big the future will be). Perhaps it’s better if I write “easily varied” or “amenable to drastic change” in place of “irrefutable” or “unfalsifiable”?
The great filter example is interesting, actually. For if we’re working in a Bayesian framework, then surely we’d assign such a hypothesis a probability. And then the number of future people could again be vast in expectation.
Hi Owen!
Re: inoculation against criticism. Agreed that it doesn’t make criticism impossible in every sense (otherwise my post wouldn’t exist). But if one reasons with numbers only (i.e., EV reasoning), then longtermism becomes unavoidable. As soon as one adopts what I’m calling “Bayesian epistemology”, there’s very little room to argue with it. One can retort: Well, yes, but there’s very little room to argue with General Relativity, and that is a strength of the theory, not a weakness. But the difference is that GR is very precise: it’s hard to argue with because it aligns so well with observation, and there are many observations that would refute it (if light didn’t bend around stars, say). Longtermism is difficult to refute for a different reason, namely that it’s so easy to change the underlying assumptions. (I’m not trying to equate moral theories with empirical theories in every sense, but I think this example gets the point across.)
Your second point does seem correct to me. I think I try to capture this sentiment when I say:
Here I’m granting that the moral view that future generations matter could be correct. But this, on my problem/knowledge-focused view of progress, is irrelevant for decision making. What matters is maintaining the ability to solve problems and correct our (inevitable) errors.