Hi Jack,

I think you’re right, the comparison to astrology isn’t entirely fair. But sometimes one has to stretch a little bit to make a point. And the point, I think, is important. Namely, that these estimates can be manipulated and changed all too easily to fit a narrative. Why not half a quadrillion, or 10 quadrillion, people in the future?
On the falsifiability point—I agree that the claims are technically falsifiable. I struggled with the language for this reason while writing it (and Max Heitmann helpfully tried to make this point before, but apparently I ignored him). In principle, all of their claims are falsifiable (if we go extinct, then sure, I guess we’ll know how big the future will be). Perhaps it’s better if I write “easily varied” or “amenable to drastic change” in place of irrefutable/unfalsifiable?
The great filter example is interesting, actually. For if we’re working in a Bayesian framework, then surely we’d assign such a hypothesis a probability. And then the number of future people could again be vast in expectation.
And the point, I think, is important. Namely, that these estimates can be manipulated and changed all too easily to fit a narrative. Why not half a quadrillion, or 10 quadrillion people in the future?
The fact that the estimates can be manipulated and changed doesn’t strike me as much of a criticism. The more relevant question is whether people actually do manipulate and change them to fit their narrative. If they do, we should call out those particular people, but even then I don’t think it would be an argument against longtermism generally, just against the particular arguments these ‘manipulators’ put forward.
The authors do at least set out their assumptions for the one quadrillion figure, which they call their conservative estimate. For example, one input is an estimate, cited from another academic text, that Earth will likely remain habitable for another 1 billion years. Now, I’m not saying their one quadrillion estimate is brilliantly thought through (I’m not saying it isn’t either); I’m just countering a claim I think you’re making, namely that Greaves and MacAskill would likely add zeros or inflate this number if required to protect strong longtermism, e.g. to maintain that their conservative longtermist EV calculation continues to beat GiveWell’s cost-effectiveness calculation for AMF. I don’t see evidence to suggest they would, and I personally don’t think they would manipulate the figure in that way. That’s not to say the one quadrillion figure will never change, but I would hope, and would expect, that to happen for a better reason than “to save longtermism”.
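To make concrete what that comparison involves, here is a rough sketch in Python. Every number in it is mine, purely for illustration; none are Greaves and MacAskill’s or GiveWell’s actual inputs.

```python
# Illustrative expected-value comparison. All numbers below are made up
# for illustration; they are not figures from the paper or from GiveWell.

future_people = 1e15        # "one quadrillion" future people
p_influence = 1e-9          # assumed chance the funded work averts extinction
spend = 1e9                 # assumed dollars spent on the longtermist intervention

# Expected future lives saved per dollar under these assumptions
longtermist_lives_per_dollar = future_people * p_influence / spend   # 1e-3

# Stand-in near-term benchmark: lives saved per dollar at an assumed
# (hypothetical) cost per life saved
cost_per_life = 4000
neartermist_lives_per_dollar = 1 / cost_per_life                     # 2.5e-4

print(longtermist_lives_per_dollar / neartermist_lives_per_dollar)   # 4.0 with these inputs
```

The only point of the sketch is structural: the conclusion turns on the product of the population figure and the probability of influence, which is exactly why I’d want the one quadrillion input to rest on stated assumptions rather than be adjusted after the fact.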
To sum up, I don’t think your “amenable to drastic change” point is particularly relevant. What I do think is more relevant is that the one quadrillion estimate is somewhat arbitrary, which I see as a subtly different point. I may address this in a different comment.
The great filter example is interesting, actually. For if we’re working in a Bayesian framework, then surely we’d assign such a hypothesis a probability. And then the number of future people could again be vast in expectation.
Yes, if you’re happy to let your calculations be driven by very small probabilities of enormous value, then I suppose you’re right that the great filter would never be conclusive. Whether or not it is reasonable to allow this is an open question in decision theory, and I don’t think it’s something that all longtermists accept.
The authors themselves don’t appear to be all that comfortable with accepting it:
All we need is that there be one course of action such that one ought to have a non-minuscule credence in that action’s having non-negligible long-lasting influence
This implies that if they think a credence is minuscule, or a long-lasting influence negligible, they might throw away the calculation.
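To make that concrete, here is a toy calculation. The credence and the cut-off are numbers I’ve invented, since the authors don’t specify any threshold:

```python
# Toy calculation, my numbers only: a tiny credence in a vast future can
# still dominate an expected-value calculation.

future_people = 1e15     # a quadrillion future people
credence = 1e-10         # a deliberately tiny credence that our action matters

print(credence * future_people)   # 100000.0 -- still large despite the tiny probability

# The quoted passage suggests the authors would not let a "minuscule"
# credence drive the conclusion. They give no numerical threshold, so the
# cut-off below is hypothetical.
minuscule_threshold = 1e-6
expected_value = credence * future_people if credence >= minuscule_threshold else 0.0
print(expected_value)             # 0.0 -- the calculation gets thrown away
```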