Thanks for writing this! I think it’s important to question longtermism. I’ve actually found myself becoming slowly more convinced by it, but I’m still open to it being wrong. I’m looking forward to chewing on this a bit more (and you’ve reminded me I still have to properly read Vaden’s post) but for now I will leave you with a preliminary thought.
Just as the astrologer promises us that “struggle is in our future” and can therefore never be refuted, so too can the longtermist simply claim that there are a staggering number of people in the future, thus rendering any counterargument moot.
I don’t think this is fair. In their paper the authors say:
We have the technological power to avoid what would be extinction-level events for other animals, including the power to detect and deflect asteroids (NASA 2007). Of course, human civilisation itself introduces its own risks, such as from nuclear war and man-made pathogens. But it seems hard, given our state of knowledge, to be very confident that we will destroy ourselves, and so we should think there is at least a significant chance that we have a very large future ahead of ourselves.
It seems possible to me that the claim that the future is vast in expectation could be refuted. The authors implicitly acknowledge, in the final sentence quoted above, that the claim would be refuted if one were to accept that we will very likely destroy ourselves, or that it is very likely that we will be destroyed and there is unlikely to be much we can do to reduce that risk.
So could the claim realistically be refuted? I think so. For example, one possible solution to the Fermi paradox is that there is a great filter that causes the vast majority of civilisations to cease to exist before colonising space. It seems possible to me that the great filter could emerge as the best answer to the Fermi paradox, in which case the size of the future may no longer be ‘vast in expectation’.
That is just one way in which the claim could be refuted and I suspect there are others. So I don’t think your unfalsifiable critique is justified, although I would be happy to hear a response to this.
Hi Jack,
I think you’re right, the comparison to astrology isn’t entirely fair. But sometimes one has to stretch a little bit to make a point. And the point, I think, is important. Namely, that these estimates can be manipulated and changed all too easily to fit a narrative. Why not half a quadrillion, or 10 quadrillion people in the future?
On the falsifiability point—I agree that the claims are technically falsifiable. I struggled with the language for this reason while writing it (and Max Heitmann helpfully tried to make this point before, but apparently I ignored him). In principle, all of their claims are falsifiable (if we go extinct, then sure, I guess we’ll know how big the future will be). Perhaps it’s better if I write “easily varied” or “amenable to drastic change” in place of irrefutable/unfalsifiable?
The great filter example is interesting, actually. For if we’re working in a Bayesian framework, then surely we’d assign such a hypothesis a probability. And then the number of future people could again be vast in expectation.
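To put rough, purely illustrative numbers on it (these are mine, not anything from the paper): suppose one assigned a 99% credence to a great filter lying ahead of us and capping the future at roughly the present scale of humanity, and only a 1% credence to the filter being behind us and a quadrillion-person future remaining open. The expectation would still be roughly

0.99 × 10^10 + 0.01 × 10^15 ≈ 10^13 future people,

which is vast by any ordinary standard. So accepting the great filter as the best answer to the Fermi paradox would shrink the expected size of the future, but it wouldn’t obviously stop it being ‘vast in expectation’.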
And the point, I think, is important. Namely, that these estimates can be manipulated and changed all too easily to fit a narrative. Why not half a quadrillion, or 10 quadrillion people in the future?
The fact that they can be manipulated and changed doesn’t strike me as much of a criticism. The more relevant question is whether people actually do manipulate and change the estimates to fit their narrative. If they do, we should call out those particular people, but even then I don’t think it would be an argument against longtermism generally, just against the particular arguments these ‘manipulators’ put forward.
The authors do at least set out their assumptions for the one quadrillion figure, which they call their conservative estimate. For example, one input into the figure is an estimate that Earth will likely remain habitable for another 1 billion years, which is cited from another academic text. Now I’m not saying that their one quadrillion estimate is brilliantly thought through (I’m not saying it isn’t either); I’m just countering a claim I think you’re making: that Greaves and MacAskill would likely add zeros or inflate this number if required to protect strong longtermism, e.g. to maintain that their conservative longtermist EV calculation continues to beat GiveWell’s cost-effectiveness calculation for AMF. I don’t see evidence to suggest they would, and I personally don’t think they would manipulate it in such a way. That’s not to say the one quadrillion figure may not change, but I would hope and expect that to be for a better reason than “to save longtermism”.
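Just to illustrate the kind of arithmetic involved (with round numbers of my own, not the paper’s actual inputs): 1 billion years of habitability is about 10^7 centuries, so even a very conservative assumption of 10^8 lives per century would give 10^7 × 10^8 = 10^15 future lives, i.e. a figure of the order of one quadrillion. The point is that the estimate is built from inputs one can inspect and dispute, rather than being conjured to hit a target.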
To sum up, I don’t think your “amenable to drastic change” point is particularly relevant. What I do think is more relevant is that the one quadrillion estimate is slightly arbitrary, and I see this as a subtly different point. I may address this in a different comment.
The great filter example is interesting, actually. For if we’re working in a Bayesian framework, then surely we’d assign such a hypothesis a probability. And then the number of future people could again be vast in expectation.
Yes, if you’re happy to let your calculations be driven by very small probabilities of enormous value, then I suppose you’re right that the great filter would never be conclusive. Whether or not it is reasonable to allow this is an open question in decision theory, and I don’t think it’s something that all longtermists accept.
The authors themselves don’t appear to be all that comfortable with accepting it:
All we need is that there be one course of action such that one ought to have a non-minuscule credence in that action’s having non-negligible long-lasting influence
This implies that if they think a credence is minuscule, or a long-lasting influence negligible, they might throw away the calculation.