Yeah, I agree with your first two paragraphs. (I don't think I understand the third one; feel free to restate that, if you've got time.)
In particular, it's worth noting that I agree that it's not currently clear that we can predict (decision-relevant) things about the long term with above-chance accuracy (see also the long-range forecasting tag). Above, I merely claimed that "we very often predict things that depend on things we don't fully understand, and with above chance accuracy"; i.e., I didn't specify long-term.
It does seem very likely to me that it's possible to predict decision-relevant things about the long-term future at least slightly better than complete guesswork. But it seems plausible to me that our predictive power becomes weak enough that this outweighs the increased scale of the future, such that we should focus on near-term effects instead. (I basically have in mind Tarsney's way of framing the topic in his "Epistemic Challenge" paper. There are also, of course, factors other than those two that could change the balance, like population-ethical views or various forms of risk aversion.)
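To make that trade-off concrete, here's a minimal toy sketch in Python, with entirely made-up numbers (the scale factor and the predictive "edges" below are my illustrative assumptions, not figures from Tarsney's paper). The point is just that the comparison multiplies two quantities pulling in opposite directions, so everything turns on their relative magnitudes:

```python
# Toy sketch of the trade-off above, loosely inspired by Tarsney's
# "Epistemic Challenge" framing. All numbers are illustrative
# assumptions, not estimates from the paper.

NEAR_TERM_VALUE = 1.0      # expected value of a well-understood near-term effect
FAR_FUTURE_SCALE = 1e15    # assumed ratio of far-future stakes to near-term stakes

def long_term_expected_value(predictive_edge: float) -> float:
    """Expected value of targeting the far future.

    `predictive_edge` measures how much better than complete guesswork
    our predictions are (0 = pure chance, 1 = perfect foresight).
    """
    return predictive_edge * FAR_FUTURE_SCALE

# Even a tiny predictive edge can let the scale of the future dominate...
print(long_term_expected_value(1e-9) > NEAR_TERM_VALUE)   # True  (EV = 1e6)

# ...but if the edge is small enough, near-term effects win after all.
print(long_term_expected_value(1e-18) > NEAR_TERM_VALUE)  # False (EV = 1e-3)
```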
This seems like a super interesting and important topic, both for getting more clarity on whether we should adopt strong longtermism and for working out how to act given longtermism.
---
I specified "decision-relevant" above because of basically the following points Tarsney makes in his "Epistemic Challenge" paper:
The epistemic challenge to longtermism emphasizes the difficulty of predicting the far future. But to understand the challenge, we must specify more precisely the kind of predictions we're interested in. After all, some predictions about the far future are relatively easy. For instance, I can confidently predict that, a billion years from now, the observable universe will contain more than 100 and fewer than 10^100 stars. (And this prediction is quite precise, since (100, 10^100) comprises only an infinitesimal fraction of the natural numbers!)
But our ability to make predictions like these doesn't have much bearing on the case for longtermism. For roughly the same reason that it is relatively easy to predict, the number of stars in the observable universe is very difficult to affect. And what we need, for practical purposes, is the ability to predictably affect the world by doing one thing rather than another. That is, we need the ability to make practical predictions: predictions that, if I choose O_j, the world will be different in some particular way than it would have been if I had chosen O_k.
Even long-term practical predictions are sometimes easy. For instance, if I shine a laser pointer into the sky, I can predict with reasonable confidence that a billion years from now, some photons will be whizzing in a certain direction through a certain region of very distant space that would not have been there if I had pointed the laser pointer in a different direction. I can even predict what the wavelength of those photons will be, and that it would have been different if I had used my green instead of my red laser pointer.
But our ability to make predictions like these isn't terribly heartening either, since photons whizzing through one region or another of empty space is not (presumably) a feature of the world that matters. What we really want is the ability to make long-term evaluative practical predictions: predictions about the effects of our present choices on evaluatively significant features of the far future. The epistemic challenge to longtermism claims that our ability to make this sort of prediction is so limited that, even if we concede the astronomical importance of the far future, the longtermist thesis still comes out false.
Agree that this is important, and it's something I've been thinking about for a while. But the last paragraph was just trying to explain what the paper (more clearly) described as evaluative practical predictions. I just think about that in more decision-theoretic terms, and if I were writing about this more, I'd want to formulate it that way.