(I'll put a bundle of smaller, disconnected reactions in this one thread.)
Masrani writes:
This observation—that in expectation, the future is not vast, but undefined—passes a few basic sanity checks. First, we know from common sense that we cannot predict the future, in expectation or otherwise. Prophets have been trying this for millennia with little success—it would be rather surprising if the probability calculus somehow enabled it.
But empirical evidence and common sense actually demonstrate quite clearly that we can predict the future with better than chance accuracy, at least in some domains, and sometimes very easily.
E.g., I can predict that the sun will rise tomorrow.
See also Phil Tetlock's work.
Masrani seems to confuse (1) pure time discounting / a pure rate of time preference with (2) time discounting for other reasons (e.g., because the future might not come to pass due to a catastrophe; see Greaves).
In particular, Masrani seems to claim that Greaves and MacAskill's paper is wrong to reject pure time discounting, but bases that claim partly on the fact that there could be a catastrophe in future (which is a separate matter from pure time discounting).
E.g., Masrani writes: "We should be biased towards the present for the simple reason that tomorrow may not arrive. The further out into the future we go, the less certain things become, and the smaller the chance is that we'll actually make it there. Preferring good things to happen sooner rather than later follows directly from the finitude of life."
---
Another, separate point about discounting:
Masrani writes:
If one does not discount the future, then one is equally concerned about every moment in time
But as far as I can tell, this is false, at least if taken literally; instead, how concerned one should be about a given moment in time depends in part on what's happening at that time (e.g., how many moral patients there are, and what they're experiencing).
Masrani writes:
Second, we know from basic results in epistemology (discussed here before) that predicting the future course of human history is impossible when that history depends on future knowledge, which we by definition don't know. We cannot know today what we will only learn tomorrow. It is not the case that someone standing in 1200 would assign a "low credence" to the statement "the internet will be invented in the 1990's". They wouldn't be able to think the thought in the first place, much less formalize it mathematically.
But we very often predict things that depend on things we don't fully understand, and with above chance accuracy.
E.g., I can often predict with decent success what someone will do, even without knowing everything they know, and even when some things that they know and that I don't know are relevant to what they'll do.
To be clear, I'd agree with lots of weaker claims in this vicinity, like that predicting the future is very hard, and that one thing that makes it harder is that we lack some knowledge which future people will have (e.g., about the nature of future technologies).
But saying we can't ever predict the future at all is too strong.
Yes, this seems to be a problem, but it's also a problem with naive expected value thinking that prioritizes predictions without looking at adaptive planning or value of information. And I think Greaves and MacAskill don't really address these issues sufficiently in their paper, though I agree that they have considered them and are open to further refinement of their ideas.
But I don't believe that it's clear we predict things about the long term "with above chance accuracy." If we do, it's not obvious how to construct the baseline probability we would expect to outperform.
Critically, for this criticism to be correct, our predictions must not be good enough to point to interventions that have higher expected benefit than more-certain ones, and this seems very plausible. Building the case for whether or not that is true seems valuable, but mostly unexplored.
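To make "baseline" a bit more concrete: one standard way to operationalise it is to score probabilistic forecasts with a proper scoring rule, like the Brier score, and compare against a base-rate forecast. A minimal sketch, with all numbers invented for illustration:

```python
# A minimal sketch of what "outperforming a baseline" could mean, using the
# Brier score (mean squared error of probabilistic forecasts). All numbers
# below are made up purely for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical binary events and a forecaster's probabilities for them.
outcomes  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
forecasts = [0.8, 0.3, 0.7, 0.9, 0.2, 0.6, 0.4, 0.1, 0.8, 0.7]

# One candidate baseline: always predict the historical base rate.
base_rate = sum(outcomes) / len(outcomes)
baseline  = [base_rate] * len(outcomes)

print(f"forecaster Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.073
print(f"base-rate Brier score:  {brier_score(baseline, outcomes):.3f}")  # 0.240

# The forecaster "beats chance" (on this toy data) iff their score is lower.
# The hard part for long-term questions is that there is often no agreed
# base rate or reference class to play the role of `baseline` at all.
```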
Yeah, I agree with your first two paragraphs. (I don't think I understand the third one; feel free to restate that, if you've got time.)
In particular, it's worth noting that I agree that it's not currently clear that we can predict (decision-relevant) things about the long term with above chance accuracy (see also the long-range forecasting tag). Above, I merely claimed that "we very often predict things that depend on things we don't fully understand, and with above chance accuracy"; i.e., I didn't specify long-term.
It does seem very likely to me that it's possible to predict decision-relevant things about the long-term future at least slightly better than complete guesswork. But it seems plausible to me that our predictive power becomes weak enough that this outweighs the increased scale of the future, such that we should focus on near-term effects instead. (I have in mind basically Tarsney's way of framing the topic from his "Epistemic Challenge" paper. There are also of course factors other than those two things that could change the balance, like population ethical views or various forms of risk-aversion.)
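To make that tradeoff concrete, here's a toy calculation. This is just my own simplification in the spirit of Tarsney's framing, not his actual model, and every number in it is invented:

```python
# Toy expected-value comparison between a near-term and a long-term
# intervention, illustrating how weak predictive power can offset the far
# future's larger scale. All numbers are invented; this is NOT Tarsney's model.

near_term_value = 1.0          # value units secured by the near-term option
near_term_prob  = 0.9          # probability the near-term intervention works

far_future_scale = 1e6         # how much more value is at stake long-term
p_persist        = 1e-4        # probability our action still matters that far out
p_right_sign     = 0.5 + 1e-3  # how much better than a coin flip our
                               # long-term predictions are

ev_near = near_term_prob * near_term_value

# Only the edge over 50/50 contributes in expectation: a pure coin-flip
# "prediction" is as likely to make the far future worse as better, so the
# expected sign of our effect is p*(+1) + (1-p)*(-1) = 2p - 1.
ev_long = p_persist * (2 * (p_right_sign - 0.5)) * far_future_scale

print(f"EV(near-term): {ev_near:.4f}")   # 0.9000
print(f"EV(long-term): {ev_long:.4f}")   # 0.2000

# With these numbers the near-term option wins, but nudging p_persist or
# p_right_sign slightly flips the comparison, which is why the empirical
# question about our long-range predictive power matters so much.
```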
This seems like a super interesting and important topic, both for getting more clarity on whether we should adopt strong longtermism and for working out how to act given longtermism.
---
I specified "decision-relevant" above because of basically the following points Tarsney makes in his Epistemic Challenge paper:
The epistemic challenge to longtermism emphasizes the difficulty of predicting the far future. But to understand the challenge, we must specify more precisely the kind of predictions we're interested in. After all, some predictions about the far future are relatively easy. For instance, I can confidently predict that, a billion years from now, the observable universe will contain more than 100 and fewer than 10^100 stars. (And this prediction is quite precise, since (100, 10^100) comprises only an infinitesimal fraction of the natural numbers!)
But our ability to make predictions like these doesn't have much bearing on the case for longtermism. For roughly the same reason that it is relatively easy to predict, the number of stars in the observable universe is very difficult to affect. And what we need, for practical purposes, is the ability to predictably affect the world by doing one thing rather than another. That is, we need the ability to make practical predictions—predictions that, if I choose Oj, the world will be different in some particular way than it would have been if I had chosen Ok.
Even long-term practical predictions are sometimes easy. For instance, if I shine a laser pointer into the sky, I can predict with reasonable confidence that a billion years from now, some photons will be whizzing in a certain direction through a certain region of very distant space, that would not have been there if I had pointed the laser pointer in a different direction. I can even predict what the wavelength of those photons will be, and that it would have been different if I had used my green instead of my red laser pointer.
But our ability to make predictions like these isn't terribly heartening either, since photons whizzing through one region or another of empty space is not (presumably) a feature of the world that matters. What we really want is the ability to make long-term evaluative practical predictions: predictions about the effects of our present choices on evaluatively significant features of the far future. The epistemic challenge to longtermism claims that our ability to make this sort of prediction is so limited that, even if we concede the astronomical importance of the far future, the longtermist thesis still comes out false.
Agree that this is important, and it's something I've been thinking about for a while. But the last paragraph was just trying to explain (more clearly) what the paper calls evaluative practical predictions. I just think about that in more decision-theoretic terms, and if I were writing about this more, I'd want to formulate it that way.
Masrani focuses quite a bit on the idea that longtermism relies on comparisons to an infinite amount of potential future good. But Greaves and MacAskill's paper doesn't actually mention infinity at any point, and neither their argument nor the other standard arguments I've seen rely at all on infinities.
E.g., Masrani writes: "By 'this observation' I just mean the fact that longtermism is a really really bad idea because it lets you justify present day suffering forever, by always comparing it to an infinite amount of potential future good (forever)."
(I won't say more on this here, since the comments section of the link-post for Masrani's post already contains an extensive discussion of whether and how infinities might be relevant in relation to longtermism.)
Masrani writes:
Therefore there are no uncertainties associated with predictions made in expectation. Adding the magic words "in expectation" allows longtermists to make predictions about the future confidently and with absolute certainty.
But I think that this is simply false: our predictions (as well as other credences) can differ in how "resilient" they are.
See e.g. "Credal resilience" and "Use resilience, instead of imprecision, to communicate uncertainty".
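To illustrate: two agents can report the same probability while holding it with very different resilience. A minimal sketch, modelling resilience (purely for illustration) as the strength of a Beta prior in a toy Beta-Bernoulli setup:

```python
# Two agents both start with credence 0.5 that a coin lands heads, but hold
# that credence with different resilience, modeled here as Beta priors of
# different strengths. This is just a convenient toy model of the point.

def posterior_mean(alpha, beta, heads, tails):
    """Posterior mean of a Beta(alpha, beta) prior after observing coin flips."""
    return (alpha + heads) / (alpha + beta + heads + tails)

fragile   = (1, 1)      # weak prior: credence 0.5, easily moved by evidence
resilient = (100, 100)  # strong prior: credence 0.5, hard to move

heads, tails = 8, 2     # both agents observe the same 10 flips

for name, (a, b) in [("fragile", fragile), ("resilient", resilient)]:
    print(f"{name:9s} prior 0.500 -> posterior {posterior_mean(a, b, heads, tails):.3f}")

# fragile   prior 0.500 -> posterior 0.750
# resilient prior 0.500 -> posterior 0.514
# Same stated probability before the evidence, very different behavior after.
# That difference is what "resilience" tracks, and it is lost if all we
# report is the point probability 0.5.
```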