Here's a relevant section from the paper:

By this measure, the preceding analysis suggests that the choice between longtermist and short-termist interventions could be extremely Pascalian. We have found that longtermist interventions can have much greater expected value than their short-termist rivals even when the probability of having any impact at all on the far future is minuscule (2 x 10^-14, for a fairly large investment of resources) and when, conditional on having an impact, most of the expected value of the longtermist intervention is conditioned on further low-probability assumptions (e.g., the prediction of large-scale interstellar settlement, astronomical values of v_s, large values of s, and—in particular—small values of r). It could turn out that the vast majority of the expected value of a typical longtermist intervention—and, more importantly, the component of its expected value that gives it the advantage over its short-termist competitors—depends on a conjunction of improbable assumptions with joint probability on the order of (say) 10^-18 or less. In this case, by the measure proposed above, the choice between L and B is extremely Pascalian (1 - (2 x 10^-18) or greater).
On the other hand, there is tremendous room for reasonable disagreement about the relevant probabilities. If you think that, in the working example, p is on the order of (say) 10^-7, and that the assumptions of eventual interstellar settlement, astronomical values of v_s, large values of s, and very small values of r are each more likely than not, then the amount of tail probability we would have to ignore to prefer B might be much greater—say, 10^-8 or more.
These numbers should not be taken too literally—they are much less robust, I think, than the expected value estimates themselves, and at any rate, it’s not yet clear whether we should care that a choice situation is Pascalian in the sense defined above, or if so, at what threshold of Pascalian-ness we should begin to doubt the conclusions of expectational reasoning. So the remarks in this section are merely suggestive. But it seems to me there are reasonable grounds to worry that the case for longtermism is problematically dependent on a willingness to take expectational reasoning to a fanatical extreme.
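To make the arithmetic in the quoted passage concrete, here is a minimal sketch (my reading of the passage, not code or probabilities from the paper): a choice is Pascalian to degree 1 - q, where q is the joint probability of the low-probability conjunction that gives the longtermist intervention its expected-value edge. Only the 2 x 10^-14, ~10^-18, 10^-7, and ~10^-8 figures come from the passage; the individual component probabilities below are hypothetical stand-ins.

```python
# Illustrative sketch of the quoted arithmetic (not code from the paper).
# Assumption: a choice is Pascalian to degree 1 - q, where q is the joint
# probability of the conjunction carrying longtermism's expected-value advantage.

import math

def joint_tail_probability(component_probabilities):
    """Joint probability of the conjunction, treating the components as independent."""
    return math.prod(component_probabilities)

# Pessimistic reading: p = 2e-14 for any far-future impact, times four further
# conditions; the 0.1 values are hypothetical, chosen only so the product lands
# near the ~1e-18 order of magnitude mentioned in the passage.
q_pessimistic = joint_tail_probability([2e-14, 0.1, 0.1, 0.1, 0.1])   # ~2e-18

# More optimistic reading: p ~ 1e-7 and each further assumption "more likely
# than not" (0.6 is again just an illustrative stand-in).
q_optimistic = joint_tail_probability([1e-7, 0.6, 0.6, 0.6, 0.6])     # ~1.3e-8

print(f"pessimistic reading: Pascalian to degree 1 - {q_pessimistic:.1e}")
print(f"optimistic reading:  Pascalian to degree 1 - {q_optimistic:.1e}")
```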
I think maybe a useful framing to have in mind is that Tarsney’s paper was not aimed at actually working out the likelihood of each model structure relative to the other, or working out what precise parameter estimates would be most appropriate. And those are things we should be very uncertain about.
So perhaps our 90% credible interval (or something like that) for what we’d believe after some years of further research should include both probability estimates/​distributions in which the case for longtermism survives without fanaticism and probability estimates/​distributions in which the case for longtermism would survive only if we accept fanaticism.
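As a rough way of operationalizing that, the sketch below (hypothetical distributions and threshold throughout) samples a wide range of values for p and the further assumptions and asks what share of draws would make the case for longtermism rest only on tail probabilities small enough to count as "fanatical". The point is only that a reasonable credence distribution can straddle both regimes.

```python
# Rough numerical illustration of the point above (all inputs hypothetical):
# if uncertainty about p and the other assumptions spans several orders of
# magnitude, some draws land in "fanatical" territory and others do not.

import math
import random

random.seed(0)

THRESHOLD = 1e-10   # hypothetical cut-off below which we call the choice "fanatically" Pascalian

def sample_joint_tail_probability():
    # p for any far-future impact: log-uniform between 1e-14 and 1e-6 (illustrative range)
    p = 10 ** random.uniform(-14, -6)
    # four further assumptions, each with probability drawn between 0.05 and 0.95
    others = [random.uniform(0.05, 0.95) for _ in range(4)]
    return p * math.prod(others)

draws = [sample_joint_tail_probability() for _ in range(100_000)]
fanatical_share = sum(q < THRESHOLD for q in draws) / len(draws)
print(f"share of draws in 'fanatical' territory: {fanatical_share:.0%}")
```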
Thanks yeah, I saw this section of the paper after I posted my original comment. I might be wrong but I don’t think he really engages in this sort of discussion in the video, and I had only watched the video and skimmed through the paper.
So overall I think you may be right in your critique. It might be interesting to ask Tarsney about this (although it might be a fairly specific question to ask).
Yeah, I plan to suggest some questions for Rob to ask Tarsney later today. Perhaps this’ll be one of them :)