By the rules of the expected value game, the case for longtermism appears to survive the epistemic challenge with which we confronted it. But it has prevailed in a way that should make us slightly uneasy: by appealing to potentially-minuscule probabilities of astronomical quantities of value.
Just a nitpick: I think that this particular sentence is false or misleading. As Tarsney notes earlier and later, his model and parameter estimates[1] suggest that the case for longtermism survives given either acceptance of fanaticism or plausible but non-obvious empirical views. That is, on some plausible empirical views, longtermism doesn't require an appeal to minuscule probabilities of astronomical quantities of value.
(Tarsney's sentence may still be technically accurate, since he says "potentially-minuscule". But it seems at least a bit misleading to me.)
[1] Along with certain ethical and decision-theoretic assumptions, e.g. total utilitarianism.
I agree with you that Tarsney hasn't been clear, but I think you've got it the wrong way around (please tell me if you think I'm wrong though). The abstract to the paper says:
But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these "Pascalian" probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.
These two sentences seem to say different things, as you have outlined. The first implies that you need fanaticism, whilst the second implies you need either fanaticism or non-obvious but plausible empirical views. Contrary to you, I think the former is actually correct.
Tarsney initially runs his model using point estimates for the parameters and concludes that the case for longtermism is "plausible-but-uncertain" if we assume that humanity will eventually spread to the stars, and "extremely demanding" if we don't make that assumption. Therefore longtermism doesn't really "survive the epistemic challenge" when using point estimates.
Tarsney says however that "The ideal Bayesian approach would be to treat all the model parameters as random variables rather than point estimates". So if we're Bayesians we can pretty much ignore the conclusions so far and everything is still to play for.
When Tarsney does incorporate uncertainty for all parameters, the expectational superiority of longtermism becomes clear because "the potential upside of longtermist interventions is so enormous". In other words the use of random variables allows for fanaticism to take over and demonstrates the superiority of longtermism.
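As a toy sketch of that mechanism (with invented numbers, nothing like Tarsney's actual model or parameter values): moving from a single pessimistic point estimate to a distribution over the payoff lets even a small credence in an astronomical outcome dominate the expectation.

```python
import random

random.seed(0)

# Toy numbers, invented for illustration; not Tarsney's actual model or values.
# Expected value of an intervention modelled as p * V, where p is the
# probability of affecting the far future and V the value conditional on that.

# Point-estimate approach: a single pessimistic value for each parameter.
p_point, v_point = 1e-10, 1e6
ev_point = p_point * v_point  # 1e-4: the intervention looks hopeless

# Treating V as a random variable instead: even a 1% credence in an
# astronomical payoff dominates the expectation.
def sample_value():
    return 1e18 if random.random() < 0.01 else 1e6

n = 100_000
ev_uncertain = p_point * sum(sample_value() for _ in range(n)) / n

print(ev_point)      # 0.0001
print(ev_uncertain)  # on the order of 1e6
```

Whether reasoning like this counts as fanatical then turns on how small the tail probabilities really are, which is the point at issue.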
So it seems to me that it really is fanaticism that is doing the work here. Would be interested to hear your thoughts.
EDIT: On a closer look at his paper Tarsney does say that it isn't clear how Pascalian the superiority of longtermism is because of the "tremendous room for reasonable disagreement about the relevant probabilities". Perhaps this is what you're getting at, Michael?
These two sentences seem to say different things, as you have outlined.
I actually think that those two sentences are consistent with each other. And I think that, as Tarsney says, his models and estimates do not show that fanaticism is necessarily required for the case for longtermism to hold.
Basically (from memory and re-skimming), Tarsney gives two model structures, some point estimates for most of the parameters, and then later some probability distributions for the parameters. He intends both models to represent plausible empirical views. He intends his point estimates and probability distributions to represent beliefs that are reasonable but at the pessimistic end for longtermism (so it's not crazy to think those things, but his all-things-considered beliefs about those parameters would probably be more favourable to longtermism). And he finds that the case for longtermism holds given any of the following combinations of assumptions:
- You use one of the model structures (the cubic growth model), his pessimistic parameter estimates, and a "prima facie plausible" value for the long-run rate of ENEs
- You use the model structure that's less favourable to longtermism (steady-state growth), his pessimistic parameter estimates, and an "extremely demanding" value for the long-run rate of ENEs
  - So he thinks that the case is "extremely precarious" if we use that model and we use point estimates
- We use distributions to represent our uncertainty both between those model structures and over some parameters, with the distributions based on setting lower bounds that Tarsney thinks are "quite conservative and hard to reasonably dispute"
(There are various complications, caveats, and additional points, but this stuff is key.)
So his reasoning is consistent with it being the case that the most reasonable empirical position would support longtermism without requiring any minuscule probabilities of extremely huge payoffs, or with that not being the case.
E.g., that could be the case if we should have a non-minuscule credence in the cubic growth model and in that "prima facie plausible" value for the long-run rate of ENEs.
When Tarsney does incorporate uncertainty for all parameters, the expectational superiority of longtermism becomes clear because "the potential upside of longtermist interventions is so enormous". In other words the use of random variables allows for fanaticism to take over and demonstrates the superiority of longtermism.
Incorporating uncertainty, and this then suggesting that one option's potential upside makes it the thing we should go for, doesn't necessarily mean fanaticism is involved. E.g., I made many job applications that I expected would turn out not to have been worth the time they took, due to the potential upside, and without having a clear point estimate for my odds of getting the job or how valuable that'd be (so I sort of implicitly had a probability distribution over possible credences). This'd only be fanatical if the probabilities involved were minuscule and the payoffs huge enough to "make up for that", and Tarsney's analysis suggests that that may or may not be the case when it comes to longtermism.
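To make that distinction concrete, here's a toy comparison (all numbers invented, not drawn from the paper): both bets have positive expected value under uncertainty, but only the second leans on a minuscule probability of a huge payoff.

```python
# Invented illustrative numbers; not Tarsney's estimates.

# A "job application" style bet: likely to fail, but the probability of
# success is nowhere near minuscule. Taking it isn't fanatical.
p_job, v_job = 0.05, 1000.0
ev_job = p_job * v_job  # modest positive expected value (~50)

# A Pascalian bet: a minuscule probability with a payoff huge enough to
# "make up for that". Its expectation is driven entirely by the tiny tail.
p_pascal, v_pascal = 1e-15, 1e20
ev_pascal = p_pascal * v_pascal  # ~1e5, all from the near-impossible branch

# Expected value alone doesn't distinguish the two in kind; the question is
# whether the case for longtermism looks more like the first or the second.
print(ev_job, ev_pascal)
```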
Here's a relevant section from the paper:
By this measure, the preceding analysis suggests that the choice between longtermist and short-termist interventions could be extremely Pascalian. We have found that longtermist interventions can have much greater expected value than their short-termist rivals even when the probability of having any impact at all on the far future is minuscule (2 x 10^-14, for a fairly large investment of resources) and when, conditional on having an impact, most of the expected value of the longtermist intervention is conditioned on further low-probability assumptions (e.g., the prediction of large-scale interstellar settlement, astronomical values of v_s, large values of s, and, in particular, small values of r). It could turn out that the vast majority of the expected value of a typical longtermist intervention (and, more importantly, the component of its expected value that gives it the advantage over its short-termist competitors) depends on a conjunction of improbable assumptions with joint probability on the order of (say) 10^-18 or less. In this case, by the measure proposed above, the choice between L and B is extremely Pascalian (1 - (2 x 10^-18) or greater).
On the other hand, there is tremendous room for reasonable disagreement about the relevant probabilities. If you think that, in the working example, p is on the order of (say) 10^-7, and that the assumptions of eventual interstellar settlement, astronomical values of v_s, large values of s, and very small values of r are each more likely than not, then the amount of tail probability we would have to ignore to prefer B might be much greater (say, 10^-8 or more).
These numbers should not be taken too literally; they are much less robust, I think, than the expected value estimates themselves, and at any rate, it's not yet clear whether we should care that a choice situation is Pascalian in the sense defined above, or if so, at what threshold of Pascalian-ness we should begin to doubt the conclusions of expectational reasoning. So the remarks in this section are merely suggestive. But it seems to me there are reasonable grounds to worry that the case for longtermism is problematically dependent on a willingness to take expectational reasoning to a fanatical extreme.
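A toy reconstruction of the measure in that passage (my own sketch with invented numbers; Tarsney's actual definition and estimates are in the paper): decompose L's expected value into branches and ask how little tail probability we'd have to ignore before the short-termist benchmark B comes out ahead.

```python
# Invented numbers for illustration; not Tarsney's actual estimates.
ev_B = 100.0  # expected value of the short-termist benchmark B

# L's expected value as (probability, payoff) branches, ordered here from
# likely-but-worthless to wildly-improbable-but-astronomical.
branches_L = [(0.9, 0.0), (0.0999, 50.0), (1e-4, 1e4), (1e-10, 1e13)]

def tail_to_ignore(branches, rival_ev):
    """Total probability of the least-likely branches we'd have to ignore
    before the rival's expected value comes out ahead (0.0 if it already is)."""
    branches = sorted(branches, key=lambda pv: pv[0], reverse=True)
    ignored = 0.0
    while branches and sum(p * v for p, v in branches) > rival_ev:
        p, _ = branches.pop()  # drop the least probable remaining branch
        ignored += p
    return ignored

# Almost all of L's expected value (~1006) sits in the 1-in-10^10 branch, so
# ignoring just 1e-10 of probability mass flips the choice to B: very Pascalian.
print(tail_to_ignore(branches_L, ev_B))  # 1e-10
```

With more favourable probabilities, the function returns a much larger tail, which is exactly the "room for reasonable disagreement" point in the quoted passage.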
I think maybe a useful framing to have in mind is that Tarsney's paper was not aimed at actually working out the likelihood of each model structure relative to the other, or working out what precise parameter estimates would be most appropriate. And those are things we should be very uncertain about.
So perhaps our 90% credible interval (or something like that) for what we'd believe after some years of further research should include both probability estimates/distributions in which the case for longtermism survives without fanaticism and probability estimates/distributions in which the case for longtermism would survive only if we accept fanaticism.
Thanks yeah, I saw this section of the paper after I posted my original comment. I might be wrong but I don't think he really engages in this sort of discussion in the video, and I had only watched the video and skimmed through the paper.
So overall I think you may be right in your critique. It might be interesting to ask Tarsney about this (although it might be a fairly specific question to ask).
Yeah, I plan to suggest some questions for Rob to ask Tarsney later today. Perhaps this'll be one of them :)