(But that claim was not necessary for any of Tarsney’s arguments; he just gave it as one reason why the actual case for longtermism might be stronger than his deliberately conservative estimates suggest.)
Context and explanation:
A core part of Tarsney’s model is—roughly speaking—the amount by which spending $1 million on mitigating existential risks changes the probability of being in the target state at a given time, relative to the probability that would occur if the short-termist intervention was used. This parameter is represented by p. The target state means something like “The accessible region of the Universe contains an intelligent civilization”.
Tarsney makes:
a lower-bound estimate [of p] based on the details of our working example, that is almost certainly far too pessimistic, but nevertheless informative.
The estimate proceeds in two stages: First, how much could humanity as a whole change the probability of [the target state at a particular time] (i.e., roughly, the probability that we survive the next thousand years), relative to the status quo, if we committed all our collective time and resources solely to this objective for the next thousand years? “One percent” seems like a very safe lower bound here (remembering that we are dealing with epistemic probabilities rather than objective chances). Now, if we assume that each unit of time and resources makes the same marginal contribution to increasing the probability of [the target state at that time], we can calculate p simply by computing the fraction of humanity’s resources over the next thousand years that can be bought for $1 million, and multiplying it by 0.01. This yields p [≈ 10^-14].
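To make the arithmetic in that passage concrete, here's a minimal sketch of the calculation. The figure for humanity's total resources is a hypothetical placeholder (not a number from the paper), chosen only so the result lands near the order of magnitude in the quote:

```python
# Back-of-envelope reconstruction of the structure of Tarsney's lower-bound
# estimate of p. The total-resources figure is a hypothetical placeholder,
# not a number taken from the paper.

spending = 1e6               # the $1 million intervention
humanity_delta = 0.01        # "one percent": how much *all* of humanity's
                             # resources over 1,000 years could shift the
                             # probability of the target state
total_resources = 1e18       # hypothetical: humanity's resources over the
                             # next 1,000 years, in dollars

fraction_bought = spending / total_resources  # share of those resources $1M buys
p = humanity_delta * fraction_bought          # assumes constant marginal returns

print(f"fraction of resources bought: {fraction_bought:.0e}")
print(f"p ≈ {p:.0e}")                         # ~1e-14, the quoted order of magnitude
```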
Tarsney writes that “This is an extremely conservative lower bound”, and that “I think it would be justifiable to adjust p upward from this lower-bound estimate by a several-order-of-magnitude ‘fudge factor’, if we were so inclined” (though he doesn’t make this adjustment in the paper). He gives two reasons for this.
The first has to do with diminishing marginal returns and the fact that, by default, we’ll devote far less than all of our collective time and resources over the next 1,000 years to reducing existential risk. Thus, spending an extra $1 million on the current margin will probably achieve far more than one would expect “simply by computing the fraction of humanity’s resources over the next thousand years that can be bought for $1 million”. This argument makes sense to me, and I do think it suggests Tarsney’s estimate for p is a very conservative one (as he intends).
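As a toy illustration of this first reason (my own model, not Tarsney's, with all numbers made up for illustration): if returns to spending are logarithmic rather than constant, the marginal $1 million at today's low level of spending buys far more probability than the "average dollar" implicit in the lower bound:

```python
import math

# Toy model (not from Tarsney's paper) of why the marginal $1M could beat the
# "average dollar" implicit in the lower-bound estimate of p.
# All numbers are hypothetical and chosen only for illustration.

total_resources = 1e18   # hypothetical: humanity's resources over 1,000 years
humanity_delta = 0.01    # probability shift if *all* of it went to x-risk
current_spending = 1e10  # hypothetical: resources committed to x-risk so far
scale = 1e9              # hypothetical scale parameter for diminishing returns
extra = 1e6              # the marginal $1 million

def prob_gain(spend):
    """Probability shift from `spend`, with logarithmic (diminishing) returns,
    normalised so that spending everything yields humanity_delta."""
    return humanity_delta * math.log1p(spend / scale) / math.log1p(total_resources / scale)

# Constant-marginal-returns version (the structure of Tarsney's lower bound):
p_constant = humanity_delta * extra / total_resources

# Diminishing-returns version of the same marginal $1M, given low current spending:
p_marginal = prob_gain(current_spending + extra) - prob_gain(current_spending)

print(f"constant-returns p:    {p_constant:.1e}")  # ~1e-14
print(f"diminishing-returns p: {p_marginal:.1e}")  # several orders of magnitude larger
```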
But then he writes:
Second, resources committed at [an] earlier time should have greater impact, all else being equal. (If nothing else, this is true because resources that might be committed to existential risk mitigation, say, 500 years from now can do nothing to prevent any of the existential catastrophes that might occur in the next 500 years, while resources committed today are potentially impactful any time in the next thousand years.)
It’s definitely true that there are many reasons why resources committed at an earlier time could have a greater impact. And the reason Tarsney raises is a valid one; we could describe this as discounting for the possibility that the later use of resources would be “too late”. This is an extreme example of how we might miss “windows of opportunity” if we wait too long.
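As a minimal sketch of this "too late" effect (my own illustration, with a made-up constant per-century catastrophe probability), we can ask what share of the total 1,000-year risk is still ahead at the time resources are deployed:

```python
# Minimal sketch (my illustration, not Tarsney's) of the "too late" effect:
# resources deployed at century t can only address catastrophes from t onward.
# Assumes a constant, hypothetical per-century probability of catastrophe.

per_century_risk = 0.02   # hypothetical epistemic probability per century
centuries = 10            # the 1,000-year horizon, in centuries

def risk_still_ahead(start_century):
    """Probability that a catastrophe occurs from `start_century` onward
    (0 = now), i.e. the portion of total risk that late resources can touch."""
    survive_until_start = (1 - per_century_risk) ** start_century
    survive_to_end = (1 - per_century_risk) ** centuries
    return survive_until_start - survive_to_end

total_risk = risk_still_ahead(0)
for start in (0, 5, 9):   # deploy now, in 500 years, in 900 years
    share = risk_still_ahead(start) / total_risk
    print(f"deployed at century {start}: can address {share:.0%} of total risk")
```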
But there are also many reasons why resources committed at a later time could have a greater impact. This is especially true if we don’t count resources as committed to a problem when they’re used in an investment-like way in order to generate more resources that can be committed later, but it’s even true if we do count resources as already committed to a problem when they’re “merely invested”.
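To make the "merely invested" point concrete, here's a tiny compounding sketch (the rate of return and horizon are hypothetical, chosen only to show how later-committed resources can be much larger):

```python
# Tiny illustration of the "merely invested" point: resources not spent now can
# compound and be committed later. Growth rate and horizon are hypothetical.

initial = 1e6          # $1 million today
annual_return = 0.05   # hypothetical real rate of return
years = 100

future_resources = initial * (1 + annual_return) ** years
print(f"${initial:,.0f} invested for {years} years at {annual_return:.0%} "
      f"grows to ${future_resources:,.0f}")   # ≈ $131.5 million
```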
In particular, it’s possible that “leverage over the future” (or hingeyness, pivotality, etc.) will increase in future. This could occur if:
We know more in future about what we should do
Longtermist priorities become more neglected in future
Windows of opportunity that aren’t currently open become open in future
E.g., there could be a future point at which important global governance institutions are being set up, or at which policy frameworks for a currently unforeseen technology are being established
(For explanation and discussion of the above points, see here.)
Of course, the opposite effects could also occur. My point is merely that “resources committed at [an] earlier time should have greater impact, all else being equal” seems to be either false or misleading.
(I think it’d be reasonable for Tarsney to merely claim that his all-things-considered view is that resources committed at an earlier time will in practice probably have a greater impact. But this more uncertain stance would then weaken the case for a several-order-of-magnitude upwards adjustment of p.)
tl;dr: Tarsney writes that “resources committed at [an] earlier time should have greater impact, all else being equal”. I think this is misleading and an oversimplification. See Crucial questions about optimal timing of work and donations and other posts tagged Timing of Philanthropy.