Thanks for writing this! I think market data can be a valuable source of information about the probability of various AI scenarios—along with other approaches, like forecasting tournaments, since each has its own strengths and weaknesses. I think it’s a pity that relatively little has yet been written on extracting information about AI timelines from market data, and I’m glad that this post has brought the idea to people’s attention and demonstrated that it’s possible to make at least some progress.
That said, there is one broad limitation to this analysis that hasn’t yet gotten quite as much attention as I think it deserves. (Basil: yes, this is the thing we discussed last summer….) This is that low real, risk-free interest rates are compatible with the belief
1) that there will be no AI-driven growth explosion,
as you discuss—but also with some AI-growth-explosion-compatible beliefs investors might have, including
2) that future growth could well be very fast or very slow, and
3) that growth will be fast but marginal utility in consumption will nevertheless stay high, because AI will give us such mindblowing new things to spend on (my “new products” hobby-horse).
So it seems impossible to put any upper bound (below 100%) on the probability people are assigning to near-term explosive growth purely by looking at real, risk-free interest rates.
To infer that investors believe (1), one of course has to think hard about all the alternatives (including but not limited to (2) and (3)) and rule them out. But (if I’m not mistaken) all you do along these lines is partly rule out (2), by exploring the implications of putting a yearly probability on the economy permanently stagnating. I found that helpful. As you observe, merely (though I understand that you don’t see it as “merely”!) introducing a 20% chance of stagnation by 2053 is enough to mostly offset the interest rate increase produced by an 80% chance of Cotra AI timelines. You don’t currently incorporate any negative-growth scenarios, but even a small chance of negative growth seems like it should be enough to fully offset said interest rate increase. This is because of the asymmetry produced by diminishing marginal utility: the marginal utility of an extra dollar saved can fall no further than zero if you turn out to be very rich in the future, whereas it can rise arbitrarily high if you turn out to be very poor. (You note this when you say “the real interest rate reflects the expected future economic growth rate, where importantly the expectation is taken over the risk-neutral measure”, but I think the departure from caring about what we would normally call the expected growth rate is important and kind of obscured here.)
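To make that asymmetry concrete, here is a stylized one-period Euler-equation calculation. The utility parameters and scenario probabilities are purely my own illustrative assumptions (CRRA utility, toy numbers), not anything taken from the post:

```python
# Toy one-period Euler equation: 1 = beta * E[(1+g)^-gamma] * (1+r).
# All parameters and scenario probabilities are illustrative assumptions of mine.

BETA = 0.99   # pure time discount factor
GAMMA = 2.0   # coefficient of relative risk aversion (CRRA utility)

def implied_risk_free_rate(scenarios):
    """Solve the Euler equation for r, given (probability, consumption growth) pairs."""
    expected_mu_growth = sum(p * (1.0 + g) ** (-GAMMA) for p, g in scenarios)
    return 1.0 / (BETA * expected_mu_growth) - 1.0

baseline  = [(1.0, 0.02)]                              # certain 2% growth
explosive = [(0.8, 0.30), (0.2, 0.02)]                 # 80% chance of explosive growth
crash_too = [(0.8, 0.30), (0.1, 0.02), (0.1, -0.50)]   # ...plus a 10% chance consumption halves

for name, scenarios in [("baseline", baseline), ("explosive", explosive), ("crash_too", crash_too)]:
    print(f"{name:9s}: r = {implied_risk_free_rate(scenarios):5.1%}")

# baseline ~ 5.1%, explosive ~ 51.8%, crash_too ~ 4.2%: the (1+g)^-gamma term
# blows up as g goes negative, so a 10% chance of a consumption collapse more
# than undoes the effect of an 80% chance of a boom.
```

The precise numbers mean nothing (they swing around completely with the assumed risk aversion and the depth of the collapse scenario), but the asymmetry is the point: the downside state contributes far more to the risk-neutral expectation than the upside state takes away.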
This seems especially relevant given that what investors should be expected to care about is the expected growth rate of their own future consumption, rather than of GDP. Even if they’re certain that AI is coming and bound to accelerate GDP growth, they could worry that it stands some chance of making a small handful of people rich and themselves poor. You write that “truly transformative AI leading to 30%+ economy-wide growth… would not be possible without having economy-wide benefits”, but this is not so clear to me. You might think that worry is crazy, but given that I don’t, presumably some other investors don’t either.
Anyway: this is all to say that I’m skeptical of inferring much from risk-free interest rates alone. This doesn’t mean we can’t draw inferences from market data, though! For one thing, on the hypothesis that investors believe (2), we would probably expect to see the “insurance value” of bonds, and thus the equity premium, rising over time (as we do, albeit weakly). For another, one can presumably test how the market reacts to AI news. I’m certainly interested to see any further work people do in this direction.
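To gesture at what I mean by bonds’ “insurance value” under belief (2): in the same sort of toy one-period setup as above, with “equity” crudely modeled as a claim on consumption, a fast-or-slow growth distribution lowers the implied risk-free rate even though expected growth is higher, and the gap shows up as a large equity premium. Again, every number below is an illustrative assumption of mine:

```python
# Toy one-period consumption-based asset pricing: price any payoff x as E[sdf * x],
# where sdf = beta * (1+g)^-gamma. "Equity" is crudely a claim on consumption, payoff 1+g.
# All parameters and scenarios are illustrative assumptions of mine.

BETA = 0.99
GAMMA = 2.0

def sdf(g):
    """Stochastic discount factor in a state with consumption growth g."""
    return BETA * (1.0 + g) ** (-GAMMA)

def risk_free_and_equity_return(scenarios):
    """scenarios: list of (probability, consumption growth) pairs."""
    risk_free = 1.0 / sum(p * sdf(g) for p, g in scenarios) - 1.0
    equity_price = sum(p * sdf(g) * (1.0 + g) for p, g in scenarios)
    expected_payoff = sum(p * (1.0 + g) for p, g in scenarios)
    return risk_free, expected_payoff / equity_price - 1.0

calm    = [(1.0, 0.02)]                  # certain 2% growth
barbell = [(0.5, 0.30), (0.5, -0.20)]    # "belief (2)": growth is either very fast or very slow

for name, scenarios in [("calm", calm), ("barbell", barbell)]:
    rf, re = risk_free_and_equity_return(scenarios)
    print(f"{name:7s}: risk-free = {rf:5.1%}, equity = {re:4.1%}, premium = {re - rf:5.1%}")

# Under the barbell belief the risk-free rate falls (despite higher expected growth),
# because bonds pay off in the high-marginal-utility slow state, while the equity
# premium jumps. The premium, not the level of rates, is where belief (2) shows up.
```

Nothing hangs on these particular numbers; the qualitative pattern (lower risk-free rate, bigger premium) is what one might look for in the data.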
Thanks for these comments!
The short answer here is: yes, agreed; the level of real interest rates certainly seems consistent with “market has some probability on TAI and some [possibly smaller] probability on a second dark age”.
Whether that’s a possibility worth putting weight on is something I’m happy, speaking for myself, to leave up to readers.
(i.e., it seems unlikely to me! What would the story there be? Extremely rapid diminishing returns to innovation from the current margin, or faster-than-expected fertility declines?)
As you say, the possibility of a stagnation/degrowth scenario would perhaps have implications for other asset prices too, which could be informative for assessing its likelihood.
For what it’s worth, I suspect many readers do think there’s some chance of stagnation (i.e., put 5% credence or more on it). Will MacAskill devotes an entire chapter to growth stagnation in What We Owe the Future. In fact, he thinks it’s the most likely of the four future trajectories discussed in the book, giving it 35% credence (see note 22 to chapter 2, pp. 273-4).
The Samotsvety forecasters think this is too high, but each still puts at least 1% credence on the scenario, and their aggregated forecast is 5%. Low, but enough to suggest it’s worth considering.